cologne_phonetic_ex v1.0.0
API Reference
===
Modules
---
[ColognePhoneticEx](ColognePhoneticEx.html)
**Cologne phonetics** (also Kölner Phonetik, Cologne process) is a phonetic algorithm which assigns to words a sequence of digits, the phonetic code.
ColognePhoneticEx
===
**Cologne phonetics** (also Kölner Phonetik, Cologne process) is a phonetic algorithm which assigns to words a sequence of digits, the phonetic code.
The aim of this procedure is that identical sounding words have the same code assigned to them. The algorithm can be used to perform a similarity search between words. For example, it is possible in a name list to find entries like “Meier” under different spellings such as “Maier”, “Mayer”, or “Mayr”.
The Cologne phonetics is related to the well-known Soundex phonetic algorithm but is optimized to match the German language.
[de.wikipedia.org/wiki/Kölner_Phonetik](http://de.wikipedia.org/wiki/K%C3%B6lner_Phonetik)
Copyright © 2018 <NAME>. All rights reserved.
Summary
===
[Functions](#functions)
---
[as_cologne_phonetic(term)](#as_cologne_phonetic/1)
Calculates and returns the “Cologne Phonetic” (Kölner Phonetik) code for the given string.
It’s the phonetic code for the German language.
Functions
===
as_cologne_phonetic(term)
```
as_cologne_phonetic(String.t()) :: String.t()
```
Calculates and returns the “Cologne Phonetic” (Kölner Phonetik) code for the given string.
It’s the phonetic code for the German language.
Examples
---
```
iex> ColognePhoneticEx.as_cologne_phonetic("Bühler")
"157"
```
Package ‘dGAselID’
October 13, 2022
Type Package
Title Genetic Algorithm with Incomplete Dominance for Feature
Selection
Version 1.2
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Feature selection from high dimensional data using a diploid
genetic algorithm with Incomplete Dominance for genotype to phenotype mapping
and Random Assortment of chromosomes approach to recombination.
Depends R (>= 3.3.1), Biobase, MLInterfaces
Imports genefilter, ALL, grDevices, graphics, stats, utils
License MIT + file LICENSE
LazyData TRUE
RoxygenNote 5.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2017-07-10 05:02:55 UTC
R topics documented:
AnalyzeResults
Crossover
dGAselID
Elitism
EmbryonicSelection
EvaluationFunction
frameShiftMutation
Individuals
InitialPopulation
largeSegmentDeletion
nonSenseMutation
PlotGenAlg
pointMutation
RandomAssortment
RandomizePop
splitChromosomes
transposon
wholeChromosomeDeletion
AnalyzeResults AnalyzeResults
Description
Ranks individuals according to their fitness and records the results.
Usage
AnalyzeResults(individuals, results, randomAssortment = TRUE, chrConf)
Arguments
individuals Population of individuals with diploid genotypes.
results Results returned by EvaluationFunction().
randomAssortment
Random Assortment of Chromosomes for recombinations. The default value is
TRUE.
chrConf Configuration of chromosomes returned by splitChromosomes().
Examples
## Not run:
library(genefilter)
library(ALL)
data(ALL)
bALL = ALL[, substr(ALL$BT,1,1) == "B"]
smallALL = bALL[, bALL$mol.biol %in% c("BCR/ABL", "NEG")]
smallALL$mol.biol = factor(smallALL$mol.biol)
smallALL$BT = factor(smallALL$BT)
f1 <- pOverA(0.25, log2(100))
f2 <- function(x) (IQR(x) > 0.5)
f3 <- ttest(smallALL$mol.biol, p=0.1)
ff <- filterfun(f1, f2, f3)
selectedsmallALL <- genefilter(exprs(smallALL), ff)
smallALL = smallALL[selectedsmallALL, ]
rm(f1)
rm(f2)
rm(f3)
rm(ff)
rm(bALL)
sum(selectedsmallALL)
set.seed(1357)
population0<-InitialPopulation(smallALL, 14, 10, FALSE)
individuals0<-Individuals(population0)
results0<-EvaluationFunction(smallALL, individuals0, response="mol.biol",
method=knn.cvI(k=3, l=2), trainTest="LOG")
chrConf0<-splitChromosomes(smallALL)
iterRes0<-AnalyzeResults(individuals0, results0, randomAssortment=TRUE, chrConf0)
## End(Not run)
Crossover Crossover
Description
Two-point crossover operator.
Usage
Crossover(c1, c2, chrConf)
Arguments
c1 Set of chromosomes.
c2 Set of chromosomes.
chrConf Configuration of chromosomes returned by splitChromosomes().
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
set.seed(1357)
population02<-InitialPopulation(demoALL, 2, 4, FALSE)
chrConf02<-splitChromosomes(demoALL, 2)
chrConf02
population02[1:2,]
Crossover(population02[1,], population02[2,], chrConf02)
## End(Not run)
dGAselID dGAselID
Description
Initializes and starts the search with the genetic algorithm.
Usage
dGAselID(x, response, method = knn.cvI(k = 3, l = 2), trainTest = "LOG",
startGenes, populationSize, iterations, noChr = 22, elitism = NA,
ID = "ID1", pMutationChance = 0, nSMutationChance = 0,
fSMutationChance = 0, lSDeletionChance = 0, wChrDeletionChance = 0,
transposonChance = 0, randomAssortment = TRUE, embryonicSelection = NA,
EveryGeneInInitialPopulation = TRUE, nnetSize = NA, nnetDecay = NA,
rdaAlpha = NA, rdaDelta = NA, ...)
Arguments
x Dataset in ExpressionSet format.
response Response variable.
method Supervised classifier for fitness evaluation. Most of the supervised classifiers in
MLInterfaces are acceptable. The default is knn.cvI(k=3, l=2).
trainTest Cross-validation method. The default is "LOG".
startGenes Genes in the genotypes at initialization.
populationSize Number of genotypes in initial population.
iterations Number of iterations.
noChr Number of chromosomes. The default value is 22.
elitism Elite population in percentages.
ID Dominance. The default value is "ID1". Use "ID2" for Incomplete Dominance.
pMutationChance
Chance for a Point Mutation to occur. The default value is 0.
nSMutationChance
Chance for a Non-sense Mutation to occur. The default value is 0.
fSMutationChance
Chance for a Frameshift Mutation to occur. The default value is 0.
lSDeletionChance
Chance for a Large Segment Deletion to occur. The default value is 0.
wChrDeletionChance
Chance for a Whole Chromosome Deletion to occur. The default value is 0.
transposonChance
Chance for a Transposon Mutation to occur. The default value is 0.
randomAssortment
Random Assortment of Chromosomes for recombinations. The default value is
TRUE.
embryonicSelection
Remove chromosomes with fitness < specified value. The default value is NA.
EveryGeneInInitialPopulation
Request for every gene to be present in the initial population. The default value
is TRUE.
nnetSize for nnetI. The default value is NA.
nnetDecay for nnetI. The default value is NA.
rdaAlpha for rdaI. The default value is NA.
rdaDelta for rdaI. The default value is NA.
... Additional arguments.
Value
The output is a list recording the evolution, with the following named components:
DGenes The occurrences in selected genotypes for every gene,
dGenes The occurrences in discarded genotypes for every gene,
MaximumAccuracy
Maximum accuracy in every generation,
MeanAccuracy Average accuracy in every generation,
MinAccuracy Minimum accuracy in every generation,
BestIndividuals
Best individual in every generation.
Examples
## Not run:
library(genefilter)
library(ALL)
data(ALL)
bALL = ALL[, substr(ALL$BT,1,1) == "B"]
smallALL = bALL[, bALL$mol.biol %in% c("BCR/ABL", "NEG")]
smallALL$mol.biol = factor(smallALL$mol.biol)
smallALL$BT = factor(smallALL$BT)
f1 <- pOverA(0.25, log2(100))
f2 <- function(x) (IQR(x) > 0.5)
f3 <- ttest(smallALL$mol.biol, p=0.1)
ff <- filterfun(f1, f2, f3)
selectedsmallALL <- genefilter(exprs(smallALL), ff)
smallALL = smallALL[selectedsmallALL, ]
rm(f1)
rm(f2)
rm(f3)
rm(ff)
rm(bALL)
sum(selectedsmallALL)
set.seed(149)
res<-dGAselID(smallALL, "mol.biol", trainTest=1:79, startGenes=12, populationSize=200,
iterations=150, noChr=5, pMutationChance=0.0075, elitism=4)
## End(Not run)
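The components recorded in the returned list (see Value) can be inspected directly after a run. A
minimal sketch, assuming the res object from the example above:
## Not run:
## Evolution of the accuracy records across generations
plot(res$MaximumAccuracy, type = "l", ylim = c(0, 1),
     xlab = "Generation", ylab = "Accuracy")
lines(res$MeanAccuracy, lty = 2)
lines(res$MinAccuracy, lty = 3)
## Genes most frequently occurring in selected genotypes
head(sort(res$DGenes, decreasing = TRUE))
## End(Not run)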
Elitism Elitism
Description
Operator for elitism.
Usage
Elitism(results, elitism, ID)
Arguments
results Results returned by EvaluationFunction().
elitism Elite population in percentages.
ID Dominance. The default value is "ID1". Use "ID2" for Incomplete Dominance.
Examples
## Not run:
library(genefilter)
library(ALL)
data(ALL)
bALL = ALL[, substr(ALL$BT,1,1) == "B"]
smallALL = bALL[, bALL$mol.biol %in% c("BCR/ABL", "NEG")]
smallALL$mol.biol = factor(smallALL$mol.biol)
smallALL$BT = factor(smallALL$BT)
f1 <- pOverA(0.25, log2(100))
f2 <- function(x) (IQR(x) > 0.5)
f3 <- ttest(smallALL$mol.biol, p=0.1)
ff <- filterfun(f1, f2, f3)
selectedsmallALL <- genefilter(exprs(smallALL), ff)
smallALL = smallALL[selectedsmallALL, ]
rm(f1)
rm(f2)
rm(f3)
rm(ff)
rm(bALL)
sum(selectedsmallALL)
set.seed(1357)
population0<-InitialPopulation(smallALL, 14, 8, FALSE)
individuals0<-Individuals(population0)
results0<-EvaluationFunction(smallALL, individuals0, response="mol.biol",
method=knn.cvI(k=3, l=2), trainTest="LOG")
Elitism(results0, 25, ID="ID1")
Elitism(results0, 25, ID="ID2")
## End(Not run)
EmbryonicSelection EmbryonicSelection
Description
Function for deleting individuals with a fitness below a specified threshold.
Usage
EmbryonicSelection(population, results, embryonicSelection)
Arguments
population Population of individuals with diploid genotypes.
results Results returned by EvaluationFunction().
embryonicSelection
Threshold value. The default value is NA.
Examples
## Not run:
library(genefilter)
library(ALL)
data(ALL)
bALL = ALL[, substr(ALL$BT,1,1) == "B"]
smallALL = bALL[, bALL$mol.biol %in% c("BCR/ABL", "NEG")]
smallALL$mol.biol = factor(smallALL$mol.biol)
smallALL$BT = factor(smallALL$BT)
f1 <- pOverA(0.25, log2(100))
f2 <- function(x) (IQR(x) > 0.5)
f3 <- ttest(smallALL$mol.biol, p=0.1)
ff <- filterfun(f1, f2, f3)
selectedsmallALL <- genefilter(exprs(smallALL), ff)
smallALL = smallALL[selectedsmallALL, ]
rm(f1)
rm(f2)
rm(f3)
rm(ff)
rm(bALL)
sum(selectedsmallALL)
set.seed(1357)
population0<-InitialPopulation(smallALL, 14, 8, FALSE)
individuals0<-Individuals(population0)
results0<-EvaluationFunction(smallALL, individuals0, response="mol.biol",
method=knn.cvI(k=3, l=2), trainTest="LOG")
EmbryonicSelection(individuals0, results0, 0.5)
## End(Not run)
EvaluationFunction EvaluationFunction
Description
Evaluates the individuals’ fitnesses.
Usage
EvaluationFunction(x, individuals, response, method, trainTest, nnetSize = NA,
nnetDecay = NA, rdaAlpha = NA, rdaDelta = NA, ...)
Arguments
x Dataset in ExpressionSet format.
individuals Population of individuals with diploid genotypes.
response Response variable.
method Supervised classifier for fitness evaluation. Most of the supervised classifiers in
MLInterfaces are acceptable. The default is knn.cvI(k=3, l=2).
trainTest Cross-validation method. The default is "LOG".
nnetSize for nnetI. The default value is NA.
nnetDecay for nnetI. The default value is NA.
rdaAlpha for rdaI. The default value is NA.
rdaDelta for rdaI. The default value is NA.
... Additional arguments.
Examples
## Not run:
library(genefilter)
library(ALL)
data(ALL)
bALL = ALL[, substr(ALL$BT,1,1) == "B"]
smallALL = bALL[, bALL$mol.biol %in% c("BCR/ABL", "NEG")]
smallALL$mol.biol = factor(smallALL$mol.biol)
smallALL$BT = factor(smallALL$BT)
f1 <- pOverA(0.25, log2(100))
f2 <- function(x) (IQR(x) > 0.5)
f3 <- ttest(smallALL$mol.biol, p=0.1)
ff <- filterfun(f1, f2, f3)
selectedsmallALL <- genefilter(exprs(smallALL), ff)
smallALL = smallALL[selectedsmallALL, ]
rm(f1)
rm(f2)
rm(f3)
rm(ff)
rm(bALL)
sum(selectedsmallALL)
set.seed(1357)
population0<-InitialPopulation(smallALL, 14, 8, FALSE)
individuals0<-Individuals(population0)
results<-EvaluationFunction(smallALL, individuals0, response="mol.biol",
method=knn.cvI(k=3, l=2), trainTest="LOG")
## End(Not run)
frameShiftMutation frameShiftMutation
Description
Operator for the frameshift mutation.
Usage
frameShiftMutation(individuals, chrConf, mutationChance)
Arguments
individuals dataset returned by Individuals().
chrConf Configuration of chromosomes returned by splitChromosomes().
mutationChance Chance for a frameshift mutation to occur.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
set.seed(1234)
population<-InitialPopulation(demoALL, 4, 9)
individuals<-Individuals(population)
chrConf<-splitChromosomes(demoALL, 2)
chrConf
individuals
set.seed(123)
frameShiftMutation(individuals, chrConf, 20)
## End(Not run)
Individuals Individuals
Description
Generates individuals with diploid genotypes.
Usage
Individuals(population)
Arguments
population Population of haploid genotypes.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
population02<-InitialPopulation(demoALL, 20, 4, FALSE)
individuals02<-Individuals(population02)
## End(Not run)
InitialPopulation InitialPopulation
Description
Generates a random initial population of haploid genotypes.
Usage
InitialPopulation(x, populationSize, startGenes,
EveryGeneInInitialPopulation = TRUE)
Arguments
x Dataset in ExpressionSet format.
populationSize Number of genotypes in initial population.
startGenes Genes in the genotypes at initialization.
EveryGeneInInitialPopulation
Request for every gene to be present in the initial population. The default value
is TRUE.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
population01<-InitialPopulation(demoALL, 4, 4)
population02<-InitialPopulation(demoALL, 20, 4, FALSE)
## End(Not run)
largeSegmentDeletion largeSegmentDeletion
Description
Operator for the large segment deletion.
Usage
largeSegmentDeletion(individuals, chrConf, mutationChance)
Arguments
individuals dataset returned by Individuals().
chrConf Configuration of chromosomes returned by splitChromosomes().
mutationChance Chance for a large segment deletion mutation to occur.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
set.seed(1234)
population<-InitialPopulation(demoALL, 4, 9)
individuals<-Individuals(population)
chrConf<-splitChromosomes(demoALL, 2)
chrConf
individuals
set.seed(123)
largeSegmentDeletion(individuals, chrConf, 20)
## End(Not run)
nonSenseMutation nonSenseMutation
Description
Operator for the nonsense mutation.
Usage
nonSenseMutation(individuals, chrConf, mutationChance)
Arguments
individuals dataset returned by Individuals().
chrConf Configuration of chromosomes returned by splitChromosomes().
mutationChance Chance for a nonsense mutation to occur.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
set.seed(1234)
population<-InitialPopulation(demoALL, 4, 9)
individuals<-Individuals(population)
chrConf<-splitChromosomes(demoALL, 2)
chrConf
individuals
set.seed(123)
nonSenseMutation(individuals, chrConf, 20)
## End(Not run)
PlotGenAlg PlotGenAlg
Description
Function for graphically representing the evolution.
Usage
PlotGenAlg(DGenes, dGenes, maxEval, meanEval)
Arguments
DGenes Occurrences of genes as dominant.
dGenes Occurrences of genes as recessive. For future developments.
maxEval Maximum fitness.
meanEval Average fitness.
Examples
## Not run:
#Graphical representation of the evolution after each generation.
#Intended to be used by dGAselID() only.
#Please refer to the example for dGAselID().
## End(Not run)
pointMutation pointMutation
Description
Operator for the point mutation.
Usage
pointMutation(individuals, mutationChance)
Arguments
individuals dataset returned by Individuals().
mutationChance Chance for a point mutation to occur.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
set.seed(1234)
population<-InitialPopulation(demoALL, 4, 9)
individuals<-Individuals(population)
individuals
set.seed(123)
pointMutation(individuals, 4)
## End(Not run)
RandomAssortment RandomAssortment
Description
Random assortment of chromosomes operator.
Usage
RandomAssortment(newChrs, chrConf)
Arguments
newChrs Set of chromosomes.
chrConf Configuration of chromosomes returned by splitChromosomes().
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
population02<-InitialPopulation(demoALL, 2, 4, FALSE)
chrConf02<-splitChromosomes(demoALL, 4)
set.seed(1357)
cr1<-Crossover(population02[1,], population02[2,], chrConf02)
RandomAssortment(cr1, chrConf02)
cr1
chrConf02
## End(Not run)
RandomizePop RandomizePop
Description
Generates a random population for the next generation.
Usage
RandomizePop(population)
Arguments
population Population of chromosome sets in current generation.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
population01<-InitialPopulation(demoALL, 4, 4)
population01
RandomizePop(population01)
## End(Not run)
splitChromosomes splitChromosomes
Description
Divides the genotypes into sets with a desired number of chromosomes.
Usage
splitChromosomes(x, noChr = 22)
Arguments
x Dataset in ExpressionSet format.
noChr Desired number of chromosomes. The default value is 22.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
splitChromosomes(demoALL, 3)
splitChromosomes(demoALL)
## End(Not run)
transposon transposon
Description
Operator for transposons.
Usage
transposon(individuals, chrConf, mutationChance)
Arguments
individuals dataset returned by Individuals().
chrConf Configuration of chromosomes returned by splitChromosomes().
mutationChance Chance for a transposon mutation to occur.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
set.seed(1234)
population<-InitialPopulation(demoALL, 4, 9)
individuals<-Individuals(population)
chrConf<-splitChromosomes(demoALL, 2)
chrConf
individuals
set.seed(123)
transposon(individuals, chrConf, 20)
## End(Not run)
wholeChromosomeDeletion
wholeChromosomeDeletion
Description
Operator for the deletion of a whole chromosome.
Usage
wholeChromosomeDeletion(individuals, chrConf, mutationChance)
Arguments
individuals dataset returned by Individuals().
chrConf Configuration of chromosomes returned by splitChromosomes().
mutationChance Chance for a deletion of a whole chromosome mutation to occur.
Examples
## Not run:
library(ALL)
data(ALL)
demoALL<-ALL[1:12,1:8]
set.seed(1234)
population<-InitialPopulation(demoALL, 4, 9)
individuals<-Individuals(population)
chrConf<-splitChromosomes(demoALL, 2)
chrConf
individuals
set.seed(123)
wholeChromosomeDeletion(individuals, chrConf, 20)
## End(Not run)
Package ‘unitquantreg’
September 6, 2023
Title Parametric Quantile Regression Models for Bounded Data
Version 0.0.6
Maintainer <NAME> <<EMAIL>>
Description
A collection of parametric quantile regression models for bounded data. At present, the package
provides 13 parametric quantile regression models. It can specify regression structures for any
quantile and for the shape parameters. It also provides several S3 methods to extract information
from fitted models, such as residual analysis, prediction, plotting, and model comparison. For
computational efficiency the [dpqr]'s, likelihood, score and hessian functions are written in C++.
For further details see Mazucheli et al. (2022) <doi:10.1016/j.cmpb.2022.106816>.
License Apache License (>= 2)
Encoding UTF-8
ByteCompile yes
LazyData true
LinkingTo Rcpp
Imports Rcpp, optimx, stats, quantreg, Formula, MASS, numDeriv
Suggests testthat (>= 3.0.0), rmarkdown, knitr, lmtest, ggplot2, covr
Depends R (>= 3.5.0)
RoxygenNote 7.2.1
NeedsCompilation yes
URL https://andrmenezes.github.io/unitquantreg/
BugReports https://github.com/AndrMenezes/unitquantreg/issues
VignetteBuilder knitr
Author <NAME> [aut, cre]
(<https://orcid.org/0000-0002-3320-9834>),
<NAME> [aut] (<https://orcid.org/0000-0001-6740-0445>)
Repository CRAN
Date/Publication 2023-09-06 09:10:02 UTC
R topics documented:
unitquantreg-package
ashw
bodyfat
hnp
johnsonsb
kum
leeg
likelihood_stats
loglike_unitquantreg
methods-unitquantreg
pairwise.vuong.test
plot.unitquantreg
plot.unitquantregs
predict.unitquantreg
residuals.unitquantreg
sim_bounded
ubs
uburrxii
uchen
ughne
ughnx
ugompertz
ugumbel
ulogistic
unitquantreg
unitquantreg.control
uweibull
vuong.test
water
unitquantreg-package Overview of the unitquantreg package
Description
The unitquantreg R package provides a collection of parametric quantile regression models for
bounded data. At present, the package provides 13 parametric quantile regression models. It also
enables several S3 methods to extract information from fitted models, such as residual analysis,
prediction, plotting, and model comparison.
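A minimal quick-start, assembled from the examples that appear later in this manual:
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
fit <- unitquantreg(formula = y1 ~ x, tau = 0.5,
                    data = sim_bounded_curr, family = "uweibull")
summary(fit)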
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
ashw The arcsecant hyperbolic Weibull distribution
Description
Density function, distribution function, quantile function and random number generation function
for the arcsecant hyperbolic Weibull distribution reparametrized in terms of the τ -th quantile, τ ∈
(0, 1).
Usage
dashw(x, mu, theta, tau = 0.5, log = FALSE)
pashw(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qashw(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rashw(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta shape parameter.
tau the parameter to specify which quantile to use in the parametrization.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function
$$f(y; \alpha, \theta) = \frac{\alpha\theta\,\operatorname{arcsech}(y)^{\theta-1}}{y\sqrt{1-y^{2}}}\exp\left[-\alpha\,\operatorname{arcsech}(y)^{\theta}\right]$$
Cumulative distribution function
$$F(y; \alpha, \theta) = \exp\left[-\alpha\,\operatorname{arcsech}(y)^{\theta}\right]$$
Quantile function
$$Q(\tau; \alpha, \theta) = \operatorname{sech}\left\{\left[-\alpha^{-1}\log(\tau)\right]^{1/\theta}\right\}$$
Reparameterization
$$\alpha = g^{-1}(\mu) = -\frac{\log(\tau)}{\operatorname{arcsech}(\mu)^{\theta}}$$
where $\theta > 0$ is the shape parameter and $\operatorname{arcsech}(y) = \log\left[\left(1 + \sqrt{1-y^{2}}\right)/y\right]$.
Value
dashw gives the density, pashw gives the distribution function, qashw gives the quantile function
and rashw generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME>
<NAME>
References
<NAME>., <NAME>. and <NAME>., (2021). A new alternative quantile regression
model for the bounded response with educational measurements applications of OECD countries.
Journal of Applied Statistics, 1–25.
Examples
set.seed(6969)
x <- rashw(n = 1000, mu = 0.5, theta = 2.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1L], to = R[2L], by = 0.01)
hist(x, prob = TRUE, main = 'arcsecant hyperbolic Weibull')
lines(S, dashw(x = S, mu = 0.5, theta = 2.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pashw(q = S, mu = 0.5, theta = 2.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qashw(p = S, mu = 0.5, theta = 2.5, tau = 0.5), col = 2)
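By the reparameterization above, mu is exactly the tau-th quantile, and pashw() and qashw() are
inverses of each other. A quick numerical sanity check (a sketch using only the functions
documented here; the indicated values are what the formulas imply, not captured output):
pashw(0.5, mu = 0.5, theta = 2.5, tau = 0.5)                    # should be ~0.5
qashw(pashw(0.3, mu = 0.5, theta = 2.5), mu = 0.5, theta = 2.5) # should be ~0.3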
bodyfat Percentage of body fat data set
Description
The body fat percentage of individuals assisted in a public hospital in Curitiba, Paraná, Brasil.
Usage
data(bodyfat, package = "unitquantreg")
Format
A data.frame with 298 observations and 9 columns:
• arms: Arms fat percentage.
• legs: Legs fat percentage.
• body: Body fat percentage.
• android: Android fat percentage.
• gynecoid: Gynecoid fat percentage.
• bmi: Body mass index - 24.71577.
• age: Age - 46.00.
• sex: Sex of individual. Female or male.
• ipaq: Factor variable indicating physical activity level: sedentary, insufficiently active or active.
Author(s)
<NAME>
<NAME>
Source
http://www.leg.ufpr.br/doku.php/publications:papercompanions:multquasibeta
References
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., (2020). Multi-
variate quasi-beta regression models for continuous bounded data. The International Journal of
Biostatistics, 1–15, (preprint).
<NAME>., <NAME>., <NAME>., and <NAME>., (2021). A new quantile regression for
modeling bounded data under a unit Birnbaum-Saunders distribution with applications in medicine
and politics. Symmetry, 13(4) 1–21.
hnp (Half-)Normal probability plots with simulated envelopes for
unitquantreg objects
Description
Produces a (half-)normal probability plot from a fitted model object of class unitquantreg.
Usage
hnp(object, ...)
## S3 method for class 'unitquantreg'
hnp(
object,
nsim = 99,
halfnormal = TRUE,
plot = TRUE,
output = TRUE,
level = 0.95,
resid.type = c("quantile", "cox-snell"),
...
)
Arguments
object fitted model object of class unitquantreg.
... currently not used.
nsim number of simulations used to compute envelope. Default is 99.
halfnormal logical. If TRUE, a half-normal plot is produced. If FALSE, a normal plot is
produced.
plot Should the (half-)normal plot be plotted? Default is TRUE.
output Should the output be returned? Default is TRUE.
level confidence level of the simulated envelope. Default is 0.95.
resid.type type of residuals to be used. The default is quantile. See residuals.unitquantreg
for further details.
Details
Residual plots with simulated envelope were proposed by Atkinson (1981) and can be constructed
as follows:
1. generate sample set of n independent observations from the estimated parameters of the fitted
model;
2. fit the model using the generated sample, if halfnormal is TRUE compute the absolute values
of the residuals and arrange them in order;
3. repeat steps (1) and (2) nsim number of times;
4. consider the n sets of the nsim order statistics of the residuals, then for each set compute
the (1 - level)/2 quantile, the median and the (1 + level)/2 quantile;
5. plot these values and the ordered residuals of the original sample set versus the expected order
statistics of a (half-)normal distribution, which are approximated as
$$G^{-1}\left(\frac{i + n - 0.125}{2n + 0.5}\right)$$
for half-normal plots, i.e., halfnormal=TRUE, or
$$G^{-1}\left(\frac{i - 0.375}{n + 0.25}\right)$$
for normal plots, i.e., halfnormal=FALSE, where G(·) is the cumulative distribution function of
the standard Normal distribution for quantile residuals or of the standard exponential distribution
for the cox-snell residuals.
According to Atkinson (1981), if the model is correctly specified then no more than
(1 - level) × 100% of the observations are expected to appear outside the envelope bands.
Additionally, if a large proportion of the observations lies outside the envelope, one has evidence
against the adequacy of the fitted model.
Value
A list with the following components (values are ordered, and in absolute value if halfnormal is TRUE):
obs the observed residuals.
teo the theoretical residuals.
lower lower envelope band.
median median envelope band.
upper upper envelope band.
time_elapsed time elapsed to fit the nsim models.
Author(s)
<NAME>
References
Atkinson, <NAME>., (1981). Two graphical displays for outlying and influential observations in
regression. Biometrika, 68(1), 13–20.
See Also
residuals.unitquantreg
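Examples
A sketch of a typical call, reusing the sim_bounded example data from elsewhere in this manual
(the argument values are the documented defaults):
## Not run:
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
fit <- unitquantreg(y1 ~ x, tau = 0.5, data = sim_bounded_curr,
                    family = "uweibull")
hnp(fit, nsim = 99, halfnormal = TRUE, resid.type = "quantile")
## End(Not run)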
johnsonsb The Johnson SB distribution
Description
Density function, distribution function, quantile function and random number generation function
for the Johnson SB distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
djohnsonsb(x, mu, theta, tau = 0.5, log = FALSE)
pjohnsonsb(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qjohnsonsb(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rjohnsonsb(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function
$$f(y \mid \alpha, \theta) = \frac{\theta}{\sqrt{2\pi}}\,\frac{1}{y(1-y)}\exp\left\{-\frac{1}{2}\left[\alpha + \theta\log\left(\frac{y}{1-y}\right)\right]^{2}\right\}$$
Cumulative distribution function
$$F(y \mid \alpha, \theta) = \Phi\left[\alpha + \theta\log\left(\frac{y}{1-y}\right)\right]$$
Quantile function
$$Q(\tau \mid \alpha, \theta) = \frac{\exp\left[\frac{\Phi^{-1}(\tau)-\alpha}{\theta}\right]}{1 + \exp\left[\frac{\Phi^{-1}(\tau)-\alpha}{\theta}\right]}$$
Reparameterization
$$\alpha = g^{-1}(\mu) = \Phi^{-1}(\tau) - \theta\log\left(\frac{\mu}{1-\mu}\right)$$
Value
djohnsonsb gives the density, pjohnsonsb gives the distribution function, qjohnsonsb gives the
quantile function and rjohnsonsb generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME>
<NAME>
References
<NAME>. and <NAME>., (2015). New class of Johnson SB distributions and its associated
regression model for rates and proportions. Biometrical Journal, 58(4), 727–746.
<NAME>., (1949). Systems of frequency curves generated by methods of translation. Biometrika,
36(1), 149–176.
Examples
set.seed(123)
x <- rjohnsonsb(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'Johnson SB')
lines(S, djohnsonsb(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pjohnsonsb(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qjohnsonsb(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
kum The Kumaraswamy distribution
Description
Density function, distribution function, quantile function and random number generation for the
Kumaraswamy distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
dkum(x, mu, theta, tau = 0.5, log = FALSE)
pkum(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qkum(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rkum(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function
$$f(y \mid \alpha, \theta) = \alpha\theta y^{\theta-1}\left(1-y^{\theta}\right)^{\alpha-1}$$
Cumulative distribution function
$$F(y \mid \alpha, \theta) = 1 - \left(1-y^{\theta}\right)^{\alpha}$$
Quantile function
$$Q(\tau \mid \alpha, \theta) = \left[1-(1-\tau)^{1/\alpha}\right]^{1/\theta}$$
Reparameterization
$$\alpha = g^{-1}(\mu) = \frac{\log(1-\tau)}{\log\left(1-\mu^{\theta}\right)}$$
Value
dkum gives the density, pkum gives the distribution function, qkum gives the quantile function and
rkum generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME>
<NAME>
References
<NAME>., (1980). A generalized probability density function for double-bounded random
processes. Journal of Hydrology, 46(1), 79–88.
<NAME>., (2009). Kumaraswamy’s distribution: A beta-type distribution with some tractability
advantages. Statistical Methodology, 6(1), 70-81.
Examples
set.seed(123)
x <- rkum(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'Kumaraswamy')
lines(S, dkum(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pkum(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qkum(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
leeg The Log-extended exponential-geometric distribution
Description
Density function, distribution function, quantile function and random number generation function
for the Log-extended exponential-geometric distribution reparametrized in terms of the τ -th quan-
tile, τ ∈ (0, 1).
Usage
dleeg(x, mu, theta, tau = 0.5, log = FALSE)
pleeg(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qleeg(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rleeg(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function
$$f(y \mid \alpha, \theta) = \frac{\theta(1+\alpha)y^{\theta-1}}{\left(1+\alpha y^{\theta}\right)^{2}}$$
Cumulative distribution function
$$F(y \mid \alpha, \theta) = \frac{(1+\alpha)y^{\theta}}{1+\alpha y^{\theta}}$$
Quantile function
$$Q(\tau \mid \alpha, \theta) = \left[\frac{\tau}{1+\alpha(1-\tau)}\right]^{1/\theta}$$
Reparameterization
$$\alpha = g^{-1}(\mu) = -\frac{\mu^{\theta}-\tau}{\mu^{\theta}(1-\tau)}$$
Value
dleeg gives the density, pleeg gives the distribution function, qleeg gives the quantile function
and rleeg generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>. and <NAME>., (2020). A quantile regression model for bounded responses
based on the exponential-geometric distribution. Revstat - Statistical Journal, 18(4), 415–436.
Examples
set.seed(123)
x <- rleeg(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'Log-extended exponential-geometric')
lines(S, dleeg(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pleeg(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qleeg(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
likelihood_stats Likelihood-based statistics of fit for unitquantreg objects.
Description
Computes the likelihood-based statistics (Neg2LogLike, AIC, BIC and HQIC) from unitquantreg
objects.
Usage
likelihood_stats(..., lt = NULL)
## S3 method for class 'likelihood_stats'
print(x, ...)
Arguments
... unitquantreg objects separated by commas. Not used in the print method.
lt a list with one or more unitquantreg objects.
x object of class likelihood_stats obtained from likelihood_stats function.
Details
Neg2LogLike: Minus twice the log-likelihood is reported as
$$\mathrm{Neg2LogLike} = -2\log(L)$$
AIC: The Akaike information criterion (AIC) is defined as
$$\mathrm{AIC} = -2\log(L) + 2p$$
BIC: The Schwarz Bayesian information criterion (BIC) is defined as
$$\mathrm{BIC} = -2\log(L) + p\log(n)$$
HQIC: The Hannan–Quinn information criterion (HQIC) is defined as
$$\mathrm{HQIC} = -2\log(L) + 2p\log[\log(n)]$$
where L is the likelihood function, p is the number of estimated parameters and n is the sample size.
Value
A list with class "likelihood_stats" containing the following components:
call the matched call.
stats matrix, ordered according to AIC value, containing the likelihood-based statistics.
Author(s)
<NAME>
<NAME>
References
<NAME>. (1974). A new look at the statistical model identification. IEEE Transaction on Auto-
matic Control, 19(6), 716–723.
<NAME>. and <NAME>. (1979). The determination of the order of an autoregression. Journal
of the Royal Statistical Society, Series B, 41(2), 190–195.
<NAME>. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461–464.
Examples
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
models <- c("uweibull", "kum", "ulogistic")
lt_fits <- lapply(models, function(fam) {
unitquantreg(formula = y1 ~ x, tau = 0.5, data = sim_bounded_curr,
family = fam)
})
ans <- likelihood_stats(lt = lt_fits)
ans
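The reported statistics can be reproduced by hand from the formulas in Details using only the
documented S3 methods (a sketch; the number of parameters p and sample size n are derived here
rather than read from the object):
ll <- as.numeric(logLik(lt_fits[[1]]))
p  <- length(coef(lt_fits[[1]]))        # number of estimated parameters
n  <- nrow(model.frame(lt_fits[[1]]))   # sample size
c(Neg2LogLike = -2 * ll,
  AIC  = -2 * ll + 2 * p,
  BIC  = -2 * ll + p * log(n),
  HQIC = -2 * ll + 2 * p * log(log(n)))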
loglike_unitquantreg Log-likelihood, score vector and hessian matrix.
Description
Internal functions used in unitquantreg.fit to compute the negative log-likelihood function, the
score vector and the hessian matrix using analytic expressions written in C++.
Usage
loglike_unitquantreg(par, tau, family, linkobj, linkobj.theta, X, Z, y)
Arguments
par vector of regression model coefficients for µ and/or θ.
tau quantile level, value between 0 and 1.
family specify the distribution family name.
linkobj, linkobj.theta
a function, usually obtained from make.link for link function of µ and θ, re-
spectively.
X design matrix related to the µ parameter.
Z design matrix related to the θ parameter.
y vector of response variable.
methods-unitquantreg Methods for unitquantreg and unitquantregs objects
Description
Methods for extracting information from fitted regression models objects of class unitquantreg
and unitquantregs.
Usage
## S3 method for class 'unitquantreg'
print(x, digits = max(4, getOption("digits") - 3), ...)
## S3 method for class 'unitquantreg'
summary(object, correlation = FALSE, ...)
## S3 method for class 'unitquantreg'
coef(object, type = c("full", "quantile", "shape"), ...)
## S3 method for class 'unitquantreg'
vcov(object, ...)
## S3 method for class 'unitquantreg'
logLik(object, ...)
## S3 method for class 'unitquantreg'
confint(object, parm, level = 0.95, ...)
## S3 method for class 'unitquantreg'
fitted(object, type = c("quantile", "shape", "full"), ...)
## S3 method for class 'unitquantreg'
terms(x, type = c("quantile", "shape"), ...)
## S3 method for class 'unitquantreg'
model.frame(formula, ...)
## S3 method for class 'unitquantreg'
model.matrix(object, type = c("quantile", "shape"), ...)
## S3 method for class 'unitquantreg'
update(object, formula., ..., evaluate = TRUE)
## S3 method for class 'unitquantregs'
print(x, digits = max(3, getOption("digits") - 3), ...)
## S3 method for class 'unitquantregs'
summary(object, digits = max(3, getOption("digits") - 3), ...)
Arguments
digits minimal number of significant digits.
... additional argument(s) for methods. Currently not used.
object, x fitted model object of class unitquantreg.
correlation logical; if TRUE, the correlation matrix of the estimated parameters is returned
and printed. Default is FALSE.
type character indicating type of fitted values to return.
parm a specification of which parameters are to be given confidence intervals, either
a vector of numbers or a vector of names. If missing, all parameters are consid-
ered.
level the confidence level required.
formula an R formula.
formula. Changes to the formula; see update.formula for details.
evaluate If TRUE evaluate the new call, else return the call.
Value
The summary method gives Wald tests for the regression coefficients based on the observed Fisher
information matrix. As usual the summary method returns a list with relevant model statistics and
estimates, which can be printed using the print method.
The coef, vcov, confint and fitted methods can be used to extract, respectively, the estimated
coefficients, the estimated covariance matrix, the Wald confidence intervals, and the fitted values.
A logLik method is also provided, so the AIC function can be used to compute the Akaike
Information Criterion.
The generic methods terms, model.frame, model.matrix and update are also provided.
Author(s)
<NAME>
Examples
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
fit_1 <- unitquantreg(formula = y1 ~ x + z + I(x^2) | z + x,
data = sim_bounded_curr,
family = "uweibull",
tau = 0.5, link.theta = "log")
fit_1
summary(fit_1)
vcov(fit_1)
coef(fit_1)
confint(fit_1)
terms(fit_1)
model.frame(fit_1)[1:5, ]
model.matrix(fit_1)[1:5, ]
update(fit_1, . ~ . -x)
update(fit_1, . ~ . -z)
update(fit_1, . ~ . -I(x^2))
update(fit_1, . ~ . | . -z)
update(fit_1, . ~ . | . -x)
pairwise.vuong.test Pairwise Vuong test
Description
Calculates pairwise comparisons between fitted models, performing Vuong tests for objects of class
unitquantreg.
Usage
pairwise.vuong.test(
...,
lt,
p.adjust.method = p.adjust.methods,
alternative = c("two.sided", "less", "greater")
)
Arguments
... unitquantreg objects separated by commas.
lt a list with one or more unitquantreg objects.
p.adjust.method
a character string specifying the method for multiple testing adjustment; almost
always one of p.adjust.methods. Can be abbreviated.
alternative indicates the alternative hypothesis and must be one of "two.sided" (default),
"less", or "greater". Can be abbreviated.
Value
Object of class "pairwise.htest"
See Also
vuong.test, p.adjust
Examples
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
models <- c("uweibull", "kum", "ulogistic")
lt_fits <- lapply(models, function(fam) {
unitquantreg(formula = y1 ~ x, tau = 0.5, data = sim_bounded_curr,
family = fam)
})
ans <- pairwise.vuong.test(lt = lt_fits)
ans
plot.unitquantreg Plot method for unitquantreg objects
Description
Provide diagnostic plots to check model assumptions for fitted model of class unitquantreg.
Usage
## S3 method for class 'unitquantreg'
plot(
x,
which = 1L:4L,
caption = c("Residuals vs. indices of obs.", "Residuals vs. linear predictor",
"Working response vs. linear predictor", "Half-normal plot of residuals"),
sub.caption = paste(deparse(x$call), collapse = "\n"),
main = "",
ask = prod(par("mfcol")) < length(which) && dev.interactive(),
...,
add.smooth = getOption("add.smooth"),
type = "quantile",
nsim = 99L,
level = 0.95
)
Arguments
x fitted model object of class unitquantreg.
which integer. If a subset of the plots is required, specify a subset of the numbers 1 to
4; see below for further details.
caption character. Captions to appear above the plots.
sub.caption character. Common title-above figures if there are multiple.
main character. Title to each plot in addition to the above caption.
ask logical. If TRUE, the user is asked before each plot.
... other parameters to be passed through to plotting functions.
add.smooth logical. Indicates if a smoother should be added to most plots.
type character. Indicates type of residual to be used, see residuals.unitquantreg.
nsim integer. Number of simulations in half-normal plots, see hnp.unitquantreg.
level numeric. Confidence level of the simulated envelope, see hnp.unitquantreg.
Details
The plot method for unitquantreg objects produces four types of diagnostic plot.
The which argument can be used to select a subset of the currently four supported plots, which are:
Residuals versus indices of observations (which = 1); Residuals versus linear predictor (which =
2); Working response versus linear predictor (which = 3) to check possible misspecification of link
function; Half-normal plot of residuals (which = 4) to check distribution assumption.
Value
No return value, called for side effects.
Author(s)
<NAME>
References
<NAME>. and <NAME>. (2018) Generalized Linear Models With Examples in R, Springer,
New York.
See Also
residuals.unitquantreg, hnp.unitquantreg, unitquantreg.
plot.unitquantregs Plot method for unitquantregs objects
Description
Provide two type of plots for unitquantregs objects.
Usage
## S3 method for class 'unitquantregs'
plot(
x,
which = c("coef", "conddist"),
output_df = FALSE,
parm = NULL,
level = 0.95,
mean_effect = FALSE,
mfrow = NULL,
mar = NULL,
ylim = NULL,
main = NULL,
col = gray(c(0, 0.75)),
border = NULL,
cex = 1,
pch = 20,
type = "b",
xlab = bquote("Quantile level (" * tau * ")"),
ylab = "Estimate effect",
dist_type = c("density", "cdf"),
at_avg = TRUE,
at_obs = NULL,
legend_position = "topleft",
...
)
Arguments
x fitted model object of class unitquantregs.
which character. Indicate the type of plot. Currently supported are "coef" which
provide the estimated coefficients for several quantiles and "conddist" which
provide the conditional distribution (cdf or pdf) at specific values of covariates.
output_df logical. Should the data.frame used to plot be returned?
parm a specification of which parameters are to be plotted, either a vector of numbers
or a vector of names. By default, all parameters are considered.
level level of significance for the confidence interval of parameters.
mean_effect logical. Should a line for the mean effect coefficients be added?
mfrow, mar, ylim, main, col, border, cex, pch, type, xlab, ylab
graphical parameters.
dist_type character. Which conditional distribution should be plotted? The options are
"density" or "cdf".
at_avg logical. Should the conditional distribution be considered at the average values of the
covariates?
at_obs list. List with name and values for each covariate.
legend_position
character. The legend position argument used in legend function.
... other parameters to be passed through to plotting functions.
Details
The plot method for unitquantregs objects is inspired by PROC QUANTREG from SAS/STAT. This
plot method provides two types of visualization.
If which = "coef", the estimated coefficients are plotted for several quantiles.
If which = "conddist", the conditional distribution is plotted at specific values of covariates. The
conditional distribution is the cumulative distribution function if dist_type = "cdf" or the
probability density function if dist_type = "density".
Value
If output_df = TRUE then returns a data.frame used to plot. Otherwise, no return value, called for
side effects.
Author(s)
<NAME>
See Also
plot.unitquantreg.
predict.unitquantreg Prediction method for unitquantreg class
Description
Extract various types of predictions from unit quantile regression models.
Usage
## S3 method for class 'unitquantreg'
predict(
object,
newdata,
type = c("link", "quantile", "shape", "terms"),
interval = c("none", "confidence"),
level = 0.95,
se.fit = FALSE,
...
)
Arguments
object fitted model object of class unitquantreg.
newdata optionally, a data frame in which to look for variables with which to predict. If
omitted, the original observations are used.
type character indicating the type of predictions. The options are link, quantile, shape
and terms. The terms option returns a matrix giving the fitted values of each term in the
model formula on the linear predictor scale.
interval type of interval desired. The options are none and confidence.
level coverage probability for the confidence intervals. Default is 0.95.
se.fit logical. If TRUE return the asymptotic standard errors.
... currently not used.
Value
If se.fit = FALSE then returns a data.frame with the predicted values, together with confidence
intervals if interval = "confidence".
If se.fit = TRUE returns a list with components:
fit Predictions, as for se.fit = FALSE.
se.fit Estimated standard errors.
For type = "terms" the output is a data.frame with one column per term.
Author(s)
<NAME>
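Examples
A sketch of extracting predictions with confidence intervals, reusing the sim_bounded example
data from elsewhere in this manual:
## Not run:
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
fit <- unitquantreg(y1 ~ x, tau = 0.5, data = sim_bounded_curr,
                    family = "uweibull")
head(predict(fit, type = "quantile", interval = "confidence", level = 0.95))
## End(Not run)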
residuals.unitquantreg
Residuals method for unitquantreg objects
Description
Extract various types of residuals from unit quantile regression models.
Usage
## S3 method for class 'unitquantreg'
residuals(object, type = c("quantile", "cox-snell", "working", "partial"), ...)
Arguments
object fitted model object of class unitquantreg.
type character indicating type of residuals. The options are "quantile", "cox-snell",
"working" and "partial".
... currently not used.
Details
The residuals method can compute quantile and Cox-Snell residuals. These residuals are defined,
respectively, by
$$r_{i}^{Q} = \Phi^{-1}\left[F(y_{i} \mid \hat{\mu}_{i}, \hat{\theta}_{i})\right]$$
and
$$r_{i}^{CS} = -\log\left[1 - F(y_{i} \mid \hat{\mu}_{i}, \hat{\theta}_{i})\right]$$
where $\hat{\mu}_{i}$ and $\hat{\theta}_{i}$ are the fitted values of the parameters µ and θ, F(· | ·, ·) is the cumulative distribution
function (c.d.f.) and Φ(·) is the c.d.f. of the standard Normal distribution.
Apart from the variability due to the estimation of the parameters, if the fitted regression model is
correctly specified then the quantile residuals, $r^{Q}$, follow a standard Normal distribution and the
Cox-Snell residuals, $r^{CS}$, follow a standard exponential distribution.
Value
Numeric vector of residuals extract from an object of class unitquantreg.
Author(s)
<NAME>
References
<NAME>. and <NAME>., (1968). A general definition of residuals. Journal of the Royal Statistical
Society - Series B, 30(2), 248–265.
<NAME>. and <NAME>., (1996). Randomized quantile residuals. Journal of Computational
and Graphical Statistics, 5(3), 236–244.
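Examples
A sketch of the diagnostic use described in Details: if the fitted model is well specified, the
quantile residuals should be approximately standard normal.
## Not run:
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
fit <- unitquantreg(y1 ~ x, tau = 0.5, data = sim_bounded_curr,
                    family = "uweibull")
rq <- residuals(fit, type = "quantile")
qqnorm(rq)
qqline(rq)
## End(Not run)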
sim_bounded Simulated data set
Description
This data set was simulated from all families of distributions available in unitquantreg package
considering the median, i.e., τ = 0.5.
Usage
data(sim_bounded, package = "unitquantreg")
Format
data.frame with 1300 observations and 5 columns:
• y1: simulated response variable with constant shape parameter, θ = 2.
• y2: simulated response variable with regression structure in the shape parameter, $\theta_{i} = \exp(\zeta_{i})$,
where $\zeta_{i} = z_{i}^{\top}\gamma$.
• x: covariate related to µi , i.e., the median.
• z: covariate related to θi , i.e., the shape parameter.
• family: string indicating the family of distribution.
Details
There are two response variables, namely y1 and y2. The former was simulated considering a
regression structure only for µ, with one covariate simulated from a standard uniform distribution,
where the true vector of coefficients for µ is β = (1, 2) and θ = 2. The latter was simulated
assuming a regression structure for both µ and θ (shape parameter), each with one covariate
simulated from a standard uniform distribution. The true vectors of coefficients for µ and θ are
β = (1, 2) and γ = (−1, 1), respectively.
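A sketch of how y1 could have been generated under the stated coefficients. The logit link for µ
and the ruweibull generator (named after the package's r<family> convention) are assumptions
here; the exact simulation code is not given in this manual.
## Not run:
set.seed(1)
n  <- 100
x  <- runif(n)              # covariate related to the median
mu <- plogis(1 + 2 * x)     # beta = (1, 2); logit link assumed
y1 <- ruweibull(n, mu = mu, theta = 2, tau = 0.5)
## End(Not run)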
Author(s)
<NAME>
ubs The unit-Birnbaum-Saunders distribution
Description
Density function, distribution function, quantile function and random number generation function
for the unit-Birnbaum-Saunders distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
dubs(x, mu, theta, tau = 0.5, log = FALSE)
pubs(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qubs(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rubs(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function
$$f(y \mid \alpha, \theta) = \frac{1}{2y\alpha\theta\sqrt{2\pi}}\left[\left(-\frac{\alpha}{\log(y)}\right)^{\frac{1}{2}} + \left(-\frac{\alpha}{\log(y)}\right)^{\frac{3}{2}}\right]\exp\left[\frac{1}{2\theta^{2}}\left(2 + \frac{\log(y)}{\alpha} + \frac{\alpha}{\log(y)}\right)\right]$$
Cumulative distribution function
$$F(y \mid \alpha, \theta) = 1 - \Phi\left\{\frac{1}{\theta}\left[\left(-\frac{\log(y)}{\alpha}\right)^{\frac{1}{2}} - \left(-\frac{\alpha}{\log(y)}\right)^{\frac{1}{2}}\right]\right\}$$
Quantile function
$$Q(\tau \mid \alpha, \theta) = \exp\left\{-\frac{2\alpha}{2 + \left[\theta\Phi^{-1}(1-\tau)\right]^{2} - \theta\Phi^{-1}(1-\tau)\sqrt{4 + \left[\theta\Phi^{-1}(1-\tau)\right]^{2}}}\right\}$$
Reparameterization
$$\alpha = g^{-1}(\mu) = \log(\mu)\,g(\theta, \tau)$$
where $g(\theta, \tau) = -\frac{1}{2}\left\{2 + \left[\theta\Phi^{-1}(1-\tau)\right]^{2} - \theta\Phi^{-1}(1-\tau)\sqrt{4 + \left[\theta\Phi^{-1}(1-\tau)\right]^{2}}\right\}$.
Value
dubs gives the density, pubs gives the distribution function, qubs gives the quantile function and
rubs generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
Birnbaum, <NAME>. and <NAME>., (1969). A new family of life distributions. Journal of
Applied Probability, 6(2), 637–652.
<NAME>., <NAME>. and <NAME>., (2018). The unit-Birnbaum-Saunders distribution with
applications. Chilean Journal of Statistics, 9(1), 47–57.
<NAME>., <NAME>. and <NAME>., (2021). A new quantile regression for modeling
bounded data under a unit Birnbaum-Saunders distribution with applications. Symmetry, (), 1–28.
Examples
set.seed(123)
x <- rubs(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Birnbaum-Saunders')
lines(S, dubs(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pubs(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qubs(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
uburrxii The unit-Burr-XII distribution
Description
Density function, distribution function, quantile function and random number generation function
for the unit-Burr-XII distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
duburrxii(x, mu, theta, tau = 0.5, log = FALSE)
puburrxii(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
quburrxii(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
ruburrxii(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function
$$f(y \mid \alpha, \theta) = \frac{\alpha\theta}{y}\left[-\log(y)\right]^{\theta-1}\left\{1 + \left[-\log(y)\right]^{\theta}\right\}^{-\alpha-1}$$
Cumulative distribution function
$$F(y \mid \alpha, \theta) = \left\{1 + \left[-\log(y)\right]^{\theta}\right\}^{-\alpha}$$
Quantile function
$$Q(\tau \mid \alpha, \theta) = \exp\left[-\left(\tau^{-1/\alpha} - 1\right)^{1/\theta}\right]$$
Reparameterization
$$\alpha = g^{-1}(\mu) = -\frac{\log(\tau)}{\log\left\{1 + \left[-\log(\mu)\right]^{\theta}\right\}}$$
Value
duburrxii gives the density, puburrxii gives the distribution function, quburrxii gives the quan-
tile function and ruburrxii generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>. and Chesneau, C., (2021). On the unit Burr-XII distribution with the quantile
regression modeling and applications. Computational and Applied Mathematics, 40(29), 1–26.
Examples
set.seed(123)
x <- ruburrxii(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Burr-XII')
lines(S, duburrxii(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, puburrxii(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(quburrxii(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
uchen The unit-Chen distribution
Description
Density function, distribution function, quantile function and random number generation function
for the unit-Chen distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
duchen(x, mu, theta, tau = 0.5, log = FALSE)
puchen(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
quchen(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
ruchen(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function
$$f(y \mid \alpha, \theta) = \frac{\alpha\theta}{y}\left[-\log(y)\right]^{\theta-1}\exp\left\{\left[-\log(y)\right]^{\theta}\right\}\exp\left(\alpha\left\{1 - \exp\left[\left(-\log(y)\right)^{\theta}\right]\right\}\right)$$
Cumulative distribution function
$$F(y \mid \alpha, \theta) = \exp\left(\alpha\left\{1 - \exp\left[\left(-\log(y)\right)^{\theta}\right]\right\}\right)$$
Quantile function
$$Q(\tau \mid \alpha, \theta) = \exp\left(-\left\{\log\left[1 - \frac{\log(\tau)}{\alpha}\right]\right\}^{1/\theta}\right)$$
Reparameterization
$$\alpha = g^{-1}(\mu) = \frac{\log(\tau)}{1 - \exp\left[\left(-\log(\mu)\right)^{\theta}\right]}$$
Value
duchen gives the density, puchen gives the distribution function, quchen gives the quantile function
and ruchen generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>., (2020). On the unit-Chen distribution
with associated quantile regression and applications. Journal of Applied Statistics, 44(1) 1–22.
Examples
set.seed(123)
x <- ruchen(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Chen')
lines(S, duchen(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, puchen(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(quchen(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
ughne The unit-Half-Normal-E distribution
Description
Density function, distribution function, quantile function and random number generation function
for the unit-Half-Normal-E distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
dughne(x, mu, theta, tau = 0.5, log = FALSE)
pughne(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qughne(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rughne(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function
$$f(y \mid \alpha, \theta) = \sqrt{\frac{2}{\pi}}\,\frac{\theta}{y\left[-\log(y)\right]}\left[\frac{-\log(y)}{\alpha}\right]^{\theta}\exp\left\{-\frac{1}{2}\left[\frac{-\log(y)}{\alpha}\right]^{2\theta}\right\}$$
Cumulative distribution function
$$F(y \mid \alpha, \theta) = 2\Phi\left\{-\left[\frac{-\log(y)}{\alpha}\right]^{\theta}\right\}$$
Quantile function
$$Q(\tau \mid \alpha, \theta) = \exp\left\{-\alpha\left[-\Phi^{-1}\left(\frac{\tau}{2}\right)\right]^{1/\theta}\right\}$$
Reparameterization
$$\alpha = g^{-1}(\mu) = -\frac{\log(\mu)}{\left[-\Phi^{-1}\left(\frac{\tau}{2}\right)\right]^{1/\theta}}$$
Value
dughne gives the density, pughne gives the distribution function, qughne gives the quantile function
and rughne generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., (2020). The unit generalized half normal distribution: A new bounded distribution
with inference and application. University Politehnica of Bucharest Scientific, 82(2), 133–140.
Examples
set.seed(123)
x <- rughne(n = 1000, mu = 0.5, theta = 2, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Half-Normal-E')
lines(S, dughne(x = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pughne(q = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qughne(p = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
ughnx The unit-Half-Normal-X distribution
Description
Density function, distribution function, quantile function and random number generation function
for the unit-Half-Normal-X distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
dughnx(x, mu, theta, tau = 0.5, log = FALSE)
pughnx(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qughnx(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rughnx(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function

f(y | α, θ) = √(2/π) (θ / (y(1−y))) [y / (α(1−y))]^θ exp{−(1/2) [y / (α(1−y))]^(2θ)}

Cumulative distribution function

F(y | α, θ) = 2Φ([y / (α(1−y))]^θ) − 1

Quantile function

Q(τ | α, θ) = α [Φ^(−1)((1+τ)/2)]^(1/θ) / {1 + α [Φ^(−1)((1+τ)/2)]^(1/θ)}

Reparameterization

α = g^(−1)(µ) = µ / {(1−µ) [Φ^(−1)((1+τ)/2)]^(1/θ)}
Value
dughnx gives the density, pughnx gives the distribution function, qughnx gives the quantile function
and rughnx generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>., (2021). A flexible probability model
for proportion data: Unit-Half-Normal distribution. Communications in Statistics: CaseStudies,
Data Analysis and Applications, 0(0), 1–18.
Examples
set.seed(123)
x <- rughnx(n = 1000, mu = 0.5, theta = 2, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Half-Normal-X')
lines(S, dughnx(x = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pughnx(q = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qughnx(p = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
ugompertz The unit-Gompertz distribution
Description
Density function, distribution function, quantile function and random number generation function
for the unit-Gompertz distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
dugompertz(x, mu, theta, tau = 0.5, log = FALSE)
pugompertz(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qugompertz(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rugompertz(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function

f(y | α, θ) = (αθ / y) exp{α − θ log(y) − α exp[−θ log(y)]}

Cumulative distribution function

F(y | α, θ) = exp{α (1 − y^(−θ))}

Quantile function

Q(τ | α, θ) = [(α − log(τ)) / α]^(−1/θ)

Reparameterization

α = g^(−1)(µ) = log(τ) / (1 − µ^(−θ))
Value
dugompertz gives the density, pugompertz gives the distribution function, qugompertz gives the
quantile function and rugompertz generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. and <NAME>., (2019). Unit-Gompertz Distribution with Applications.
Statistica, 79(1), 25-43.
Examples
set.seed(123)
x <- rugompertz(n = 1000, mu = 0.5, theta = 2, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Gompertz')
lines(S, dugompertz(x = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pugompertz(q = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qugompertz(p = S, mu = 0.5, theta = 2, tau = 0.5), col = 2)
ugumbel The unit-Gumbel distribution
Description
Density function, distribution function, quantile function and random number generation function
for the unit-Gumbel distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
dugumbel(x, mu, theta, tau = 0.5, log = FALSE)
pugumbel(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qugumbel(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rugumbel(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile to use in the parametrization.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function

f(y | α, θ) = (θ / (y(1−y))) exp{−α − θ log(y/(1−y)) − exp[−α − θ log(y/(1−y))]}

Cumulative distribution function

F(y | α, θ) = exp{−exp(−α) [(1−y)/y]^θ}

Quantile function

Q(τ | α, θ) = 1 / {1 + [−exp(α) log(τ)]^(1/θ)}

Reparameterization

α = g^(−1)(µ) = θ log((1−µ)/µ) + log(−1/log(τ))

where 0 < y < 1 and θ > 0 is the shape parameter.
Value
dugumbel gives the density, pugumbel gives the distribution function, qugumbel gives the quantile
function and rugumbel generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME>
<NAME>
References
Mazucheli, J. and Alves, B., (2021). The unit-Gumbel Quantile Regression Model for Proportion
Data. Under Review.
<NAME>., (1941). The return period of flood flows. The Annals of Mathematical Statistics,
12(2), 163–190.
Examples
set.seed(6969)
x <- rugumbel(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Gumbel')
lines(S, dugumbel(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pugumbel(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qugumbel(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
ulogistic The unit-Logistic distribution
Description
Density function, distribution function, quantile function and random number generation for the
unit-Logistic distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
dulogistic(x, mu, theta, tau = 0.5, log = FALSE)
pulogistic(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
qulogistic(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
rulogistic(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile is to be used.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function

f(y | α, θ) = θ exp(α) [y/(1−y)]^θ / (y(1−y) {1 + exp(α) [y/(1−y)]^θ}^2)

Cumulative distribution function

F(y | α, θ) = exp(α) [y/(1−y)]^θ / {1 + exp(α) [y/(1−y)]^θ}

Quantile function

Q(τ | α, θ) = [exp(−α) τ/(1−τ)]^(1/θ) / {1 + [exp(−α) τ/(1−τ)]^(1/θ)}

Reparameterization

α = g^(−1)(µ) = log(τ/(1−τ)) − θ log(µ/(1−µ))
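The reparameterization above is linear on the logit scale. As an added illustration (not part of the
original manual), it can be transcribed directly in R; alpha_ulogistic() is a hypothetical helper
name:
alpha_ulogistic <- function(mu, theta, tau) {
  log(tau / (1 - tau)) - theta * log(mu / (1 - mu))
}
alpha_ulogistic(mu = 0.5, theta = 1.5, tau = 0.5) # 0, since mu = tau = 0.5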
Value
dulogistic gives the density, pulogistic gives the distribution function, qulogistic gives the
quantile function and rulogistic generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. and <NAME>., (2019). L-Logistic regression models: Prior sensitivity
analysis, robustness to outliers and applications. Brazilian Journal of Probability and Statistics,
33(3), 455–479.
Examples
set.seed(123)
x <- rulogistic(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Logistic')
lines(S, dulogistic(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, pulogistic(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(qulogistic(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
unitquantreg Parametric unit quantile regression models
Description
Fit a collection of parametric unit quantile regression models by maximum likelihood, using the log-
likelihood function, the score vector and the Hessian matrix implemented in C++.
Usage
unitquantreg(
formula,
data,
subset,
na.action,
tau,
family,
link = c("logit", "probit", "cloglog", "cauchit"),
link.theta = c("identity", "log", "sqrt"),
start = NULL,
control = unitquantreg.control(),
model = TRUE,
x = FALSE,
y = TRUE
)
unitquantreg.fit(
y,
X,
Z = NULL,
tau,
family,
link,
link.theta,
start = NULL,
control = unitquantreg.control()
)
Arguments
formula symbolic description of the quantile model like y ~ x or y ~ x | z. See below for
details.
data data.frame contain the variables in the model.
subset an optional vector specifying a subset of observations to be used in the fitting
process.
na.action a function which indicates what should happen when the data contain NAs.
tau numeric vector. The quantile(s) to be estimated, i.e., numbers between 0 and 1.
If just one quantile is specified an object of class unitquantreg is returned.
If a numeric vector of values between 0 and 1 is specified an object of class
unitquantregs is returned. See below for details.
family character. Specify the distribution family.
link character. Specify the link function in the quantile model. Currently supported
are logit, probit, cloglog and cauchit. Default is logit.
link.theta character. Specify the link function in the shape model. Currently supported are
identity, log and sqrt. Default is log.
start numeric vector. An optional vector with starting values for all parameters.
control list. Control arguments specified via unitquantreg.control.
model logical. Indicates whether model frame should be included as a component of
the returned value.
x, y logical. If TRUE the corresponding components of the fit (model frame, response,
model matrix) are returned. For unitquantreg.fit y should be the numeric
response vector with values in (0,1).
X, Z numeric matrix. Regressor matrix for the quantile and shape model, respec-
tively. Default is constant shape model, i.e., Z is matrix with column of ones.
Details
The parameter estimation and inference are performed under the frequentist paradigm. The optimx
R package is used, since it allows different optimization techniques to maximize the log-likelihood
function. The analytical score function is used in the maximization, and the standard errors are
computed using the analytical Hessian matrix; both are implemented efficiently in C++.
Value
unitquantreg can return an object of class unitquantreg if tau is a scalar, i.e., a list with the
following components.
family the distribution family name.
coefficients a list with elements "quantile" and "shape" containing the coefficients from
the respective models.
fitted.values a list with elements "quantile" and "shape" containing the fitted parameters
from the respective models.
linear.predictors
a list with elements "quantile" and "shape" containing the fitted linear pre-
dictors from the respective models.
link a list with elements "quantile" and "shape" containing the link objects from
the respective models.
tau the specified quantile.
loglik log-likelihood of the fitted model.
gradient gradient evaluated at the maximum likelihood estimates.
vcov covariance matrix of all parameters in the model.
nobs number of observations.
npar number of parameters.
df.residual residual degrees of freedom in the fitted model.
theta_const logical indicating if the θ parameter was treated as a nuisance parameter.
control the control parameters used to fit the model.
iterations number of iterations of optimization method.
converged logical, if TRUE indicates successful convergence.
kkt a list of logicals kkt1 and kkt2 providing checks on the Karush-Kuhn-Tucker condi-
tions; the first-order KKT test (kkt1) checks whether the gradient at the final parame-
ter estimates is "small" and the second-order KKT test (kkt2) checks whether
the Hessian at the final parameter estimates is positive definite.
elapsed_time time elapsed to fit the model.
call the original function call.
formula the original model formula.
terms a list with elements "quantile", "shape" and "full" containing the terms
objects for the respective models.
model the full model frame, if model = TRUE.
y the response vector, if y = TRUE.
x a list with elements "quantile" and "shape" containing the model matrices
from the respective models, if x = TRUE.
unitquantreg.fit returns an unclassed list with components up to elapsed_time.
If tau is a numeric vector with length greater than one, an object of class unitquantregs is returned,
which consists of a list of objects of class unitquantreg, one for each specified quantile.
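As an added illustration (not part of the original manual), fitting several quantiles at once returns,
per the Value section above, an object of class unitquantregs, i.e. a list with one unitquantreg fit
per tau:
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
fits <- unitquantreg(formula = y1 ~ x, data = sim_bounded_curr,
                     tau = c(0.25, 0.50, 0.75), family = "uweibull")
class(fits) # "unitquantregs"
length(fits) # one element per quantile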
Author(s)
<NAME>
unitquantreg.control Control parameters for unit quantile regression
Description
Auxiliary function that control fitting of unit quantile regression models using unitquantreg.
Usage
unitquantreg.control(
method = "BFGS",
hessian = FALSE,
gradient = TRUE,
maxit = 5000,
factr = 1e+07,
reltol = sqrt(.Machine$double.eps),
trace = 0L,
starttests = FALSE,
dowarn = FALSE,
...
)
Arguments
method string. Specify the method argument passed to optimx.
hessian logical. Should the numerical Hessian matrix be used to compute the variance-covariance
matrix? Default is FALSE, i.e., use the analytic Hessian.
gradient logical. Should the analytic gradient be used? Default is TRUE.
maxit integer. Specify the maximal number of iterations passed to optimx.
factr numeric. Controls the convergence of the "L-BFGS-B" method.
reltol numeric. Relative convergence tolerance passed to optimx.
trace non-negative integer. If positive, tracing information on the progress of the op-
timization is produced.
starttests logical. Should optimx run tests of the functions and parameters? Default is
FALSE.
dowarn logical. Show warnings generated by optimx? Default is FALSE.
... arguments passed to optimx.
Details
The control argument of unitquantreg uses the arguments of unitquantreg.control. In par-
ticular, the parameters in unitquantreg are estimated by maximum likelihood using optimx,
which is a general-purpose optimization wrapper function that calls other R tools for optimization,
including the existing optim function. The main advantage of optimx is to unify these tools, allowing
a number of different optimization methods, and to provide sanity checks.
Value
A list with components named as the arguments.
Author(s)
<NAME>
References
<NAME>. and <NAME>. (2011). Unifying Optimization Algorithms to Aid Software System
Users: optimx for R., Journal of Statistical Software, 43(9), 1–14.
See Also
optimx for more details about control parameters and unitquantreg.fit the fitting procedure
used by unitquantreg.
Examples
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
# Fitting using the analytical gradient
fit_gradient <- unitquantreg(formula = y1 ~ x,
                             data = sim_bounded_curr, tau = 0.5,
                             family = "uweibull",
                             control = unitquantreg.control(gradient = TRUE))
# Fitting without using the analytical gradient
fit_nogradient <- unitquantreg(formula = y1 ~ x,
                               data = sim_bounded_curr, tau = 0.5,
                               family = "uweibull",
                               control = unitquantreg.control(gradient = FALSE))
# Compare estimated coefficients
cbind(gradient = coef(fit_gradient), no_gradient = coef(fit_nogradient))
uweibull The unit-Weibull distribution
Description
Density function, distribution function, quantile function and random number generation function
for the unit-Weibull distribution reparametrized in terms of the τ -th quantile, τ ∈ (0, 1).
Usage
duweibull(x, mu, theta, tau = 0.5, log = FALSE)
puweibull(q, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
quweibull(p, mu, theta, tau = 0.5, lower.tail = TRUE, log.p = FALSE)
ruweibull(n, mu, theta, tau = 0.5)
Arguments
x, q vector of positive quantiles.
mu location parameter indicating the τ -th quantile, τ ∈ (0, 1).
theta nonnegative shape parameter.
tau the parameter to specify which quantile to use in the parametrization.
log, log.p logical; If TRUE, probabilities p are given as log(p).
lower.tail logical; If TRUE, (default), P (X ≤ x) are returned, otherwise P (X > x).
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
Probability density function

f(y | α, θ) = (αθ / y) [−log(y)]^(θ−1) exp{−α [−log(y)]^θ}

Cumulative distribution function

F(y | α, θ) = exp{−α [−log(y)]^θ}

Quantile function

Q(τ | α, θ) = exp{−[−log(τ) / α]^(1/θ)}

Reparameterization

α = g^(−1)(µ) = −log(τ) / [−log(µ)]^θ
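As a cross-check (an added illustration, not part of the original manual), the density above can be
transcribed directly in R and compared with duweibull(); alpha_uw() and duw() are hypothetical
helper names:
alpha_uw <- function(mu, theta, tau) -log(tau) / (-log(mu))^theta
duw <- function(y, mu, theta, tau = 0.5) {
  a <- alpha_uw(mu, theta, tau) # alpha recovered from mu via the reparameterization
  a * theta / y * (-log(y))^(theta - 1) * exp(-a * (-log(y))^theta)
}
duw(0.3, mu = 0.5, theta = 1.5) # should agree with
duweibull(x = 0.3, mu = 0.5, theta = 1.5) # the package implementation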
Value
duweibull gives the density, puweibull gives the distribution function, quweibull gives the quan-
tile function and ruweibull generates random deviates.
Invalid arguments will return an error message.
Author(s)
<NAME>
<NAME>
References
<NAME>., <NAME> and <NAME>., (2018). The unit-Weibull distribution and
associated inference. Journal of Applied Probability and Statistics, 13(2), 1–22.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>., (2020).
The unit-Weibull distribution as an alternative to the Kumaraswamy distribution for the modeling
of quantiles conditional on covariates. Journal of Applied Statistics, 47(6), 954–974.
<NAME>., <NAME>., <NAME>. and <NAME>., (2021). Bias-Corrected Maximum
Likelihood Estimators of the Parameters of the Unit-Weibull Distribution. Austrian Journal of
Statistics, 50(3), 41–53.
Examples
set.seed(6969)
x <- ruweibull(n = 1000, mu = 0.5, theta = 1.5, tau = 0.5)
R <- range(x)
S <- seq(from = R[1], to = R[2], by = 0.01)
hist(x, prob = TRUE, main = 'unit-Weibull')
lines(S, duweibull(x = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(ecdf(x))
lines(S, puweibull(q = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
plot(quantile(x, probs = S), type = "l")
lines(quweibull(p = S, mu = 0.5, theta = 1.5, tau = 0.5), col = 2)
vuong.test Vuong test
Description
Performs the Vuong test between two fitted objects of class unitquantreg.
Usage
vuong.test(object1, object2, alternative = c("two.sided", "less", "greater"))
Arguments
object1, object2
objects of class unitquantreg containing the fitted models.
alternative indicates the alternative hypothesis and must be one of "two.sided" (default),
"less", or "greater". You can specify just the initial letter of the value, but
the argument name must be given in full. See ‘Details’ for the meanings of the
possible values.
Details
The statistic of the Vuong likelihood ratio test for comparing two non-nested regression models is
defined by

T = (1 / (ω̂ √n)) Σ_{i=1}^{n} log[ f(y_i | x_i, θ̂) / g(y_i | x_i, γ̂) ]

where

ω̂² = (1/n) Σ_{i=1}^{n} { log[ f(y_i | x_i, θ̂) / g(y_i | x_i, γ̂) ] }² − { (1/n) Σ_{i=1}^{n} log[ f(y_i | x_i, θ̂) / g(y_i | x_i, γ̂) ] }²

is an estimator of the variance of (1/√n) Σ_{i=1}^{n} log[ f(y_i | x_i, θ̂) / g(y_i | x_i, γ̂) ], and
f(y_i | x_i, θ̂) and g(y_i | x_i, γ̂) are the corresponding rival densities evaluated at the maximum
likelihood estimates.

When n → ∞ we have that T → N(0, 1) in distribution. Therefore, at the α% level of significance
the null hypothesis of the equivalence of the competing models is rejected if |T| > z_{α/2}, where
z_{α/2} is the upper α/2 quantile of the standard normal distribution.

In practical terms, f(y_i | x_i, θ̂) is better (worse) than g(y_i | x_i, γ̂) if T > z_{α/2} (or T < −z_{α/2}).
Value
A list with class "htest" containing the following components:
statistic the value of the test statistic.
p.value the p-value of the test.
alternative a character string describing the alternative hypothesis.
method a character string with the method used.
data.name a character string giving the names of the family models under comparison.
Author(s)
<NAME>
<NAME>
References
Vuong, Q. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econo-
metrica, 57(2), 307–333.
Examples
data(sim_bounded, package = "unitquantreg")
sim_bounded_curr <- sim_bounded[sim_bounded$family == "uweibull", ]
fit_uweibull <- unitquantreg(formula = y1 ~ x, tau = 0.5,
data = sim_bounded_curr,
family = "uweibull")
fit_kum <- unitquantreg(formula = y1 ~ x, tau = 0.5,
data = sim_bounded_curr,
family = "kum")
ans <- vuong.test(object1 = fit_uweibull, object2 = fit_kum)
ans
str(ans)
water Access to piped water supply data set
Description
Access of people in households to piped water supply in Brazilian cities from the Southeast
and Northeast regions. Information obtained during the 2010 census.
Usage
data(water, package = "unitquantreg")
Format
data.frame with 3457 observations and 5 columns:
• phpws: the proportion of households with piped water supply.
• mhdi: municipal human development index.
• incpc: per capita income.
• region: 0 for Southeast, 1 for Northeast.
• pop: total population.
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>., (2020).
The unit-Weibull distribution as an alternative to the Kumaraswamy distribution for the modeling
of quantiles conditional on covariates. Journal of Applied Statistics, 47(6), 954–974.
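For a quick look at the data (an added illustration, not part of the original manual):
data(water, package = "unitquantreg")
str(water)
summary(water$phpws) # proportions in (0, 1)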
Package ‘flow’
June 6, 2023
Title View and Browse Code Using Flow Diagrams
Version 0.2.0
Description Visualize as flow diagrams the logic of functions, expressions or
scripts in a static way or when running a call, visualize the dependencies between
functions or between modules in a shiny app, and more.
License MIT + file LICENSE
URL https://github.com/moodymudskipper/flow,
https://moodymudskipper.github.io/flow/
BugReports https://github.com/moodymudskipper/flow/issues
Encoding UTF-8
Imports nomnoml, utils, htmlwidgets, rstudioapi, webshot, styler,
methods, here, lifecycle
Suggests testthat (>= 3.0.0), covr, knitr, rmarkdown, esquisse,
tidyselect, purrr
RoxygenNote 7.2.3
VignetteBuilder knitr
Config/testthat/edition 3
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-06-06 12:40:02 UTC
R topics documented:
flow_debu... 2
flow_do... 3
flow_dra... 4
flow_embe... 5
flow_tes... 5
flow_vie... 6
flow_view_dep... 9
flow_view_shin... 10
flow_view_source_call... 11
flow_view_use... 12
flow_view_var... 13
flow_debug Debug With Flow Diagrams
Description
These functions are named after the base functions debug() and undebug(). flow_debug() will
call flow_run(), with the same additional arguments, on all the following calls to f() until flow_undebug()
is called.
Usage
flow_debug(
f,
prefix = NULL,
code = TRUE,
narrow = FALSE,
truncate = NULL,
swap = TRUE,
out = NULL,
browse = FALSE
)
flow_undebug(f)
Arguments
f function to debug
prefix prefix to use for special comments in our code used as block headers, must start
with "#", several prefixes can be provided
code Whether to display the code in code blocks or only the header, to be more com-
pact, if NA, the code will be displayed only if no header is defined by special
comments
narrow TRUE makes sure the diagram stays centered on one column (they’ll be longer
but won’t shift to the right)
truncate maximum number of characters to be printed per line
swap whether to change var <- if(cond) expr into if(cond) var <- expr so the
diagram displays better
out a path to save the diagram to. Special values "html", "htm", "png", "pdf", "jpg"
and "jpeg" can be used to export the object to a temp file of the relevant for-
mat and open it, if a regular path is used the format will be guessed from the
extension.
browse whether to debug step by step (block by block), can also be a vector of block
ids, in this case browser() calls will be inserted at the start of these blocks
Details
By default, unlike debug(), flow_debug() doesn’t trigger a debugger but only draw diagrams, this
is consistent with flow_run()’s defaults. To browse through the code, use the browse argument.
Value
These functions return NULL invisibly (called for side effects)
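A minimal usage sketch (an added illustration, not part of the original manual), assuming my_fun is
a user-defined function:
my_fun <- function(x) if (x > 0) sqrt(x) else -sqrt(-x)
flow_debug(my_fun) # from now on, every call to my_fun() draws a diagram
my_fun(4)
flow_undebug(my_fun) # stop drawing diagrams for my_fun()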
flow_doc Draw Flow Diagrams for an Entire Package
Description
Draw Flow Diagrams for an Entire Package
Usage
flow_doc(
pkg = NULL,
prefix = NULL,
code = TRUE,
narrow = FALSE,
truncate = NULL,
swap = TRUE,
out = NULL,
engine = c("nomnoml", "plantuml")
)
Arguments
pkg package name as a string, or NULL to signify currently developed package.
prefix prefix to use for special comments in our code used as block headers, must start
with "#", several prefixes can be provided
code Whether to display the code in code blocks or only the header, to be more com-
pact, if NA, the code will be displayed only if no header is defined by special
comments
narrow TRUE makes sure the diagram stays centered on one column (they’ll be longer
but won’t shift to the right)
truncate maximum number of characters to be printed per line
swap whether to change var <- if(cond) expr into if(cond) var <- expr so the
diagram displays better
out path to html output, if left NULL a temp html file will be created and opened
engine either "nomnoml" (default) or "plantuml" (experimental, brittle mostly for rea-
sons out of our control), if the latter, arguments prefix, narrow, and code are
ignored
Value
Returns NULL invisibly (called for side effects).
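A usage sketch (an added illustration, not part of the original manual); any installed package name
works, here one of flow's own dependencies:
flow_doc(pkg = "here") # opens a temp html report with diagrams for the package's functions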
flow_draw Draw Diagram From Debugger
Description
flow_draw() should only be used in the debugger triggered by a call to flow_run(), or following a
call to flow_debug(). d is an active binding to flow_draw(), it means you can just type d (without
parentheses) instead of flow_draw().
Usage
flow_draw()
d
Details
d was designed to look like the other shortcuts detailed in ?browser, such as f, c etc... It differs
however in that it can be overridden. For instance, if the function uses a variable d or a parent
environment contains a variable d, flow::d won’t be found. In that case you will have to use
flow_draw().
If d or flow_draw() are called outside of the debugger they will return NULL silently.
Value
Returns NULL invisibly (called for side effects)
flow_embed Embed chart in roxygen doc
Description
Include an inline call `r flow::flow_embed(...)` in your doc and a diagram will be included.
Usage
flow_embed(call, name, width = 1, alt = name)
Arguments
call A call to a flow function, prefixed with flow::
name A name for the png file that will be created under ’man/figures’, without exten-
sion.
width width, relative if < 1, pixels otherwise
alt alt text
Details
• As with images in general, the image might not be visible when viewing temporary documentation
with the devtools workflow.
• Don’t forget to add flow to Suggests in your DESCRIPTION file.
• We don’t monitor files created under ’man/figures’, so if you remove a diagram from the doc
make sure to also remove it from the folder.
• We also don’t overwrite created files, to avoid slowing down the documentation process, so if
you want to print a different diagram for the same name, remove the file first.
Value
Called for side effects, should only be used in roxygen doc
flow_test Build Report From Tests
Description
Build a markdown report from test scripts, showing the paths taken in tested functions, and where
they fail if they do. See also the vignette "Build reports to document functions and unit tests".
Usage
flow_test(
prefix = NULL,
code = TRUE,
narrow = FALSE,
truncate = NULL,
swap = TRUE,
out = NULL,
failed_only = FALSE
)
Arguments
prefix prefix to use for special comments in our code used as block headers, must start
with "#", several prefixes can be provided
code Whether to display the code in code blocks or only the header, to be more com-
pact, if NA, the code will be displayed only if no header is defined by special
comments
narrow TRUE makes sure the diagram stays centered on one column (they’ll be longer
but won’t shift to the right)
truncate maximum number of characters to be printed per line
swap whether to change var <- if(cond) expr into if(cond) var <- expr so the
diagram displays better
out path to html output, if left NULL a temp html file will be created and opened.
failed_only whether to restrict the report to failing tests only
Value
Returns NULL invisibly (called for side effects)
flow_view View function as flow chart
Description
• flow_view() shows the code of a function as a flow diagram
• flow_run() runs a call and draws the logical path taken by the code.
• flow_compare_runs() shows on the same diagrams 2 calls to the same functions, code blocks
that are only touched by the ref call are colored green, code blocks that are only touched by
the x call are colored orange.
Usage
flow_view(
x,
prefix = NULL,
code = TRUE,
narrow = FALSE,
truncate = NULL,
nested_fun = NULL,
swap = TRUE,
out = NULL,
engine = c("nomnoml", "plantuml")
)
flow_run(
x,
prefix = NULL,
code = TRUE,
narrow = FALSE,
truncate = NULL,
swap = TRUE,
out = NULL,
browse = FALSE
)
flow_compare_runs(
x,
ref,
prefix = NULL,
code = TRUE,
narrow = FALSE,
truncate = NULL,
swap = TRUE,
out = NULL
)
Arguments
x a call, a function, or a path to a script
prefix prefix to use for special comments in our code used as block headers, must start
with "#", several prefixes can be provided
code Whether to display the code in code blocks or only the header, to be more com-
pact, if NA, the code will be displayed only if no header is defined by special
comments
narrow TRUE makes sure the diagram stays centered on one column (they’ll be longer
but won’t shift to the right)
truncate maximum number of characters to be printed per line
nested_fun if not NULL, the index or name of the function definition found in x that we wish
to inspect
swap whether to change var <- if(cond) expr into if(cond) var <- expr so the
diagram displays better
out a path to save the diagram to. Special values "html", "htm", "png", "pdf", "jpg"
and "jpeg" can be used to export the object to a temp file of the relevant for-
mat and open it, if a regular path is used the format will be guessed from the
extension.
engine either "nomnoml" (default) or "plantuml" (experimental, brittle mostly for rea-
sons out of our control), if the latter, arguments prefix, narrow, and code are
ignored
browse whether to debug step by step (block by block), can also be a vector of block
ids, in this case browser() calls will be inserted at the start of these blocks
ref the reference expression for flow_compare_runs()
Details
On some systems the output might sometimes display the box character when using the nom-
noml engine; this is due to the system not recognizing the Braille character \u2800. This char-
acter is used to circumvent a shortcoming of the nomnoml library: lines can’t start with a stan-
dard space and multiple subsequent spaces might be collapsed. To choose another character, set
the option flow.indenter, for instance: options(flow.indenter = "\u00b7"). Setting
options(flow.svg = FALSE) might also help.
Value
depending on out :
• NULL (default) : flow_view() and flow_compare_runs() return a "flow_diagram" object,
containing the diagram, the diagram’s code and the data used to build the code. flow_run()
returns the output of the call.
• An output path or a file extension : the path where the file is saved
• "data": a list of 2 data frames "nodes" and "edges"
• "code": A character vector of class "flow_code"
Examples
flow_view(rle)
flow_run(rle(c(1, 2, 2, 3)))
flow_compare_runs(rle(NULL), rle(c(1, 2, 2, 3)))
flow_view_deps Show dependency graph of a function
Description
[Experimental]
Usage
flow_view_deps(
fun,
max_depth = Inf,
trim = NULL,
promote = NULL,
demote = NULL,
hide = NULL,
show_imports = c("functions", "packages", "none"),
out = NULL,
lines = TRUE,
include_formals = TRUE
)
Arguments
fun A function, can be of the form fun, pkg::fun, pkg:::fun, if in the form fun,
the binding should be located in a package namespace or the global environ-
ment. It can also be a named list of functions, such as one you’d create with
dplyr::lst(), for instance lst(fun1, pkg::fun2).
max_depth An integer, the maximum depth to display
trim A vector or list of function names where the recursion will stop
promote A vector or list of external functions to show as internal functions
demote A vector or list of internal functions to show as external functions
hide A vector or list of internal functions to completely remove from the chart
show_imports Whether to show imported "functions", only "packages", or "none"
out a path to save the diagram to. Special values "html", "htm", "png", "pdf", "jpg"
and "jpeg" can be used to export the object to a temp file of the relevant for-
mat and open it, if a regular path is used the format will be guessed from the
extension.
lines Whether to show the number of lines of code next to the function name
include_formals
Whether to fetch dependencies in the default values of the function’s arguments
Details
Exported objects are shown in blue, unexported objects are shown in yellow.
Regular expressions can be used in trim, promote, demote and hide, they will be used on function
names in the form pkg::fun or pkg:::fun where pkg can be any package mentioned in these ar-
guments, the namespace of the explored function, or any of the direct dependencies of the package.
These arguments must be named, using the name "pattern". See examples below.
Value
flow_view_deps() returns a "flow_diagram" object by default, and the output path invisibly if
out is not NULL (called for side effects).
Examples
flow_view_deps(here::i_am)
flow_view_deps(here::i_am, demote = "format_dr_here")
flow_view_deps(here::i_am, trim = "format_dr_here")
flow_view_deps(here::i_am, hide = "format_dr_here")
flow_view_deps(here::i_am, promote = "rprojroot::get_root_desc")
flow_view_deps(here::i_am, promote = c(pattern = ".*::g"))
flow_view_deps(here::i_am, promote = c(pattern = "rprojroot::.*"))
flow_view_deps(here::i_am, hide = c(pattern = "here:::s"))
flow_view_shiny Visualize a shiny app’s dependency graph
Description
[Experimental] This function displays a shiny app’s module structure, assuming it is built on top
of module functions named a certain way (adjustable through the pattern argument) and calling
each other. If you call for instance flow_view_shiny() on a function that runs the app and uses
both the main server and ui functions, you’ll display the full graph of server and ui modules.
Usage
flow_view_shiny(
fun,
max_depth = Inf,
trim = NULL,
promote = NULL,
demote = NULL,
hide = NULL,
show_imports = c("functions", "packages", "none"),
out = NULL,
lines = TRUE,
pattern = "(_ui)|(_server)|(Ui)|(Server)|(UI)|(SERVER)"
)
Arguments
fun The function that runs the app
max_depth An integer, the maximum depth to display
trim A vector or list of function names where the recursion will stop
promote A vector or list of external functions to show as internal functions
demote A vector or list of internal functions to show as external functions
hide A vector or list of internal functions to completely remove from the chart
show_imports Whether to show imported "functions", only "packages", or "none"
out a path to save the diagram to. Special values "html", "htm", "png", "pdf", "jpg"
and "jpeg" can be used to export the object to a temp file of the relevant for-
mat and open it, if a regular path is used the format will be guessed from the
extension.
lines Whether to show the number of lines of code next to the function name
pattern A regular expression used to detect ui and server functions
Details
It is a wrapper around flow_view_deps() which demotes every object that is not a server function,
a ui function or a function calling either. What is or isn’t considered as a server or ui function
depends on a regular expression provided through the pattern argument. For a more general way
of displaying all dependencies (not focused on modules), use flow_view_deps().
Value
A flow diagram object.
Examples
if (requireNamespace("esquisse", quietly = TRUE)) {
flow_view_shiny(esquisse::esquisser, show_imports = "none")
}
flow_view_source_calls
Draw diagram of source dependencies
Description
Assuming a project where files source each other, draw their dependency graph.
Usage
flow_view_source_calls(
paths = ".",
recursive = TRUE,
basename = TRUE,
extension = FALSE,
smart = TRUE,
out = NULL
)
Arguments
paths Paths to scripts or folders containing scripts. By default explores the working
directory.
recursive Passed to list.files() when paths contains directories
basename Whether to display only the base name of the script
extension Whether to display the extension
smart Whether to parse complex source calls for strings that look like script and match
those to files found in paths
out a path to save the diagram to. Special values "html", "htm", "png", "pdf", "jpg"
and "jpeg" can be used to export the object to a temp file of the relevant for-
mat and open it, if a regular path is used the format will be guessed from the
extension.
Details
This evaluates the file argument of source in the global environment. When this fails, as it might
with constructs like for (file in files) source(file), the unevaluated argument is printed in-
stead between backticks. Since this messes up the relationships in the graph, a warning is thus
issued. In a case like source(file.path(my_dir, "foo.R")), defining my_dir will be enough to
solve the issue. In the latter case, if smart is TRUE, the function will check in all the paths in scope
if any script is named "foo.R" and will consider it if a single fitting candidate is found.
Value
flow_view_source_calls() returns a "flow_diagram" object by default, and the output path
invisibly if out is not NULL (called for side effects). flow_run() returns the output of the wrapped
call.
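A usage sketch (an added illustration, not part of the original manual), assuming a folder "scripts/"
whose files call source() on one another:
flow_view_source_calls(paths = "scripts", recursive = TRUE, basename = TRUE)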
flow_view_uses Show graph of callers of a function
Description
Experimental function that displays for a given object or function all functions that call it directly
or indirectly.
Usage
flow_view_uses(x, pkg = NULL, out = NULL)
Arguments
x An object
pkg A package or environment to fetch callers from, by default fun’s environment
out a path to save the diagram to. Special values "html", "htm", "png", "pdf", "jpg"
and "jpeg" can be used to export the object to a temp file of the relevant for-
mat and open it, if a regular path is used the format will be guessed from the
extension.
Details
The function is not very robust yet, but already useful for many use cases.
Value
flow_view_uses() returns a "flow_diagram" object by default, and the output path invisibly if
out is not NULL (called for side effects).
Examples
flow_view_uses(flow_run)
flow_view_vars Draw the dependencies of variables in a function
Description
[Experimental]
This draws the dependencies between variables. This function is useful to detect dead code and
variable clusters. By default a variable is shown again each time it is overwritten or modified;
this can be changed by setting expand to FALSE.
Usage
flow_view_vars(
x,
expand = TRUE,
refactor = c("refactored", "original"),
out = NULL
)
Arguments
x The function, script or expression to draw
expand A boolean, if FALSE a variable name is only shown once, else (the default) it’s
repeated and suffixed with a number of *
refactor If using ’refactor’ package, whether to consider original or refactored code
out a path to save the diagram to. Special values "html", "htm", "png", "pdf", "jpg"
and "jpeg" can be used to export the object to a temp file of the relevant for-
mat and open it, if a regular path is used the format will be guessed from the
extension.
Details
Colors and lines are to be understood as follows:
• The function is blue
• The arguments are green
• The variables starting as constants are yellow
• The dead code or pure side effect branches are orange and dashed
• dashed lines represent how variables are indirectly impacted by control flow conditions; for
instance the expression if(z == 1) x <- y would give you a full arrow from y to x and a dashed
arrow from z to x
expand = TRUE gives a sense of the chronology, and keep separate the unrelated uses of temp vari-
ables. expand = FALSE is more compact and shows you directly what variables might impact a given
variable, and what variables it impacts.
This function will work best if the function doesn’t draw from or assign to other environments and
doesn’t use assign() or attach(). The output might be polluted by variable names found in some
lazily evaluated function arguments. We ignore variable names found in calls to quote() and ~ as
well as nested function definitions, but complete robustness is probably impossible.
The diagram assumes that for / while / repeat loops were run at least once; if a value is modified in
a branch of an if call (or both branches) and expand is TRUE, the modified variable(s) will point to
a new one at the end of the if call.
Value
flow_view_vars() returns a "flow_diagram" object by default, and the output path invisibly if out is
not NULL (called for side effects).
Examples
flow_view_vars(ave)
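The compact view (an added illustration, not part of the original manual) shows each variable only
once:
flow_view_vars(ave, expand = FALSE)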
JAX documentation
JAX: High-Performance Array Computing
===
Contents
---
Getting Started
* [Installing JAX](index.html#document-installation)
* [JAX Quickstart](index.html#document-notebooks/quickstart)
* [How to Think in JAX](index.html#document-notebooks/thinking_in_jax)
* [🔪 JAX - The Sharp Bits 🔪](index.html#document-notebooks/Common_Gotchas_in_JAX)
* [JAX Frequently Asked Questions (FAQ)](index.html#document-faq)
* [Tutorial: JAX 101](index.html#document-jax-101/index)
Further Resources
* [User Guides](index.html#document-user_guides)
+ [Profiling JAX programs](index.html#document-profiling)
+ [Device Memory Profiling](index.html#document-device_memory_profiling)
+ [Runtime value debugging in JAX](index.html#document-debugging/index)
+ [Understanding Jaxprs](index.html#document-jaxpr)
+ [External Callbacks in JAX](index.html#document-notebooks/external_callbacks)
+ [Type promotion semantics](index.html#document-type_promotion)
+ [Pytrees](index.html#document-pytrees)
+ [Ahead-of-time lowering and compilation](index.html#document-aot)
+ [JAX Errors](index.html#document-errors)
+ [Transfer guard](index.html#document-transfer_guard)
+ [Pallas: a JAX kernel language](index.html#document-pallas/index)
* [Advanced Tutorials](index.html#document-advanced_guide)
+ [Training a Simple Neural Network, with tensorflow/datasets Data Loading](index.html#document-notebooks/neural_network_with_tfds_data)
+ [Training a Simple Neural Network, with PyTorch Data Loading](index.html#document-notebooks/Neural_Network_and_Data_Loading)
+ [Autobatching for Bayesian Inference](index.html#document-notebooks/vmapped_log_probs)
+ [Using JAX in multi-host and multi-process environments](index.html#document-multi_process)
+ [Distributed arrays and automatic parallelization](index.html#document-notebooks/Distributed_arrays_and_automatic_parallelization)
+ [Named axes and easy-to-revise parallelism with `xmap`](index.html#document-notebooks/xmap_tutorial)
+ [The Autodiff Cookbook](index.html#document-notebooks/autodiff_cookbook)
+ [Custom derivative rules for JAX-transformable Python functions](index.html#document-notebooks/Custom_derivative_rules_for_Python_code)
+ [Control autodiff’s saved values with `jax.checkpoint` (aka `jax.remat`)](index.html#document-notebooks/autodiff_remat)
+ [How JAX primitives work](index.html#document-notebooks/How_JAX_primitives_work)
+ [Writing custom Jaxpr interpreters in JAX](index.html#document-notebooks/Writing_custom_interpreters_in_Jax)
+ [Custom operations for GPUs with C++ and CUDA](index.html#document-Custom_Operation_for_GPUs)
+ [Generalized Convolutions in JAX](index.html#document-notebooks/convolutions)
* [Developer Documentation](index.html#document-contributor_guide)
+ [Contributing to JAX](index.html#document-contributing)
+ [Building from source](index.html#document-developer)
+ [Internal APIs](index.html#document-jax_internal_api)
+ [Autodidax: JAX core from scratch](index.html#document-autodidax)
+ [JAX Enhancement Proposals (JEPs)](index.html#document-jep/index)
* [Building on JAX](index.html#document-building_on_jax)
+ [Gradient Computation](index.html#gradient-computation)
+ [Computational Speedup on a Single Core across Multiple Devices](index.html#computational-speedup-on-a-single-core-across-multiple-devices)
+ [Single and Multi Computer Speedup Using Parallelization](index.html#single-and-multi-computer-speedup-using-parallelization)
+ [Incorporating JAX code into your, or your users, workflows](index.html#incorporating-jax-code-into-your-or-your-users-workflows)
* [Notes](index.html#document-notes)
+ [API compatibility](index.html#document-api_compatibility)
+ [Python and NumPy version support policy](index.html#document-deprecation)
+ [jax.Array migration](index.html#document-jax_array_migration)
+ [Asynchronous dispatch](index.html#document-async_dispatch)
+ [Concurrency](index.html#document-concurrency)
+ [GPU memory allocation](index.html#document-gpu_memory_allocation)
+ [Rank promotion warning](index.html#document-rank_promotion_warning)
* [Public API: jax package](index.html#document-jax)
+ [Subpackages](index.html#subpackages)
+ [Configuration](index.html#configuration)
+ [Just-in-time compilation (`jit`)](index.html#just-in-time-compilation-jit)
+ [Automatic differentiation](index.html#automatic-differentiation)
+ [jax.Array (`jax.Array`)](index.html#jax-array-jax-array)
+ [Vectorization (`vmap`)](index.html#vectorization-vmap)
+ [Parallelization (`pmap`)](index.html#parallelization-pmap)
+ [Callbacks](index.html#callbacks)
+ [Miscellaneous](index.html#miscellaneous)
* [Change log](index.html#document-changelog)
* [jax 0.4.20](index.html#jax-0-4-20)
* [jaxlib 0.4.20](index.html#jaxlib-0-4-20)
* [jax 0.4.19 (Oct 19, 2023)](index.html#jax-0-4-19-oct-19-2023)
* [jaxlib 0.4.19 (Oct 19, 2023)](index.html#jaxlib-0-4-19-oct-19-2023)
* [jax 0.4.18 (Oct 6, 2023)](index.html#jax-0-4-18-oct-6-2023)
* [jaxlib 0.4.18 (Oct 6, 2023)](index.html#jaxlib-0-4-18-oct-6-2023)
* [jax 0.4.17 (Oct 3, 2023)](index.html#jax-0-4-17-oct-3-2023)
* [jaxlib 0.4.17 (Oct 3, 2023)](index.html#jaxlib-0-4-17-oct-3-2023)
* [JAX Glossary of Terms](index.html#document-glossary)
JAX: High-Performance Array Computing
===
JAX is [Autograd](https://github.com/hips/autograd) and [XLA](https://www.tensorflow.org/xla), brought together for high-performance numerical computing.
* Familiar API: JAX provides a familiar NumPy-style API for ease of adoption by researchers and engineers.
* Transformations: JAX includes composable function transformations for compilation, batching, automatic differentiation, and parallelization.
* Run Anywhere: The same code executes on multiple backends, including CPU, GPU, & TPU.
Installing JAX
---
JAX is written in pure Python, but it depends on XLA, which needs to be installed as the `jaxlib` package. Use the following instructions to install a binary package with `pip` or `conda`, to use a
[Docker container](#docker-containers-nvidia-gpu), or to [build JAX from source](index.html#id1).
### Supported platforms
| | Linux x86_64 | Linux aarch64 | Mac x86_64 | Mac ARM | Windows x86_64 | Windows WSL2 x86_64 |
| --- | --- | --- | --- | --- | --- | --- |
| CPU | [yes](#cpu) | [yes](#cpu) | [yes](#cpu) | [yes](#cpu) | [yes](#cpu) | [yes](#cpu) |
| NVIDIA GPU | [yes](#nvidia-gpu) | [yes](#nvidia-gpu) | no | n/a | no | [experimental](#nvidia-gpu) |
| Google TPU | [yes](#google-tpu) | n/a | n/a | n/a | n/a | n/a |
| AMD GPU | [experimental](#amd-gpu) | no | no | n/a | no | no |
| Apple GPU | n/a | no | [experimental](#apple-gpu) | [experimental](#apple-gpu) | n/a | n/a |
We support installing or building `jaxlib` on Linux (Ubuntu 20.04 or later) and macOS (10.12 or later) platforms. There is also *experimental* native Windows support.
Windows users can use JAX on CPU and GPU via the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about), or alternatively they can use the native Windows CPU-only support.
### CPU
#### pip installation: CPU
We currently release `jaxlib` wheels for the following operating systems and architectures:
* Linux, x86-64
* Mac, Intel
* Mac, ARM
* Windows, x86-64 (*experimental*)
To install a CPU-only version of JAX, which might be useful for doing local development on a laptop, you can run
```
pip install --upgrade pip
pip install --upgrade "jax[cpu]"
```
On Windows, you may also need to install the
[Microsoft Visual Studio 2019 Redistributable](https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170#visual-studio-2015-2017-2019-and-2022)
if it is not already installed on your machine.
Other operating systems and architectures require building from source. Trying to pip install on other operating systems and architectures may lead to `jaxlib`
not being installed alongside `jax`, although `jax` may successfully install
(but fail at runtime).
### NVIDIA GPU
JAX supports NVIDIA GPUs that have SM version 5.2 (Maxwell) or newer.
Note that Kepler-series GPUs are no longer supported by JAX since NVIDIA has dropped support for Kepler GPUs in its software.
You must first install the NVIDIA driver. We recommend installing the newest driver available from NVIDIA, but the driver must be version >= 525.60.13 for CUDA 12 and >= 450.80.02 for CUDA 11 on Linux.
If you need to use a newer CUDA toolkit with an older driver, for example on a cluster where you cannot update the NVIDIA driver easily, you may be able to use the
[CUDA forward compatibility packages](https://docs.nvidia.com/deploy/cuda-compatibility/)
that NVIDIA provides for this purpose.
#### pip installation: GPU (CUDA, installed via pip, easier)
There are two ways to install JAX with NVIDIA GPU support: using CUDA and CUDNN installed from pip wheels, and using a self-installed CUDA/CUDNN. We strongly recommend installing CUDA and CUDNN using the pip wheels, since it is much easier! This method is only supported on x86_64, because NVIDIA has not released aarch64 CUDA pip packages.
```
pip install --upgrade pip
# CUDA 12 installation
# Note: wheels only available on linux.
pip install --upgrade "jax[cuda12_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
# CUDA 11 installation
# Note: wheels only available on linux.
pip install --upgrade "jax[cuda11_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```
If JAX detects the wrong version of the CUDA libraries, there are several things to check:
* make sure that `LD_LIBRARY_PATH` is not set, since `LD_LIBRARY_PATH` can override the CUDA libraries.
* make sure that the CUDA libraries installed are those requested by JAX.
Rerunning the installation command above should work.
#### pip installation: GPU (CUDA, installed locally, harder)
If you prefer to use a preinstalled copy of CUDA, you must first install [CUDA](https://developer.nvidia.com/cuda-downloads) and
[CuDNN](https://developer.nvidia.com/CUDNN).
JAX provides pre-built CUDA-compatible wheels for **Linux x86_64 only**. Other combinations of operating system and architecture are possible, but require
[building from source](index.html#id1).
You should use an NVIDIA driver version that is at least as new as your
[CUDA toolkit’s corresponding driver version](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions).
If you need to use a newer CUDA toolkit with an older driver, for example on a cluster where you cannot update the NVIDIA driver easily, you may be able to use the
[CUDA forward compatibility packages](https://docs.nvidia.com/deploy/cuda-compatibility/)
that NVIDIA provides for this purpose.
JAX currently ships two CUDA wheel variants:
* CUDA 12.2, cuDNN 8.9, NCCL 2.16
* CUDA 11.8, cuDNN 8.6, NCCL 2.16
You may use a JAX wheel provided the major version of your CUDA, cuDNN, and NCCL installations match, and the minor versions are the same or newer.
JAX checks the versions of your libraries, and will report an error if they are not sufficiently new.
NCCL is an optional dependency, required only if you are performing multi-GPU computations.
To install, run
```
pip install --upgrade pip
# Installs the wheel compatible with CUDA 12 and cuDNN 8.9 or newer.
# Note: wheels only available on linux.
pip install --upgrade "jax[cuda12_local]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
# Installs the wheel compatible with CUDA 11 and cuDNN 8.6 or newer.
# Note: wheels only available on linux.
pip install --upgrade "jax[cuda11_local]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```
**These `pip` installations do not work with Windows, and may fail silently; see
[above](#installing-jax).**
You can find your CUDA version with the command:
```
nvcc --version
```
JAX uses `LD_LIBRARY_PATH` to find CUDA libraries and `PATH` to find binaries
(`ptxas`, `nvlink`). Please make sure that these paths point to the correct CUDA installation.
Please let us know on [the issue tracker](https://github.com/google/jax/issues)
if you run into any errors or problems with the prebuilt wheels.
#### Docker containers: NVIDIA GPU
NVIDIA provides the [JAX Toolbox](https://github.com/NVIDIA/JAX-Toolbox) containers, which are bleeding edge containers containing nightly releases of jax and some models/frameworks.
### Google TPU
#### pip installation: Google Cloud TPU
JAX provides pre-built wheels for
[Google Cloud TPU](https://cloud.google.com/tpu/docs/users-guide-tpu-vm).
To install JAX along with appropriate versions of `jaxlib` and `libtpu`, you can run the following in your cloud TPU VM:
```
pip install jax[tpu] -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
```
For interactive notebook users: Colab TPUs no longer support JAX as of JAX version 0.4. However, for an interactive TPU notebook in the cloud, you can use [Kaggle TPU notebooks](https://www.kaggle.com/docs/tpu), which fully support JAX.
### Apple GPU
#### pip installation: Apple GPUs
Apple provides an experimental Metal plugin for Apple GPU hardware. For details,
see
[Apple’s JAX on Metal documentation](https://developer.apple.com/metal/jax/).
There are several caveats with the Metal plugin:
* the Metal plugin is new and experimental and has a number of
[known issues](https://github.com/google/jax/issues?q=is%3Aissue+is%3Aopen+label%3A%22Apple+GPU+%28Metal%29+plugin%22).
Please report any issues on the JAX issue tracker.
* the Metal plugin currently requires very specific versions of `jax` and
`jaxlib`. This restriction will be relaxed over time as the plugin API matures.
### AMD GPU
JAX has experimental ROCM support. There are two ways to install JAX:
* use [AMD’s docker container](https://hub.docker.com/r/rocm/jax), or
* [build from source](index.html#additional-notes-for-building-a-rocm-jaxlib-for-amd-gpus).
### Conda
#### Conda installation
There is a community-supported Conda build of `jax`. To install using `conda`,
simply run
```
conda install jax -c conda-forge
```
To install on a machine with an NVIDIA GPU, run
```
conda install jaxlib=*=*cuda* jax cuda-nvcc -c conda-forge -c nvidia
```
Note the `cudatoolkit` distributed by `conda-forge` is missing `ptxas`, which JAX requires. You must therefore either install the `cuda-nvcc` package from the `nvidia` channel, or install CUDA on your machine separately so that `ptxas`
is in your path. The channel order above is important (`conda-forge` before
`nvidia`).
If you would like to override which release of CUDA is used by JAX, or to install the CUDA build on a machine without GPUs, follow the instructions in the
[Tips & tricks](https://conda-forge.org/docs/user/tipsandtricks.html#installing-cuda-enabled-packages-like-tensorflow-and-pytorch)
section of the `conda-forge` website.
See the `conda-forge`
[jaxlib](https://github.com/conda-forge/jaxlib-feedstock#installing-jaxlib) and
[jax](https://github.com/conda-forge/jax-feedstock#installing-jax) repositories for more details.
### Building JAX from source[#](#building-jax-from-source)
See [Building JAX from source](index.html#id1).
JAX Quickstart[#](#jax-quickstart)
---
**JAX is NumPy on the CPU, GPU, and TPU, with great automatic differentiation for high-performance machine learning research.**
With its updated version of [Autograd](https://github.com/hips/autograd), JAX can automatically differentiate native Python and NumPy code. It can differentiate through a large subset of Python’s features, including loops, ifs,
recursion, and closures, and it can even take derivatives of derivatives of derivatives. It supports reverse-mode as well as forward-mode differentiation, and the two can be composed arbitrarily to any order.
What’s new is that JAX uses
[XLA](https://www.tensorflow.org/xla)
to compile and run your NumPy code on accelerators, like GPUs and TPUs.
Compilation happens under the hood by default, with library calls getting just-in-time compiled and executed. But JAX even lets you just-in-time compile your own Python functions into XLA-optimized kernels using a one-function API.
Compilation and automatic differentiation can be composed arbitrarily, so you can express sophisticated algorithms and get maximal performance without having to leave Python.
```
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
```
### Multiplying Matrices[#](#multiplying-matrices)
We’ll be generating random data in the following examples. One big difference between NumPy and JAX is how you generate random numbers. For more details, see [Common Gotchas in JAX](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#%F0%9F%94%AA-Random-Numbers).
```
key = random.PRNGKey(0)
x = random.normal(key, (10,))
print(x)
```
```
[-0.3721109 0.26423115 -0.18252768 -0.7368197 -0.44030377 -0.1521442
-0.67135346 -0.5908641 0.73168886 0.5673026 ]
```
Let’s dive right in and multiply two big matrices.
```
size = 3000
x = random.normal(key, (size, size), dtype=jnp.float32)
%timeit jnp.dot(x, x.T).block_until_ready() # runs on the GPU
```
```
13.5 ms ± 1.89 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
We added that `block_until_ready` because JAX uses asynchronous execution by default (see [Asynchronous dispatch](index.html#async-dispatch)).
JAX NumPy functions work on regular NumPy arrays.
```
import numpy as np
x = np.random.normal(size=(size, size)).astype(np.float32)
%timeit jnp.dot(x, x.T).block_until_ready()
```
```
80 ms ± 30.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
That’s slower because it has to transfer data to the GPU every time. You can ensure that an NDArray is backed by device memory using [`device_put()`](index.html#jax.device_put).
```
from jax import device_put
x = np.random.normal(size=(size, size)).astype(np.float32)
x = device_put(x)
%timeit jnp.dot(x, x.T).block_until_ready()
```
```
15.8 ms ± 113 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
The output of [`device_put()`](index.html#jax.device_put) still acts like an NDArray, but it only copies values back to the CPU when they’re needed for printing, plotting, saving to disk, branching, etc. The behavior of [`device_put()`](index.html#jax.device_put) is equivalent to the function `jit(lambda x: x)`, but it’s faster.
If you have a GPU (or TPU!) these calls run on the accelerator and have the potential to be much faster than on CPU.
See [Is JAX faster than NumPy?](index.html#faq-jax-vs-numpy) for a fuller comparison of the performance characteristics of NumPy and JAX.
JAX is much more than just a GPU-backed NumPy. It also comes with a few program transformations that are useful when writing numerical code. For now, there are three main ones:
* [`jit()`](index.html#jax.jit), for speeding up your code
* [`grad()`](index.html#jax.grad), for taking derivatives
* [`vmap()`](index.html#jax.vmap), for automatic vectorization or batching.
Let’s go over these, one-by-one. We’ll also end up composing these in interesting ways.
### Using [`jit()`](index.html#jax.jit) to speed up functions[#](#using-jit-to-speed-up-functions)
JAX runs transparently on the GPU or TPU (falling back to CPU if you don’t have one). However, in the above example, JAX is dispatching kernels to the GPU one operation at a time. If we have a sequence of operations, we can use the `@jit` decorator to compile multiple operations together using [XLA](https://www.tensorflow.org/xla). Let’s try that.
```
def selu(x, alpha=1.67, lmbda=1.05):
return lmbda * jnp.where(x > 0, x, alpha * jnp.exp(x) - alpha)
x = random.normal(key, (1000000,))
%timeit selu(x).block_until_ready()
```
```
1.07 ms ± 261 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
We can speed it up with `@jit`, which will jit-compile the first time `selu` is called and will be cached thereafter.
```
selu_jit = jit(selu)
%timeit selu_jit(x).block_until_ready()
```
```
127 µs ± 1.43 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
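As an aside (not part of the original benchmark), it can be instructive to time the first and second calls separately, since tracing and compilation happen on the first call. A minimal sketch, assuming a freshly wrapped `jit(selu)` starts with an empty cache:
```
import time

selu_jit_fresh = jit(selu)  # a fresh jit wrapper, assumed to start uncompiled

t0 = time.perf_counter()
selu_jit_fresh(x).block_until_ready()  # first call: traces and compiles
t1 = time.perf_counter()
selu_jit_fresh(x).block_until_ready()  # second call: reuses the cached executable
t2 = time.perf_counter()
print(f"first: {t1 - t0:.4f} s, second: {t2 - t1:.6f} s")
```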
### Taking derivatives with [`grad()`](index.html#jax.grad)[#](#taking-derivatives-with-grad)
In addition to evaluating numerical functions, we also want to transform them. One transformation is [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation). In JAX, just like in [Autograd](https://github.com/HIPS/autograd), you can compute gradients with the [`grad()`](index.html#jax.grad) function.
```
def sum_logistic(x):
return jnp.sum(1.0 / (1.0 + jnp.exp(-x)))
x_small = jnp.arange(3.)
derivative_fn = grad(sum_logistic)
print(derivative_fn(x_small))
```
```
[0.25 0.19661194 0.10499357]
```
Let’s verify with finite differences that our result is correct.
```
def first_finite_differences(f, x):
eps = 1e-3
return jnp.array([(f(x + eps * v) - f(x - eps * v)) / (2 * eps)
for v in jnp.eye(len(x))])
print(first_finite_differences(sum_logistic, x_small))
```
```
[0.24998187 0.1965761 0.10502338]
```
Taking derivatives is as easy as calling [`grad()`](index.html#jax.grad). [`grad()`](index.html#jax.grad) and [`jit()`](index.html#jax.jit) compose and can be mixed arbitrarily. In the above example we jitted `sum_logistic` and then took its derivative. We can go further:
```
print(grad(jit(grad(jit(grad(sum_logistic)))))(1.0))
```
```
-0.0353256
```
For more advanced autodiff, you can use [`jax.vjp()`](index.html#jax.vjp) for reverse-mode vector-Jacobian products and [`jax.jvp()`](index.html#jax.jvp) for forward-mode Jacobian-vector products. The two can be composed arbitrarily with one another, and with other JAX transformations. Here’s one way to compose them to make a function that efficiently computes full Hessian matrices:
```
from jax import jacfwd, jacrev

def hessian(fun):
return jit(jacfwd(jacrev(fun)))
```
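As an illustrative check (not from the original text), this `hessian` helper can be applied to the `sum_logistic` function from earlier. Because the logistic is applied elementwise, its Hessian is diagonal:
```
print(hessian(sum_logistic)(x_small))  # a (3, 3) matrix, nonzero only on the diagonal
```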
### Auto-vectorization with [`vmap()`](index.html#jax.vmap)[#](#auto-vectorization-with-vmap)
JAX has one more transformation in its API that you might find useful: [`vmap()`](index.html#jax.vmap), the vectorizing map. It has the familiar semantics of mapping a function along array axes, but instead of keeping the loop on the outside, it pushes the loop down into a function’s primitive operations for better performance. When composed with [`jit()`](index.html#jax.jit), it can be just as fast as adding the batch dimensions by hand.
We’re going to work with a simple example, and promote matrix-vector products into matrix-matrix products using [`vmap()`](index.html#jax.vmap). Although this is easy to do by hand in this specific case, the same technique can apply to more complicated functions.
```
mat = random.normal(key, (150, 100))
batched_x = random.normal(key, (10, 100))
def apply_matrix(v):
return jnp.dot(mat, v)
```
Given a function such as `apply_matrix`, we can loop over a batch dimension in Python, but usually the performance of doing so is poor.
```
def naively_batched_apply_matrix(v_batched):
return jnp.stack([apply_matrix(v) for v in v_batched])
print('Naively batched')
%timeit naively_batched_apply_matrix(batched_x).block_until_ready()
```
```
Naively batched
3.12 ms ± 176 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
We know how to batch this operation manually. In this case, `jnp.dot` handles extra batch dimensions transparently.
```
@jit
def batched_apply_matrix(v_batched):
return jnp.dot(v_batched, mat.T)
print('Manually batched')
%timeit batched_apply_matrix(batched_x).block_until_ready()
```
```
Manually batched
45.6 µs ± 5.03 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
However, suppose we had a more complicated function without batching support. We can use [`vmap()`](index.html#jax.vmap) to add batching support automatically.
```
@jit
def vmap_batched_apply_matrix(v_batched):
return vmap(apply_matrix)(v_batched)
print('Auto-vectorized with vmap')
%timeit vmap_batched_apply_matrix(batched_x).block_until_ready()
```
```
Auto-vectorized with vmap
48.3 µs ± 1.06 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Of course, [`vmap()`](index.html#jax.vmap) can be arbitrarily composed with [`jit()`](index.html#jax.jit), [`grad()`](index.html#jax.grad), and any other JAX transformation.
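One composition that comes up often in practice is `vmap(grad(f))` for per-example gradients. A minimal sketch (the loss function and names here are illustrative, not from the text above):
```
def sq_loss(w, x):
    return jnp.sum((x * w) ** 2)

w = jnp.ones(3)
xs = random.normal(key, (4, 3))  # a batch of 4 examples
per_example_grads = vmap(grad(sq_loss), in_axes=(None, 0))(w, xs)
print(per_example_grads.shape)  # (4, 3): one gradient per example
```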
This is just a taste of what JAX can do. We’re really excited to see what you do with it!
How to Think in JAX[#](#how-to-think-in-jax)
---
JAX provides a simple and powerful API for writing accelerated numerical code, but working effectively in JAX sometimes requires extra consideration. This document is meant to help build a ground-up understanding of how JAX operates, so that you can use it more effectively.
### JAX vs. NumPy[#](#jax-vs-numpy)
**Key Concepts:**
* JAX provides a NumPy-inspired interface for convenience.
* Through duck-typing, JAX arrays can often be used as drop-in replacements of NumPy arrays.
* Unlike NumPy arrays, JAX arrays are always immutable.
NumPy provides a well-known, powerful API for working with numerical data. For convenience, JAX provides `jax.numpy` which closely mirrors the numpy API and provides easy entry into JAX. Almost anything that can be done with `numpy` can be done with `jax.numpy`:
```
import matplotlib.pyplot as plt
import numpy as np

x_np = np.linspace(0, 10, 1000)
y_np = 2 * np.sin(x_np) * np.cos(x_np)
plt.plot(x_np, y_np);
```
```
import jax.numpy as jnp
x_jnp = jnp.linspace(0, 10, 1000)
y_jnp = 2 * jnp.sin(x_jnp) * jnp.cos(x_jnp)
plt.plot(x_jnp, y_jnp);
```
The code blocks are identical aside from replacing `np` with `jnp`, and the results are the same. As we can see, JAX arrays can often be used directly in place of NumPy arrays for things like plotting.
The arrays themselves are implemented as different Python types:
```
type(x_np)
```
```
numpy.ndarray
```
```
type(x_jnp)
```
```
jaxlib.xla_extension.ArrayImpl
```
Python’s [duck-typing](https://en.wikipedia.org/wiki/Duck_typing) allows JAX arrays and NumPy arrays to be used interchangeably in many places.
However, there is one important difference between JAX and NumPy arrays: JAX arrays are immutable, meaning that once created their contents cannot be changed.
Here is an example of mutating an array in NumPy:
```
# NumPy: mutable arrays
x = np.arange(10)
x[0] = 10
print(x)
```
```
[10 1 2 3 4 5 6 7 8 9]
```
The equivalent in JAX results in an error, as JAX arrays are immutable:
```
%xmode minimal
```
```
Exception reporting mode: Minimal
```
```
# JAX: immutable arrays
x = jnp.arange(10)
x[0] = 10
```
```
TypeError: '<class 'jaxlib.xla_extension.ArrayImpl'>' object does not support item assignment. JAX arrays are immutable. Instead of ``x[idx] = y``, use ``x = x.at[idx].set(y)`` or another .at[] method: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html
```
For updating individual elements, JAX provides an [indexed update syntax](https://jax.readthedocs.io/en/latest/jax.ops.html#indexed-update-operators) that returns an updated copy:
```
y = x.at[0].set(10)
print(x)
print(y)
```
```
[0 1 2 3 4 5 6 7 8 9]
[10 1 2 3 4 5 6 7 8 9]
```
### NumPy, lax & XLA: JAX API layering[#](#numpy-lax-xla-jax-api-layering)
**Key Concepts:**
* `jax.numpy` is a high-level wrapper that provides a familiar interface.
* `jax.lax` is a lower-level API that is stricter and often more powerful.
* All JAX operations are implemented in terms of operations in [XLA](https://www.tensorflow.org/xla/) – the Accelerated Linear Algebra compiler.
If you look at the source of `jax.numpy`, you’ll see that all the operations are eventually expressed in terms of functions defined in `jax.lax`. You can think of `jax.lax` as a stricter, but often more powerful, API for working with multi-dimensional arrays.
For example, while `jax.numpy` will implicitly promote arguments to allow operations between mixed data types, `jax.lax` will not:
```
import jax.numpy as jnp
jnp.add(1, 1.0)  # jax.numpy API implicitly promotes mixed types.
```
```
Array(2., dtype=float32, weak_type=True)
```
```
from jax import lax
lax.add(1, 1.0)  # jax.lax API requires explicit type promotion.
```
```
MLIRError: Verification failed:
error: "jit(add)/jit(main)/add"("/tmp/ipykernel_1544/3435837498.py":2:0): 'stablehlo.add' op requires compatible types for all operands and results
note: "jit(add)/jit(main)/add"("/tmp/ipykernel_1544/3435837498.py":2:0): see current operation: %0 = "stablehlo.add"(%arg0, %arg1) : (tensor<i32>, tensor<f32>) -> tensor<i32The above exception was the direct cause of the following exception:
ValueError: Cannot lower jaxpr with verifier errors:
'stablehlo.add' op requires compatible types for all operands and results
at loc("jit(add)/jit(main)/add"("/tmp/ipykernel_1544/3435837498.py":2:0))
see current operation: %0 = "stablehlo.add"(%arg0, %arg1) : (tensor<i32>, tensor<f32>) -> tensor<i32>
at loc("jit(add)/jit(main)/add"("/tmp/ipykernel_1544/3435837498.py":2:0))
Module string:
#loc = loc(unknown)
"builtin.module"() <{sym_name = "jit_add"}> ({
"func.func"() <{arg_attrs = [{mhlo.sharding = "{replicated}"}, {mhlo.sharding = "{replicated}"}], function_type = (tensor<i32>, tensor<f32>) -> tensor<i32>, res_attrs = [{}], sym_name = "main", sym_visibility = "public"}> ({
^bb0(%arg0: tensor<i32> loc(unknown), %arg1: tensor<f32> loc(unknown)):
%0 = "stablehlo.add"(%arg0, %arg1) : (tensor<i32>, tensor<f32>) -> tensor<i32> loc(#loc2)
"func.return"(%0) : (tensor<i32>) -> () loc(#loc)
}) : () -> () loc(#loc)
}) {mhlo.num_partitions = 1 : i32, mhlo.num_replicas = 1 : i32} : () -> () loc(#loc)
#loc1 = loc("/tmp/ipykernel_1544/3435837498.py":2:0)
#loc2 = loc("jit(add)/jit(main)/add"(#loc1))
```
If using `jax.lax` directly, you’ll have to do type promotion explicitly in such cases:
```
lax.add(jnp.float32(1), 1.0)
```
```
Array(2., dtype=float32)
```
Along with this strictness, `jax.lax` also provides efficient APIs for some more general operations than are supported by NumPy.
For example, consider a 1D convolution, which can be expressed in NumPy this way:
```
x = jnp.array([1, 2, 1])
y = jnp.ones(10)
jnp.convolve(x, y)
```
```
Array([1., 3., 4., 4., 4., 4., 4., 4., 4., 4., 3., 1.], dtype=float32)
```
Under the hood, this NumPy operation is translated to a much more general convolution implemented by [`lax.conv_general_dilated`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.conv_general_dilated.html):
```
from jax import lax

result = lax.conv_general_dilated(
x.reshape(1, 1, 3).astype(float), # note: explicit promotion
y.reshape(1, 1, 10),
window_strides=(1,),
    padding=[(len(y) - 1, len(y) - 1)])  # equivalent of padding='full' in NumPy
result[0, 0]
```
```
Array([1., 3., 4., 4., 4., 4., 4., 4., 4., 4., 3., 1.], dtype=float32)
```
This is a batched convolution operation designed to be efficient for the types of convolutions often used in deep neural nets. It requires much more boilerplate, but is far more flexible and scalable than the convolution provided by NumPy (See [Convolutions in JAX](https://jax.readthedocs.io/en/latest/notebooks/convolutions.html) for more detail on JAX convolutions).
At their heart, all `jax.lax` operations are Python wrappers for operations in XLA; here, for example, the convolution implementation is provided by [XLA:ConvWithGeneralPadding](https://www.tensorflow.org/xla/operation_semantics#convwithgeneralpadding_convolution).
Every JAX operation is eventually expressed in terms of these fundamental XLA operations, which is what enables just-in-time (JIT) compilation.
### To JIT or not to JIT[#](#to-jit-or-not-to-jit)
**Key Concepts:**
* By default JAX executes operations one at a time, in sequence.
* Using a just-in-time (JIT) compilation decorator, sequences of operations can be optimized together and run at once.
* Not all JAX code can be JIT compiled, as it requires array shapes to be static & known at compile time.
The fact that all JAX operations are expressed in terms of XLA allows JAX to use the XLA compiler to execute blocks of code very efficiently.
For example, consider this function that normalizes the rows of a 2D matrix, expressed in terms of `jax.numpy` operations:
```
import jax.numpy as jnp
def norm(X):
X = X - X.mean(0)
return X / X.std(0)
```
A just-in-time compiled version of the function can be created using the `jax.jit` transform:
```
from jax import jit

norm_compiled = jit(norm)
```
This function returns the same results as the original, up to standard floating-point accuracy:
```
np.random.seed(1701)
X = jnp.array(np.random.rand(10000, 10))
np.allclose(norm(X), norm_compiled(X), atol=1E-6)
```
```
True
```
But due to the compilation (which includes fusing of operations, avoidance of allocating temporary arrays, and a host of other tricks), execution times can be orders of magnitude faster in the JIT-compiled case (note the use of `block_until_ready()` to account for JAX’s [asynchronous dispatch](https://jax.readthedocs.io/en/latest/async_dispatch.html)):
```
%timeit norm(X).block_until_ready()
%timeit norm_compiled(X).block_until_ready()
```
```
339 µs ± 2.73 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
290 µs ± 1.69 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
```
That said, `jax.jit` does have limitations: in particular, it requires all arrays to have static shapes. That means that some JAX operations are incompatible with JIT compilation.
For example, this operation can be executed in op-by-op mode:
```
def get_negatives(x):
return x[x < 0]
x = jnp.array(np.random.randn(10))
get_negatives(x)
```
```
Array([-0.10570311, -0.59403396, -0.8680282 , -0.23489487], dtype=float32)
```
But it returns an error if you attempt to execute it in jit mode:
```
jit(get_negatives)(x)
```
```
NonConcreteBooleanIndexError: Array boolean indices must be concrete; got ShapedArray(bool[10])
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.NonConcreteBooleanIndexError
```
This is because the function generates an array whose shape is not known at compile time: the size of the output depends on the values of the input array, and so it is not compatible with JIT.
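One common workaround, sketched below, is to rephrase the computation in terms of fixed-shape operations, for example a mask-and-reduce instead of boolean indexing (the same trick reappears in the Dynamic Shapes section later):
```
@jit
def sum_negatives(x):
    # fixed output shape: mask and reduce instead of boolean indexing
    return jnp.where(x < 0, x, 0).sum()

sum_negatives(x)  # works under jit, since the output shape is static
```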
### JIT mechanics: tracing and static variables[#](#jit-mechanics-tracing-and-static-variables)
**Key Concepts:**
* JIT and other JAX transforms work by *tracing* a function to determine its effect on inputs of a specific shape and type.
* Variables that you don’t want to be traced can be marked as *static*.
To use `jax.jit` effectively, it is useful to understand how it works. Let’s put a few `print()` statements within a JIT-compiled function and then call the function:
```
@jit
def f(x, y):
print("Running f():")
print(f" x = {x}")
print(f" y = {y}")
result = jnp.dot(x + 1, y + 1)
print(f" result = {result}")
return result
x = np.random.randn(3, 4)
y = np.random.randn(4)
f(x, y)
```
```
Running f():
x = Traced<ShapedArray(float32[3,4])>with<DynamicJaxprTrace(level=1/0)>
y = Traced<ShapedArray(float32[4])>with<DynamicJaxprTrace(level=1/0)>
result = Traced<ShapedArray(float32[3])>with<DynamicJaxprTrace(level=1/0)>
```
```
Array([0.25773212, 5.3623195 , 5.403243 ], dtype=float32)
```
Notice that the print statements execute, but rather than printing the data we passed to the function, they print *tracer* objects that stand in for it.
These tracer objects are what `jax.jit` uses to extract the sequence of operations specified by the function. Basic tracers are stand-ins that encode the **shape** and **dtype** of the arrays, but are agnostic to the values. This recorded sequence of computations can then be efficiently applied within XLA to new inputs with the same shape and dtype, without having to re-execute the Python code.
When we call the compiled function again on matching inputs, no re-compilation is required and nothing is printed because the result is computed in compiled XLA rather than in Python:
```
x2 = np.random.randn(3, 4)
y2 = np.random.randn(4)
f(x2, y2)
```
```
Array([1.4344584, 4.3004413, 7.9897013], dtype=float32)
```
The extracted sequence of operations is encoded in a JAX expression, or *jaxpr* for short. You can view the jaxpr using the `jax.make_jaxpr` transformation:
```
from jax import make_jaxpr
def f(x, y):
return jnp.dot(x + 1, y + 1)
make_jaxpr(f)(x, y)
```
```
{ lambda ; a:f32[3,4] b:f32[4]. let
c:f32[3,4] = add a 1.0
d:f32[4] = add b 1.0
e:f32[3] = dot_general[
dimension_numbers=(([1], [0]), ([], []))
preferred_element_type=float32
] c d
in (e,) }
```
Note one consequence of this: because JIT compilation is done *without* information on the content of the array, control flow statements in the function cannot depend on traced values. For example, this fails:
```
@jit
def f(x, neg):
return -x if neg else x
f(1, True)
```
```
TracerBoolConversionError: Attempted boolean conversion of traced array with shape bool[]..
The error occurred while tracing the function f at /tmp/ipykernel_1544/2422663986.py:1 for jit. This concrete value was not available in Python because it depends on the value of the argument neg.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.TracerBoolConversionError
```
If there are variables that you would not like to be traced, they can be marked as static for the purposes of JIT compilation:
```
from functools import partial
@partial(jit, static_argnums=(1,))
def f(x, neg):
return -x if neg else x
f(1, True)
```
```
Array(-1, dtype=int32, weak_type=True)
```
Note that calling a JIT-compiled function with a different static argument results in re-compilation, so the function still works as expected:
```
f(1, False)
```
```
Array(1, dtype=int32, weak_type=True)
```
Understanding which values and operations will be static and which will be traced is a key part of using `jax.jit` effectively.
### Static vs Traced Operations[#](#static-vs-traced-operations)
**Key Concepts:**
* Just as values can be either static or traced, operations can be static or traced.
* Static operations are evaluated at compile-time in Python; traced operations are compiled & evaluated at run-time in XLA.
* Use `numpy` for operations that you want to be static; use `jax.numpy` for operations that you want to be traced.
This distinction between static and traced values makes it important to think about how to keep a static value static. Consider this function:
```
import jax.numpy as jnp
from jax import jit

@jit
def f(x):
return x.reshape(jnp.array(x.shape).prod())
x = jnp.ones((2, 3))
f(x)
```
```
TypeError: Shapes must be 1D sequences of concrete values of integer type, got [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].
If using `jit`, try using `static_argnums` or applying `jit` to smaller subfunctions.
The error occurred while tracing the function f at /tmp/ipykernel_1544/1983583872.py:4 for jit. This value became a tracer due to JAX operations on these lines:
operation a:i32[2] = convert_element_type[new_dtype=int32 weak_type=False] b
from line /tmp/ipykernel_1544/1983583872.py:6 (f)
```
This fails with an error specifying that a tracer was found instead of a 1D sequence of concrete values of integer type. Let’s add some print statements to the function to understand why this is happening:
```
@jit
def f(x):
print(f"x = {x}")
print(f"x.shape = {x.shape}")
print(f"jnp.array(x.shape).prod() = {jnp.array(x.shape).prod()}")
# comment this out to avoid the error:
# return x.reshape(jnp.array(x.shape).prod())
f(x)
```
```
x = Traced<ShapedArray(float32[2,3])>with<DynamicJaxprTrace(level=1/0)>
x.shape = (2, 3)
jnp.array(x.shape).prod() = Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>
```
Notice that although `x` is traced, `x.shape` is a static value. However, when we use `jnp.array` and `jnp.prod` on this static value, it becomes a traced value, at which point it cannot be used in a function like `reshape()` that requires a static input (recall: array shapes must be static).
A useful pattern is to use `numpy` for operations that should be static (i.e. done at compile-time), and use `jax.numpy` for operations that should be traced (i.e. compiled and executed at run-time). For this function, it might look like this:
```
from jax import jit
import jax.numpy as jnp
import numpy as np

@jit
def f(x):
return x.reshape((np.prod(x.shape),))
f(x)
```
```
Array([1., 1., 1., 1., 1., 1.], dtype=float32)
```
For this reason, a standard convention in JAX programs is to `import numpy as np` and `import jax.numpy as jnp` so that both interfaces are available for finer control over whether operations are performed in a static manner (with `numpy`, once at compile-time) or a traced manner (with `jax.numpy`, optimized at run-time).
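A small sketch of this convention in action (the function here is illustrative, not from the text above): `numpy` computes a static shape at trace time, while `jax.numpy` does the traced arithmetic:
```
@jit
def center_rows(x):
    n = np.prod(x.shape[:-1])             # static: plain numpy at trace time
    flat = x.reshape(n, x.shape[-1])      # reshape sees a concrete size
    return flat - jnp.mean(flat, axis=0)  # traced: compiled by XLA

center_rows(jnp.ones((2, 3, 4))).shape   # (6, 4)
```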
🔪 JAX - The Sharp Bits 🔪[#](#jax-the-sharp-bits)
---
*levskaya@ mattjj@*
When walking about the countryside of Italy, the people will not hesitate to tell you that **JAX** has [*“una anima di pura programmazione funzionale”*](https://www.sscardapane.it/iaml-backup/jax-intro/).
**JAX** is a language for **expressing** and **composing** **transformations** of numerical programs. **JAX** is also able to **compile** numerical programs for CPU or accelerators (GPU/TPU).
JAX works great for many numerical and scientific programs, but **only if they are written with certain constraints** that we describe below.
```
import numpy as np
from jax import grad, jit
from jax import lax
from jax import random
import jax
import jax.numpy as jnp
```
### 🔪 Pure functions[#](#pure-functions)
JAX transformation and compilation are designed to work only on Python functions that are functionally pure: all the input data is passed through the function parameters, all the results are output through the function results. A pure function will always return the same result if invoked with the same inputs.
Here are some examples of functions that are not functionally pure for which JAX behaves differently than the Python interpreter. Note that these behaviors are not guaranteed by the JAX system; the proper way to use JAX is to use it only on functionally pure Python functions.
```
def impure_print_side_effect(x):
print("Executing function") # This is a side-effect
return x
# The side-effects appear during the first run
print("First call: ", jit(impure_print_side_effect)(4.))

# Subsequent runs with parameters of same type and shape may not show the side-effect
# This is because JAX now invokes a cached compilation of the function
print("Second call: ", jit(impure_print_side_effect)(5.))

# JAX re-runs the Python function when the type or shape of the argument changes
print("Third call, different type: ", jit(impure_print_side_effect)(jnp.array([5.])))
```
```
Executing function
First call:  4.0
Second call:  5.0
Executing function
Third call, different type:  [5.]
```
```
g = 0.
def impure_uses_globals(x):
return x + g
# JAX captures the value of the global during the first run
print("First call: ", jit(impure_uses_globals)(4.))
g = 10.  # Update the global

# Subsequent runs may silently use the cached value of the globals
print("Second call: ", jit(impure_uses_globals)(5.))

# JAX re-runs the Python function when the type or shape of the argument changes
# This will end up reading the latest value of the global
print("Third call, different type: ", jit(impure_uses_globals)(jnp.array([4.])))
```
```
First call:  4.0
Second call:  5.0
Third call, different type:  [14.]
```
```
g = 0.
def impure_saves_global(x):
global g
g = x
return x
# JAX runs the transformed function once with special Traced values for the arguments
print("First call: ", jit(impure_saves_global)(4.))
print("Saved global: ", g)  # Saved global has an internal JAX value
```
```
First call:  4.0
Saved global:  Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>
```
A Python function can be functionally pure even if it actually uses stateful objects internally, as long as it does not read or write external state:
```
def pure_uses_internal_state(x):
state = dict(even=0, odd=0)
for i in range(10):
state['even' if i % 2 == 0 else 'odd'] += x
return state['even'] + state['odd']
print(jit(pure_uses_internal_state)(5.))
```
```
50.0
```
It is not recommended to use iterators in any JAX function you want to `jit` or in any control-flow primitive. The reason is that an iterator is a Python object that introduces state in order to retrieve the next element, which is incompatible with JAX's functional programming model. The code below contains some examples of incorrect attempts to use iterators with JAX. Most of them return an error, but some give unexpected results.
```
import jax.numpy as jnp
import jax.lax as lax
from jax import make_jaxpr

# lax.fori_loop
array = jnp.arange(10)
print(lax.fori_loop(0, 10, lambda i, x: x + array[i], 0))  # expected result 45
iterator = iter(range(10))
print(lax.fori_loop(0, 10, lambda i, x: x + next(iterator), 0))  # unexpected result 0

# lax.scan
def func11(arr, extra):
ones = jnp.ones(arr.shape)
def body(carry, aelems):
ae1, ae2 = aelems
return (carry + ae1 * ae2 + extra, carry)
return lax.scan(body, 0., (arr, ones))
make_jaxpr(func11)(jnp.arange(16), 5.)
# make_jaxpr(func11)(iter(range(16)), 5.) # throws error
# lax.cond
array_operand = jnp.array([0.])
lax.cond(True, lambda x: x+1, lambda x: x-1, array_operand)
iter_operand = iter(range(10))
# lax.cond(True, lambda x: next(x)+1, lambda x: next(x)-1, iter_operand) # throws error
```
```
45
0
```
### 🔪 In-Place Updates[#](#in-place-updates)
In Numpy you’re used to doing this:
```
numpy_array = np.zeros((3,3), dtype=np.float32)
print("original array:")
print(numpy_array)
# In place, mutating update
numpy_array[1, :] = 1.0
print("updated array:")
print(numpy_array)
```
```
original array:
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
updated array:
[[0. 0. 0.]
[1. 1. 1.]
[0. 0. 0.]]
```
If we try to update a JAX device array in-place, however, we get an **error**! (☉_☉)
```
%xmode Minimal
```
```
Exception reporting mode: Minimal
```
```
jax_array = jnp.zeros((3,3), dtype=jnp.float32)
# In place update of JAX's array will yield an error!
jax_array[1, :] = 1.0
```
```
TypeError: '<class 'jaxlib.xla_extension.ArrayImpl'>' object does not support item assignment. JAX arrays are immutable. Instead of ``x[idx] = y``, use ``x = x.at[idx].set(y)`` or another .at[] method: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html
```
Allowing mutation of variables in-place makes program analysis and transformation difficult. JAX requires that programs are pure functions.
Instead, JAX offers a *functional* array update using the [`.at` property on JAX arrays](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html#jax.numpy.ndarray.at).
⚠️ Inside `jit`’d code and `lax.while_loop` or `lax.fori_loop`, the **size** of slices can’t be a function of argument *values*, only of argument *shapes* – the slice start indices have no such restriction. See the **Control Flow** section below for more information on this limitation.
#### Array updates: `x.at[idx].set(y)`[#](#array-updates-x-at-idx-set-y)
For example, the update above can be written as:
```
updated_array = jax_array.at[1, :].set(1.0)
print("updated array:\n", updated_array)
```
```
updated array:
[[0. 0. 0.]
[1. 1. 1.]
[0. 0. 0.]]
```
JAX’s array update functions, unlike their NumPy versions, operate out-of-place. That is, the updated array is returned as a new array and the original array is not modified by the update.
```
print("original array unchanged:\n", jax_array)
```
```
original array unchanged:
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
```
However, inside **jit**-compiled code, if the **input value** `x` of `x.at[idx].set(y)` is not reused, the compiler will optimize the array update to occur *in-place*.
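For instance, here is a sketch (not from the original text) of the common pattern of building up an array functionally inside a jitted loop; since each intermediate buffer is not reused, XLA is free to apply the updates in place:
```
@jit
def fill_squares(x):
    def body(i, acc):
        # semantically out-of-place; the compiler may reuse the buffer in place
        return acc.at[i].set(i * i)
    return jax.lax.fori_loop(0, x.shape[0], body, x)

print(fill_squares(jnp.zeros(5)))  # [ 0.  1.  4.  9. 16.]
```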
#### Array updates with other operations[#](#array-updates-with-other-operations)
Indexed array updates are not limited simply to overwriting values. For example, we can perform indexed addition as follows:
```
print("original array:")
jax_array = jnp.ones((5, 6))
print(jax_array)
new_jax_array = jax_array.at[::2, 3:].add(7.)
print("new array post-addition:")
print(new_jax_array)
```
```
original array:
[[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]]
new array post-addition:
[[1. 1. 1. 8. 8. 8.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 8. 8. 8.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 8. 8. 8.]]
```
For more details on indexed array updates, see the [documentation for the `.at` property](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html#jax.numpy.ndarray.at).
### 🔪 Out-of-Bounds Indexing[#](#out-of-bounds-indexing)
In Numpy, you are used to errors being thrown when you index an array outside of its bounds, like this:
```
np.arange(10)[11]
```
```
IndexError: index 11 is out of bounds for axis 0 with size 10
```
However, raising an error from code running on an accelerator can be difficult or impossible. Therefore, JAX must choose some non-error behavior for out of bounds indexing (akin to how invalid floating point arithmetic results in `NaN`). When the indexing operation is an array index update (e.g. `index_add` or `scatter`-like primitives), updates at out-of-bounds indices will be skipped; when the operation is an array index retrieval (e.g. NumPy indexing or `gather`-like primitives) the index is clamped to the bounds of the array since **something** must be returned. For example, the last value of the array will be returned from this indexing operation:
```
jnp.arange(10)[11]
```
```
Array(9, dtype=int32)
```
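The update side of this behavior can be seen directly with an `.at` update at an out-of-bounds index, which is dropped rather than raising an error:
```
jnp.arange(10).at[11].set(100)
# --> Array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32): the update was skipped
```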
If you would like finer-grained control over the behavior for out-of-bound indices, you can use the optional parameters of [`ndarray.at`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html); for example:
```
jnp.arange(10.0).at[11].get()
```
```
Array(9., dtype=float32)
```
```
jnp.arange(10.0).at[11].get(mode='fill', fill_value=jnp.nan)
```
```
Array(nan, dtype=float32)
```
Note that due to this behavior for index retrieval, functions like `jnp.nanargmin` and `jnp.nanargmax` return -1 for slices consisting of NaNs whereas Numpy would throw an error.
Note also that, as the two behaviors described above are not inverses of each other, reverse-mode automatic differentiation (which turns index updates into index retrievals and vice versa) [will not preserve the semantics of out of bounds indexing](https://github.com/google/jax/issues/5760). Thus it may be a good idea to think of out-of-bounds indexing in JAX as a case of [undefined behavior](https://en.wikipedia.org/wiki/Undefined_behavior).
### 🔪 Non-array inputs: NumPy vs. JAX[#](#non-array-inputs-numpy-vs-jax)
NumPy is generally happy accepting Python lists or tuples as inputs to its API functions:
```
np.sum([1, 2, 3])
```
```
6
```
JAX departs from this, generally returning a helpful error:
```
jnp.sum([1, 2, 3])
```
```
TypeError: sum requires ndarray or scalar arguments, got <class 'list'> at position 0.
```
This is a deliberate design choice, because passing lists or tuples to traced functions can lead to silent performance degradation that might otherwise be difficult to detect.
For example, consider the following permissive version of `jnp.sum` that allows list inputs:
```
def permissive_sum(x):
return jnp.sum(jnp.array(x))
x = list(range(10))
permissive_sum(x)
```
```
Array(45, dtype=int32)
```
The output is what we would expect, but this hides potential performance issues under the hood. In JAX’s tracing and JIT compilation model, each element in a Python list or tuple is treated as a separate JAX variable, and individually processed and pushed to device. This can be seen in the jaxpr for the `permissive_sum` function above:
```
make_jaxpr(permissive_sum)(x)
```
```
{ lambda ; a:i32[] b:i32[] c:i32[] d:i32[] e:i32[] f:i32[] g:i32[] h:i32[] i:i32[]
j:i32[]. let
k:i32[] = convert_element_type[new_dtype=int32 weak_type=False] a
l:i32[] = convert_element_type[new_dtype=int32 weak_type=False] b
m:i32[] = convert_element_type[new_dtype=int32 weak_type=False] c
n:i32[] = convert_element_type[new_dtype=int32 weak_type=False] d
o:i32[] = convert_element_type[new_dtype=int32 weak_type=False] e
p:i32[] = convert_element_type[new_dtype=int32 weak_type=False] f
q:i32[] = convert_element_type[new_dtype=int32 weak_type=False] g
r:i32[] = convert_element_type[new_dtype=int32 weak_type=False] h
s:i32[] = convert_element_type[new_dtype=int32 weak_type=False] i
t:i32[] = convert_element_type[new_dtype=int32 weak_type=False] j
u:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] k
v:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] l
w:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] m
x:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] n
y:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] o
z:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] p
ba:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] q
bb:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] r
bc:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] s
bd:i32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] t
be:i32[10] = concatenate[dimension=0] u v w x y z ba bb bc bd
bf:i32[] = reduce_sum[axes=(0,)] be
in (bf,) }
```
Each entry of the list is handled as a separate input, resulting in a tracing & compilation overhead that grows linearly with the size of the list. To prevent surprises like this, JAX avoids implicit conversions of lists and tuples to arrays.
If you would like to pass a tuple or list to a JAX function, you can do so by first explicitly converting it to an array:
```
jnp.sum(jnp.array(x))
```
```
Array(45, dtype=int32)
```
### 🔪 Random Numbers[#](#random-numbers)
> *If all scientific papers whose results are in doubt because of bad
> `rand()`s were to disappear from library shelves, there would be a
> gap on each shelf about as big as your fist.* - Numerical Recipes
#### RNGs and State[#](#rngs-and-state)
You’re used to *stateful* pseudorandom number generators (PRNGs) from numpy and other libraries, which helpfully hide a lot of details under the hood to give you a ready fountain of pseudorandomness:
```
print(np.random.random())
print(np.random.random())
print(np.random.random())
```
```
0.9172495847121656 0.27475964363165173 0.20870490579191459
```
Underneath the hood, numpy uses the [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_Twister) PRNG to power its pseudorandom functions. The PRNG has a period of \(2^{19937}-1\) and at any point can be described by **624 32bit unsigned ints** and a **position** indicating how much of this “entropy” has been used up.
```
np.random.seed(0)
rng_state = np.random.get_state()
# print(rng_state)
# --> ('MT19937', array([0, 1, 1812433255, 1900727105, 1208447044,
# 2481403966, 4042607538, 337614300, ... 614 more numbers...,
# 3048484911, 1796872496], dtype=uint32), 624, 0, 0.0)
```
This pseudorandom state vector is automagically updated behind the scenes every time a random number is needed, “consuming” 2 of the uint32s in the Mersenne twister state vector:
```
_ = np.random.uniform()
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([2443250962, 1093594115, 1878467924,
# ..., 2648828502, 1678096082], dtype=uint32), 2, 0, 0.0)
# Let's exhaust the entropy in this PRNG statevector
for i in range(311):
_ = np.random.uniform()
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([2443250962, 1093594115, 1878467924,
# ..., 2648828502, 1678096082], dtype=uint32), 624, 0, 0.0)
# Next call iterates the RNG state for a new batch of fake "entropy".
_ = np.random.uniform()
rng_state = np.random.get_state()
# print(rng_state)
# --> ('MT19937', array([1499117434, 2949980591, 2242547484,
# 4162027047, 3277342478], dtype=uint32), 2, 0, 0.0)
```
The problem with magic PRNG state is that it’s hard to reason about how it’s being used and updated across different threads, processes, and devices, and it’s *very easy* to screw up when the details of entropy production and consumption are hidden from the end user.
The Mersenne Twister PRNG is also known to have a [number](https://cs.stackexchange.com/a/53475) of problems: it has a large 2.5kB state size, which leads to problematic [initialization issues](https://dl.acm.org/citation.cfm?id=1276928); it [fails](http://www.pcg-random.org/pdf/toms-oneill-pcg-family-v1.02.pdf) modern BigCrush tests; and it is generally slow.
#### JAX PRNG[#](#jax-prng)
JAX instead implements an *explicit* PRNG where entropy production and consumption are handled by explicitly passing and iterating PRNG state. JAX uses a modern [Threefry counter-based PRNG](https://github.com/google/jax/blob/main/docs/jep/263-prng.md) that’s **splittable**. That is, its design allows us to **fork** the PRNG state into new PRNGs for use with parallel stochastic generation.
The random state is described by two unsigned-int32s that we call a **key**:
```
from jax import random

key = random.PRNGKey(0)
key
```
```
Array([0, 0], dtype=uint32)
```
JAX’s random functions produce pseudorandom numbers from the PRNG state, but **do not** change the state!
Reusing the same state will cause **sadness** and **monotony**, depriving the end user of **lifegiving chaos**:
```
print(random.normal(key, shape=(1,)))
print(key)
# No no no!
print(random.normal(key, shape=(1,)))
print(key)
```
```
[-0.20584226]
[0 0]
[-0.20584226]
[0 0]
```
Instead, we **split** the PRNG to get usable **subkeys** every time we need a new pseudorandom number:
```
print("old key", key)
key, subkey = random.split(key)
normal_pseudorandom = random.normal(subkey, shape=(1,))
print(" \---SPLIT --> new key ", key)
print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom)
```
```
old key [0 0]
\---SPLIT --> new key [4146024105 967050713]
\--> new subkey [2718843009 1272950319] --> normal [-1.2515389]
```
We propagate the **key** and make new **subkeys** whenever we need a new random number:
```
print("old key", key)
key, subkey = random.split(key)
normal_pseudorandom = random.normal(subkey, shape=(1,))
print(" \---SPLIT --> new key ", key)
print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom)
```
```
old key [4146024105 967050713]
\---SPLIT --> new key [2384771982 3928867769]
\--> new subkey [1278412471 2182328957] --> normal [-0.58665055]
```
We can generate more than one **subkey** at a time:
```
key, *subkeys = random.split(key, 4)
for subkey in subkeys:
print(random.normal(subkey, shape=(1,)))
```
```
[-0.37533438]
[0.98645043]
[0.14553197]
```
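A sketch of how this looks in practice (the `init_layer` helper here is illustrative, not a JAX API): thread the key through your own functions, splitting once per random draw:
```
def init_layer(key, n_in, n_out):
    w_key, b_key = random.split(key)
    W = random.normal(w_key, (n_out, n_in))
    b = random.normal(b_key, (n_out,))
    return W, b

key, subkey = random.split(key)
W, b = init_layer(subkey, n_in=3, n_out=2)
print(W.shape, b.shape)  # (2, 3) (2,)
```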
### 🔪 Control Flow[#](#control-flow)
#### ✔ python control_flow + autodiff ✔[#](#python-control-flow-autodiff)
If you just want to apply `grad` to your python functions, you can use regular python control-flow constructs with no problems, as if you were using [Autograd](https://github.com/hips/autograd) (or Pytorch or TF Eager).
```
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
print(grad(f)(2.)) # ok!
print(grad(f)(4.)) # ok!
```
```
12.0
-4.0
```
#### python control flow + JIT[#](#python-control-flow-jit)
Using control flow with `jit` is more complicated, and by default it has more constraints.
This works:
```
@jit
def f(x):
for i in range(3):
x = 2 * x
return x
print(f(3))
```
```
24
```
So does this:
```
@jit
def g(x):
y = 0.
for i in range(x.shape[0]):
y = y + x[i]
return y
print(g(jnp.array([1., 2., 3.])))
```
```
6.0
```
But this doesn’t, at least by default:
```
@jit
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
# This will fail!
f(2)
```
```
TracerBoolConversionError: Attempted boolean conversion of traced array with shape bool[]..
The error occurred while tracing the function f at /tmp/ipykernel_1294/3402096563.py:1 for jit. This concrete value was not available in Python because it depends on the value of the argument x.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.TracerBoolConversionError
```
**What gives!?**
When we `jit`-compile a function, we usually want to compile a version of the function that works for many different argument values, so that we can cache and reuse the compiled code. That way we don’t have to re-compile on each function evaluation.
For example, if we evaluate an `@jit` function on the array `jnp.array([1., 2., 3.], jnp.float32)`, we might want to compile code that we can reuse to evaluate the function on `jnp.array([4., 5., 6.], jnp.float32)` to save on compile time.
To get a view of your Python code that is valid for many different argument values, JAX traces it on *abstract values* that represent sets of possible inputs. There are [multiple different levels of abstraction](https://github.com/google/jax/blob/main/jax/_src/abstract_arrays.py), and different transformations use different abstraction levels.
By default, `jit` traces your code on the `ShapedArray` abstraction level, where each abstract value represents the set of all array values with a fixed shape and dtype. For example, if we trace using the abstract value `ShapedArray((3,), jnp.float32)`, we get a view of the function that can be reused for any concrete value in the corresponding set of arrays. That means we can save on compile time.
But there’s a tradeoff here: if we trace a Python function on a `ShapedArray((), jnp.float32)` that isn’t committed to a specific concrete value, when we hit a line like `if x < 3`, the expression `x < 3` evaluates to an abstract `ShapedArray((), jnp.bool_)` that represents the set `{True, False}`. When Python attempts to coerce that to a concrete `True` or `False`, we get an error: we don’t know which branch to take, and can’t continue tracing! The tradeoff is that with higher levels of abstraction we gain a more general view of the Python code (and thus save on re-compilations), but we require more constraints on the Python code to complete the trace.
The good news is that you can control this tradeoff yourself. By having `jit` trace on more refined abstract values, you can relax the traceability constraints. For example, using the `static_argnums` argument to `jit`, we can specify to trace on concrete values of some arguments. Here’s that example function again:
```
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
f = jit(f, static_argnums=(0,))
print(f(2.))
```
```
12.0
```
Here’s another example, this time involving a loop:
```
def f(x, n):
y = 0.
for i in range(n):
y = y + x[i]
return y
f = jit(f, static_argnums=(1,))
f(jnp.array([2., 3., 4.]), 2)
```
```
Array(5., dtype=float32)
```
In effect, the loop gets statically unrolled. JAX can also trace at *higher* levels of abstraction, like `Unshaped`, but that’s not currently the default for any transformation.
⚠️ **functions with argument-*value* dependent shapes**
These control-flow issues also come up in a more subtle way: numerical functions we want to **jit** can’t specialize the shapes of internal arrays on argument *values* (specializing on argument **shapes** is ok). As a trivial example, let’s make a function whose output happens to depend on the input variable `length`.
```
def example_fun(length, val):
return jnp.ones((length,)) * val
# un-jit'd works fine
print(example_fun(5, 4))
```
```
[4. 4. 4. 4. 4.]
```
```
bad_example_jit = jit(example_fun)
# this will fail:
bad_example_jit(10, 4)
```
```
TypeError: Shapes must be 1D sequences of concrete values of integer type, got (Traced<ShapedArray(int32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>,).
If using `jit`, try using `static_argnums` or applying `jit` to smaller subfunctions.
The error occurred while tracing the function example_fun at /tmp/ipykernel_1294/1210496444.py:1 for jit. This concrete value was not available in Python because it depends on the value of the argument length.
```
```
# static_argnums tells JAX to recompile on changes at these argument positions:
good_example_jit = jit(example_fun, static_argnums=(0,))
# first compile
print(good_example_jit(10, 4))

# recompiles
print(good_example_jit(5, 4))
```
```
[4. 4. 4. 4. 4. 4. 4. 4. 4. 4.]
[4. 4. 4. 4. 4.]
```
`static_argnums` can be handy if `length` in our example rarely changes, but it would be disastrous if it changed a lot!
Lastly, if your function has global side-effects, JAX’s tracer can cause weird things to happen. A common gotcha is trying to print arrays inside **jit**’d functions:
```
@jit
def f(x):
print(x)
y = 2 * x
print(y)
    return y

f(2)
```
```
Traced<ShapedArray(int32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>
Traced<ShapedArray(int32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>
```
```
Array(4, dtype=int32, weak_type=True)
```
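If you do need to observe runtime values inside `jit`-compiled code, `jax.debug.print` is built for exactly this; a minimal sketch:
```
@jit
def f(x):
    jax.debug.print("x = {}", x)
    return 2 * x

f(2)  # prints the runtime value 2, not a tracer
```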
#### Structured control flow primitives[#](#structured-control-flow-primitives)
There are more options for control flow in JAX. Say you want to avoid re-compilations but still want to use control flow that’s traceable, and that avoids un-rolling large loops. Then you can use these 4 structured control flow primitives:
* `lax.cond` *differentiable*
* `lax.while_loop` **fwd-mode-differentiable**
* `lax.fori_loop` **fwd-mode-differentiable** in general; **fwd and rev-mode differentiable** if endpoints are static.
* `lax.scan` *differentiable*
##### `cond`[#](#cond)
python equivalent:
```
def cond(pred, true_fun, false_fun, operand):
if pred:
return true_fun(operand)
else:
return false_fun(operand)
```
```
from jax import lax
operand = jnp.array([0.])
lax.cond(True, lambda x: x+1, lambda x: x-1, operand)
# --> array([1.], dtype=float32)
lax.cond(False, lambda x: x+1, lambda x: x-1, operand)
# --> array([-1.], dtype=float32)
```
```
Array([-1.], dtype=float32)
```
`jax.lax` provides two other functions that allow branching on dynamic predicates:
* [`lax.select`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.select.html) is like a batched version of `lax.cond`, with the choices expressed as pre-computed arrays rather than as functions.
* [`lax.switch`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.switch.html) is like `lax.cond`, but allows switching between any number of callable choices.
In addition, `jax.numpy` provides several numpy-style interfaces to these functions:
* [`jnp.where`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.where.html) with three arguments is the numpy-style wrapper of `lax.select` (see the sketch after this list).
* [`jnp.piecewise`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.piecewise.html)
is a numpy-style wrapper of `lax.switch`, but switches on a list of boolean conditions rather than a single scalar index.
* [`jnp.select`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.select.html) has an API similar to `jnp.piecewise`, but the choices are given as pre-computed arrays rather than as functions. It is implemented in terms of multiple calls to `lax.select`.
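As a quick illustration of the first of these, `jnp.where` with three arguments selects elementwise between two pre-computed arrays based on a boolean condition:
```
x = jnp.arange(5.0)
jnp.where(x < 2, x, 10 * x)
# --> Array([ 0.,  1., 20., 30., 40.], dtype=float32)
```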
##### `while_loop`[#](#while-loop)
python equivalent:
```
def while_loop(cond_fun, body_fun, init_val):
val = init_val
while cond_fun(val):
val = body_fun(val)
return val
```
```
init_val = 0
cond_fun = lambda x: x < 10
body_fun = lambda x: x + 1
lax.while_loop(cond_fun, body_fun, init_val)
# --> array(10, dtype=int32)
```
```
Array(10, dtype=int32, weak_type=True)
```
##### `fori_loop`[#](#fori-loop)
python equivalent:
```
def fori_loop(start, stop, body_fun, init_val):
val = init_val
for i in range(start, stop):
val = body_fun(i, val)
return val
```
```
init_val = 0
start = 0
stop = 10
body_fun = lambda i, x: x + i
lax.fori_loop(start, stop, body_fun, init_val)
# --> array(45, dtype=int32)
```
```
Array(45, dtype=int32, weak_type=True)
```
##### Summary[#](#summary)
\[\begin{split}
\begin{array}{r|rr}
\hline
\textrm{construct} & \textrm{jit} & \textrm{grad} \\
\hline
\textrm{if} & ❌ & ✔ \\
\textrm{for} & ✔* & ✔ \\
\textrm{while} & ✔* & ✔ \\
\textrm{lax.cond} & ✔ & ✔ \\
\textrm{lax.while\_loop} & ✔ & \textrm{fwd} \\
\textrm{lax.fori\_loop} & ✔ & \textrm{fwd} \\
\textrm{lax.scan} & ✔ & ✔ \\
\hline
\end{array}
\end{split}\]
\(\ast\) = argument-**value**-independent loop condition - unrolls the loop
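`lax.scan`, listed in the table above, has no subsection of its own here; as a hedged sketch, a cumulative sum expressed as a scan looks like this:
```
def cumsum_step(carry, xi):
    carry = carry + xi
    return carry, carry  # (new carry, per-step output)

total, cumulative = lax.scan(cumsum_step, 0., jnp.arange(5.))
# total      --> Array(10., dtype=float32)
# cumulative --> Array([ 0.,  1.,  3.,  6., 10.], dtype=float32)
```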
### 🔪 Dynamic Shapes[#](#dynamic-shapes)
JAX code used within transforms like `jax.jit`, `jax.vmap`, `jax.grad`, etc. requires all output arrays and intermediate arrays to have static shape: that is, the shape cannot depend on values within other arrays.
For example, if you were implementing your own version of `jnp.nansum`, you might start with something like this:
```
def nansum(x):
mask = ~jnp.isnan(x) # boolean mask selecting non-nan values
x_without_nans = x[mask]
return x_without_nans.sum()
```
Outside JIT and other transforms, this works as expected:
```
x = jnp.array([1, 2, jnp.nan, 3, 4])
print(nansum(x))
```
```
10.0
```
If you attempt to apply `jax.jit` or another transform to this function, it will error:
```
jax.jit(nansum)(x)
```
```
NonConcreteBooleanIndexError: Array boolean indices must be concrete; got ShapedArray(bool[5])
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.NonConcreteBooleanIndexError
```
The problem is that the size of `x_without_nans` is dependent on the values within `x`, which is another way of saying its size is *dynamic*.
Often in JAX it is possible to work-around the need for dynamically-sized arrays via other means.
For example, here it is possible to use the three-argument form of `jnp.where` to replace the NaN values with zeros, thus computing the same result while avoiding dynamic shapes:
```
@jax.jit
def nansum_2(x):
mask = ~jnp.isnan(x) # boolean mask selecting non-nan values
return jnp.where(mask, x, 0).sum()
print(nansum_2(x))
```
```
10.0
```
Similar tricks can be played in other situations where dynamically-shaped arrays occur.
### 🔪 NaNs[#](#nans)
#### Debugging NaNs[#](#debugging-nans)
If you want to trace where NaNs are occurring in your functions or gradients, you can turn on the NaN-checker by:
* setting the `JAX_DEBUG_NANS=True` environment variable;
* adding `from jax import config` and `config.update("jax_debug_nans", True)` near the top of your main file;
* adding `from jax import config` and `config.parse_flags_with_absl()` to your main file, then setting the option using a command-line flag like `--jax_debug_nans=True`.
This will cause computations to error-out immediately on production of a NaN. Switching this option on adds a nan check to every floating point type value produced by XLA. That means values are pulled back to the host and checked as ndarrays for every primitive operation not under an `@jit`. For code under an `@jit`, the output of every `@jit` function is checked and if a nan is present it will re-run the function in de-optimized op-by-op mode, effectively removing one level of `@jit` at a time.
There could be tricky situations that arise, like nans that only occur under a `@jit` but don’t get produced in de-optimized mode. In that case you’ll see a warning message print out but your code will continue to execute.
If the nans are being produced in the backward pass of a gradient evaluation, when an exception is raised several frames up in the stack trace you will be in the backward_pass function, which is essentially a simple jaxpr interpreter that walks the sequence of primitive operations in reverse. In the example below, we started an ipython repl with the command line `env JAX_DEBUG_NANS=True ipython`, then ran this:
```
In [1]: import jax.numpy as jnp
In [2]: jnp.divide(0., 0.)
---
FloatingPointError Traceback (most recent call last)
<ipython-input-2-f2e2c413b437> in <module>()
---> 1 jnp.divide(0., 0.)
.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)
343 return floor_divide(x1, x2)
344 else:
--> 345 return true_divide(x1, x2)
346
347
.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)
332 x1, x2 = _promote_shapes(x1, x2)
333 return lax.div(lax.convert_element_type(x1, result_dtype),
--> 334 lax.convert_element_type(x2, result_dtype))
335
336
.../jax/jax/lax.pyc in div(x, y)
244 def div(x, y):
245 r"""Elementwise division: :math:`x \over y`."""
--> 246 return div_p.bind(x, y)
247
248 def rem(x, y):
... stack trace ...
.../jax/jax/interpreters/xla.pyc in handle_result(device_buffer)
103 py_val = device_buffer.to_py()
104 if np.any(np.isnan(py_val)):
--> 105 raise FloatingPointError("invalid value")
106 else:
107 return Array(device_buffer, *result_shape)
FloatingPointError: invalid value
```
The nan generated was caught. By running `%debug`, we can get a post-mortem debugger. This also works with functions under `@jit`, as the example below shows.
```
In [4]: from jax import jit
In [5]: @jit
...: def f(x, y):
...: a = x * y
...: b = (x + y) / (x - y)
...: c = a + 2
...: return a + b * c
...:
In [6]: x = jnp.array([2., 0.])
In [7]: y = jnp.array([3., 0.])
In [8]: f(x, y)
Invalid value encountered in the output of a jit function. Calling the de-optimized version.
---
FloatingPointError Traceback (most recent call last)
<ipython-input-8-811b7ddb3300> in <module>()
---> 1 f(x, y)
... stack trace ...
<ipython-input-5-619b39acbaac> in f(x, y)
2 def f(x, y):
3 a = x * y
---> 4 b = (x + y) / (x - y)
5 c = a + 2
6 return a + b * c
.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)
343 return floor_divide(x1, x2)
344 else:
--> 345 return true_divide(x1, x2)
346
347
.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)
332 x1, x2 = _promote_shapes(x1, x2)
333 return lax.div(lax.convert_element_type(x1, result_dtype),
--> 334 lax.convert_element_type(x2, result_dtype))
335
336
.../jax/jax/lax.pyc in div(x, y)
244 def div(x, y):
245 r"""Elementwise division: :math:`x \over y`."""
--> 246 return div_p.bind(x, y)
247
248 def rem(x, y):
... stack trace ...
```
When this code sees a nan in the output of an `@jit` function, it calls into the de-optimized code, so we still get a clear stack trace. And we can run a post-mortem debugger with `%debug` to inspect all the values to figure out the error.
⚠️ You shouldn’t have the NaN-checker on if you’re not debugging, as it can introduce lots of device-host round-trips and performance regressions!
⚠️ The NaN-checker doesn’t work with `pmap`. To debug nans in `pmap` code, one thing to try is replacing `pmap` with `vmap`.
### 🔪 Double (64bit) precision[#](#double-64bit-precision)
At the moment, JAX by default enforces single-precision numbers to mitigate the Numpy API’s tendency to aggressively promote operands to `double`. This is the desired behavior for many machine-learning applications, but it may catch you by surprise!
```
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype
```
```
/tmp/ipykernel_1294/735204598.py:1: UserWarning: Explicitly requested dtype <class 'jax.numpy.float64'> is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
```
```
dtype('float32')
```
To use double-precision numbers, you need to set the `jax_enable_x64` configuration variable **at startup**.
There are a few ways to do this:
1. You can enable 64bit mode by setting the environment variable `JAX_ENABLE_X64=True`.
2. You can manually set the `jax_enable_x64` configuration flag at startup:
```
# again, this only works on startup!
from jax import config
config.update("jax_enable_x64", True)
```
3. You can parse command-line flags with `absl.app.run(main)`
```
from jax import config
config.config_with_absl()
```
4. If you want JAX to run absl parsing for you, i.e. you don’t want to do `absl.app.run(main)`, you can instead use
```
from jax import config

if __name__ == '__main__':
  # calls config.config_with_absl() *and* runs absl parsing
  config.parse_flags_with_absl()
```
Note that #2-#4 work for *any* of JAX’s configuration options.
We can then confirm that `x64` mode is enabled:
```
import jax.numpy as jnp
from jax import random

x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype  # --> dtype('float64')
```
```
/tmp/ipykernel_1294/1336263954.py:3: UserWarning: Explicitly requested dtype <class 'jax.numpy.float64'> is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
```
```
dtype('float32')
```
#### Caveats[#](#caveats)
⚠️ XLA doesn’t support 64-bit convolutions on all backends!
### 🔪 Miscellaneous Divergences from NumPy[#](#miscellaneous-divergences-from-numpy)
While `jax.numpy` makes every attempt to replicate the behavior of numpy’s API, there do exist corner cases where the behaviors differ.
Many such cases are discussed in detail in the sections above; here we list several other known places where the APIs diverge.
* For binary operations, JAX’s type promotion rules differ somewhat from those used by NumPy. See [Type Promotion Semantics](https://jax.readthedocs.io/en/latest/type_promotion.html) for more details.
* When performing unsafe type casts (i.e. casts in which the target dtype cannot represent the input value), JAX’s behavior may be backend dependent, and in general may diverge from NumPy’s behavior. Numpy allows control over the result in these scenarios via the `casting` argument (see [`np.ndarray.astype`](https://numpy.org/devdocs/reference/generated/numpy.ndarray.astype.html)); JAX does not provide any such configuration, instead directly inheriting the behavior of [XLA:ConvertElementType](https://www.tensorflow.org/xla/operation_semantics#convertelementtype).
Here is an example of an unsafe cast with differing results between NumPy and JAX:
```
>>> np.arange(254.0, 258.0).astype('uint8')
array([254, 255, 0, 1], dtype=uint8)
>>> jnp.arange(254.0, 258.0).astype('uint8')
Array([254, 255, 255, 255], dtype=uint8)
```
This sort of mismatch would typically arise when casting extreme values from floating to integer types or vice versa.
### Fin.[#](#fin)
If something’s not covered here that has caused you weeping and gnashing of teeth, please let us know and we’ll extend these introductory *advisos*!
JAX Frequently Asked Questions (FAQ)[#](#jax-frequently-asked-questions-faq)
---
We are collecting here answers to frequently asked questions.
Contributions welcome!
### `jit` changes the behavior of my function[#](#jit-changes-the-behavior-of-my-function)
If you have a Python function that changes behavior after using [`jax.jit()`](index.html#jax.jit), perhaps your function uses global state, or has side-effects. In the following code, the
`impure_func` uses the global `y` and has a side-effect due to `print`:
```
y = 0

# @jit  # Different behavior with jit
def impure_func(x):
  print("Inside:", y)
  return x + y

for y in range(3):
  print("Result:", impure_func(y))
```
Without `jit` the output is:
```
Inside: 0
Result: 0
Inside: 1
Result: 2
Inside: 2
Result: 4
```
and with `jit` it is:
```
Inside: 0
Result: 0
Result: 1
Result: 2
```
For [`jax.jit()`](index.html#jax.jit), the function is executed once using the Python interpreter, at which time the
`Inside` printing happens, and the first value of `y` is observed. Then, the function is compiled and cached, and executed multiple times with different values of `x`, but with the same first value of `y`.
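One way to avoid the surprise is to make the dependence explicit: pass `y` as an argument so the function is pure and `jit` re-traces it as needed. A minimal sketch (the name `pure_func` is illustrative):
```
import jax

@jax.jit
def pure_func(x, y):
  return x + y

for y in range(3):
  print("Result:", pure_func(y, y))  # 0, 2, 4 -- matching the un-jitted run
```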
Additional reading:
> * [JAX - The Sharp Bits](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html)
### `jit` changes the exact numerics of outputs[#](#jit-changes-the-exact-numerics-of-outputs)
Sometimes users are surprised by the fact that wrapping a function with `jit()`
can change the function’s outputs. For example:
```
>>> from jax import jit
>>> import jax.numpy as jnp
>>> def f(x):
... return jnp.log(jnp.sqrt(x))
>>> x = jnp.pi
>>> print(f(x))
0.572365
```
```
>>> print(jit(f)(x))
0.5723649
```
This slight difference in output comes from optimizations within the XLA compiler:
during compilation, XLA will sometimes rearrange or elide certain operations to make the overall computation more efficient.
In this case, XLA utilizes the properties of the logarithm to replace `log(sqrt(x))`
with `0.5 * log(x)`, which is a mathematically identical expression that can be computed more efficiently than the original. The difference in output comes from the fact that floating point arithmetic is only a close approximation of real math,
so different ways of computing the same expression may have subtly different results.
Other times, XLA’s optimizations may lead to even more drastic differences.
Consider the following example:
```
>>> def f(x):
... return jnp.log(jnp.exp(x))
>>> x = 100.0
>>> print(f(x))
inf
```
```
>>> print(jit(f)(x))
100.0
```
In non-JIT-compiled op-by-op mode, the result is `inf` because `jnp.exp(x)`
overflows and returns `inf`. Under JIT, however, XLA recognizes that `log` is the inverse of `exp`, and removes the operations from the compiled function,
simply returning the input. In this case, JIT compilation produces a more accurate floating point approximation of the real result.
Unfortunately the full list of XLA’s algebraic simplifications is not well documented, but if you’re familiar with C++ and curious about what types of optimizations the XLA compiler makes, you can see them in the source code:
[algebraic_simplifier.cc](https://github.com/tensorflow/tensorflow/blob/v2.10.0/tensorflow/compiler/xla/service/algebraic_simplifier.cc#L3266).
### `jit` decorated function is very slow to compile[#](#jit-decorated-function-is-very-slow-to-compile)
If your `jit` decorated function takes tens of seconds (or more!) to run the first time you call it, but executes quickly when called again, JAX is taking a long time to trace or compile your code.
This is usually a sign that calling your function generates a large amount of code in JAX’s internal representation, typically because it makes heavy use of Python control flow such as `for` loops. For a handful of loop iterations,
Python is OK, but if you need *many* loop iterations, you should rewrite your code to make use of JAX’s
[structured control flow primitives](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#Structured-control-flow-primitives)
(such as `lax.scan()`) or avoid wrapping the loop with `jit` (you can still use `jit` decorated functions *inside* the loop).
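As a rough sketch of the difference, the two functions below compute the same sum, but the Python loop traces one operation per iteration while `lax.scan` traces the loop body once (names are illustrative):
```
import jax
import jax.numpy as jnp
from jax import lax

@jax.jit
def sum_loop(x):
  total = 0.0
  for i in range(x.shape[0]):  # unrolled at trace time: one op per iteration
    total = total + x[i]
  return total

@jax.jit
def sum_scan(x):
  def body(carry, xi):
    return carry + xi, None    # carry the running sum, emit no per-step output
  total, _ = lax.scan(body, 0.0, x)
  return total

x = jnp.arange(1000.0)
print(sum_scan(x))  # 499500.0, with a jaxpr of constant size
```
Running `jax.make_jaxpr` on each version makes the difference in traced program size visible.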
If you’re not sure if this is the problem, you can try running
[`jax.make_jaxpr()`](index.html#jax.make_jaxpr) on your function. You can expect slow compilation if the output is many hundreds or thousands of lines long.
Sometimes it isn’t obvious how to rewrite your code to avoid Python loops because your code makes use of many arrays with different shapes. The recommended solution in this case is to make use of functions like
[`jax.numpy.where()`](index.html#jax.numpy.where) to do your computation on padded arrays with fixed shape.
If your functions are slow to compile for another reason, please open an issue on GitHub.
### How to use `jit` with methods?[#](#how-to-use-jit-with-methods)
Most examples of [`jax.jit()`](index.html#jax.jit) concern decorating stand-alone Python functions,
but decorating a method within a class introduces some complication. For example,
consider the following simple class, where we’ve used a standard [`jit()`](index.html#jax.jit)
annotation on a method:
```
>>> import jax.numpy as jnp
>>> from jax import jit
>>> class CustomClass:
... def __init__(self, x: jnp.ndarray, mul: bool):
... self.x = x
... self.mul = mul
...
... @jit # <--- How to do this correctly?
... def calc(self, y):
... if self.mul:
... return self.x * y
... return y
```
However, this approach will result in an error when you attempt to call this method:
```
>>> c = CustomClass(2, True)
>>> c.calc(3)
---
TypeError Traceback (most recent call last)
File "<stdin>", line 1, in <module>
TypeError: Argument '<CustomClass object at 0x7f7dd4125890>' of type <class 'CustomClass'> is not a valid JAX type.
```
The problem is that the first argument to the function is `self`, which has type
`CustomClass`, and JAX does not know how to handle this type.
There are three basic strategies we might use in this case, and we’ll discuss them below.
#### Strategy 1: JIT-compiled helper function[#](#strategy-1-jit-compiled-helper-function)
The most straightforward approach is to create a helper function external to the class that can be JIT-decorated in the normal way. For example:
```
>>> from functools import partial
>>> class CustomClass:
... def __init__(self, x: jnp.ndarray, mul: bool):
... self.x = x
... self.mul = mul
...
... def calc(self, y):
... return _calc(self.mul, self.x, y)
>>> @partial(jit, static_argnums=0)
... def _calc(mul, x, y):
... if mul:
... return x * y
... return y
```
The result will work as expected:
```
>>> c = CustomClass(2, True)
>>> print(c.calc(3))
6
```
The benefit of such an approach is that it is simple, explicit, and it avoids the need to teach JAX how to handle objects of type `CustomClass`. However, you may wish to keep all the method logic in the same place.
#### Strategy 2: Marking `self` as static[#](#strategy-2-marking-self-as-static)
Another common pattern is to use `static_argnums` to mark the `self` argument as static.
But this must be done with care to avoid unexpected results.
You may be tempted to simply do this:
```
>>> class CustomClass:
... def __init__(self, x: jnp.ndarray, mul: bool):
... self.x = x
... self.mul = mul
...
... # WARNING: this example is broken, as we'll see below. Don't copy & paste!
... @partial(jit, static_argnums=0)
... def calc(self, y):
... if self.mul:
... return self.x * y
... return y
```
If you call the method, it will no longer raise an error:
```
>>> c = CustomClass(2, True)
>>> print(c.calc(3))
6
```
However, there is a catch: if you mutate the object after the first method call, the subsequent method call may return an incorrect result:
```
>>> c.mul = False
>>> print(c.calc(3))  # Should print 3
6
```
Why is this? When you mark an object as static, it will effectively be used as a dictionary key in JIT’s internal compilation cache, meaning its hash (i.e. `hash(obj)`), equality (i.e. `obj1 == obj2`), and object identity (i.e. `obj1 is obj2`) will be assumed to have consistent behavior. The default `__hash__` for a custom object is its object ID, and so JAX has no way of knowing that a mutated object should trigger a re-compilation.
You can partially address this by defining appropriate `__hash__` and `__eq__` methods for your object; for example:
```
>>> class CustomClass:
... def __init__(self, x: jnp.ndarray, mul: bool):
... self.x = x
... self.mul = mul
...
... @partial(jit, static_argnums=0)
... def calc(self, y):
... if self.mul:
... return self.x * y
... return y
...
... def __hash__(self):
... return hash((self.x, self.mul))
...
... def __eq__(self, other):
... return (isinstance(other, CustomClass) and
... (self.x, self.mul) == (other.x, other.mul))
```
(see the [`object.__hash__()`](https://docs.python.org/3/reference/datamodel.html#object.__hash__) documentation for more discussion of the requirements when overriding `__hash__`).
This should work correctly with JIT and other transforms **so long as you never mutate your object**. Mutations of objects used as hash keys lead to several subtle problems,
which is why for example mutable Python containers (e.g. [`dict`](https://docs.python.org/3/library/stdtypes.html#dict), [`list`](https://docs.python.org/3/library/stdtypes.html#list))
don’t define `__hash__`, while their immutable counterparts (e.g. [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)) do.
If your class relies on in-place mutations (such as setting `self.attr = ...` within its methods), then your object is not really “static” and marking it as such may lead to problems.
Fortunately, there’s another option for this case.
#### Strategy 3: Making `CustomClass` a PyTree[#](#strategy-3-making-customclass-a-pytree)
The most flexible approach to correctly JIT-compiling a class method is to register the type as a custom PyTree object; see [Extending pytrees](index.html#extending-pytrees). This lets you specify exactly which components of the class should be treated as static and which should be treated as dynamic. Here’s how it might look:
```
>>> class CustomClass:
... def __init__(self, x: jnp.ndarray, mul: bool):
... self.x = x
... self.mul = mul
...
... @jit
... def calc(self, y):
... if self.mul:
... return self.x * y
... return y
...
... def _tree_flatten(self):
... children = (self.x,) # arrays / dynamic values
... aux_data = {'mul': self.mul} # static values
... return (children, aux_data)
...
... @classmethod
... def _tree_unflatten(cls, aux_data, children):
... return cls(*children, **aux_data)
>>> from jax import tree_util
>>> tree_util.register_pytree_node(CustomClass,
... CustomClass._tree_flatten,
... CustomClass._tree_unflatten)
```
This is certainly more involved, but it solves all the issues associated with the simpler approaches used above:
```
>>> c = CustomClass(2, True)
>>> print(c.calc(3))
6
>>> c.mul = False # mutation is detected
>>> print(c.calc(3))
3
>>> c = CustomClass(jnp.array(2), True) # non-hashable x is supported
>>> print(c.calc(3))
6
```
So long as your `tree_flatten` and `tree_unflatten` functions correctly handle all relevant attributes in the class, you should be able to use objects of this type directly as arguments to JIT-compiled functions, without any special annotations.
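As an aside, the same registration can be written a bit more compactly with the `jax.tree_util.register_pytree_node_class` decorator, which requires the methods to be named exactly `tree_flatten` and `tree_unflatten` (a sketch, with `calc` omitted for brevity):
```
from jax import tree_util

@tree_util.register_pytree_node_class
class CustomClass:
  def __init__(self, x, mul):
    self.x = x
    self.mul = mul

  def tree_flatten(self):
    return (self.x,), {'mul': self.mul}   # (dynamic children, static aux_data)

  @classmethod
  def tree_unflatten(cls, aux_data, children):
    return cls(*children, **aux_data)
```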
### Controlling data and computation placement on devices[#](#controlling-data-and-computation-placement-on-devices)
Let’s first look at the principles of data and computation placement in JAX.
In JAX, the computation follows data placement. JAX arrays have two placement properties: 1) the device where the data resides;
and 2) whether it is **committed** to the device or not (the data is sometimes referred to as being *sticky* to the device).
By default, JAX arrays are placed uncommitted on the default device
(`jax.devices()[0]`), which is the first GPU or TPU by default. If no GPU or TPU is present, `jax.devices()[0]` is the CPU. The default device can be temporarily overridden with the [`jax.default_device()`](index.html#jax.default_device) context manager, or set for the whole process by setting the environment variable `JAX_PLATFORMS`
or the absl flag `--jax_platforms` to “cpu”, “gpu”, or “tpu”
(`JAX_PLATFORMS` can also be a list of platforms, which determines which platforms are available in priority order).
```
>>> from jax import numpy as jnp
>>> print(jnp.ones(3).device_buffer.device())
gpu:0
```
Computations involving uncommitted data are performed on the default device and the results are uncommitted on the default device.
Data can also be placed explicitly on a device using [`jax.device_put()`](index.html#jax.device_put)
with a `device` parameter, in which case the data becomes **committed** to the device:
```
>>> import jax
>>> from jax import device_put
>>> print(device_put(1, jax.devices()[2]).device_buffer.device())
gpu:2
```
Computations involving some committed inputs will happen on the committed device and the result will be committed on the same device. Invoking an operation on arguments that are committed to more than one device will raise an error.
You can also use [`jax.device_put()`](index.html#jax.device_put) without a `device` parameter. If the data is already on a device (committed or not), it’s left as-is. If the data isn’t on any device—that is, it’s a regular Python or NumPy value—it’s placed uncommitted on the default device.
Jitted functions behave like any other primitive operations—they will follow the data and will show errors if invoked on data committed on more than one device.
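A small sketch of these rules, mirroring the `device_buffer.device()` accessor used in the snippets above (assumes at least one device is present):
```
import jax
import jax.numpy as jnp

x = jnp.ones(3)                                    # uncommitted, on the default device
y = jax.device_put(jnp.ones(3), jax.devices()[0])  # committed to device 0

# The computation follows the committed operand; the result is
# committed to the same device.
print((x + y).device_buffer.device())
```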
(Before [PR #6002](https://github.com/google/jax/pull/6002) in March 2021 there was some laziness in creation of array constants, so that
`jax.device_put(jnp.zeros(...), jax.devices()[1])` or similar would actually create the array of zeros on `jax.devices()[1]`, instead of creating the array on the default device then moving it. But this optimization was removed so as to simplify the implementation.)
(As of April 2020, [`jax.jit()`](index.html#jax.jit) has a device parameter that affects the device placement. That parameter is experimental, is likely to be removed or changed,
and its use is not recommended.)
For a worked-out example, we recommend reading through
`test_computation_follows_data` in
[multi_device_test.py](https://github.com/google/jax/blob/main/tests/multi_device_test.py).
### Benchmarking JAX code[#](#benchmarking-jax-code)
You just ported a tricky function from NumPy/SciPy to JAX. Did that actually speed things up?
Keep in mind these important differences from NumPy when measuring the speed of code using JAX:
1. **JAX code is Just-In-Time (JIT) compiled.** Most code written in JAX can be written in such a way that it supports JIT compilation, which can make it run
*much faster* (see [To JIT or not to JIT](https://jax.readthedocs.io/en/latest/notebooks/thinking_in_jax.html#to-jit-or-not-to-jit)). To get maximum performance from JAX, you should apply [`jax.jit()`](index.html#jax.jit) on your outer-most function calls.
Keep in mind that the first time you run JAX code, it will be slower because it is being compiled. This is true even if you don’t use `jit` in your own code, because JAX’s builtin functions are also JIT compiled.
2. **JAX has asynchronous dispatch.** This means that you need to call
`.block_until_ready()` to ensure that computation has actually happened
(see [Asynchronous dispatch](index.html#async-dispatch)).
3. **JAX by default only uses 32-bit dtypes.** You may want to either explicitly use 32-bit dtypes in NumPy or enable 64-bit dtypes in JAX (see
[Double (64 bit) precision](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision)) for a fair comparison.
4. **Transferring data between CPUs and accelerators takes time.** If you only want to measure how long it takes to evaluate a function, you may want to transfer data to the device on which you want to run it first (see
[Controlling data and computation placement on devices](#faq-data-placement)).
Here’s an example of how to put together all these tricks into a microbenchmark for comparing JAX versus NumPy, making use of IPython’s convenient
[%time and %timeit magics](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-time):
```
import numpy as np
import jax.numpy as jnp
import jax

def f(x):  # function we're benchmarking (works in both NumPy & JAX)
  return x.T @ (x - x.mean(axis=0))

x_np = np.ones((1000, 1000), dtype=np.float32)  # same as JAX default dtype
%timeit f(x_np)  # measure NumPy runtime

%time x_jax = jax.device_put(x_np)  # measure JAX device transfer time
f_jit = jax.jit(f)
%time f_jit(x_jax).block_until_ready()  # measure JAX compilation time
%timeit f_jit(x_jax).block_until_ready()  # measure JAX runtime
```
When run with a GPU in [Colab](https://colab.research.google.com/), we see:
* NumPy takes 16.2 ms per evaluation on the CPU
* JAX takes 1.26 ms to copy the NumPy arrays onto the GPU
* JAX takes 193 ms to compile the function
* JAX takes 485 µs per evaluation on the GPU
In this case, we see that once the data is transferred and the function is compiled, JAX on the GPU is about 30x faster for repeated evaluations.
Is this a fair comparison? Maybe. The performance that ultimately matters is for running full applications, which inevitably include some amount of both data transfer and compilation. Also, we were careful to pick large enough arrays
(1000x1000) and an intensive enough computation (the `@` operator is performing matrix-matrix multiplication) to amortize the increased overhead of JAX/accelerators vs NumPy/CPU. For example, if we switch this example to use 10x10 input instead, JAX/GPU runs 10x slower than NumPy/CPU (100 µs vs 10 µs).
#### Is JAX faster than NumPy?[#](#is-jax-faster-than-numpy)
One question users frequently attempt to answer with such benchmarks is whether JAX is faster than NumPy; due to the differences between the two packages, there is not a simple answer.
Broadly speaking:
* NumPy operations are executed eagerly, synchronously, and only on CPU.
* JAX operations may be executed eagerly or after compilation (if inside `jit()`);
they are dispatched asynchronously (see [Asynchronous dispatch](index.html#async-dispatch)); and they can be executed on CPU, GPU, or TPU, each of which have vastly different and continuously evolving performance characteristics.
These architectural differences make meaningful direct benchmark comparisons between NumPy and JAX difficult.
Additionally, these differences have led to different engineering focus between the packages: for example, NumPy has put significant effort into decreasing the per-call dispatch overhead for individual array operations, because in NumPy’s computational model that overhead cannot be avoided.
JAX, on the other hand, has several ways to avoid dispatch overhead (e.g. JIT compilation, asynchronous dispatch, batching transforms, etc.), and so reducing per-call overhead has been less of a priority.
Keeping all that in mind, in summary: if you’re doing microbenchmarks of individual array operations on CPU, you can generally expect NumPy to outperform JAX due to its lower per-operation dispatch overhead. If you’re running your code on GPU or TPU,
or are benchmarking more complicated JIT-compiled sequences of operations on CPU, you can generally expect JAX to outperform NumPy.
### Different kinds of JAX values[#](#different-kinds-of-jax-values)
In the process of transforming functions, JAX replaces some function arguments with special tracer values.
You could see this if you use a `print` statement:
```
def func(x):
  print(x)
  return jnp.cos(x)

res = jax.jit(func)(0.)
```
The above code does return the correct value `1.` but it also prints
`Traced<ShapedArray(float32[])>` for the value of `x`. Normally, JAX handles these tracer values internally in a transparent way, e.g.,
in the numeric JAX primitives that are used to implement the
`jax.numpy` functions. This is why `jnp.cos` works in the example above.
More precisely, a **tracer** value is introduced for the argument of a JAX-transformed function, except the arguments identified by special parameters such as `static_argnums` for [`jax.jit()`](index.html#jax.jit) or
`static_broadcasted_argnums` for [`jax.pmap()`](index.html#jax.pmap). Typically, computations that involve at least a tracer value will produce a tracer value. Besides tracer values, there are **regular** Python values: values that are computed outside JAX transformations, or arise from above-mentioned static arguments of certain JAX transformations, or computed solely from other regular Python values.
These are the values that are used everywhere in absence of JAX transformations.
A tracer value carries an **abstract** value, e.g., `ShapedArray` with information about the shape and dtype of an array. We will refer here to such tracers as
**abstract tracers**. Some tracers, e.g., those that are introduced for arguments of autodiff transformations, carry `ConcreteArray`
abstract values that actually include the regular array data, and are used,
e.g., for resolving conditionals. We will refer here to such tracers as **concrete tracers**. Tracer values computed from these concrete tracers,
perhaps in combination with regular values, result in concrete tracers.
A **concrete value** is either a regular value or a concrete tracer.
Most often values computed from tracer values are themselves tracer values.
There are very few exceptions, when a computation can be entirely done using the abstract value carried by a tracer, in which case the result can be a regular value. For example, getting the shape of a tracer with `ShapedArray` abstract value. Another example is when explicitly casting a concrete tracer value to a regular type, e.g., `int(x)` or
`x.astype(float)`.
Another such situation is for `bool(x)`, which produces a Python bool when concreteness makes it possible. That case is especially salient because of how often it arises in control flow.
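A short sketch of the distinction (the function names are illustrative): shape information is part of the abstract value and yields a regular Python value even under tracing, whereas `bool()` of a traced value needs concreteness and fails under `jit`:
```
import jax
import jax.numpy as jnp

@jax.jit
def shape_only(x):
  n = x.shape[0]        # recoverable from the abstract value: a regular Python int
  return x * n

@jax.jit
def value_needed(x):
  return x if bool(x.sum() > 0) else -x   # needs a concrete value

print(shape_only(jnp.ones(3)))            # [3. 3. 3.]
try:
  value_needed(jnp.ones(3))
except jax.errors.ConcretizationTypeError:
  print("bool() of an abstract tracer is not allowed")
```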
Here is how the transformations introduce abstract or concrete tracers:
* [`jax.jit()`](index.html#jax.jit): introduces **abstract tracers** for all positional arguments except those denoted by `static_argnums`, which remain regular values.
* [`jax.pmap()`](index.html#jax.pmap): introduces **abstract tracers** for all positional arguments except those denoted by `static_broadcasted_argnums`.
* [`jax.vmap()`](index.html#jax.vmap), [`jax.make_jaxpr()`](index.html#jax.make_jaxpr), `xla_computation()`:
introduce **abstract tracers** for all positional arguments.
* [`jax.jvp()`](index.html#jax.jvp) and [`jax.grad()`](index.html#jax.grad) introduce **concrete tracers**
for all positional arguments. An exception is when these transformations are within an outer transformation and the actual arguments are themselves abstract tracers; in that case, the tracers introduced by the autodiff transformations are also abstract tracers.
* All higher-order control-flow primitives (`lax.cond()`, `lax.while_loop()`,
`lax.fori_loop()`, `lax.scan()`) when they process the functionals introduce **abstract tracers**, whether or not there is a JAX transformation in progress.
All of this is relevant when you have code that can operate only on regular Python values, such as code that has conditional control-flow based on data:
```
def divide(x, y):
  return x / y if y >= 1. else 0.
```
If we want to apply [`jax.jit()`](index.html#jax.jit), we must ensure to specify `static_argnums=1`
to ensure `y` stays a regular value. This is due to the boolean expression
`y >= 1.`, which requires concrete values (regular or tracers). The same would happen if we write explicitly `bool(y >= 1.)`, or `int(y)`,
or `float(y)`.
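Concretely, the jitted version might look like this (a sketch building on `divide` above):
```
import jax

divide_jit = jax.jit(divide, static_argnums=1)  # keep `y` a regular value

print(divide_jit(6.0, 2.0))  # 3.0
print(divide_jit(6.0, 0.5))  # 0.0 -- recompiles for the new static value of y
```
Note that each new static value of `y` triggers a fresh trace and compilation, so this is only practical when `y` takes few distinct values.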
Interestingly, `jax.grad(divide)(3., 2.)`, works because [`jax.grad()`](index.html#jax.grad)
uses concrete tracers, and resolves the conditional using the concrete value of `y`.
### Buffer donation[#](#buffer-donation)
When JAX executes a computation it uses buffers on the device for all inputs and outputs.
If you know that one of the inputs is not needed after the computation, and if it matches the shape and element type of one of the outputs, you can specify that you want the corresponding input buffer to be donated to hold an output. This will reduce the memory required for the execution by the size of the donated buffer.
If you have something like the following pattern, you can use buffer donation:
```
params, state = jax.pmap(update_fn, donate_argnums=(0, 1))(params, state)
```
You can think of this as a way to do a memory-efficient functional update on your immutable JAX arrays. Within the boundaries of a computation XLA can make this optimization for you, but at the jit/pmap boundary you need to guarantee to XLA that you will not use the donated input buffer after calling the donating function.
You achieve this by using the donate_argnums parameter to the functions [`jax.jit()`](index.html#jax.jit),
`jax.pjit()`, and [`jax.pmap()`](index.html#jax.pmap). This parameter is a sequence of indices (0 based) into the positional argument list:
```
def add(x, y):
return x + y
x = jax.device_put(np.ones((2, 3)))
y = jax.device_put(np.ones((2, 3)))
# Execute `add` with donation of the buffer for `y`. The result has
# the same shape and type as `y`, so it will share its buffer.
z = jax.jit(add, donate_argnums=(1,))(x, y)
```
Note that this currently does not work when calling your function with keyword arguments!
The following code will not donate any buffers:
```
params, state = jax.pmap(update_fn, donate_argnums=(0, 1))(params=params, state=state)
```
If an argument whose buffer is donated is a pytree, then all the buffers for its components are donated:
```
def add_ones(xs: List[Array]):
  return [x + 1 for x in xs]

xs = [jax.device_put(np.ones((2, 3))), jax.device_put(np.ones((3, 4)))]

# Execute `add_ones` with donation of all the buffers for `xs`.
# The outputs have the same shape and type as the elements of `xs`,
# so they will share those buffers.
z = jax.jit(add_ones, donate_argnums=0)(xs)
```
It is not allowed to donate a buffer that is used subsequently in the computation; JAX will raise an error because the buffer for `y` becomes invalid after it is donated:
```
# Donate the buffer for `y`
z = jax.jit(add, donate_argnums=(1,))(x, y)
w = y + 1 # Reuses `y` whose buffer was donated above
# >> RuntimeError: Invalid argument: CopyToHostAsync() called on invalid buffer
```
You will get a warning if the donated buffer is not used, e.g., because there are more donated buffers than can be used for the outputs:
```
# Execute `add` with donation of the buffers for both `x` and `y`.
# One of those buffers will be used for the result, but the other will
# not be used.
z = jax.jit(add, donate_argnums=(0, 1))(x, y)
# >> UserWarning: Some donated buffers were not usable: f32[2,3]{1,0}
```
The donation may also be unused if there is no output whose shape matches the donation:
```
y = jax.device_put(np.ones((1, 3))) # `y` has different shape than the output
# Execute `add` with donation of the buffer for `y`.
z = jax.jit(add, donate_argnums=(1,))(x, y)
# >> UserWarning: Some donated buffers were not usable: f32[1,3]{1,0}
```
### Gradients contain NaN where using `where`[#](#gradients-contain-nan-where-using-where)
If you define a function using `where` to avoid an undefined value, and you are not careful, you may obtain a `NaN` for reverse differentiation:
```
def my_log(x):
  return jnp.where(x > 0., jnp.log(x), 0.)

my_log(0.)  ==> 0.  # Ok
jax.grad(my_log)(0.)  ==> NaN
```
A short explanation is that during `grad` computation the adjoint corresponding to the undefined `jnp.log(x)` is a `NaN` and it gets accumulated to the adjoint of the `jnp.where`. The correct way to write such functions is to ensure that there is a `jnp.where` *inside* the partially-defined function, to ensure that the adjoint is always finite:
```
def safe_for_grad_log(x):
  return jnp.log(jnp.where(x > 0., x, 1.))

safe_for_grad_log(0.)  ==> 0.  # Ok
jax.grad(safe_for_grad_log)(0.)  ==> 0.  # Ok
```
The inner `jnp.where` may be needed in addition to the original one, e.g.:
```
def my_log_or_y(x, y):
  """Return log(x) if x > 0, else y."""
  return jnp.where(x > 0., jnp.log(jnp.where(x > 0., x, 1.)), y)
```
Additional reading:
> * [Issue: gradients through jnp.where when one of branches is nan](https://github.com/google/jax/issues/1052#issuecomment-514083352).
> * [How to avoid NaN gradients when using where](https://github.com/tensorflow/probability/blob/master/discussion/where-nan.pdf).
### Why are gradients zero for functions based on sort order?[#](#why-are-gradients-zero-for-functions-based-on-sort-order)
If you define a function that processes the input using operations that depend on the relative ordering of inputs (e.g. `max`, `greater`, `argsort`, etc.) then you may be surprised to find that the gradient is everywhere zero.
Here is an example, where we define f(x) to be a step function that returns 0 when x is negative, and 1 when x is positive:
```
import jax
import numpy as np
import jax.numpy as jnp

def f(x):
  return (x > 0).astype(float)
df = jax.vmap(jax.grad(f))
x = jnp.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(f"f(x) = {f(x)}")
# f(x) = [0. 0. 0. 1. 1.]
print(f"df(x) = {df(x)}")
# df(x) = [0. 0. 0. 0. 0.]
```
The fact that the gradient is everywhere zero may be confusing at first glance:
after all, the output does change in response to the input, so how can the gradient be zero? However, zero turns out to be the correct result in this case.
Why is this? Remember that differentiation measures the change in `f`
given an infinitesimal change in `x`. For `x=1.0`, `f` returns `1.0`.
If we perturb `x` to make it slightly larger or smaller, this does not change the output, so by definition, `grad(f)(1.0)` should be zero.
This same logic holds for all values of `x` greater than zero: infinitesimally perturbing the input does not change the output, so the gradient is zero.
Similarly, for all values of `x` less than zero, the output is zero.
Perturbing `x` does not change this output, so the gradient is zero.
That leaves us with the tricky case of `x=0`. Surely, if you perturb `x` upward,
it will change the output, but this is problematic: an infinitesimal change in `x`
produces a finite change in the function value, which implies the gradient is undefined.
Fortunately, there’s another way for us to measure the gradient in this case: we perturb `x` downward, in which case the output does not change, and so the gradient is zero.
JAX and other autodiff systems tend to handle discontinuities in this way: if the positive gradient and negative gradient disagree, but one is defined and the other is not, we use the one that is defined.
Under this definition of the gradient, mathematically and numerically the gradient of this function is everywhere zero.
The problem stems from the fact that our function has a discontinuity at `x = 0`.
Our `f` here is essentially a [Heaviside Step Function](https://en.wikipedia.org/wiki/Heaviside_step_function), and we can use a
[Sigmoid Function](https://en.wikipedia.org/wiki/Sigmoid_function) as a smoothed replacement.
The sigmoid is approximately equal to the heaviside function when x is far from zero,
but replaces the discontinuity at `x = 0` with a smooth, differentiable curve.
As a result of using [`jax.nn.sigmoid()`](index.html#jax.nn.sigmoid), we get a similar computation with well-defined gradients:
```
def g(x):
  return jax.nn.sigmoid(x)

dg = jax.vmap(jax.grad(g))

x = jnp.array([-10.0, -1.0, 0.0, 1.0, 10.0])
with np.printoptions(suppress=True, precision=2):
  print(f"g(x) = {g(x)}")
  # g(x) = [0.   0.27 0.5  0.73 1.  ]
  print(f"dg(x) = {dg(x)}")
  # dg(x) = [0.   0.2  0.25 0.2  0.  ]
```
The [`jax.nn`](index.html#module-jax.nn) submodule also has smooth versions of other common rank-based functions, for example [`jax.nn.softmax()`](index.html#jax.nn.softmax) can replace uses of
[`jax.numpy.argmax()`](index.html#jax.numpy.argmax), [`jax.nn.soft_sign()`](index.html#jax.nn.soft_sign) can replace uses of
[`jax.numpy.sign()`](index.html#jax.numpy.sign), [`jax.nn.softplus()`](index.html#jax.nn.softplus) can replace uses of
[`jax.nn.relu()`](index.html#jax.nn.relu), etc.
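As a small illustration (the helper names are hypothetical), replacing a sort-order based selection with a softmax-weighted one yields non-trivial gradients:
```
import jax
import jax.numpy as jnp

def hard_pick(x):
  return x[jnp.argmax(x)]                 # gradient is a one-hot at the argmax

def soft_pick(x):
  return jnp.sum(jax.nn.softmax(x) * x)   # smooth surrogate

x = jnp.array([1.0, 2.0, 3.0])
print(jax.grad(hard_pick)(x))   # [0. 0. 1.]
print(jax.grad(soft_pick)(x))   # nonzero in every entry
```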
### How can I convert a JAX Tracer to a NumPy array?[#](#how-can-i-convert-a-jax-tracer-to-a-numpy-array)
When inspecting a transformed JAX function at runtime, you’ll find that array values are replaced by `Tracer` objects:
```
@jax.jit
def f(x):
  print(type(x))
  return x

f(jnp.arange(5))
```
This prints the following:
```
<class 'jax.interpreters.partial_eval.DynamicJaxprTracer'>
```
A frequent question is how such a tracer can be converted back to a normal NumPy array. In short, **it is impossible to convert a Tracer to a NumPy array**, because a tracer is an abstract representation of *every possible* value with a given shape and dtype, while a numpy array is a concrete member of that abstract class.
For more discussion of how tracers work within the context of JAX transformations,
see [JIT mechanics](https://jax.readthedocs.io/en/latest/notebooks/thinking_in_jax.html#jit-mechanics-tracing-and-static-variables).
The question of converting Tracers back to arrays usually comes up within the context of another goal, related to accessing intermediate values in a computation at runtime. For example:
* If you wish to print a traced value at runtime for debugging purposes, you might consider using [`jax.debug.print()`](index.html#jax.debug.print).
* If you wish to call non-JAX code within a transformed JAX function, you might consider using [`jax.pure_callback()`](index.html#jax.pure_callback), an example of which is available at
[Pure callback example](https://jax.readthedocs.io/en/latest/notebooks/external_callbacks.html#example-pure-callback-with-custom-jvp).
* If you wish to input or output array buffers at runtime (for example, load data from file, or log the contents of the array to disk), you might consider using
[`jax.experimental.io_callback()`](index.html#jax.experimental.io_callback), an example of which can be found at
[IO callback example](https://jax.readthedocs.io/en/latest/notebooks/external_callbacks.html#exploring-jax-experimental-io-callback).
For more information on runtime callbacks and examples of their use,
see [External callbacks in JAX](https://jax.readthedocs.io/en/latest/notebooks/external_callbacks.html).
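For example, the first option looks roughly like this (a minimal sketch):
```
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
  jax.debug.print("runtime value of x: {}", x)  # prints the value, not a Tracer
  return x + 1

f(jnp.arange(3))
```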
Tutorial: JAX 101[#](#tutorial-jax-101)
---
This is a tutorial developed by engineers and researchers at [DeepMind](http://deepmind.com).
### JAX As Accelerated NumPy[#](#jax-as-accelerated-numpy)
*Authors: <NAME> & <NAME>*
In this first section you will learn the very fundamentals of JAX.
#### Getting started with JAX numpy[#](#getting-started-with-jax-numpy)
Fundamentally, JAX is a library that enables transformations of array-manipulating programs written with a NumPy-like API.
Over the course of this series of guides, we will unpack exactly what that means. For now, you can think of JAX as *differentiable NumPy that runs on accelerators*.
The code below shows how to import JAX and create a vector.
```
import jax
import jax.numpy as jnp

x = jnp.arange(10)
print(x)
```
```
[0 1 2 3 4 5 6 7 8 9]
```
```
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
```
So far, everything is just like NumPy. A big appeal of JAX is that you don’t need to learn a new API. Many common NumPy programs would run just as well in JAX if you substitute `np` for `jnp`. However, there are some important differences which we touch on at the end of this section.
You can notice the first difference if you check the type of `x`. It is a variable of type `Array`, which is the way JAX represents arrays.
```
x
```
```
Array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)
```
One useful feature of JAX is that the same code can be run on different backends – CPU, GPU and TPU.
We will now perform a dot product to demonstrate that it can be done on different devices without changing the code. We use `%timeit` to check the performance.
(Technical detail: when a JAX function is called (including `jnp.array`
creation), the corresponding operation is dispatched to an accelerator to be computed asynchronously when possible. The returned array is therefore not necessarily ‘filled in’ as soon as the function returns. Thus, if we don’t require the result immediately, the computation won’t block Python execution.
Therefore, unless we `block_until_ready` or convert the array to a regular Python type, we will only time the dispatch, not the actual computation. See
[Asynchronous dispatch](https://jax.readthedocs.io/en/latest/async_dispatch.html#asynchronous-dispatch)
in the JAX docs.)
```
long_vector = jnp.arange(int(1e7))
%timeit jnp.dot(long_vector, long_vector).block_until_ready()
```
```
The slowest run took 7.39 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 5: 7.85 ms per loop
```
**Tip**: Try running the code above twice, once without an accelerator, and once with a GPU runtime (while in Colab, click *Runtime* → *Change Runtime Type* and choose `GPU`). Notice how much faster it runs on a GPU.
#### JAX first transformation: `grad`[#](#jax-first-transformation-grad)
A fundamental feature of JAX is that it allows you to transform functions.
One of the most commonly used transformations is `jax.grad`, which takes a numerical function written in Python and returns you a new Python function that computes the gradient of the original function.
To use it, let’s first define a function that takes an array and returns the sum of squares.
```
def sum_of_squares(x):
  return jnp.sum(x**2)
```
Applying `jax.grad` to `sum_of_squares` will return a different function, namely the gradient of `sum_of_squares` with respect to its first parameter `x`.
Then, you can use that function on an array to return the derivatives with respect to each element of the array.
```
sum_of_squares_dx = jax.grad(sum_of_squares)
x = jnp.asarray([1.0, 2.0, 3.0, 4.0])
print(sum_of_squares(x))
print(sum_of_squares_dx(x))
```
```
30.0
[2. 4. 6. 8.]
```
You can think of `jax.grad` by analogy to the \(\nabla\) operator from vector calculus. Given a function \(f(x)\), \(\nabla f\) represents the function that computes \(f\)’s gradient, i.e.
\[
(\nabla f)(x)_i = \frac{\partial f}{\partial x_i}(x).
\]
Analogously, `jax.grad(f)` is the function that computes the gradient, so `jax.grad(f)(x)` is the gradient of `f` at `x`.
(Like \(\nabla\), `jax.grad` will only work on functions with a scalar output – it will raise an error otherwise.)
This makes the JAX API quite different from other autodiff libraries like Tensorflow and PyTorch, where to compute the gradient we use the loss tensor itself (e.g. by calling `loss.backward()`). The JAX API works directly with functions, staying closer to the underlying math. Once you become accustomed to this way of doing things, it feels natural: your loss function in code really is a function of parameters and data, and you find its gradient just like you would in the math.
This way of doing things makes it straightforward to control things like which variables to differentiate with respect to. By default, `jax.grad` will find the gradient with respect to the first argument. In the example below, the result of `sum_squared_error_dx` will be the gradient of `sum_squared_error` with respect to `x`.
```
def sum_squared_error(x, y):
  return jnp.sum((x-y)**2)

sum_squared_error_dx = jax.grad(sum_squared_error)

y = jnp.asarray([1.1, 2.1, 3.1, 4.1])

print(sum_squared_error_dx(x, y))
```
```
[-0.20000005 -0.19999981 -0.19999981 -0.19999981]
```
To find the gradient with respect to a different argument (or several), you can set `argnums`:
```
jax.grad(sum_squared_error, argnums=(0, 1))(x, y) # Find gradient wrt both x & y
```
```
(Array([-0.20000005, -0.19999981, -0.19999981, -0.19999981], dtype=float32),
Array([0.20000005, 0.19999981, 0.19999981, 0.19999981], dtype=float32))
```
Does this mean that when doing machine learning, we need to write functions with gigantic argument lists, with an argument for each model parameter array? No. JAX comes equipped with machinery for bundling arrays together in data structures called ‘pytrees’, on which more in a [later guide](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05.1-pytrees.ipynb). So, most often, use of `jax.grad` looks like this:
```
def loss_fn(params, data):
  ...

grads = jax.grad(loss_fn)(params, data_batch)
```
where `params` is, for example, a nested dict of arrays, and the returned `grads` is another nested dict of arrays with the same structure.
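A toy sketch of that pattern, with a hypothetical two-parameter model:
```
import jax
import jax.numpy as jnp

def loss_fn(params, data):
  pred = params['w'] * data + params['b']
  return jnp.mean(pred ** 2)

params = {'w': jnp.array(2.0), 'b': jnp.array(1.0)}
grads = jax.grad(loss_fn)(params, jnp.arange(4.0))
print(grads)  # a dict with keys 'b' and 'w', mirroring the structure of `params`
```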
#### Value and Grad[#](#value-and-grad)
Often, you need to find both the value and the gradient of a function, e.g. if you want to log the training loss. JAX has a handy sister transformation for efficiently doing that:
```
jax.value_and_grad(sum_squared_error)(x, y)
```
```
(Array(0.03999995, dtype=float32),
Array([-0.20000005, -0.19999981, -0.19999981, -0.19999981], dtype=float32))
```
which returns a tuple of, you guessed it, (value, grad). To be precise, for any `f`,
```
jax.value_and_grad(f)(*xs) == (f(*xs), jax.grad(f)(*xs))
```
#### Auxiliary data[#](#auxiliary-data)
In addition to wanting to log the value, we often want to report some intermediate results obtained in computing the loss function. But if we try doing that with regular `jax.grad`, we run into trouble:
```
def squared_error_with_aux(x, y):
  return sum_squared_error(x, y), x-y

jax.grad(squared_error_with_aux)(x, y)
```
```
---
FilteredStackTrace Traceback (most recent call last)
<ipython-input-9-7433a86e7375> in <module>()
3
---> 4 jax.grad(squared_error_with_aux)(x, y)
FilteredStackTrace: TypeError: Gradient only defined for scalar-output functions. Output was (Array(0.03999995, dtype=float32), Array([-0.10000002, -0.0999999 , -0.0999999 , -0.0999999 ], dtype=float32)).
The stack trace above excludes JAX-internal frames.
```
This is because `jax.grad` is only defined on scalar functions, and our new function returns a tuple. But we need to return a tuple to return our intermediate results! This is where `has_aux` comes in:
```
jax.grad(squared_error_with_aux, has_aux=True)(x, y)
```
```
(Array([-0.20000005, -0.19999981, -0.19999981, -0.19999981], dtype=float32),
Array([-0.10000002, -0.0999999 , -0.0999999 , -0.0999999 ], dtype=float32))
```
`has_aux` signifies that the function returns a pair, `(out, aux)`. It makes `jax.grad` ignore `aux`, passing it through to the user, while differentiating the function as if only `out` was returned.
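The same flag composes with `jax.value_and_grad` (a one-line sketch building on the function above); the value comes back paired with the auxiliary output:
```
(value, aux), grad = jax.value_and_grad(squared_error_with_aux, has_aux=True)(x, y)
```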
#### Differences from NumPy[#](#differences-from-numpy)
The `jax.numpy` API closely follows that of NumPy. However, there are some important differences. We cover many of these in future guides, but it’s worth pointing some out now.
The most important difference, and in some sense the root of all the rest, is that JAX is designed to be *functional*, as in *functional programming*. The reason behind this is that the kinds of program transformations that JAX enables are much more feasible in functional-style programs.
An introduction to functional programming (FP) is out of scope of this guide. If you already are familiar with FP, you will find your FP intuition helpful while learning JAX. If not, don’t worry! The important feature of functional programming to grok when working with JAX is very simple: don’t write code with side-effects.
A side-effect is any effect of a function that doesn’t appear in its output. One example is modifying an array in place:
```
import numpy as np

x = np.array([1, 2, 3])

def in_place_modify(x):
  x[0] = 123
  return None
in_place_modify(x)
x
```
```
array([123, 2, 3])
```
The side-effectful function modifies its argument, but returns a completely unrelated value. The modification is a side-effect.
The code above runs fine in NumPy. However, JAX arrays won’t allow themselves to be modified in-place:
```
in_place_modify(jnp.array(x)) # Raises error when we cast input to jnp.ndarray
```
```
---
TypeError Traceback (most recent call last)
<ipython-input-12-709e2d7ddd3f> in <module>()
---> 1 in_place_modify(jnp.array(x)) # Raises error when we cast input to jnp.ndarray
<ipython-input-11-fce65eb843c7> in in_place_modify(x)
4
5 def in_place_modify(x):
---> 6 x[0] = 123
7 return None
8
/usr/local/lib/python3.7/dist-packages/jax/_src/numpy/lax_numpy.py in _unimplemented_setitem(self, i, x)
6594 "or another .at[] method: "
6595 "https://jax.readthedocs.io/en/latest/jax.ops.html")
-> 6596 raise TypeError(msg.format(type(self)))
6597
6598 def _operator_round(number, ndigits=None):
TypeError: '<class 'jaxlib.xla_extension.Array'>' object does not support item assignment. JAX arrays are immutable. Instead of ``x[idx] = y``, use ``x = x.at[idx].set(y)`` or another .at[] method: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html
```
Helpfully, the error points us to JAX’s side-effect-free way of doing the same thing via the [`jax.numpy.ndarray.at`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html) index update operators (note that the [`jax.ops.index_*`](https://jax.readthedocs.io/en/latest/jax.ops.html#indexed-update-functions-deprecated) functions are deprecated). They are analogous to in-place modification by index, but create a new array with the corresponding modifications made:
```
def jax_in_place_modify(x):
  return x.at[0].set(123)

y = jnp.array([1, 2, 3])
jax_in_place_modify(y)
```
```
Array([123, 2, 3], dtype=int32)
```
Note that the old array was untouched, so there is no side-effect:
```
y
```
```
Array([1, 2, 3], dtype=int32)
```
Side-effect-free code is sometimes called *functionally pure*, or just *pure*.
Isn’t the pure version less efficient? Strictly, yes; we are creating a new array. However, as we will explain in the next guide, JAX computations are often compiled before being run using another program transformation, `jax.jit`. If we don’t use the old array after modifying it ‘in place’ using indexed update operators, the compiler can recognise that it can in fact compile to an in-place modify, resulting in efficient code in the end.
Of course, it’s possible to mix side-effectful Python code and functionally pure JAX code, and we will touch on this more later. As you get more familiar with JAX, you will learn how and when this can work. As a rule of thumb, however, any functions intended to be transformed by JAX should avoid side-effects, and the JAX primitives themselves will try to help you do that.
We will explain other places where the JAX idiosyncrasies become relevant as they come up. There is even a section that focuses entirely on getting used to the functional programming style of handling state: [Part 7: Problem of State](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/07-state.ipynb). However, if you’re impatient, you can find a [summary of JAX’s sharp edges](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html) in the JAX docs.
#### Your first JAX training loop[#](#your-first-jax-training-loop)
We still have much to learn about JAX, but you already know enough to understand how we can use JAX to build a simple training loop.
To keep things simple, we’ll start with a linear regression.
Our data is sampled according to \(y = w_{true} x + b_{true} + \epsilon\).
```
import numpy as np
import matplotlib.pyplot as plt

xs = np.random.normal(size=(100,))
noise = np.random.normal(scale=0.1, size=(100,))
ys = xs * 3 - 1 + noise

plt.scatter(xs, ys);
```
Therefore, our model is \(\hat y(x; \theta) = wx + b\).
We will use a single array, `theta = [w, b]` to house both parameters:
```
def model(theta, x):
  """Computes wx + b on a batch of input x."""
  w, b = theta
  return w * x + b
```
The loss function is \(J(x, y; \theta) = (\hat y - y)^2\).
```
def loss_fn(theta, x, y):
  prediction = model(theta, x)
  return jnp.mean((prediction-y)**2)
```
How do we optimize a loss function? Using gradient descent. At each update step, we will find the gradient of the loss w.r.t. the parameters, and take a small step in the direction of steepest descent:
\(\theta_{new} = \theta - 0.1 (\nabla_\theta J) (x, y; \theta)\)
```
def update(theta, x, y, lr=0.1):
  return theta - lr * jax.grad(loss_fn)(theta, x, y)
```
In JAX, it’s common to define an `update()` function that is called every step, taking the current parameters as input and returning the new parameters. This is a natural consequence of JAX’s functional nature, and is explained in more detail in [The Problem of State](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/07-state.ipynb).
This function can then be JIT-compiled in its entirety for maximum efficiency. The next guide will explain exactly how `jax.jit` works, but if you want to, you can try adding `@jax.jit` before the `update()` definition, and see how the training loop below runs much faster.
```
theta = jnp.array([1., 1.])

for _ in range(1000):
  theta = update(theta, xs, ys)

plt.scatter(xs, ys)
plt.plot(xs, model(theta, xs))

w, b = theta
print(f"w: {w:<.2f}, b: {b:<.2f}")
```
```
w: 3.00, b: -1.00
```
As you will see going through these guides, this basic recipe underlies almost all training loops you’ll see implemented in JAX. The main difference between this example and real training loops is the simplicity of our model: that allows us to use a single array to house all our parameters. We cover managing more parameters in the later [pytree guide](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05.1-pytrees.ipynb). Feel free to skip forward to that guide now to see how to manually define and train a simple MLP in JAX.
### Just In Time Compilation with JAX[#](#just-in-time-compilation-with-jax)
*Authors: <NAME> & <NAME>*
In this section, we will further explore how JAX works, and how we can make it performant.
We will discuss the `jax.jit()` transform, which will perform *Just In Time* (JIT) compilation of a JAX Python function so it can be executed efficiently in XLA.
#### How JAX transforms work[#](#how-jax-transforms-work)
In the previous section, we discussed that JAX allows us to transform Python functions. This is done by first converting the Python function into a simple intermediate language called jaxpr. The transformations then work on the jaxpr representation.
We can show a representation of the jaxpr of a function by using `jax.make_jaxpr`:
```
import jax
import jax.numpy as jnp

global_list = []

def log2(x):
  global_list.append(x)
  ln_x = jnp.log(x)
  ln_2 = jnp.log(2.0)
  return ln_x / ln_2

print(jax.make_jaxpr(log2)(3.0))
```
```
{ lambda ; a:f32[]. let
b:f32[] = log a
c:f32[] = log 2.0
d:f32[] = div b c
in (d,) }
```
The [Understanding Jaxprs](https://jax.readthedocs.io/en/latest/jaxpr.html) section of the documentation provides more information on the meaning of the above output.
Importantly, note how the jaxpr does not capture the side-effect of the function: there is nothing in it corresponding to `global_list.append(x)`. This is a feature, not a bug: JAX is designed to understand side-effect-free (a.k.a. functionally pure) code. If *pure function* and *side-effect* are unfamiliar terms, this is explained in a little more detail in [🔪 JAX - The Sharp Bits 🔪: Pure Functions](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#pure-functions).
Of course, impure functions can still be written and even run, but JAX gives no guarantees about their behaviour once converted to jaxpr. However, as a rule of thumb, you can expect (but shouldn’t rely on) the side-effects of a JAX-transformed function to run once (during the first call), and never again. This is because of the way that JAX generates jaxpr, using a process called ‘tracing’.
When tracing, JAX wraps each argument by a *tracer* object. These tracers then record all JAX operations performed on them during the function call (which happens in regular Python). Then, JAX uses the tracer records to reconstruct the entire function. The output of that reconstruction is the jaxpr. Since the tracers do not record the Python side-effects, they do not appear in the jaxpr. However, the side-effects still happen during the trace itself.
Note: the Python `print()` function is not pure: the text output is a side-effect of the function. Therefore, any `print()` calls will only happen during tracing, and will not appear in the jaxpr:
```
def log2_with_print(x):
print("printed x:", x)
ln_x = jnp.log(x)
ln_2 = jnp.log(2.0)
return ln_x / ln_2
print(jax.make_jaxpr(log2_with_print)(3.))
```
```
printed x: Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>
{ lambda ; a:f32[]. let
b:f32[] = log a
c:f32[] = log 2.0
d:f32[] = div b c
in (d,) }
```
See how the printed `x` is a `Traced` object? That’s the JAX internals at work.
The fact that the Python code runs at least once is strictly an implementation detail, and so shouldn’t be relied upon. However, it’s useful to understand as you can use it when debugging to print out intermediate values of a computation.
A key thing to understand is that jaxpr captures the function as executed on the parameters given to it. For example, if we have a conditional, jaxpr will only know about the branch we take:
```
def log2_if_rank_2(x):
if x.ndim == 2:
ln_x = jnp.log(x)
ln_2 = jnp.log(2.0)
return ln_x / ln_2
else:
return x
print(jax.make_jaxpr(log2_if_rank_2)(jax.numpy.array([1, 2, 3])))
```
```
{ lambda ; a:i32[3]. let in (a,) }
```
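To trace the other branch, we would have to call `jax.make_jaxpr` again with a rank-2 input; an illustrative sketch:
```
# With a rank-2 input the `if` branch is taken, so the resulting
# jaxpr now contains the log and div operations.
print(jax.make_jaxpr(log2_if_rank_2)(jnp.ones((2, 2))))
```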
#### JIT compiling a function[#](#jit-compiling-a-function)
As explained before, JAX enables operations to execute on CPU/GPU/TPU using the same code.
Let’s look at an example of computing a *Scaled Exponential Linear Unit*
([SELU](https://proceedings.neurips.cc/paper/6698-self-normalizing-neural-networks.pdf)), an operation commonly used in deep learning:
```
import jax
import jax.numpy as jnp
def selu(x, alpha=1.67, lambda_=1.05):
return lambda_ * jnp.where(x > 0, x, alpha * jnp.exp(x) - alpha)
x = jnp.arange(1000000)
%timeit selu(x).block_until_ready()
```
```
100 loops, best of 5: 2.05 ms per loop
```
The code above is sending one operation at a time to the accelerator. This limits the ability of the XLA compiler to optimize our functions.
Naturally, what we want to do is give the XLA compiler as much code as possible, so it can fully optimize it. For this purpose, JAX provides the `jax.jit` transformation, which will JIT compile a JAX-compatible function. The example below shows how to use JIT to speed up the previous function.
```
selu_jit = jax.jit(selu)
# Warm up
selu_jit(x).block_until_ready()
%timeit selu_jit(x).block_until_ready()
```
```
10000 loops, best of 5: 150 µs per loop
```
Here’s what just happened:
1. We defined `selu_jit` as the compiled version of `selu`.
2. We called `selu_jit` once on `x`. This is where JAX does its tracing – it needs to have some inputs to wrap in tracers, after all. The jaxpr is then compiled using XLA into very efficient code optimized for your GPU or TPU. Finally, the compiled code is executed to satisfy the call. Subsequent calls to `selu_jit` will use the compiled code directly, skipping the python implementation entirely.
(If we didn’t include the warm-up call separately, everything would still work, but then the compilation time would be included in the benchmark. It would still be faster, because we run many loops in the benchmark, but it wouldn’t be a fair comparison.)
3. We timed the execution speed of the compiled version. (Note the use of `block_until_ready()`, which is required due to JAX’s [Asynchronous execution](https://jax.readthedocs.io/en/latest/async_dispatch.html) model).
#### Why can’t we just JIT everything?[#](#why-can-t-we-just-jit-everything)
After going through the example above, you might be wondering whether we should simply apply `jax.jit` to every function. To understand why this is not the case, and when we should/shouldn’t apply `jit`, let’s first check some cases where JIT doesn’t work.
```
# Condition on value of x.
def f(x):
if x > 0:
return x
else:
return 2 * x
f_jit = jax.jit(f)
f_jit(10) # Should raise an error.
```
```
---
UnfilteredStackTrace Traceback (most recent call last)
<ipython-input-12-2c1a07641e48> in <module>()
9 f_jit = jax.jit(f)
---> 10 f_jit(10) # Should raise an error.
/usr/local/lib/python3.7/dist-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
161 try:
--> 162 return fun(*args, **kwargs)
163 except Exception as e:
/usr/local/lib/python3.7/dist-packages/jax/_src/api.py in cache_miss(*args, **kwargs)
418 device=device, backend=backend, name=flat_fun.__name__,
--> 419 donated_invars=donated_invars, inline=inline)
420 out_pytree_def = out_tree()
/usr/local/lib/python3.7/dist-packages/jax/core.py in bind(self, fun, *args, **params)
1631 def bind(self, fun, *args, **params):
-> 1632 return call_bind(self, fun, *args, **params)
1633
/usr/local/lib/python3.7/dist-packages/jax/core.py in call_bind(primitive, fun, *args, **params)
1622 tracers = map(top_trace.full_raise, args)
-> 1623 outs = primitive.process(top_trace, fun, tracers, params)
1624 return map(full_lower, apply_todos(env_trace_todo(), outs))
/usr/local/lib/python3.7/dist-packages/jax/core.py in process(self, trace, fun, tracers, params)
1634 def process(self, trace, fun, tracers, params):
-> 1635 return trace.process_call(self, fun, tracers, params)
1636
/usr/local/lib/python3.7/dist-packages/jax/core.py in process_call(self, primitive, f, tracers, params)
626 def process_call(self, primitive, f, tracers, params):
--> 627 return primitive.impl(f, *tracers, **params)
628 process_map = process_call
/usr/local/lib/python3.7/dist-packages/jax/interpreters/xla.py in _xla_call_impl(***failed resolving arguments***)
687 compiled_fun = _xla_callable(fun, device, backend, name, donated_invars,
--> 688 *unsafe_map(arg_spec, args))
689 try:
/usr/local/lib/python3.7/dist-packages/jax/linear_util.py in memoized_fun(fun, *args)
262 else:
--> 263 ans = call(fun, *args)
264 cache[key] = (ans, fun.stores)
/usr/local/lib/python3.7/dist-packages/jax/interpreters/xla.py in _xla_callable_uncached(fun, device, backend, name, donated_invars, *arg_specs)
759 return lower_xla_callable(fun, device, backend, name, donated_invars,
--> 760 *arg_specs).compile().unsafe_call
761
/usr/local/lib/python3.7/dist-packages/jax/interpreters/xla.py in lower_xla_callable(fun, device, backend, name, donated_invars, *arg_specs)
771 jaxpr, out_avals, consts = pe.trace_to_jaxpr_final(
--> 772 fun, abstract_args, pe.debug_info_final(fun, "jit"))
773 if any(isinstance(c, core.Tracer) for c in consts):
/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr_final(fun, in_avals, debug_info)
1541 with core.new_sublevel():
-> 1542 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
1543 del fun, main
/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1519 in_tracers = map(trace.new_arg, in_avals)
-> 1520 ans = fun.call_wrapped(*in_tracers)
1521 out_tracers = map(trace.full_raise, ans)
/usr/local/lib/python3.7/dist-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
165 try:
--> 166 ans = self.f(*args, **dict(self.params, **kwargs))
167 except:
<ipython-input-12-2c1a07641e48> in f(x)
3 def f(x):
---> 4 if x > 0:
5 return x
/usr/local/lib/python3.7/dist-packages/jax/core.py in __bool__(self)
548 def __nonzero__(self): return self.aval._nonzero(self)
--> 549 def __bool__(self): return self.aval._bool(self)
550 def __int__(self): return self.aval._int(self)
/usr/local/lib/python3.7/dist-packages/jax/core.py in error(self, arg)
999 def error(self, arg):
-> 1000 raise ConcretizationTypeError(arg, fname_context)
1001 return error
UnfilteredStackTrace: jax._src.errors.ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: Traced<ShapedArray(bool[], weak_type=True)>with<DynamicJaxprTrace(level=0/1)>
The problem arose with the bool function.
While tracing the function f at <ipython-input-12-2c1a07641e48>:3 for jit, this concrete value was not available in Python because it depends on the value of the argument 'x'.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.ConcretizationTypeError
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
---
The above exception was the direct cause of the following exception:
ConcretizationTypeError Traceback (most recent call last)
<ipython-input-12-2c1a07641e48> in <module>()
8
9 f_jit = jax.jit(f)
---> 10 f_jit(10) # Should raise an error.
<ipython-input-12-2c1a07641e48> in f(x)
2
3 def f(x):
---> 4 if x > 0:
5 return x
6 else:
ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: Traced<ShapedArray(bool[], weak_type=True)>with<DynamicJaxprTrace(level=0/1)>
The problem arose with the bool function.
While tracing the function f at <ipython-input-12-2c1a07641e48>:3 for jit, this concrete value was not available in Python because it depends on the value of the argument 'x'.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.ConcretizationTypeError
```
```
# While loop conditioned on x and n.
def g(x, n):
i = 0
while i < n:
i += 1
return x + i
g_jit = jax.jit(g)
g_jit(10, 20) # Should raise an error.
```
```
---
UnfilteredStackTrace Traceback (most recent call last)
<ipython-input-13-2aa78f448d5d> in <module>()
9 g_jit = jax.jit(g)
---> 10 g_jit(10, 20) # Should raise an error.
/usr/local/lib/python3.7/dist-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
161 try:
--> 162 return fun(*args, **kwargs)
163 except Exception as e:
/usr/local/lib/python3.7/dist-packages/jax/_src/api.py in cache_miss(*args, **kwargs)
418 device=device, backend=backend, name=flat_fun.__name__,
--> 419 donated_invars=donated_invars, inline=inline)
420 out_pytree_def = out_tree()
/usr/local/lib/python3.7/dist-packages/jax/core.py in bind(self, fun, *args, **params)
1631 def bind(self, fun, *args, **params):
-> 1632 return call_bind(self, fun, *args, **params)
1633
/usr/local/lib/python3.7/dist-packages/jax/core.py in call_bind(primitive, fun, *args, **params)
1622 tracers = map(top_trace.full_raise, args)
-> 1623 outs = primitive.process(top_trace, fun, tracers, params)
1624 return map(full_lower, apply_todos(env_trace_todo(), outs))
/usr/local/lib/python3.7/dist-packages/jax/core.py in process(self, trace, fun, tracers, params)
1634 def process(self, trace, fun, tracers, params):
-> 1635 return trace.process_call(self, fun, tracers, params)
1636
/usr/local/lib/python3.7/dist-packages/jax/core.py in process_call(self, primitive, f, tracers, params)
626 def process_call(self, primitive, f, tracers, params):
--> 627 return primitive.impl(f, *tracers, **params)
628 process_map = process_call
/usr/local/lib/python3.7/dist-packages/jax/interpreters/xla.py in _xla_call_impl(***failed resolving arguments***)
687 compiled_fun = _xla_callable(fun, device, backend, name, donated_invars,
--> 688 *unsafe_map(arg_spec, args))
689 try:
/usr/local/lib/python3.7/dist-packages/jax/linear_util.py in memoized_fun(fun, *args)
262 else:
--> 263 ans = call(fun, *args)
264 cache[key] = (ans, fun.stores)
/usr/local/lib/python3.7/dist-packages/jax/interpreters/xla.py in _xla_callable_uncached(fun, device, backend, name, donated_invars, *arg_specs)
759 return lower_xla_callable(fun, device, backend, name, donated_invars,
--> 760 *arg_specs).compile().unsafe_call
761
/usr/local/lib/python3.7/dist-packages/jax/interpreters/xla.py in lower_xla_callable(fun, device, backend, name, donated_invars, *arg_specs)
771 jaxpr, out_avals, consts = pe.trace_to_jaxpr_final(
--> 772 fun, abstract_args, pe.debug_info_final(fun, "jit"))
773 if any(isinstance(c, core.Tracer) for c in consts):
/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr_final(fun, in_avals, debug_info)
1541 with core.new_sublevel():
-> 1542 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
1543 del fun, main
/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1519 in_tracers = map(trace.new_arg, in_avals)
-> 1520 ans = fun.call_wrapped(*in_tracers)
1521 out_tracers = map(trace.full_raise, ans)
/usr/local/lib/python3.7/dist-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
165 try:
--> 166 ans = self.f(*args, **dict(self.params, **kwargs))
167 except:
<ipython-input-13-2aa78f448d5d> in g(x, n)
4 i = 0
---> 5 while i < n:
6 i += 1
/usr/local/lib/python3.7/dist-packages/jax/core.py in __bool__(self)
548 def __nonzero__(self): return self.aval._nonzero(self)
--> 549 def __bool__(self): return self.aval._bool(self)
550 def __int__(self): return self.aval._int(self)
/usr/local/lib/python3.7/dist-packages/jax/core.py in error(self, arg)
999 def error(self, arg):
-> 1000 raise ConcretizationTypeError(arg, fname_context)
1001 return error
UnfilteredStackTrace: jax._src.errors.ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: Traced<ShapedArray(bool[], weak_type=True)>with<DynamicJaxprTrace(level=0/1)>
The problem arose with the bool function.
While tracing the function g at <ipython-input-13-2aa78f448d5d>:3 for jit, this concrete value was not available in Python because it depends on the value of the argument 'n'.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.ConcretizationTypeError
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
---
The above exception was the direct cause of the following exception:
ConcretizationTypeError Traceback (most recent call last)
<ipython-input-13-2aa78f448d5d> in <module>()
8
9 g_jit = jax.jit(g)
---> 10 g_jit(10, 20) # Should raise an error.
<ipython-input-13-2aa78f448d5d> in g(x, n)
3 def g(x, n):
4 i = 0
---> 5 while i < n:
6 i += 1
7 return x + i
ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: Traced<ShapedArray(bool[], weak_type=True)>with<DynamicJaxprTrace(level=0/1)>
The problem arose with the bool function.
While tracing the function g at <ipython-input-13-2aa78f448d5d>:3 for jit, this concrete value was not available in Python because it depends on the value of the argument 'n'.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.ConcretizationTypeError
```
The problem is that we tried to condition on the *value* of an input to the function being jitted. The reason we can’t do this is related to the fact mentioned above that jaxpr depends on the actual values used to trace it.
The more specific we are about the values used in the trace, the more we can use standard Python control flow to express ourselves. However, being too specific means we can't reuse the same traced function for other values. JAX solves this by tracing at different levels of abstraction for different purposes.
For `jax.jit`, the default level is `ShapedArray` – that is, each tracer has a concrete shape (which we’re allowed to condition on), but no concrete value. This allows the compiled function to work on all possible inputs with the same shape – the standard use case in machine learning. However, because the tracers have no concrete value, if we attempt to condition on one, we get the error above.
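For example, conditioning on shape information works fine under `jit`; an illustrative sketch (`sum_if_matrix` is a hypothetical name):
```
# `x.ndim` is part of the static shape information of the tracer,
# so Python control flow on it is allowed inside a jitted function.
@jax.jit
def sum_if_matrix(x):
    if x.ndim == 2:
        return x.sum(axis=0)
    return x

print(sum_if_matrix(jnp.ones((2, 3))))  # [2. 2. 2.]
```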
In `jax.grad`, the constraints are more relaxed, so you can do more. If you compose several transformations, however, you must satisfy the constraints of the most strict one. So, if you `jit(grad(f))`, `f` mustn’t condition on value. For more detail on the interaction between Python control flow and JAX, see [🔪 JAX - The Sharp Bits 🔪: Control Flow](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#control-flow).
One way to deal with this problem is to rewrite the code to avoid conditionals on value. Another is to use special [control flow operators](https://jax.readthedocs.io/en/latest/jax.lax.html#control-flow-operators) like `jax.lax.cond`. However, sometimes that is impossible. In that case, you can consider jitting only part of the function. For example, if the most computationally expensive part of the function is inside the loop, we can JIT just that inner part (though make sure to check the next section on caching to avoid shooting yourself in the foot):
```
# While loop conditioned on x and n with a jitted body.
@jax.jit
def loop_body(prev_i):
return prev_i + 1
def g_inner_jitted(x, n):
i = 0
while i < n:
i = loop_body(i)
return x + i
g_inner_jitted(10, 20)
```
```
Array(30, dtype=int32, weak_type=True)
```
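Alternatively, the structured control flow operators mentioned above let us express the whole loop inside `jit`; a hedged sketch using `jax.lax.while_loop` (`g_jit_lax` is an illustrative name):
```
@jax.jit
def g_jit_lax(x, n):
    # The carry `i` starts at 0; the condition may depend on the traced
    # value of `n` because while_loop is a structured control flow primitive.
    i = jax.lax.while_loop(lambda i: i < n, lambda i: i + 1, 0)
    return x + i

print(g_jit_lax(10, 20))  # 30
```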
If we really need to JIT a function that has a condition on the value of an input, we can tell JAX to use a less abstract tracer for a particular input by specifying `static_argnums` or `static_argnames`. The cost of this is that the resulting jaxpr is less flexible, so JAX will have to re-compile the function for every new value of the specified static input. It is only a good strategy if the function is guaranteed to see a limited set of different values.
```
f_jit_correct = jax.jit(f, static_argnums=0)
print(f_jit_correct(10))
```
```
10
```
```
g_jit_correct = jax.jit(g, static_argnames=['n'])
print(g_jit_correct(10, 20))
```
```
30
```
To specify such arguments when using `jit` as a decorator, a common pattern is to use python’s `functools.partial`:
```
from functools import partial
@partial(jax.jit, static_argnames=['n'])
def g_jit_decorated(x, n):
i = 0
while i < n:
i += 1
return x + i
print(g_jit_decorated(10, 20))
```
```
30
```
#### When to use JIT[#](#when-to-use-jit)
In many of the examples above, jitting is not worth it:
```
print("g jitted:")
%timeit g_jit_correct(10, 20).block_until_ready()
print("g:")
%timeit g(10, 20)
```
```
g jitted:
The slowest run took 13.54 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 5: 229 µs per loop
g:
The slowest run took 11.72 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 5: 1.2 µs per loop
```
This is because `jax.jit` introduces some overhead itself. Therefore, it usually only saves time if the compiled function is complex and you will run it numerous times. Fortunately, this is common in machine learning, where we tend to compile a large, complicated model, then run it for millions of iterations.
Generally, you want to jit the largest possible chunk of your computation; ideally, the entire update step. This gives the compiler maximum freedom to optimise.
#### Caching[#](#caching)
It’s important to understand the caching behaviour of `jax.jit`.
Suppose I define `f = jax.jit(g)`. When I first invoke `f`, it will get compiled, and the resulting XLA code will get cached. Subsequent calls of `f` will reuse the cached code. This is how `jax.jit` makes up for the up-front cost of compilation.
If I specify `static_argnums`, then the cached code will be used only for the same values of arguments labelled as static. If any of them change, recompilation occurs. If there are many values, then your program might spend more time compiling than it would have executing ops one-by-one.
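One way to see this recompilation happening is to exploit the fact that Python side-effects only run during tracing; a small illustrative sketch (all names here are hypothetical):
```
from functools import partial

@partial(jax.jit, static_argnums=0)
def static_demo(n):
    print("tracing for n =", n)  # only runs when JAX (re)traces the function
    return n * 2

static_demo(1)  # traces and compiles for n=1 (prints)
static_demo(1)  # cache hit: no print
static_demo(2)  # new static value: traces and compiles again (prints)
```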
Avoid calling `jax.jit` inside loops. For most cases, JAX will be able to use the compiled, cached function in subsequent calls to `jax.jit`. However, because the cache relies on the hash of the function, it becomes problematic when equivalent functions are redefined. This will cause unnecessary compilation each time in the loop:
```
from functools import partial
def unjitted_loop_body(prev_i):
return prev_i + 1
def g_inner_jitted_partial(x, n):
i = 0
while i < n:
        # Don't do this! Each time, the partial returns
        # a function with a different hash
i = jax.jit(partial(unjitted_loop_body))(i)
return x + i
def g_inner_jitted_lambda(x, n):
i = 0
while i < n:
        # Don't do this! A lambda will also return
        # a function with a different hash
i = jax.jit(lambda x: unjitted_loop_body(x))(i)
return x + i
def g_inner_jitted_normal(x, n):
i = 0
while i < n:
# this is OK, since JAX can find the
# cached, compiled function
i = jax.jit(unjitted_loop_body)(i)
return x + i
print("jit called in a loop with partials:")
%timeit g_inner_jitted_partial(10, 20).block_until_ready()
print("jit called in a loop with lambdas:")
%timeit g_inner_jitted_lambda(10, 20).block_until_ready()
print("jit called in a loop with caching:")
%timeit g_inner_jitted_normal(10, 20).block_until_ready()
```
```
jit called in a loop with partials:
1 loop, best of 5: 192 ms per loop
jit called in a loop with lambdas:
10 loops, best of 5: 199 ms per loop
jit called in a loop with caching:
10 loops, best of 5: 21.6 ms per loop
```
### Automatic Vectorization in JAX[#](#automatic-vectorization-in-jax)
*Authors: <NAME>*
In the previous section we discussed JIT compilation via the `jax.jit` function. This notebook discusses another of JAX’s transforms: vectorization via `jax.vmap`.
#### Manual Vectorization[#](#manual-vectorization)
Consider the following simple code that computes the convolution of two one-dimensional vectors:
```
import jax
import jax.numpy as jnp
x = jnp.arange(5)
w = jnp.array([2., 3., 4.])
def convolve(x, w):
output = []
for i in range(1, len(x)-1):
output.append(jnp.dot(x[i-1:i+2], w))
return jnp.array(output)
convolve(x, w)
```
```
Array([11., 20., 29.], dtype=float32)
```
Suppose we would like to apply this function to a batch of weights `w` and a batch of vectors `x`.
```
xs = jnp.stack([x, x])
ws = jnp.stack([w, w])
```
The most naive option would be to simply loop over the batch in Python:
```
def manually_batched_convolve(xs, ws):
output = []
for i in range(xs.shape[0]):
output.append(convolve(xs[i], ws[i]))
return jnp.stack(output)
manually_batched_convolve(xs, ws)
```
```
Array([[11., 20., 29.],
[11., 20., 29.]], dtype=float32)
```
This produces the correct result; however, it is not very efficient.
In order to batch the computation efficiently, you would normally have to rewrite the function manually to ensure it is done in vectorized form. This is not particularly difficult to implement, but does involve changing how the function treats indices, axes, and other parts of the input.
For example, we could manually rewrite `convolve()` to support vectorized computation across the batch dimension as follows:
```
def manually_vectorized_convolve(xs, ws):
output = []
    for i in range(1, xs.shape[-1] - 1):
output.append(jnp.sum(xs[:, i-1:i+2] * ws, axis=1))
return jnp.stack(output, axis=1)
manually_vectorized_convolve(xs, ws)
```
```
Array([[11., 20., 29.],
[11., 20., 29.]], dtype=float32)
```
Such re-implementation is messy and error-prone; fortunately JAX provides another way.
#### Automatic Vectorization[#](#automatic-vectorization)
In JAX, the `jax.vmap` transformation is designed to generate such a vectorized implementation of a function automatically:
```
auto_batch_convolve = jax.vmap(convolve)
auto_batch_convolve(xs, ws)
```
```
Array([[11., 20., 29.],
[11., 20., 29.]], dtype=float32)
```
It does this by tracing the function similarly to `jax.jit`, and automatically adding batch axes at the beginning of each input.
If the batch dimension is not the first, you may use the `in_axes` and `out_axes` arguments to specify the location of the batch dimension in inputs and outputs. These may be an integer if the batch axis is the same for all inputs and outputs, or lists, otherwise.
```
auto_batch_convolve_v2 = jax.vmap(convolve, in_axes=1, out_axes=1)
xst = jnp.transpose(xs)
wst = jnp.transpose(ws)
auto_batch_convolve_v2(xst, wst)
```
```
Array([[11., 11.],
[20., 20.],
[29., 29.]], dtype=float32)
```
`jax.vmap` also supports the case where only one of the arguments is batched: for example, if you would like to convolve a single set of weights `w` with a batch of vectors `x`. In this case the `in_axes` argument can be set to `None`:
```
batch_convolve_v3 = jax.vmap(convolve, in_axes=[0, None])
batch_convolve_v3(xs, w)
```
```
Array([[11., 20., 29.],
[11., 20., 29.]], dtype=float32)
```
#### Combining transformations[#](#combining-transformations)
As with all JAX transformations, `jax.jit` and `jax.vmap` are designed to be composable, which means you can wrap a vmapped function with `jit`, or a JITted function with `vmap`, and everything will work correctly:
```
jitted_batch_convolve = jax.jit(auto_batch_convolve)
jitted_batch_convolve(xs, ws)
```
```
Array([[11., 20., 29.],
[11., 20., 29.]], dtype=float32)
```
### Advanced Automatic Differentiation in JAX[#](#advanced-automatic-differentiation-in-jax)
*Authors: <NAME> & <NAME>*
Computing gradients is a critical part of modern machine learning methods. This section considers a few advanced topics in the areas of automatic differentiation as it relates to modern machine learning.
While understanding how automatic differentiation works under the hood isn’t crucial for using JAX in most contexts, we encourage the reader to check out this quite accessible [video](https://www.youtube.com/watch?v=wG_nF1awSSY) to get a deeper sense of what’s going on.
[The Autodiff Cookbook](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html) is a more advanced and more detailed explanation of how these ideas are implemented in the JAX backend. It’s not necessary to understand this to do most things in JAX. However, some features (like defining [custom derivatives](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html)) depend on understanding this, so it’s worth knowing this explanation exists if you ever need to use them.
#### Higher-order derivatives[#](#higher-order-derivatives)
JAX’s autodiff makes it easy to compute higher-order derivatives, because the functions that compute derivatives are themselves differentiable. Thus, higher-order derivatives are as easy as stacking transformations.
We illustrate this in the single-variable case:
The derivative of \(f(x) = x^3 + 2x^2 - 3x + 1\) can be computed as:
```
import jax
f = lambda x: x**3 + 2*x**2 - 3*x + 1
dfdx = jax.grad(f)
```
The higher-order derivatives of \(f\) are:
\[\begin{split}
\begin{array}{l}
f'(x) = 3x^2 + 4x -3\\
f''(x) = 6x + 4\\
f'''(x) = 6\\
f^{iv}(x) = 0
\end{array}
\end{split}\]
Computing any of these in JAX is as easy as chaining the `grad` function:
```
d2fdx = jax.grad(dfdx)
d3fdx = jax.grad(d2fdx)
d4fdx = jax.grad(d3fdx)
```
Evaluating the above at \(x=1\) gives:
\[\begin{split}
\begin{array}{l}
f'(1) = 4\\
f''(1) = 10\\
f'''(1) = 6\\
f^{iv}(1) = 0
\end{array}
\end{split}\]
Using JAX:
```
print(dfdx(1.))
print(d2fdx(1.))
print(d3fdx(1.))
print(d4fdx(1.))
```
```
4.0
10.0
6.0
0.0
```
In the multivariable case, higher-order derivatives are more complicated. The second-order derivative of a function is represented by its [Hessian matrix](https://en.wikipedia.org/wiki/Hessian_matrix), defined according to
\[(\mathbf{H}f)_{i,j} = \frac{\partial^2 f}{\partial_i\partial_j}.\]
The Hessian of a real-valued function of several variables, \(f: \mathbb R^n\to\mathbb R\), can be identified with the Jacobian of its gradient. JAX provides two transformations for computing the Jacobian of a function, `jax.jacfwd` and `jax.jacrev`, corresponding to forward- and reverse-mode autodiff. They give the same answer, but one can be more efficient than the other in different circumstances – see the [video about autodiff](https://www.youtube.com/watch?v=wG_nF1awSSY) linked above for an explanation.
```
def hessian(f):
return jax.jacfwd(jax.grad(f))
```
Let’s double check this is correct on the dot-product \(f: \mathbf{x} \mapsto \mathbf{x} ^\top \mathbf{x}\).
If \(i=j\), \(\frac{\partial^2 f}{\partial_i\partial_j}(\mathbf{x}) = 2\); otherwise, \(\frac{\partial^2 f}{\partial_i\partial_j}(\mathbf{x}) = 0\).
```
import jax.numpy as jnp
def f(x):
return jnp.dot(x, x)
hessian(f)(jnp.array([1., 2., 3.]))
```
```
Array([[2., 0., 0.],
[0., 2., 0.],
[0., 0., 2.]], dtype=float32)
```
Often, however, we aren't interested in computing the full Hessian itself, and doing so can be very inefficient. [The Autodiff Cookbook](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html) explains some tricks, like the Hessian-vector product, that allow you to use it without materialising the whole matrix.
If you plan to work with higher-order derivatives in JAX, we strongly recommend reading the Autodiff Cookbook.
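As a taste of what the Cookbook covers, here is a minimal sketch of a Hessian-vector product via forward-over-reverse differentiation, reusing the dot-product `f` above:
```
def hvp(f, x, v):
    # Differentiate grad(f) along direction v: computes H(x) @ v
    # without ever materialising H.
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

hvp(f, jnp.array([1., 2., 3.]), jnp.array([1., 0., 0.]))  # ~[2., 0., 0.]
```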
#### Higher order optimization[#](#higher-order-optimization)
Some meta-learning techniques, such as Model-Agnostic Meta-Learning ([MAML](https://arxiv.org/abs/1703.03400)), require differentiating through gradient updates. In other frameworks this can be quite cumbersome, but in JAX it’s much easier:
```
def meta_loss_fn(params, data):
"""Computes the loss after one step of SGD."""
grads = jax.grad(loss_fn)(params, data)
return loss_fn(params - lr * grads, data)
meta_grads = jax.grad(meta_loss_fn)(params, data)
```
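The snippet above assumes `loss_fn`, `lr`, `params` and `data` are already defined; a self-contained toy instantiation (all names and values here are illustrative) might look like:
```
import jax
import jax.numpy as jnp

lr = 0.1
params = jnp.array([1.0, 0.0])                           # toy linear model: w, b
data = (jnp.array([1.0, 2.0]), jnp.array([3.0, 5.0]))    # toy (x, y) batch

def loss_fn(params, data):
    x, y = data
    pred = params[0] * x + params[1]
    return jnp.mean((pred - y) ** 2)

def meta_loss_fn(params, data):
    """Computes the loss after one step of SGD."""
    grads = jax.grad(loss_fn)(params, data)
    return loss_fn(params - lr * grads, data)

meta_grads = jax.grad(meta_loss_fn)(params, data)  # differentiates through the inner update
```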
#### Stopping gradients[#](#stopping-gradients)
Auto-diff enables automatic computation of the gradient of a function with respect to its inputs. Sometimes, however, we might want some additional control: for instance, we might want to avoid back-propagating gradients through some subset of the computational graph.
Consider for instance the TD(0) ([temporal difference](https://en.wikipedia.org/wiki/Temporal_difference_learning)) reinforcement learning update. This is used to learn to estimate the *value* of a state in an environment from experience of interacting with the environment. Let's assume the value estimate \(v_{\theta}(s_{t-1})\) in a state \(s_{t-1}\) is parameterised by a linear function.
```
# Value function and initial parameters
value_fn = lambda theta, state: jnp.dot(theta, state)
theta = jnp.array([0.1, -0.1, 0.])
```
Consider a transition from a state \(s_{t-1}\) to a state \(s_t\), during which we observed the reward \(r_t\):
```
# An example transition.
s_tm1 = jnp.array([1., 2., -1.])
r_t = jnp.array(1.)
s_t = jnp.array([2., 1., 0.])
```
The TD(0) update to the network parameters is:
\[
\Delta \theta = (r_t + v_{\theta}(s_t) - v_{\theta}(s_{t-1})) \nabla v_{\theta}(s_{t-1})
\]
This update is not the gradient of any loss function.
However, it can be **written** as the gradient of the pseudo loss function
\[
L(\theta) = [r_t + v_{\theta}(s_t) - v_{\theta}(s_{t-1})]^2
\]
if the dependency of the target \(r_t + v_{\theta}(s_t)\) on the parameter \(\theta\) is ignored.
How can we implement this in JAX? If we write the pseudo loss naively we get:
```
def td_loss(theta, s_tm1, r_t, s_t):
v_tm1 = value_fn(theta, s_tm1)
target = r_t + value_fn(theta, s_t)
return (target - v_tm1) ** 2
td_update = jax.grad(td_loss)
delta_theta = td_update(theta, s_tm1, r_t, s_t)
delta_theta
```
```
Array([ 2.4, -2.4, 2.4], dtype=float32)
```
But `td_update` will **not** compute a TD(0) update, because the gradient computation will include the dependency of `target` on \(\theta\).
We can use `jax.lax.stop_gradient` to force JAX to ignore the dependency of the target on \(\theta\):
```
def td_loss(theta, s_tm1, r_t, s_t):
v_tm1 = value_fn(theta, s_tm1)
target = r_t + value_fn(theta, s_t)
return (jax.lax.stop_gradient(target) - v_tm1) ** 2
td_update = jax.grad(td_loss)
delta_theta = td_update(theta, s_tm1, r_t, s_t)
delta_theta
```
```
Array([-2.4, -4.8, 2.4], dtype=float32)
```
This will treat `target` as if it did **not** depend on the parameters \(\theta\) and compute the correct update to the parameters.
The `jax.lax.stop_gradient` may also be useful in other settings, for instance if you want the gradient from some loss to only affect a subset of the parameters of the neural network (because, for instance, the other parameters are trained using a different loss).
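A small illustrative sketch of this (the parameter names are hypothetical): stopping gradients through one entry of a parameter pytree zeroes its gradient while leaving the rest untouched.
```
def subset_loss(params, x):
    # Gradients will not flow into params['frozen'].
    frozen = jax.lax.stop_gradient(params['frozen'])
    return jnp.sum((x * frozen) * params['trainable'])

grads = jax.grad(subset_loss)(
    {'frozen': jnp.ones(3), 'trainable': jnp.ones(3)}, jnp.arange(3.))
# grads['frozen'] is all zeros; grads['trainable'] gets the usual gradient.
```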
#### Straight-through estimator using `stop_gradient`[#](#straight-through-estimator-using-stop-gradient)
The straight-through estimator is a trick for defining a ‘gradient’ of a function that is otherwise non-differentiable. Given a non-differentiable function \(f : \mathbb{R}^n \to \mathbb{R}^n\) that is used as part of a larger function that we wish to find a gradient of, we simply pretend during the backward pass that \(f\) is the identity function. This can be implemented neatly using `jax.lax.stop_gradient`:
```
def f(x):
return jnp.round(x) # non-differentiable
def straight_through_f(x):
# Create an exactly-zero expression with Sterbenz lemma that has
# an exactly-one gradient.
zero = x - jax.lax.stop_gradient(x)
return zero + jax.lax.stop_gradient(f(x))
print("f(x): ", f(3.2))
print("straight_through_f(x):", straight_through_f(3.2))
print("grad(f)(x):", jax.grad(f)(3.2))
print("grad(straight_through_f)(x):", jax.grad(straight_through_f)(3.2))
```
```
f(x):  3.0
straight_through_f(x): 3.0
grad(f)(x): 0.0
grad(straight_through_f)(x): 1.0
```
#### Per-example gradients[#](#per-example-gradients)
While most ML systems compute gradients and updates from batches of data, for reasons of computational efficiency and/or variance reduction, it is sometimes necessary to have access to the gradient/update associated with each specific sample in the batch.
For instance, this is needed to prioritise data based on gradient magnitude, or to apply clipping / normalisations on a sample by sample basis.
In many frameworks (PyTorch, TF, Theano) it is often not trivial to compute per-example gradients, because the library directly accumulates the gradient over the batch. Naive workarounds, such as computing a separate loss per example and then aggregating the resulting gradients, are typically very inefficient.
In JAX we can define the code to compute the gradient per-sample in an easy but efficient way.
Just combine the `jit`, `vmap` and `grad` transformations together:
```
perex_grads = jax.jit(jax.vmap(jax.grad(td_loss), in_axes=(None, 0, 0, 0)))
# Test it:
batched_s_tm1 = jnp.stack([s_tm1, s_tm1])
batched_r_t = jnp.stack([r_t, r_t])
batched_s_t = jnp.stack([s_t, s_t])
perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t)
```
```
Array([[-2.4, -4.8, 2.4],
[-2.4, -4.8, 2.4]], dtype=float32)
```
Let’s walk through this one transformation at a time.
First, we apply `jax.grad` to `td_loss` to obtain a function that computes the gradient of the loss w.r.t. the parameters on single (unbatched) inputs:
```
dtdloss_dtheta = jax.grad(td_loss)
dtdloss_dtheta(theta, s_tm1, r_t, s_t)
```
```
Array([-2.4, -4.8, 2.4], dtype=float32)
```
This function computes one row of the array above.
Then, we vectorise this function using `jax.vmap`. This adds a batch dimension to all inputs and outputs. Now, given a batch of inputs, we produce a batch of outputs – each output in the batch corresponds to the gradient for the corresponding member of the input batch.
```
almost_perex_grads = jax.vmap(dtdloss_dtheta)
batched_theta = jnp.stack([theta, theta])
almost_perex_grads(batched_theta, batched_s_tm1, batched_r_t, batched_s_t)
```
```
Array([[-2.4, -4.8, 2.4],
[-2.4, -4.8, 2.4]], dtype=float32)
```
This isn’t quite what we want, because we have to manually feed this function a batch of `theta`s, whereas we actually want to use a single `theta`. We fix this by adding `in_axes` to the `jax.vmap`, specifying theta as `None`, and the other args as `0`. This makes the resulting function add an extra axis only to the other arguments, leaving `theta` unbatched, as we want:
```
inefficient_perex_grads = jax.vmap(dtdloss_dtheta, in_axes=(None, 0, 0, 0))
inefficient_perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t)
```
```
Array([[-2.4, -4.8, 2.4],
[-2.4, -4.8, 2.4]], dtype=float32)
```
Almost there! This does what we want, but is slower than it has to be. Now, we wrap the whole thing in a `jax.jit` to get the compiled, efficient version of the same function:
```
perex_grads = jax.jit(inefficient_perex_grads)
perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t)
```
```
Array([[-2.4, -4.8, 2.4],
[-2.4, -4.8, 2.4]], dtype=float32)
```
```
%timeit inefficient_perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t).block_until_ready()
%timeit perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t).block_until_ready()
```
```
100 loops, best of 5: 7.74 ms per loop
10000 loops, best of 5: 86.2 µs per loop
```
### Pseudo Random Numbers in JAX[#](#pseudo-random-numbers-in-jax)
*Authors: <NAME> & <NAME>*
In this section we focus on pseudo random number generation (PRNG); that is, the process of algorithmically generating sequences of numbers whose properties approximate the properties of sequences of random numbers sampled from an appropriate distribution.
PRNG-generated sequences are not truly random because they are in fact determined by their initial value, which is typically referred to as the `seed`, and each step of random sampling is a deterministic function of some `state` that is carried over from one sample to the next.
Pseudo random number generation is an essential component of any machine learning or scientific computing framework. Generally, JAX strives to be compatible with NumPy, but pseudo random number generation is a notable exception.
To better understand the difference between the approaches taken by JAX and NumPy when it comes to random number generation we will discuss both approaches in this section.
#### Random numbers in NumPy[#](#random-numbers-in-numpy)
Pseudo random number generation is natively supported in NumPy by the `numpy.random` module.
In NumPy, pseudo random number generation is based on a global `state`.
This can be set to a deterministic initial condition using `random.seed(SEED)`.
```
import numpy as np
np.random.seed(0)
```
You can inspect the content of the state using the following command.
```
def print_truncated_random_state():
"""To avoid spamming the outputs, print only part of the state."""
full_random_state = np.random.get_state()
print(str(full_random_state)[:460], '...')
print_truncated_random_state()
```
```
('MT19937', array([ 0, 1, 1812433255, 1900727105, 1208447044,
2481403966, 4042607538, 337614300, 3232553940, 1018809052,
3202401494, 1775180719, 3192392114, 594215549, 184016991,
829906058, 610491522, 3879932251, 3139825610, 297902587,
4075895579, 2943625357, 3530655617, 1423771745, 2135928312,
2891506774, 1066338622, 135451537, 933040465, 2759011858,
2273819758, 3545703099, 2516396728, 127 ...
```
The `state` is updated by each call to a random function:
```
np.random.seed(0)
print_truncated_random_state()
_ = np.random.uniform()
print_truncated_random_state()
```
```
('MT19937', array([ 0, 1, 1812433255, 1900727105, 1208447044,
2481403966, 4042607538, 337614300, 3232553940, 1018809052,
3202401494, 1775180719, 3192392114, 594215549, 184016991,
829906058, 610491522, 3879932251, 3139825610, 297902587,
4075895579, 2943625357, 3530655617, 1423771745, 2135928312,
2891506774, 1066338622, 135451537, 933040465, 2759011858,
2273819758, 3545703099, 2516396728, 127 ...
('MT19937', array([2443250962, 1093594115, 1878467924, 2709361018, 1101979660,
3904844661, 676747479, 2085143622, 1056793272, 3812477442,
2168787041, 275552121, 2696932952, 3432054210, 1657102335,
3518946594, 962584079, 1051271004, 3806145045, 1414436097,
2032348584, 1661738718, 1116708477, 2562755208, 3176189976,
696824676, 2399811678, 3992505346, 569184356, 2626558620,
136797809, 4273176064, 296167901, 343 ...
```
NumPy allows you to sample either individual numbers or entire vectors of numbers in a single function call. For instance, you may sample a vector of 3 scalars from a uniform distribution by doing:
```
np.random.seed(0)
print(np.random.uniform(size=3))
```
```
[0.5488135 0.71518937 0.60276338]
```
NumPy provides a *sequential equivalent guarantee*, meaning that sampling N numbers in a row individually or sampling a vector of N numbers results in the same pseudo-random sequences:
```
np.random.seed(0)
print("individually:", np.stack([np.random.uniform() for _ in range(3)]))
np.random.seed(0)
print("all at once: ", np.random.uniform(size=3))
```
```
individually: [0.5488135 0.71518937 0.60276338]
all at once: [0.5488135 0.71518937 0.60276338]
```
#### Random numbers in JAX[#](#random-numbers-in-jax)
JAX’s random number generation differs from NumPy’s in important ways. The reason is that NumPy’s PRNG design makes it hard to simultaneously guarantee a number of desirable properties for JAX, specifically that code must be:
1. reproducible,
2. parallelizable,
3. vectorisable.
We will discuss why below. First, we will focus on the implications of a PRNG design based on a global state. Consider the code:
```
import numpy as np
np.random.seed(0)
def bar(): return np.random.uniform()
def baz(): return np.random.uniform()
def foo(): return bar() + 2 * baz()
print(foo())
```
```
1.9791922366721637
```
The function `foo` sums two scalars sampled from a uniform distribution.
The output of this code can only satisfy requirement #1 if we assume a specific order of execution for `bar()` and `baz()`, as native Python does.
This doesn’t seem to be a major issue in NumPy, as it is already enforced by Python, but it becomes an issue in JAX.
Making this code reproducible in JAX would require enforcing this specific order of execution. This would violate requirement #2, as JAX should be able to parallelize `bar` and `baz` when jitting as these functions don’t actually depend on each other.
To avoid this issue, JAX does not use a global state. Instead, random functions explicitly consume the state, which is referred to as a `key`.
```
from jax import random
key = random.PRNGKey(42)
print(key)
```
```
[ 0 42]
```
A key is just an array of shape `(2,)`.
‘Random key’ is essentially just another word for ‘random seed’. However, instead of setting it once as in NumPy, any call of a random function in JAX requires a key to be specified. Random functions consume the key, but do not modify it. Feeding the same key to a random function will always result in the same sample being generated:
```
print(random.normal(key))
print(random.normal(key))
```
```
-0.18471184
-0.18471184
```
**Note:** Feeding the same key to different random functions can result in correlated outputs, which is generally undesirable.
**The rule of thumb is: never reuse keys (unless you want identical outputs).**
In order to generate different and independent samples, you must `split()` the key *yourself* whenever you want to call a random function:
```
print("old key", key)
new_key, subkey = random.split(key)
del key # The old key is discarded -- we must never use it again.
normal_sample = random.normal(subkey)
print(r" \---SPLIT --> new key ", new_key)
print(r" \--> new subkey", subkey, "--> normal", normal_sample)
del subkey # The subkey is also discarded after use.
# Note: you don't actually need to `del` keys -- that's just for emphasis.
# Not reusing the same values is enough.
key = new_key # If we wanted to do this again, we would use new_key as the key.
```
```
old key [ 0 42]
\---SPLIT --> new key [2465931498 3679230171]
\--> new subkey [255383827 267815257] --> normal 1.3694694
```
`split()` is a deterministic function that converts one `key` into several independent (in the pseudorandomness sense) keys. We keep one of the outputs as the `new_key`, and can safely use the unique extra key (called `subkey`) as input into a random function, and then discard it forever.
If you wanted to get another sample from the normal distribution, you would split `key` again, and so on. The crucial point is that you never use the same PRNGKey twice. Since `split()` takes a key as its argument, we must throw away that old key when we split it.
It doesn’t matter which part of the output of `split(key)` we call `key`, and which we call `subkey`. They are all pseudorandom numbers with equal status. The reason we use the key/subkey convention is to keep track of how they’re consumed down the road. Subkeys are destined for immediate consumption by random functions, while the key is retained to generate more randomness later.
Usually, the above example would be written concisely as
```
key, subkey = random.split(key)
```
which discards the old key automatically.
It’s worth noting that `split()` can create as many keys as you need, not just 2:
```
key, *forty_two_subkeys = random.split(key, num=43)
```
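A common pattern that follows from this (an illustrative sketch): thread the key through a loop, consuming a fresh subkey at every step.
```
key = random.PRNGKey(0)
samples = []
for _ in range(3):
    key, subkey = random.split(key)        # never reuse the old key
    samples.append(random.normal(subkey))  # subkey is consumed exactly once
print(samples)
```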
Another difference between NumPy’s and JAX’s random modules relates to the sequential equivalence guarantee mentioned above.
As in NumPy, JAX’s random module also allows sampling of vectors of numbers.
However, JAX does not provide a sequential equivalence guarantee, because doing so would interfere with the vectorization on SIMD hardware (requirement #3 above).
In the example below, sampling 3 values out of a normal distribution individually using three subkeys gives a different result than using a single key and specifying `shape=(3,)`:
```
key = random.PRNGKey(42)
subkeys = random.split(key, 3)
sequence = np.stack([random.normal(subkey) for subkey in subkeys])
print("individually:", sequence)
key = random.PRNGKey(42)
print("all at once: ", random.normal(key, shape=(3,)))
```
```
individually: [-0.04838839 0.10796146 -1.2226542 ]
all at once: [ 0.18693541 -1.2806507 -1.5593133 ]
```
Note that contrary to our recommendation above, we use `key` directly as an input to `random.normal()` in the second example. This is because we won’t reuse it anywhere else, so we don’t violate the single-use principle.
### Working with Pytrees[#](#working-with-pytrees)
*Author: <NAME>*
Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*.
JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas.
#### What is a pytree?[#](#what-is-a-pytree)
As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html):
> a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree.
Some example pytrees:
```
import jax
import jax.numpy as jnp
example_trees = [
[1, 'a', object()],
(1, (2, 3), ()),
[1, {'k1': 2, 'k2': (3, 4)}, 5],
{'a': 2, 'b': (2, 3)},
jnp.array([1, 2, 3]),
]
# Let's see how many leaves they have:
for pytree in example_trees:
leaves = jax.tree_util.tree_leaves(pytree)
print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
```
```
[1, 'a', <object object at 0x7fada8e9e9d0>] has 3 leaves: [1, 'a', <object object at 0x7fada8e9e9d0>]
(1, (2, 3), ()) has 3 leaves: [1, 2, 3]
[1, {'k1': 2, 'k2': (3, 4)}, 5] has 5 leaves: [1, 2, 3, 4, 5]
{'a': 2, 'b': (2, 3)} has 3 leaves: [2, 2, 3]
Array([1, 2, 3], dtype=int32) has 1 leaves: [Array([1, 2, 3], dtype=int32)]
```
We’ve also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees.
#### Why pytrees?[#](#why-pytrees)
In machine learning, some places where you commonly find pytrees are:
* Model parameters
* Dataset entries
* RL agent observations
They also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts).
#### Common pytree functions[#](#common-pytree-functions)
Perhaps the most commonly used pytree function is `jax.tree_map`. It works analogously to Python’s native `map`, but on entire pytrees:
```
list_of_lists = [
[1, 2, 3],
[1, 2],
[1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
```
```
[[2, 4, 6], [2, 4], [2, 4, 6, 8]]
```
`jax.tree_map` also works with multiple arguments:
```
another_list_of_lists = list_of_lists
jax.tree_map(lambda x, y: x+y, list_of_lists, another_list_of_lists)
```
```
[[2, 4, 6], [2, 4], [2, 4, 6, 8]]
```
When using multiple arguments with `jax.tree_map`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc.
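For instance, mismatched structures raise an error; a small illustration:
```
try:
    jax.tree_map(lambda x, y: x + y, [1, 2], [1, 2, 3])
except ValueError as e:
    print(f'ValueError: {e}')
```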
#### Example: ML model parameters[#](#example-ml-model-parameters)
A simple example of training an MLP displays some ways in which pytree operations come in useful:
```
import numpy as np
def init_mlp_params(layer_widths):
params = []
for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
params.append(
dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
biases=np.ones(shape=(n_out,))
)
)
return params
params = init_mlp_params([1, 128, 128, 1])
```
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
```
jax.tree_map(lambda x: x.shape, params)
```
```
[{'biases': (128,), 'weights': (1, 128)},
{'biases': (128,), 'weights': (128, 128)},
{'biases': (1,), 'weights': (128, 1)}]
```
Now, let’s train our MLP:
```
def forward(params, x):
*hidden, last = params
for layer in hidden:
x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
return x @ last['weights'] + last['biases']
def loss_fn(params, x, y):
return jnp.mean((forward(params, x) - y) ** 2)
LEARNING_RATE = 0.0001
@jax.jit
def update(params, x, y):
grads = jax.grad(loss_fn)(params, x, y)
# Note that `grads` is a pytree with the same structure as `params`.
# `jax.grad` is one of the many JAX functions that has
# built-in support for pytrees.
# This is handy, because we can apply the SGD update using tree utils:
return jax.tree_map(
lambda p, g: p - LEARNING_RATE * g, params, grads
)
```
```
import matplotlib.pyplot as plt
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.scatter(xs, forward(params, xs), label='Model prediction')
plt.legend();
```
#### Key paths[#](#key-paths)
In a pytree each leaf has a *key path*. A key path for a leaf is a `list` of *keys*, where the length of the list is equal to the depth of the leaf in the pytree. Each *key* is a [hashable object](https://docs.python.org/3/glossary.html#term-hashable) that represents an index into the corresponding pytree node type. The type of the key depends on the pytree node type; for example, the type of keys for `dict`s is different from the type of keys for `tuple`s.
For built-in pytree node types, the set of keys for any pytree node instance is unique. For a pytree comprising nodes with this property, the key path for each leaf is unique.
The APIs for working with key paths are:
* [`jax.tree_util.tree_flatten_with_path`](https://jax.readthedocs.io/en/latest/_autosummary/jax.tree_util.tree_flatten_with_path.html): Works similarly with `jax.tree_util.tree_flatten`, but returns key paths.
* [`jax.tree_util.tree_map_with_path`](https://jax.readthedocs.io/en/latest/_autosummary/jax.tree_util.tree_map_with_path.html): Works similarly with `jax.tree_util.tree_map`, but the function also takes key paths as arguments.
* [`jax.tree_util.keystr`](https://jax.readthedocs.io/en/latest/_autosummary/jax.tree_util.keystr.html): Given a general key path, returns a reader-friendly string expression.
One use case is to print debugging information related to a certain leaf value:
```
import collections
ATuple = collections.namedtuple("ATuple", ('name'))
tree = [1, {'k1': 2, 'k2': (3, 4)}, ATuple('foo')]
flattened, _ = jax.tree_util.tree_flatten_with_path(tree)
for key_path, value in flattened:
print(f'Value of tree{jax.tree_util.keystr(key_path)}: {value}')
```
```
Value of tree[0]: 1
Value of tree[1]['k1']: 2
Value of tree[1]['k2'][0]: 3
Value of tree[1]['k2'][1]: 4
Value of tree[2].name: foo
```
To express key paths, JAX provides a few default key types for the built-in pytree node types, namely:
* `SequenceKey(idx: int)`: for lists and tuples.
* `DictKey(key: Hashable)`: for dictionaries.
* `GetAttrKey(name: str)`: for `namedtuple`s and preferably custom pytree nodes (more in the next section).
You are free to define your own key types for your own custom nodes. They will work with `jax.tree_util.keystr` as long as their `__str__()` method is also overridden with a reader-friendly expression.
```
for key_path, _ in flattened:
print(f'Key path of tree{jax.tree_util.keystr(key_path)}: {repr(key_path)}')
```
```
Key path of tree[0]: (SequenceKey(idx=0),)
Key path of tree[1]['k1']: (SequenceKey(idx=1), DictKey(key='k1'))
Key path of tree[1]['k2'][0]: (SequenceKey(idx=1), DictKey(key='k2'), SequenceKey(idx=0))
Key path of tree[1]['k2'][1]: (SequenceKey(idx=1), DictKey(key='k2'), SequenceKey(idx=1))
Key path of tree[2].name: (SequenceKey(idx=2), GetAttrKey(name='name'))
```
#### Custom pytree nodes[#](#custom-pytree-nodes)
So far, we’ve only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
```
class MyContainer:
"""A named container."""
def __init__(self, name: str, a: int, b: int, c: int):
self.name = name
self.a = a
self.b = b
self.c = c
```
```
jax.tree_util.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
```
```
[<__main__.MyContainer at 0x121ae9ac0>, <__main__.MyContainer at 0x1233f9910>]
```
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
```
try:
jax.tree_map(lambda x: x + 1, [
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
except TypeError as e:
print(f'TypeError: {e}')
```
```
TypeError: unsupported operand type(s) for +: 'MyContainer' and 'int'
```
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
```
from typing import Iterable
def flatten_MyContainer(container) -> tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
flat_contents = [container.a, container.b, container.c]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_node(
MyContainer, flatten_MyContainer, unflatten_MyContainer)
jax.tree_util.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
```
```
[1, 2, 3, 4, 5, 6]
```
Alternatively, using the key path API mentioned above, you can register this container with its keys in mind by defining what the key should look like for each flattened-out value.
```
class MyKeyPathContainer(MyContainer):
pass
def flatten_with_keys_MyKeyPathContainer(container) -> tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
# GetAttrKey is a common way to express an attribute key. Users are free
# to pick any other expression that fits their use cases the best.
flat_contents = [(jax.tree_util.GetAttrKey('a'), container.a),
(jax.tree_util.GetAttrKey('b'), container.b),
(jax.tree_util.GetAttrKey('c'), container.c)]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyKeyPathContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyKeyPathContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyKeyPathContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_with_keys(
MyKeyPathContainer, flatten_with_keys_MyKeyPathContainer, unflatten_MyKeyPathContainer)
jax.tree_util.tree_leaves([
MyKeyPathContainer('Alice', 1, 2, 3),
MyKeyPathContainer('Bob', 4, 5, 6)
])
```
```
[1, 2, 3, 4, 5, 6]
```
`register_pytree_with_keys` is an extended API of `register_pytree_node`, and containers registered in either way can freely use all the `tree_util` utilities without error.
When a container registered with `register_pytree_node` uses `.*_with_path` APIs, the keys being returned will be a series of “flat index” fallbacks:
```
flattened, _ = jax.tree_util.tree_flatten_with_path(MyContainer('Alice', 1, 2, 3))
for key_path, value in flattened:
print(f'MyContainer container{jax.tree_util.keystr(key_path)}: {value}')
flattened, _ = jax.tree_util.tree_flatten_with_path(MyKeyPathContainer('Alice', 1, 2, 3))
for key_path, value in flattened:
print(f'MyKeyPathContainer container{jax.tree_util.keystr(key_path)}: {value}')
```
```
MyContainer container[<flat index 0>]: 1
MyContainer container[<flat index 1>]: 2
MyContainer container[<flat index 2>]: 3
MyKeyPathContainer container.a: 1
MyKeyPathContainer container.b: 2
MyKeyPathContainer container.c: 3
```
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance, a `NamedTuple` subclass doesn’t need to be registered to be considered a pytree node type:
```
from typing import NamedTuple, Any
class MyOtherContainer(NamedTuple):
name: str
a: Any
b: Any
c: Any
# NamedTuple subclasses are handled as pytree nodes, so
# this will work out-of-the-box:
jax.tree_util.tree_leaves([
MyOtherContainer('Alice', 1, 2, 3),
MyOtherContainer('Bob', 4, 5, 6)
])
```
```
['Alice', 1, 2, 3, 'Bob', 4, 5, 6]
```
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That’s the price we pay for not having to register the class the hard way.
One shortcut is to use `jax.tree_util.register_static` to register a type as being a node without children:
```
from typing import NamedTuple, Any
@jax.tree_util.register_static
class StaticStr(str):
    pass
class YetAnotherContainer(NamedTuple):
name: StaticStr
a: Any
b: Any
c: Any
# NamedTuple subclasses are handled as pytree nodes, so
# this will work out-of-the-box:
jax.tree_util.tree_leaves([
YetAnotherContainer(StaticStr('Alice'), 1, 2, 3),
YetAnotherContainer(StaticStr('Bob'), 4, 5, 6)
])
```
```
[1, 2, 3, 4, 5, 6]
```
#### Common pytree gotchas and patterns[#](#common-pytree-gotchas-and-patterns)
##### Gotchas[#](#gotchas)
###### Mistaking nodes for leaves[#](#mistaking-nodes-for-leaves)
A common problem to look out for is accidentally introducing tree nodes instead of leaves:
```
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
```
```
[(Array([1., 1.], dtype=float32), Array([1., 1., 1.], dtype=float32)),
(Array([1., 1., 1.], dtype=float32), Array([1., 1., 1., 1.], dtype=float32))]
```
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it’s called on `2` and `3`.
The solution will depend on the specifics, but there are two broadly applicable options:
* rewrite the code to avoid the intermediate `tree_map`.
* convert the tuple into an `np.array` or `jnp.array`, which makes the entire sequence a leaf; a sketch follows below.
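A minimal sketch of the second option, reusing `a_tree` from above: converting each shape tuple to an array turns it into a single leaf, so the mapped function receives the whole shape at once.
```
import numpy as np

# Each shape is now one np.array leaf rather than a tuple node of int leaves.
shapes = jax.tree_map(lambda x: np.array(x.shape), a_tree)
jax.tree_map(lambda shape: jnp.ones(tuple(shape)), shapes)
# -> [an array of shape (2, 3), an array of shape (3, 4)]
```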
###### Handling of None[#](#handling-of-none)
`jax.tree_utils` treats `None` as a node without children, not as a leaf:
```
jax.tree_util.tree_leaves([None, None, None])
```
```
[]
```
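If you do want `None` to be treated as a leaf, the `tree_util` functions accept an `is_leaf` predicate that overrides this default:
```
jax.tree_util.tree_leaves([None, None, None], is_leaf=lambda x: x is None)
# -> [None, None, None]
```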
##### Patterns[#](#patterns)
###### Transposing trees[#](#transposing-trees)
If you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_map`:
```
def tree_transpose(list_of_trees):
"""Convert a list of trees of identical structure into a single tree of lists."""
return jax.tree_map(lambda *xs: list(xs), *list_of_trees)
# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
```
```
{'obs': [3, 4], 't': [1, 2]}
```
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you to specify the structure of the inner and outer pytree for more flexibility:
```
jax.tree_transpose(
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
inner_treedef = jax.tree_structure(episode_steps[0]),
pytree_to_transpose = episode_steps
)
```
```
/<KEY>kernel_94597/112383129.py:2: FutureWarning: jax.tree_structure is deprecated, and will be removed in a future release. Use jax.tree_util.tree_structure instead.
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
/<KEY>/ipykernel_94597/112383129.py:3: FutureWarning: jax.tree_structure is deprecated, and will be removed in a future release. Use jax.tree_util.tree_structure instead.
inner_treedef = jax.tree_structure(episode_steps[0]),
/<KEY>4597/112383129.py:1: FutureWarning: jax.tree_transpose is deprecated, and will be removed in a future release. Use jax.tree_util.tree_transpose instead.
jax.tree_transpose(
```
```
{'obs': [3, 4], 't': [1, 2]}
```
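As the deprecation warnings suggest, the same call can be written against `jax.tree_util` directly; a sketch of the equivalent spelling:
```
jax.tree_util.tree_transpose(
    outer_treedef=jax.tree_util.tree_structure([0 for e in episode_steps]),
    inner_treedef=jax.tree_util.tree_structure(episode_steps[0]),
    pytree_to_transpose=episode_steps,
)
# -> {'obs': [3, 4], 't': [1, 2]}
```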
#### More Information[#](#more-information)
For more information on pytrees in JAX and the operations that are available, see the [Pytrees](https://jax.readthedocs.io/en/latest/pytrees.html) section in the JAX documentation.
### Parallel Evaluation in JAX[#](#parallel-evaluation-in-jax)
*Authors: <NAME> & <NAME>*
In this section we will discuss the facilities built into JAX for single-program, multiple-data (SPMD) code.
SPMD refers to a parallelism technique where the same computation (e.g., the forward pass of a neural net) is run on different input data (e.g., different inputs in a batch) in parallel on different devices (e.g., several TPUs).
Conceptually, this is not very different from vectorisation, where the same operations occur in parallel in different parts of memory on the same device. We have already seen that vectorisation is supported in JAX as a program transformation, `jax.vmap`. JAX supports device parallelism analogously, using `jax.pmap` to transform a function written for one device into a function that runs in parallel on multiple devices. This colab will teach you all about it.
#### TPU Setup[#](#tpu-setup)
This notebook requires multiple accelerators and we recommend running it using Kaggle TPU VMs.
Next run the following to see the TPU devices you have available:
```
import jax

jax.devices()
```
```
[TpuDevice(id=0, host_id=0, coords=(0,0,0), core_on_chip=0),
TpuDevice(id=1, host_id=0, coords=(0,0,0), core_on_chip=1),
TpuDevice(id=2, host_id=0, coords=(1,0,0), core_on_chip=0),
TpuDevice(id=3, host_id=0, coords=(1,0,0), core_on_chip=1),
TpuDevice(id=4, host_id=0, coords=(0,1,0), core_on_chip=0),
TpuDevice(id=5, host_id=0, coords=(0,1,0), core_on_chip=1),
TpuDevice(id=6, host_id=0, coords=(1,1,0), core_on_chip=0),
TpuDevice(id=7, host_id=0, coords=(1,1,0), core_on_chip=1)]
```
#### The basics[#](#the-basics)
The most basic use of `jax.pmap` is completely analogous to `jax.vmap`, so let’s return to the convolution example from the [Vectorisation notebook](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/03-vectorization.ipynb).
```
import numpy as np
import jax.numpy as jnp
x = np.arange(5)
w = np.array([2., 3., 4.])
def convolve(x, w):
output = []
for i in range(1, len(x)-1):
output.append(jnp.dot(x[i-1:i+2], w))
return jnp.array(output)
convolve(x, w)
```
```
Array([11., 20., 29.], dtype=float32)
```
Now, let’s convert our `convolve` function into one that runs on entire batches of data. In anticipation of spreading the batch across several devices, we’ll make the batch size equal to the number of devices:
```
n_devices = jax.local_device_count()
xs = np.arange(5 * n_devices).reshape(-1, 5)
ws = np.stack([w] * n_devices)
xs
```
```
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39]])
```
```
ws
```
```
array([[2., 3., 4.],
[2., 3., 4.],
[2., 3., 4.],
[2., 3., 4.],
[2., 3., 4.],
[2., 3., 4.],
[2., 3., 4.],
[2., 3., 4.]])
```
As before, we can vectorise using `jax.vmap`:
```
jax.vmap(convolve)(xs, ws)
```
```
Array([[ 11., 20., 29.],
[ 56., 65., 74.],
[101., 110., 119.],
[146., 155., 164.],
[191., 200., 209.],
[236., 245., 254.],
[281., 290., 299.],
[326., 335., 344.]], dtype=float32)
```
To spread out the computation across multiple devices, just replace `jax.vmap` with `jax.pmap`:
```
jax.pmap(convolve)(xs, ws)
```
```
Array([[ 11., 20., 29.],
[ 56., 65., 74.],
[101., 110., 119.],
[146., 155., 164.],
[191., 200., 209.],
[236., 245., 254.],
[281., 290., 299.],
[326., 335., 344.]], dtype=float32)
```
Note that the parallelized `convolve` returns a `jax.Array`. That is because the elements of this array are sharded across all of the devices used in the parallelism. If we were to run another parallel computation, the elements would stay on their respective devices, without incurring cross-device communication costs.
```
jax.pmap(convolve)(xs, jax.pmap(convolve)(xs, ws))
```
```
Array([[ 78., 138., 198.],
[ 1188., 1383., 1578.],
[ 3648., 3978., 4308.],
[ 7458., 7923., 8388.],
[12618., 13218., 13818.],
[19128., 19863., 20598.],
[26988., 27858., 28728.],
[36198., 37203., 38208.]], dtype=float32)
```
The outputs of the inner `jax.pmap(convolve)` never left their devices when being fed into the outer `jax.pmap(convolve)`.
#### Specifying `in_axes`[#](#specifying-in-axes)
Like with `vmap`, we can use `in_axes` to specify whether an argument to the parallelized function should be broadcast (`None`), or whether it should be split along a given axis. Note, however, that unlike `vmap`, only the leading axis (`0`) is supported by `pmap` at the time of writing this guide.
```
jax.pmap(convolve, in_axes=(0, None))(xs, w)
```
```
Array([[ 11., 20., 29.],
[ 56., 65., 74.],
[101., 110., 119.],
[146., 155., 164.],
[191., 200., 209.],
[236., 245., 254.],
[281., 290., 299.],
[326., 335., 344.]], dtype=float32)
```
Notice how we get equivalent output to what we observe above with `jax.pmap(convolve)(xs, ws)`, where we manually replicated `w` when creating `ws`. Here, it is replicated via broadcasting, by specifying it as `None` in `in_axes`.
Keep in mind that when calling the transformed function, the size of the specified axis in arguments must not exceed the number of devices available to the host.
#### `pmap` and `jit`[#](#pmap-and-jit)
`jax.pmap` JIT-compiles the function given to it as part of its operation, so there is no need to additionally `jax.jit` it.
#### Communication between devices[#](#communication-between-devices)
The above is enough to perform simple parallel operations, e.g. batching a simple MLP forward pass across several devices. However, sometimes we need to pass information between the devices. For example, perhaps we are interested in normalizing the output of each device so they sum to 1.
For that, we can use special [collective ops](https://jax.readthedocs.io/en/latest/jax.lax.html#parallel-operators) (such as the `jax.lax.p*` ops `psum`, `pmean`, `pmax`, …). In order to use the collective ops we must specify the name of the `pmap`-ed axis through `axis_name` argument, and then refer to it when calling the op. Here’s how to do that:
```
def normalized_convolution(x, w):
output = []
for i in range(1, len(x)-1):
output.append(jnp.dot(x[i-1:i+2], w))
output = jnp.array(output)
return output / jax.lax.psum(output, axis_name='p')
jax.pmap(normalized_convolution, axis_name='p')(xs, ws)
```
```
Array([[0.00816024, 0.01408451, 0.019437 ],
[0.04154303, 0.04577465, 0.04959785],
[0.07492582, 0.07746479, 0.07975871],
[0.10830861, 0.10915492, 0.10991956],
[0.14169139, 0.14084506, 0.14008042],
[0.17507419, 0.17253521, 0.17024128],
[0.20845698, 0.20422535, 0.20040214],
[0.24183977, 0.23591548, 0.23056298]], dtype=float32)
```
The `axis_name` is just a string label that allows collective operations like `jax.lax.psum` to refer to the axis bound by `jax.pmap`. It can be named anything you want – in this case, `p`. This name is essentially invisible to anything but those functions, and those functions use it to know which axis to communicate across.
`jax.vmap` also supports `axis_name`, which allows `jax.lax.p*` operations to be used in the vectorisation context in the same way they would be used in a `jax.pmap`:
```
jax.vmap(normalized_convolution, axis_name='p')(xs, ws)
```
```
Array([[0.00816024, 0.01408451, 0.019437 ],
[0.04154303, 0.04577465, 0.04959785],
[0.07492582, 0.07746479, 0.07975871],
[0.10830861, 0.10915492, 0.10991956],
[0.14169139, 0.14084506, 0.14008042],
[0.17507419, 0.17253521, 0.17024128],
[0.20845698, 0.20422535, 0.20040214],
[0.24183977, 0.23591548, 0.23056298]], dtype=float32)
```
Note that `normalized_convolution` will no longer work without being transformed by `jax.pmap` or `jax.vmap`, because `jax.lax.psum` expects there to be a named axis (`'p'`, in this case), and those two transformations are the only way to bind one.
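As a quick check (the exact exception type may vary between JAX versions), calling it directly fails with a complaint about the unbound axis name:
```
try:
    normalized_convolution(x, w)
except Exception as e:
    print(e)  # JAX reports that the axis name 'p' is unbound
```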
#### Nesting `jax.pmap` and `jax.vmap`[#](#nesting-jax-pmap-and-jax-vmap)
The reason we specify `axis_name` as a string is so we can use collective operations when nesting `jax.pmap` and `jax.vmap`. For example:
```
jax.vmap(jax.pmap(f, axis_name='i'), axis_name='j')
```
A `jax.lax.psum(..., axis_name='i')` in `f` would refer only to the pmapped axis, since they share the `axis_name`.
In general, `jax.pmap` and `jax.vmap` can be nested in any order, and with themselves (so you can have a `pmap` within another `pmap`, for instance).
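Here is a small runnable sketch using two nested `jax.vmap`s, so it works without multiple devices; `psum` over `'i'` reduces only over the axis bound by the inner map:
```
def normalize_inner(x):
    # 'i' is bound by the inner vmap, so this sums within each row.
    return x / jax.lax.psum(x, axis_name='i')

arr = jnp.arange(6.).reshape(2, 3)
jax.vmap(jax.vmap(normalize_inner, axis_name='i'), axis_name='j')(arr)
# -> each row is divided by its own row sum
```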
#### Example[#](#example)
Here’s an example of a regression training loop with data parallelism, where each batch is split into sub-batches which are evaluated on separate devices.
There are two places to pay attention to:
* the `update()` function
* the replication of parameters and splitting of data across devices.
If this example is too confusing, you can find the same example, but without parallelism, in the next notebook, [State in JAX](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/07-state.ipynb). Once that example makes sense, you can compare the differences to understand how parallelism changes the picture.
```
from typing import NamedTuple
import functools
class Params(NamedTuple):
weight: jnp.ndarray
bias: jnp.ndarray
def init(rng) -> Params:
"""Returns the initial model params."""
weights_key, bias_key = jax.random.split(rng)
weight = jax.random.normal(weights_key, ())
bias = jax.random.normal(bias_key, ())
return Params(weight, bias)
def loss_fn(params: Params, xs: jnp.ndarray, ys: jnp.ndarray) -> jnp.ndarray:
"""Computes the least squares error of the model's predictions on x against y."""
pred = params.weight * xs + params.bias
return jnp.mean((pred - ys) ** 2)
LEARNING_RATE = 0.005
# So far, the code is identical to the single-device case. Here's what's new:
# Remember that the `axis_name` is just an arbitrary string label used
# to later tell `jax.lax.pmean` which axis to reduce over. Here, we call it
# 'num_devices', but could have used anything, so long as `pmean` used the same.
@functools.partial(jax.pmap, axis_name='num_devices')
def update(params: Params, xs: jnp.ndarray, ys: jnp.ndarray) -> tuple[Params, jnp.ndarray]:
"""Performs one SGD update step on params using the given data."""
# Compute the gradients on the given minibatch (individually on each device).
loss, grads = jax.value_and_grad(loss_fn)(params, xs, ys)
# Combine the gradient across all devices (by taking their mean).
grads = jax.lax.pmean(grads, axis_name='num_devices')
# Also combine the loss. Unnecessary for the update, but useful for logging.
loss = jax.lax.pmean(loss, axis_name='num_devices')
# Each device performs its own update, but since we start with the same params
# and synchronise gradients, the params stay in sync.
new_params = jax.tree_map(
lambda param, g: param - g * LEARNING_RATE, params, grads)
return new_params, loss
```
Here’s how `update()` works:
Undecorated and without the `pmean`s, `update()` takes data tensors of shape `[batch, ...]`, computes the loss function on that batch and evaluates its gradients.
We want to spread the `batch` dimension across all available devices. To do that, we add a new axis using `pmap`. The arguments to the decorated `update()` thus need to have shape `[num_devices, batch_per_device, ...]`. So, to call the new `update()`, we’ll need to reshape data batches so that what used to be `batch` is reshaped to `[num_devices, batch_per_device]`. That’s what `split()` does below. Additionally, we’ll need to replicate our model parameters, adding the `num_devices` axis. This reshaping is how a pmapped function knows which devices to send which data.
At some point during the update step, we need to combine the gradients computed by each device – otherwise, the updates performed by each device would be different. That’s why we use `jax.lax.pmean` to compute the mean across the `num_devices` axis, giving us the average gradient of the batch. That average gradient is what we use to compute the update.
Aside on naming: here, we use `num_devices` for the `axis_name` for didactic clarity while introducing `jax.pmap`. However, in some sense that is tautologous: any axis introduced by a pmap will represent a number of devices. Therefore, it’s common to see the axis be named something semantically meaningful, like `batch`, `data` (signifying data parallelism) or `model` (signifying model parallelism).
```
# Generate true data from y = w*x + b + noise
true_w, true_b = 2, -1
xs = np.random.normal(size=(128, 1))
noise = 0.5 * np.random.normal(size=(128, 1))
ys = xs * true_w + true_b + noise
# Initialise parameters and replicate across devices.
params = init(jax.random.PRNGKey(123))
n_devices = jax.local_device_count()
replicated_params = jax.tree_map(lambda x: jnp.array([x] * n_devices), params)
```
So far, we’ve just constructed arrays with an additional leading dimension. The params are still all on the host (CPU). `pmap` will communicate them to the devices when `update()` is first called, and each copy will stay on its own device subsequently.
```
type(replicated_params.weight)
```
```
jax.Array
```
The params will become a jax.Array when they are returned by our pmapped `update()` (see further down).
We do the same to the data:
```
def split(arr):
"""Splits the first axis of `arr` evenly across the number of devices."""
return arr.reshape(n_devices, arr.shape[0] // n_devices, *arr.shape[1:])
# Reshape xs and ys for the pmapped `update()`.
x_split = split(xs)
y_split = split(ys)
type(x_split)
```
```
numpy.ndarray
```
The data is just a reshaped vanilla NumPy array. Hence, it cannot be anywhere but on the host, as NumPy runs on CPU only. Since we never modify it, it will get sent to the device at each `update` call, like in a real pipeline where data is typically streamed from CPU to the device at each step.
```
def type_after_update(name, obj):
print(f"after first `update()`, `{name}` is a", type(obj))
# Actual training loop.
for i in range(1000):
# This is where the params and data gets communicated to devices:
replicated_params, loss = update(replicated_params, x_split, y_split)
# The returned `replicated_params` and `loss` are now both jax.Arrays,
# indicating that they're on the devices.
# `x_split`, of course, remains a NumPy array on the host.
if i == 0:
type_after_update('replicated_params.weight', replicated_params.weight)
type_after_update('loss', loss)
type_after_update('x_split', x_split)
if i % 100 == 0:
# Note that loss is actually an array of shape [num_devices], with identical
# entries, because each device returns its copy of the loss.
# So, we take the first element to print it.
print(f"Step {i:3d}, loss: {loss[0]:.3f}")
# Plot results.
# Like the loss, the leaves of params have an extra leading dimension,
# so we take the params from the first device.
params = jax.device_get(jax.tree_map(lambda x: x[0], replicated_params))
```
```
after first `update()`, `replicated_params.weight` is a <class 'jax.Array'>
after first `update()`, `loss` is a <class 'jax.Array'>
after first `update()`, `x_split` is a <class 'numpy.ndarray'>
Step 0, loss: 0.228
Step 100, loss: 0.228
Step 200, loss: 0.228
Step 300, loss: 0.228
Step 400, loss: 0.228
Step 500, loss: 0.228
Step 600, loss: 0.228
Step 700, loss: 0.228
Step 800, loss: 0.228
Step 900, loss: 0.228
```
```
import matplotlib.pyplot as plt

plt.scatter(xs, ys)
plt.plot(xs, params.weight * xs + params.bias, c='red', label='Model Prediction')
plt.legend()
plt.show()
```
#### Aside: hosts and devices in JAX[#](#aside-hosts-and-devices-in-jax)
When running on TPU, the idea of a ‘host’ becomes important. A host is the CPU that manages several devices. A single host can only manage so many devices (usually 8), so when running very large parallel programs, multiple hosts are needed, and some finesse is required to manage them.
```
jax.devices()
```
```
[TpuDevice(id=0, host_id=0, coords=(0,0,0), core_on_chip=0),
TpuDevice(id=1, host_id=0, coords=(0,0,0), core_on_chip=1),
TpuDevice(id=2, host_id=0, coords=(1,0,0), core_on_chip=0),
TpuDevice(id=3, host_id=0, coords=(1,0,0), core_on_chip=1),
TpuDevice(id=4, host_id=0, coords=(0,1,0), core_on_chip=0),
TpuDevice(id=5, host_id=0, coords=(0,1,0), core_on_chip=1),
TpuDevice(id=6, host_id=0, coords=(1,1,0), core_on_chip=0),
TpuDevice(id=7, host_id=0, coords=(1,1,0), core_on_chip=1)]
```
When running on CPU you can always emulate an arbitrary number of devices with a nifty `--xla_force_host_platform_device_count` XLA flag, e.g. by executing the following before importing JAX:
```
import os
os.environ['XLA_FLAGS'] = '--xla_force_host_platform_device_count=8'
jax.devices()
```
```
[CpuDevice(id=0),
CpuDevice(id=1),
CpuDevice(id=2),
CpuDevice(id=3),
CpuDevice(id=4),
CpuDevice(id=5),
CpuDevice(id=6),
CpuDevice(id=7)]
```
This is especially useful for debugging and testing locally or even for prototyping in Colab since a CPU runtime is faster to (re-)start.
### Stateful Computations in JAX[#](#stateful-computations-in-jax)
*Authors: <NAME>*
This section explores how JAX constrains the implementation of stateful programs.
#### Motivation[#](#motivation)
In machine learning, program state most often comes in the form of:
* model parameters,
* optimizer state, and
* stateful layers, such as [BatchNorm](https://en.wikipedia.org/wiki/Batch_normalization).
Some JAX transformations, most notably `jax.jit`, impose constraints on the functions they transform. In particular, the function transformed by `jax.jit` must have no side-effects. This is because any such side-effects will only be executed once, when the python version of the function is run during compilation. These side-effects will not be executed by the compiled function on subsequent runs.
Changing program state is one kind of side-effect. So, if we can’t have side effects, how do we update model parameters, the optimizer state, and use stateful layers in our models? This colab will explain this in detail, but the short answer is: with [functional programming](https://en.wikipedia.org/wiki/Functional_programming).
#### A simple example: Counter[#](#a-simple-example-counter)
Let’s start by looking at a simple stateful program: a counter.
```
import jax
import jax.numpy as jnp
class Counter:
"""A simple counter."""
def __init__(self):
self.n = 0
def count(self) -> int:
"""Increments the counter and returns the new value."""
self.n += 1
return self.n
def reset(self):
"""Resets the counter to zero."""
self.n = 0
counter = Counter()
for _ in range(3):
print(counter.count())
```
```
1
2
3
```
The `n` attribute maintains the counter’s *state* between successive calls of `count`. It is modified as a side effect of calling `count`.
Let’s say we want to count fast, so we `jax.jit` the `count` method. (In this example, this wouldn’t actually help speed anyway, for many reasons, but treat this as a toy model of wanting to JIT-compile the update of model parameters, where `jax.jit` makes an enormous difference).
```
counter.reset()
fast_count = jax.jit(counter.count)
for _ in range(3):
print(fast_count())
```
```
1
1
1
```
Oh no! Our counter isn’t working. This is because the line
```
self.n += 1
```
in `count` is only called once, when JAX compiles the method call. Moreover, since the return value doesn’t depend on the arguments to `count`, once it returns the first 1, subsequent calls to `fast_count` will always return 1. This won’t do. So, how do we fix it?
#### The solution: explicit state[#](#the-solution-explicit-state)
Part of the problem with our counter was that the returned value didn’t depend on the arguments, meaning a constant was “baked into” the compiled output. But it shouldn’t be a constant – it should depend on the state. Well, then why don’t we make the state into an argument?
```
CounterState = int
class CounterV2:
def count(self, n: CounterState) -> tuple[int, CounterState]:
# You could just return n+1, but here we separate its role as
# the output and as the counter state for didactic purposes.
return n+1, n+1
def reset(self) -> CounterState:
return 0
counter = CounterV2()
state = counter.reset()
for _ in range(3):
value, state = counter.count(state)
print(value)
```
```
1
2
3
```
In this new version of `Counter`, we moved `n` to be an argument of `count`, and added another return value that represents the new, updated, state. To use this counter, we now need to keep track of the state explicitly. But in return, we can now safely `jax.jit` this counter:
```
state = counter.reset()
fast_count = jax.jit(counter.count)
for _ in range(3):
value, state = fast_count(state)
print(value)
```
```
1
2
3
```
#### A general strategy[#](#a-general-strategy)
We can apply the same process to any stateful method to convert it into a stateless one. We took a class of the form
```
class StatefulClass:
state: State
def stateful_method(*args, **kwargs) -> Output:
```
and turned it into a class of the form
```
class StatelessClass:
def stateless_method(state: State, *args, **kwargs) -> (Output, State):
```
This is a common [functional programming](https://en.wikipedia.org/wiki/Functional_programming) pattern, and, essentially, is the way that state is handled in all JAX programs.
Notice that the need for a class becomes less clear once we have rewritten it this way. We could just keep `stateless_method`, since the class is no longer doing any work. This is because, like the strategy we just applied, object-oriented programming (OOP) is a way to help programmers understand program state.
In our case, the `CounterV2` class is nothing more than a namespace bringing all the functions that use `CounterState` into one location. Exercise for the reader: do you think it makes sense to keep it as a class?
Incidentally, you’ve already seen an example of this strategy in the JAX pseudo-randomness API, `jax.random`, shown in the [Random Numbers section](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05-random-numbers.ipynb). Unlike Numpy, which manages random state using stateful classes, JAX requires the programmer to work directly with the random generator state – the PRNGKey.
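For instance, threading a PRNGKey follows exactly the `(Output, State)` pattern above:
```
key = jax.random.PRNGKey(0)
key, subkey = jax.random.split(key)     # explicitly "advance" the random state
sample = jax.random.normal(subkey, ())  # consume the subkey to produce an output
```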
#### Simple worked example: Linear Regression[#](#simple-worked-example-linear-regression)
Let’s apply this strategy to a simple machine learning model: linear regression via gradient descent.
Here, we only deal with one kind of state: the model parameters. But generally, you’ll see many kinds of state being threaded in and out of JAX functions, like optimizer state, layer statistics for batchnorm, and others.
The function to look at carefully is `update`.
```
from typing import NamedTuple
class Params(NamedTuple):
weight: jnp.ndarray
bias: jnp.ndarray
def init(rng) -> Params:
"""Returns the initial model params."""
weights_key, bias_key = jax.random.split(rng)
weight = jax.random.normal(weights_key, ())
bias = jax.random.normal(bias_key, ())
return Params(weight, bias)
def loss(params: Params, x: jnp.ndarray, y: jnp.ndarray) -> jnp.ndarray:
"""Computes the least squares error of the model's predictions on x against y."""
pred = params.weight * x + params.bias
return jnp.mean((pred - y) ** 2)
LEARNING_RATE = 0.005
@jax.jit
def update(params: Params, x: jnp.ndarray, y: jnp.ndarray) -> Params:
"""Performs one SGD update step on params using the given data."""
grad = jax.grad(loss)(params, x, y)
# If we were using Adam or another stateful optimizer,
# we would also do something like
# ```
# updates, new_optimizer_state = optimizer(grad, optimizer_state)
# ```
# and then use `updates` instead of `grad` to actually update the params.
# (And we'd include `new_optimizer_state` in the output, naturally.)
new_params = jax.tree_map(
lambda param, g: param - g * LEARNING_RATE, params, grad)
return new_params
```
Notice that we manually pipe the params in and out of the update function.
```
import matplotlib.pyplot as plt
rng = jax.random.PRNGKey(42)
# Generate true data from y = w*x + b + noise
true_w, true_b = 2, -1
x_rng, noise_rng = jax.random.split(rng)
xs = jax.random.normal(x_rng, (128, 1))
noise = jax.random.normal(noise_rng, (128, 1)) * 0.5
ys = xs * true_w + true_b + noise

# Fit regression
params = init(rng)
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.plot(xs, params.weight * xs + params.bias, c='red', label='Model Prediction')
plt.legend();
```
#### Taking it further[#](#taking-it-further)
The strategy described above is how any (jitted) JAX program must handle state.
Handling parameters manually seems fine if you’re dealing with two parameters, but what if it’s a neural net with dozens of layers? You might already be getting worried about two things:
1. Are we supposed to initialize them all manually, essentially repeating what we already write in the forward pass definition?
2. Are we supposed to pipe all these things around manually?
The details can be tricky to handle, but there are examples of libraries that take care of this for you. See [JAX Neural Network Libraries](https://github.com/google/jax#neural-network-libraries) for some examples.
User Guides[#](#user-guides)
---
User guides are deeper dives into particular topics within JAX that become relevant as your JAX project matures into larger or deployed codebases.
### Profiling JAX programs[#](#profiling-jax-programs)
#### Viewing program traces with Perfetto[#](#viewing-program-traces-with-perfetto)
We can use the JAX profiler to generate traces of a JAX program that can be visualized using the [Perfetto visualizer](https://ui.perfetto.dev). Currently,
this method blocks the program until a link is clicked and the Perfetto UI loads the trace. If you wish to get profiling information without any interaction,
check out the Tensorboard profiler below.
```
with jax.profiler.trace("/tmp/jax-trace", create_perfetto_link=True):
# Run the operations to be profiled
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (5000, 5000))
y = x @ x
y.block_until_ready()
```
After this computation is done, the program will prompt you to open a link to
`ui.perfetto.dev`. When you open the link, the Perfetto UI will load the trace file and open a visualizer.
Program execution will continue after loading the link. The link is no longer valid after opening once, but it will redirect to a new URL that remains valid.
You can then click the “Share” button in the Perfetto UI to create a permalink to the trace that can be shared with others.
##### Remote profiling[#](#remote-profiling)
When profiling code that is running remotely (for example on a hosted VM),
you need to establish an SSH tunnel on port 9001 for the link to work. You can do that with this command:
```
$ ssh -L 9001:127.0.0.1:9001 <user>@<host>
```
or if you’re using Google Cloud:
```
$ gcloud compute ssh <machine-name> -- -L 9001:127.0.0.1:9001
```
##### Manual capture[#](#manual-capture)
Instead of capturing traces programmatically using `jax.profiler.trace`, you can instead start a profiling server in the script of interest by calling
`jax.profiler.start_server(<port>)`. If you only need the profiler server to be active for a portion of your script, you can shut it down by calling
`jax.profiler.stop_server()`.
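A minimal sketch of that flow (the port number 9999 is arbitrary):
```
import jax.profiler

jax.profiler.start_server(9999)
# ... the portion of the script you want to be able to capture traces from ...
jax.profiler.stop_server()
```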
Once the script is running and after the profiler server has started, we can manually capture a trace by running:
```
$ python -m jax.collect_profile <port> <duration_in_ms>
```
By default, the resulting trace information is dumped into a temporary directory but this can be overridden by passing in `--log_dir=<directory of choice>`.
Also, by default, the program will prompt you to open a link to
`ui.perfetto.dev`. When you open the link, the Perfetto UI will load the trace file and open a visualizer. This feature is disabled by passing in
`--no_perfetto_link` into the command. Alternatively, you can also point Tensorboard to the `log_dir` to analyze the trace (see the
“Tensorboard Profiling” section below).
#### TensorBoard profiling[#](#tensorboard-profiling)
[TensorBoard’s profiler](https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras)
can be used to profile JAX programs. Tensorboard is a great way to acquire and visualize performance traces and profiles of your program, including activity on GPU and TPU. The end result looks something like this:
##### Installation[#](#installation)
The TensorBoard profiler is only available with the version of TensorBoard bundled with TensorFlow.
```
pip install tensorflow tensorboard-plugin-profile
```
If you already have TensorFlow installed, you only need to install the
`tensorboard-plugin-profile` pip package. Be careful to only install one version of TensorFlow or TensorBoard, otherwise you may encounter the “duplicate plugins” error described [below](#multiple-installs). See
<https://www.tensorflow.org/guide/profiler> for more information on installing TensorBoard.
##### Programmatic capture[#](#programmatic-capture)
You can instrument your code to capture a profiler trace via the
[`jax.profiler.start_trace()`](index.html#jax.profiler.start_trace) and [`jax.profiler.stop_trace()`](index.html#jax.profiler.stop_trace)
methods. Call [`start_trace()`](index.html#jax.profiler.start_trace) with the directory to write trace files to. This should be the same `--logdir` directory used to start TensorBoard. Then, you can use TensorBoard to view the traces.
For example, to take a profiler trace:
```
import jax
jax.profiler.start_trace("/tmp/tensorboard")
# Run the operations to be profiled
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (5000, 5000))
y = x @ x
y.block_until_ready()
jax.profiler.stop_trace()
```
Note the `block_until_ready()` call. We use this to make sure on-device execution is captured by the trace. See [Asynchronous dispatch](index.html#async-dispatch) for details on why this is necessary.
You can also use the [`jax.profiler.trace()`](index.html#jax.profiler.trace) context manager as an alternative to `start_trace` and `stop_trace`:
```
import jax
with jax.profiler.trace("/tmp/tensorboard"):
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (5000, 5000))
y = x @ x
y.block_until_ready()
```
To view the trace, first start TensorBoard if you haven’t already:
```
$ tensorboard --logdir=/tmp/tensorboard
[...]
Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
TensorBoard 2.5.0 at http://localhost:6006/ (Press CTRL+C to quit)
```
You should be able to load TensorBoard at <http://localhost:6006/> in this example. You can specify a different port with the `--port` flag. See
[Profiling on a remote machine](#id2) below if running JAX on a remote server.
Then, either select “Profile” in the upper-right dropdown menu, or go directly to <http://localhost:6006/#profile>. Available traces appear in the “Runs”
dropdown menu on the left. Select the run you’re interested in, and then under
“Tools”, select `trace_viewer`. You should now see a timeline of the execution. You can use the WASD keys to navigate the trace, and click or drag to select events to see more details at the bottom. See [these TensorFlow docs](https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras#use_the_tensorflow_profiler_to_profile_model_training_performance)
for more details on using the trace viewer.
You can also use the `memory_viewer`, `op_profile`, and `graph_viewer` tools.
##### Manual capture via TensorBoard[#](#manual-capture-via-tensorboard)
The following are instructions for capturing a manually-triggered N-second trace from a running program.
1. Start a TensorBoard server:
```
tensorboard --logdir /tmp/tensorboard/
```
You should be able to load TensorBoard at <http://localhost:6006/>. You can specify a different port with the `--port` flag. See [Profiling on a remote machine](#id2)
below if running JAX on a remote server.
2. In the Python program or process you’d like to profile, add the following somewhere near the beginning:
```
import jax.profiler

jax.profiler.start_server(9999)
```
This starts the profiler server that TensorBoard connects to. The profiler server must be running before you move on to the next step. When you’re done using the server, you can call `jax.profiler.stop_server()` to shut it down.
If you’d like to profile a snippet of a long-running program (e.g. a long training loop), you can put this at the beginning of the program and start your program as usual. If you’d like to profile a short program (e.g. a microbenchmark), one option is to start the profiler server in an IPython shell, and run the short program with `%run` after starting the capture in the next step. Another option is to start the profiler server at the beginning of the program and use `time.sleep()` to give you enough time to start the capture.
3. Open <http://localhost:6006/#profile>, and click the “CAPTURE PROFILE” button in the upper left. Enter “localhost:9999” as the profile service URL (this is the address of the profiler server you started in the previous step). Enter the number of milliseconds you’d like to profile for, and click “CAPTURE”.
4. If the code you’d like to profile isn’t already running (e.g. if you started the profiler server in a Python shell), run it while the capture is running.
5. After the capture finishes, TensorBoard should automatically refresh. (Not all of the TensorBoard profiling features are hooked up with JAX, so it may initially look like nothing was captured.) On the left under “Tools”, select
`trace_viewer`.
You should now see a timeline of the execution. You can use the WASD keys to navigate the trace, and click or drag to select events to see more details at the bottom. See [these TensorFlow docs](https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras#use_the_tensorflow_profiler_to_profile_model_training_performance)
for more details on using the trace viewer.
You can also use the `memory_viewer`, `op_profile`, and `graph_viewer`
tools.
##### Concurrent kernel tracing on GPU[#](#concurrent-kernel-tracing-on-gpu)
By default, traces are captured on GPU in a mode that prevents CUDA kernels from running concurrently. This allows for more accurate kernel timings, but removes any concurrency between streams (for example, between compute and communication). To enable concurrent kernel tracing, set the environment variable `TF_GPU_CUPTI_FORCE_CONCURRENT_KERNEL=1` when launching the JAX program.
##### Adding custom trace events[#](#adding-custom-trace-events)
By default, the events in the trace viewer are mostly low-level internal JAX functions. You can add your own events and functions by using
[`jax.profiler.TraceAnnotation`](index.html#jax.profiler.TraceAnnotation) and [`jax.profiler.annotate_function()`](index.html#jax.profiler.annotate_function) in your code.
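For example, here is a sketch that labels a block of computation and a whole function; the label strings are arbitrary:
```
import jax
import jax.numpy as jnp
import jax.profiler

# Label a region of work; it shows up under this name in the trace viewer.
with jax.profiler.TraceAnnotation("my_block"):
    x = jnp.ones((1000, 1000))
    (x @ x).block_until_ready()

# Label every call to a function.
@jax.profiler.annotate_function
def my_matmul(a, b):
    return a @ b
```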
##### Troubleshooting[#](#troubleshooting)
###### GPU profiling[#](#gpu-profiling)
Programs running on GPU should produce traces for the GPU streams near the top of the trace viewer. If you’re only seeing the host traces, check your program logs and/or output for the following error messages.
**If you get an error like: `Could not load dynamic library 'libcupti.so.10.1'`**
Full error:
```
W external/org_tensorflow/tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcupti.so.10.1'; dlerror: libcupti.so.10.1: cannot open shared object file: No such file or directory
2020-06-12 13:19:59.822799: E external/org_tensorflow/tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1422] function cupti_interface_->Subscribe( &subscriber_, (CUpti_CallbackFunc)ApiCallback, this)failed with error CUPTI could not be loaded or symbol could not be found.
```
Add the path to `libcupti.so` to the environment variable `LD_LIBRARY_PATH`.
(Try `locate libcupti.so` to find the path.) For example:
```
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/extras/CUPTI/lib64/:$LD_LIBRARY_PATH
```
If you still get the `Could not load dynamic library` message after doing this,
check if the GPU trace shows up in the trace viewer anyway. This message sometimes occurs even when everything is working, since it looks for the
`libcupti` library in multiple places.
**If you get an error like: `failed with error CUPTI_ERROR_INSUFFICIENT_PRIVILEGES`**
Full error:
```
E external/org_tensorflow/tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1445] function cupti_interface_->EnableCallback( 0 , subscriber_, CUPTI_CB_DOMAIN_DRIVER_API, cbid)failed with error CUPTI_ERROR_INSUFFICIENT_PRIVILEGES
2020-06-12 14:31:54.097791: E external/org_tensorflow/tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1487] function cupti_interface_->ActivityDisable(activity)failed with error CUPTI_ERROR_NOT_INITIALIZED
```
Run the following commands (note this requires a reboot):
```
echo 'options nvidia "NVreg_RestrictProfilingToAdminUsers=0"' | sudo tee -a /etc/modprobe.d/nvidia-kernel-common.conf
sudo update-initramfs -u
sudo reboot now
```
See [NVIDIA’s documentation on this error](https://developer.nvidia.com/nvidia-development-tools-solutions-err-nvgpuctrperm-cupti)
for more information.
###### Profiling on a remote machine[#](#profiling-on-a-remote-machine)
If the JAX program you’d like to profile is running on a remote machine, one option is to run all the instructions above on the remote machine (in particular, start the TensorBoard server on the remote machine), then use SSH local port forwarding to access the TensorBoard web UI from your local machine. Use the following SSH command to forward the default TensorBoard port 6006 from the local to the remote machine:
```
ssh -L 6006:localhost:6006 <remote server address>
```
or if you’re using Google Cloud:
```
$ gcloud compute ssh <machine-name> -- -L 6006:localhost:6006
```
###### Multiple TensorBoard installs[#](#multiple-tensorboard-installs)
**If starting TensorBoard fails with an error like: `ValueError: Duplicate plugins for name projector`**
It’s often because there are two versions of TensorBoard and/or TensorFlow installed (e.g. the `tensorflow`, `tf-nightly`, `tensorboard`, and `tb-nightly`
pip packages all include TensorBoard). Uninstalling a single pip package can result in the `tensorboard` executable being removed which is then hard to replace, so it may be necessary to uninstall everything and reinstall a single version:
```
pip uninstall tensorflow tf-nightly tensorboard tb-nightly
pip install tensorflow
```
#### Nsight[#](#nsight)
NVIDIA’s `Nsight` tools can be used to trace and profile JAX code on GPU. For details, see the [`Nsight`
documentation](https://developer.nvidia.com/tools-overview).
### Device Memory Profiling[#](#device-memory-profiling)
Note
May 2023 update: we recommend using [Tensorboard profiling](index.html#tensorboard-profiling) for device memory analysis. After taking a profile, open the `memory_viewer` tab of the Tensorboard profiler for more detailed and understandable device memory usage.
The JAX Device Memory Profiler allows us to explore how and why JAX programs are using GPU or TPU memory. For example, it can be used to:
* Figure out which arrays and executables are in GPU memory at a given time, or
* Track down memory leaks.
#### Installation[#](#installation)
The JAX device memory profiler emits output that can be interpreted using pprof ([google/pprof](https://github.com/google/pprof)). Start by installing `pprof`,
by following its
[installation instructions](https://github.com/google/pprof#building-pprof).
At the time of writing, installing `pprof` requires first installing
[Go](https://golang.org/) of version 1.16+,
[Graphviz](http://www.graphviz.org/), and then running
```
go install github.com/google/pprof@latest
```
which installs `pprof` as `$GOPATH/bin/pprof`, where `GOPATH` defaults to
`~/go`.
Note
The version of `pprof` from [google/pprof](https://github.com/google/pprof) is not the same as the older tool of the same name distributed as part of the `gperftools` package.
The `gperftools` version of `pprof` will not work with JAX.
#### Understanding how a JAX program is using GPU or TPU memory[#](#understanding-how-a-jax-program-is-using-gpu-or-tpu-memory)
A common use of the device memory profiler is to figure out why a JAX program is using a large amount of GPU or TPU memory, for example if trying to debug an out-of-memory problem.
To capture a device memory profile to disk, use
[`jax.profiler.save_device_memory_profile()`](index.html#jax.profiler.save_device_memory_profile). For example, consider the following Python program:
```
import jax
import jax.numpy as jnp
import jax.profiler
def func1(x):
return jnp.tile(x, 10) * 0.5
def func2(x):
y = func1(x)
return y, jnp.tile(x, 10) + 1
x = jax.random.normal(jax.random.PRNGKey(42), (1000, 1000))
y, z = func2(x)
z.block_until_ready()
jax.profiler.save_device_memory_profile("memory.prof")
```
If we first run the program above and then execute
```
pprof --web memory.prof
```
`pprof` opens a web browser containing the following visualization of the device memory profile in callgraph format:
The callgraph is a visualization of the Python stack at the point the allocation of each live buffer was made.
For example, in this specific case, the visualization shows that
`func2` and its callees were responsible for allocating 76.30MB, of which 38.15MB was allocated inside the call from `func2` to `func1`.
For more information about how to interpret callgraph visualizations, see the
[pprof documentation](https://github.com/google/pprof/blob/master/doc/README.md#interpreting-the-callgraph).
Functions compiled with [`jax.jit()`](index.html#jax.jit) are opaque to the device memory profiler.
That is, any memory allocated inside a `jit`-compiled function will be attributed to the function as a whole.
In the example, the call to `block_until_ready()` is to ensure that `func2`
completes before the device memory profile is collected. See
[Asynchronous dispatch](index.html#document-async_dispatch) for more details.
#### Debugging memory leaks[#](#debugging-memory-leaks)
We can also use the JAX device memory profiler to track down memory leaks by using
`pprof` to visualize the change in memory usage between two device memory profiles taken at different times. For example, consider the following program which accumulates JAX arrays into a constantly-growing Python list.
```
import jax
import jax.numpy as jnp
import jax.profiler
def afunction():
return jax.random.normal(jax.random.PRNGKey(77), (1000000,))
z = afunction()
def anotherfunc():
arrays = []
for i in range(1, 10):
x = jax.random.normal(jax.random.PRNGKey(42), (i, 10000))
arrays.append(x)
x.block_until_ready()
jax.profiler.save_device_memory_profile(f"memory{i}.prof")
anotherfunc()
```
If we simply visualize the device memory profile at the end of execution
(`memory9.prof`), it may not be obvious that each iteration of the loop in
`anotherfunc` accumulates more device memory allocations:
```
pprof --web memory9.prof
```
The large but fixed allocation inside `afunction` dominates the profile but does not grow over time.
By using `pprof`’s
[`--diff_base` feature](https://github.com/google/pprof/blob/master/doc/README.md#comparing-profiles) to visualize the change in memory usage across loop iterations, we can identify why the memory usage of the program increases over time:
```
pprof --web --diff_base memory1.prof memory9.prof
```
The visualization shows that the memory growth can be attributed to the call to
`normal` inside `anotherfunc`.
### Runtime value debugging in JAX[#](#runtime-value-debugging-in-jax)
Do you have exploding gradients? Are NaNs making you gnash your teeth? Just want to poke around the intermediate values in your computation? Check out the following JAX debugging tools! This page has TL;DR summaries and you can click the “Read more” links at the bottom to learn more.
Table of contents:
* [Interactive inspection with `jax.debug`](index.html#document-debugging/print_breakpoint)
* [Functional error checks with jax.experimental.checkify](index.html#document-debugging/checkify_guide)
* [Throwing Python errors with JAX’s debug flags](index.html#document-debugging/flags)
#### [Interactive inspection with `jax.debug`](index.html#document-debugging/print_breakpoint)[#](#interactive-inspection-with-jax-debug)
**TL;DR** Use [`jax.debug.print()`](index.html#jax.debug.print) to print values to stdout in `jax.jit`-, `jax.pmap`-, and `pjit`-decorated functions,
and [`jax.debug.breakpoint()`](index.html#jax.debug.breakpoint) to pause execution of your compiled function to inspect values in the call stack:
```
import jax
import jax.numpy as jnp
@jax.jit
def f(x):
jax.debug.print("🤯 {x} 🤯", x=x)
y = jnp.sin(x)
jax.debug.breakpoint()
jax.debug.print("🤯 {y} 🤯", y=y)
return y
f(2.)
# Prints:
# 🤯 2.0 🤯
# Enters breakpoint to inspect values!
# 🤯 0.9092974662780762 🤯
```
Click [here](index.html#document-debugging/print_breakpoint) to learn more!
#### [Functional error checks with `jax.experimental.checkify`](index.html#document-debugging/checkify_guide)[#](#functional-error-checks-with-jax-experimental-checkify)
**TL;DR** Checkify lets you add `jit`-able runtime error checking (e.g. out of bounds indexing) to your JAX code. Use the `checkify.checkify` transformation together with the assert-like `checkify.check` function to add runtime checks to JAX code:
```
from jax.experimental import checkify
import jax
import jax.numpy as jnp
def f(x, i):
checkify.check(i >= 0, "index needs to be non-negative!")
y = x[i]
z = jnp.sin(y)
return z
jittable_f = checkify.checkify(f)
err, z = jax.jit(jittable_f)(jnp.ones((5,)), -1)
print(err.get())
# >> index needs to be non-negative! (check failed at <...>:6 (f))
```
You can also use checkify to automatically add common checks:
```
errors = checkify.user_checks | checkify.index_checks | checkify.float_checks
checked_f = checkify.checkify(f, errors=errors)
err, z = checked_f(jnp.ones((5,)), 100)
err.throw()
# ValueError: out-of-bounds indexing at <..>:7 (f)
err, z = checked_f(jnp.ones((5,)), -1)
err.throw()
# ValueError: index needs to be non-negative! (check failed at <…>:6 (f))
err, z = checked_f(jnp.array([jnp.inf, 1]), 0)
err.throw()
# ValueError: nan generated by primitive sin at <...>:8 (f)
```
Click [here](index.html#document-debugging/checkify_guide) to learn more!
#### [Throwing Python errors with JAX’s debug flags](index.html#document-debugging/flags)[#](#throwing-python-errors-with-jax-s-debug-flags)
**TL;DR** Enable the `jax_debug_nans` flag to automatically detect when NaNs are produced in `jax.jit`-compiled code (but not in `jax.pmap` or `jax.pjit`-compiled code) and enable the `jax_disable_jit` flag to disable JIT-compilation, enabling use of traditional Python debugging tools like `print` and `pdb`.
```
from jax import config
config.update("jax_debug_nans", True)
def f(x, y):
    return x / y

jax.jit(f)(0., 0.)  # ==> raises FloatingPointError exception!
```
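And a minimal sketch of the `jax_disable_jit` flag:
```
from jax import config
config.update("jax_disable_jit", True)

@jax.jit
def g(x):
    print(x)  # with jit disabled this runs as ordinary Python, so print/pdb work
    return x * 2.

g(1.)  # prints the concrete value 1.0
```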
Click [here](index.html#document-debugging/flags) to learn more!
##### `jax.debug.print` and `jax.debug.breakpoint`[#](#jax-debug-print-and-jax-debug-breakpoint)
The [`jax.debug`](index.html#module-jax.debug) package offers some useful tools for inspecting values inside of JIT-ted functions.
###### Debugging with `jax.debug.print` and other debugging callbacks[#](#debugging-with-jax-debug-print-and-other-debugging-callbacks)
**TL;DR** Use [`jax.debug.print()`](index.html#jax.debug.print) to print traced array values to stdout in `jit`- and `pmap`-decorated functions:
```
import jax
import jax.numpy as jnp
@jax.jit
def f(x):
jax.debug.print("🤯 {x} 🤯", x=x)
y = jnp.sin(x)
jax.debug.print("🤯 {y} 🤯", y=y)
return y
f(2.)
# Prints:
# 🤯 2.0 🤯
# 🤯 0.9092974662780762 🤯
```
With some transformations, like `jax.grad` and `jax.vmap`, you can use Python’s builtin `print` function to print out numerical values. But `print` won’t work with `jax.jit` or `jax.pmap` because those transformations delay numerical evaluation. So use `jax.debug.print` instead!
Semantically, `jax.debug.print` is roughly equivalent to the following Python function
```
def debug.print(fmt: str, *args: PyTree[Array], **kwargs: PyTree[Array]) -> None:
print(fmt.format(*args, **kwargs))
```
except that it can be staged out and transformed by JAX. See the [`API reference`](index.html#jax.debug.print) for more details.
Note that `fmt` cannot be an f-string because f-strings are formatted immediately, whereas for `jax.debug.print`, we’d like to delay formatting until later.
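For example, a small do/don't sketch:
```
x = jnp.array(1.0)

# Don't: an f-string formats (and forces) the value immediately.
# jax.debug.print(f"x: {x}")

# Do: pass a format string, so formatting is delayed until the value is ready.
jax.debug.print("x: {}", x)
```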
###### When to use “*debug*” print?[#](#when-to-use-debug-print)
You should use `jax.debug.print` for dynamic (i.e. traced) array values within JAX transformations like `jit`, `vmap`, and others.
For printing of static values (like array shapes or dtypes), you can use a normal Python `print` statement.
###### Why “*debug*” print?[#](#why-debug-print)
In the name of debugging, `jax.debug.print` can reveal information about *how* computations are evaluated:
```
xs = jnp.arange(3.)
def f(x):
jax.debug.print("x: {}", x)
y = jnp.sin(x)
jax.debug.print("y: {}", y)
    return y

jax.vmap(f)(xs)
# Prints: x: 0.0
# x: 1.0
# x: 2.0
# y: 0.0
# y: 0.841471
# y: 0.9092974

jax.lax.map(f, xs)
# Prints: x: 0.0
# y: 0.0
# x: 1.0
# y: 0.841471
# x: 2.0
# y: 0.9092974
```
Notice that the printed results are in different orders!
By revealing these inner-workings, the output of `jax.debug.print` doesn’t respect JAX’s usual semantics guarantees, like that `jax.vmap(f)(xs)` and `jax.lax.map(f, xs)` compute the same thing (in different ways). Yet these evaluation order details are exactly what we might want to see when debugging!
So use `jax.debug.print` for debugging, and not when semantics guarantees are important.
###### More examples of `jax.debug.print`[#](#more-examples-of-jax-debug-print)
In addition to the above examples using `jit` and `vmap`, here are a few more to have in mind.
###### Printing under `jax.pmap`[#](#printing-under-jax-pmap)
When `jax.pmap`-ed, `jax.debug.print`s might be reordered!
```
xs = jnp.arange(2.)
def f(x):
jax.debug.print("x: {}", x)
    return x

jax.pmap(f)(xs)
# Prints: x: 0.0
# x: 1.0
# OR
# Prints: x: 1.0
# x: 0.0
```
###### Printing under `jax.grad`[#](#printing-under-jax-grad)
Under a `jax.grad`, `jax.debug.print`s will only print on the forward pass:
```
def f(x):
jax.debug.print("x: {}", x)
return x * 2.
jax.grad(f)(1.)
# Prints: x: 1.0
```
This behavior is similar to how Python’s builtin `print` works under a `jax.grad`. But by using `jax.debug.print` here, the behavior is the same even if the caller applies a `jax.jit`.
To print on the backward pass, just use a `jax.custom_vjp`:
```
@jax.custom_vjp
def print_grad(x):
return x
def print_grad_fwd(x):
return x, None
def print_grad_bwd(_, x_grad):
jax.debug.print("x_grad: {}", x_grad)
return (x_grad,)
print_grad.defvjp(print_grad_fwd, print_grad_bwd)
def f(x):
x = print_grad(x)
return x * 2.
jax.grad(f)(1.)
# Prints: x_grad: 2.0
```
###### Printing in other transformations[#](#printing-in-other-transformations)
`jax.debug.print` also works in other transformations like `xmap` and `pjit`.
###### More control with `jax.debug.callback`[#](#more-control-with-jax-debug-callback)
In fact, `jax.debug.print` is a thin convenience wrapper around `jax.debug.callback`, which can be used directly for greater control over string formatting, or even the kind of output.
Semantically, `jax.debug.callback` is roughly equivalent to the following Python function
```
def callback(fun: Callable, *args: PyTree[Array], **kwargs: PyTree[Array]) -> None:
fun(*args, **kwargs)
return None
```
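For example, a sketch that routes a traced value to an arbitrary Python function (the `log_value` helper here is hypothetical):
```
def log_value(x):
    # Any Python side-effect could go here; we just format and print.
    print(f"value = {x:.3f}")

@jax.jit
def f(x):
    jax.debug.callback(log_value, x)
    return x * 2.

f(1.)  # eventually prints: value = 1.000
```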
As with `jax.debug.print`, these callbacks should only be used for debugging output, like printing or plotting. Printing and plotting are pretty harmless, but if you use them for anything else, their behavior might surprise you under transformations. For example, it’s not safe to use `jax.debug.callback` for timing operations, since callbacks might be reordered and run asynchronously (see below).
###### Sharp bits[#](#sharp-bits)
Like most JAX APIs, `jax.debug.print` can cut you if you’re not careful.
###### Ordering of printed results[#](#ordering-of-printed-results)
When distinct calls to `jax.debug.print` involve arguments which don’t depend on one another, they might be reordered when staged out, e.g. by `jax.jit`:
```
@jax.jit
def f(x, y):
  jax.debug.print("x: {}", x)
  jax.debug.print("y: {}", y)
  return x + y
f(2., 3.)
# Prints: x: 2.0
# y: 3.0
# OR
# Prints: y: 3.0
# x: 2.0
```
Why? Under the hood, the compiler gets a functional representation of the staged-out computation, where the imperative order of the Python function is lost and only data dependence remains. This change is invisible to users with functionally pure code, but in the presence of side-effects like printing, it’s noticeable.
To preserve the original order of `jax.debug.print`s as written in your Python function, you can use `jax.debug.print(..., ordered=True)`, which will ensure the relative order of prints is preserved. But using `ordered=True` will raise an error under `jax.pmap` and other JAX transformations involving parallelism, since ordering can’t be guaranteed under parallel execution.
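A quick sketch of the ordered variant:

```
@jax.jit
def f(x, y):
  jax.debug.print("x: {}", x, ordered=True)
  jax.debug.print("y: {}", y, ordered=True)
  return x + y

f(2., 3.)
# Always prints: x: 2.0
#                y: 3.0
```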
###### Asynchronous callbacks[#](#asynchronous-callbacks)
Depending on the backend, `jax.debug.print`s may happen asynchronously, i.e. not in your main program thread. This means that values could be printed to your screen even after your JAX function has returned a value.
```
@jax.jit
def f(x):
  jax.debug.print("x: {}", x)
  return x

f(2.).block_until_ready()
# <do something else>
# Prints: x: 2.
```
To block on the `jax.debug.print`s in a function, you can call `jax.effects_barrier()`, which will wait until any remaining side-effects in the function have completed as well:
```
@jax.jit
def f(x):
  jax.debug.print("x: {}", x)
  return x

f(2.).block_until_ready()
jax.effects_barrier()
# Prints: x: 2.
# <do something else>
```
###### Performance impacts[#](#performance-impacts)
###### Unnecessary materialization[#](#unnecessary-materialization)
While `jax.debug.print` was designed to have a minimal performance footprint, it can interfere with compiler optimizations and potentially affect the memory profile of your JAX programs.
```
def f(w, b, x):
logits = w.dot(x) + b
jax.debug.print("logits: {}", logits)
return jax.nn.relu(logits)
```
In this example, we are printing intermediate values in between a linear layer and the activation function. Compilers like XLA can perform fusion optimizations, which might avoid materializing `logits` in memory. But when we use `jax.debug.print` on `logits`, we are forcing those intermediates to be materialized, potentially slowing down the program and increasing memory usage.
Furthermore, when using `jax.debug.print` with `jax.pjit`, a global synchronization occurs that will materialize values on a single device.
###### Callback overhead[#](#callback-overhead)
`jax.debug.print` inherently incurs communication between an accelerator and its host. The underlying mechanism differs from backend to backend (e.g. GPU vs TPU) but in all cases, we’ll need to copy the printed values from device to host. In the CPU case, this overhead is smaller.
Furthermore, when using `jax.debug.print` with `jax.pjit`, a global synchronization occurs that adds some overhead.
###### Strengths and limitations of `jax.debug.print`[#](#strengths-and-limitations-of-jax-debug-print)
###### Strengths[#](#strengths)
* Print debugging is simple and intuitive
* `jax.debug.callback` can be used for other innocuous side-effects
###### Limitations[#](#limitations)
* Adding print statements is a manual process
* Can have performance impacts
###### Interactive inspection with `jax.debug.breakpoint()`[#](#interactive-inspection-with-jax-debug-breakpoint)
**TL;DR** Use `jax.debug.breakpoint()` to pause the execution of your JAX program to inspect values:
```
@jax.jit
def f(x):
  y, z = jnp.sin(x), jnp.cos(x)
  jax.debug.breakpoint()
  return y * z

f(2.)  # ==> Pauses during execution!
```
`jax.debug.breakpoint()` is actually just an application of `jax.debug.callback(...)` that captures information about the call stack. It has the same transformation behaviors as `jax.debug.print` as a result (e.g. `vmap`-ing `jax.debug.breakpoint()` unrolls it across the mapped axis).
###### Usage[#](#usage)
Calling `jax.debug.breakpoint()` in a compiled JAX function will pause your program when it hits the breakpoint. You’ll be presented with a `pdb`-like prompt that allows you to inspect the values in the call stack. Unlike `pdb`, you will not be able to step through the execution, but you are allowed to resume it.
Debugger commands:
* `help` - prints out available commands
* `p` - evaluates an expression and prints its result
* `pp` - evaluates an expression and pretty-prints its result
* `u(p)` - go up a stack frame
* `d(own)` - go down a stack frame
* `w(here)/bt` - print out a backtrace
* `l(ist)` - print out code context
* `c(ont(inue))` - resumes the execution of the program
* `q(uit)/exit` - exits the program (does not work on TPU)
###### Examples[#](#examples)
###### Usage with `jax.lax.cond`[#](#usage-with-jax-lax-cond)
When combined with `jax.lax.cond`, the debugger can become a useful tool for detecting `nan`s or `inf`s.
```
def breakpoint_if_nonfinite(x):
is_finite = jnp.isfinite(x).all()
def true_fn(x):
pass
def false_fn(x):
jax.debug.breakpoint()
lax.cond(is_finite, true_fn, false_fn, x)
@jax.jit
def f(x, y):
  z = x / y
  breakpoint_if_nonfinite(z)
  return z

f(2., 0.)  # ==> Pauses during execution!
```
###### Sharp bits[#](#id1)
Because `jax.debug.breakpoint` is just an application of `jax.debug.callback`, it has the same [sharp bits as `jax.debug.print`](#sharp-bits), with a few more caveats:
* `jax.debug.breakpoint` materializes *even more* intermediates than `jax.debug.print` because it forces materialization of all values in the call stack
* `jax.debug.breakpoint` has more runtime overhead than a `jax.debug.print` because it has to potentially copy all the intermediate values in a JAX program from device to host.
###### Strengths and limitations of `jax.debug.breakpoint()`[#](#strengths-and-limitations-of-jax-debug-breakpoint)
###### Strengths[#](#id2)
* Simple, intuitive and (somewhat) standard
* Can inspect many values at the same time, up and down the call stack
###### Limitations[#](#id3)
* Need to potentially use many breakpoints to pinpoint the source of an error
* Materializes many intermediates
##### The `checkify` transformation[#](#the-checkify-transformation)
**TL;DR** Checkify lets you add `jit`-able runtime error checking (e.g. out of bounds indexing) to your JAX code. Use the `checkify.checkify` transformation together with the assert-like `checkify.check` function to add runtime checks to JAX code:
```
from jax.experimental import checkify
import jax
import jax.numpy as jnp
def f(x, i):
checkify.check(i >= 0, "index needs to be non-negative, got {i}", i=i)
y = x[i]
z = jnp.sin(y)
return z
jittable_f = checkify.checkify(f)
err, z = jax.jit(jittable_f)(jnp.ones((5,)), -2)
print(err.get())
# >> index needs to be non-negative, got -2! (check failed at <...>:6 (f))
```
You can also use checkify to automatically add common checks:
```
errors = checkify.user_checks | checkify.index_checks | checkify.float_checks
checked_f = checkify.checkify(f, errors=errors)
err, z = checked_f(jnp.ones((5,)), 100)
err.throw()
# ValueError: out-of-bounds indexing at <..>:7 (f)
err, z = checked_f(jnp.ones((5,)), -1)
err.throw()
# ValueError: index needs to be non-negative! (check failed at <…>:6 (f))
err, z = checked_f(jnp.array([jnp.inf, 1]), 0)
err.throw()
# ValueError: nan generated by primitive sin at <...>:8 (f)
err, z = checked_f(jnp.array([5, 1]), 0)
err.throw() # if no error occurred, throw does nothing!
```
###### Functionalizing checks[#](#functionalizing-checks)
The assert-like `check` API by itself is not functionally pure: it can raise a Python Exception as a side-effect, just like `assert`. So it can’t be staged out with `jit`, `pmap`, `pjit`, or `scan`:
```
jax.jit(f)(jnp.ones((5,)), -1) # checkify transformation not used
# ValueError: Cannot abstractly evaluate a checkify.check which was not functionalized.
```
But the checkify transformation functionalizes (or discharges) these effects. A checkify-transformed function returns an error *value* as a new output and remains functionally pure. That functionalization means checkify-transformed functions can be composed with staging/transforms however we like:
```
err, z = jax.pmap(checked_f)(jnp.ones((3, 5)), jnp.array([-1, 2, 100]))
err.throw()
"""
ValueError:
.. at mapped index 0: index needs to be non-negative! (check failed at :6 (f))
.. at mapped index 2: out-of-bounds indexing at <..>:7 (f)
"""
```
###### Why does JAX need checkify?[#](#why-does-jax-need-checkify)
Under some JAX transformations you can express runtime error checks with ordinary Python assertions, for example when only using `jax.grad` and `jax.numpy`:
```
def f(x):
assert x > 0., "must be positive!"
return jnp.log(x)
jax.grad(f)(0.)
# ValueError: "must be positive!"
```
But ordinary assertions don’t work inside `jit`, `pmap`, `pjit`, or `scan`. In those cases, numeric computations are staged out rather than evaluated eagerly during Python execution, and as a result numeric values aren’t available:
```
jax.jit(f)(0.)
# ConcretizationTypeError: "Abstract tracer value encountered ..."
```
JAX transformation semantics rely on functional purity, especially when composing multiple transformations, so how can we provide an error mechanism without disrupting all that?
Beyond needing a new API, the situation is trickier still:
XLA HLO doesn’t support assertions or throwing errors, so even if we had a JAX API which was able to stage out assertions, how would we lower these assertions to XLA?
You could imagine manually adding run-time checks to your function and plumbing out values representing errors:
```
def f_checked(x):
error = x <= 0.
result = jnp.log(x)
return error, result
err, y = jax.jit(f_checked)(0.)
if err:
raise ValueError("must be positive!")
# ValueError: "must be positive!"
```
The error is a regular value computed by the function, and the error is raised outside of `f_checked`. `f_checked` is functionally pure, so we know by construction that it’ll already work with `jit`, `pmap`, `pjit`, `scan`, and all of JAX’s transformations. The only problem is that this plumbing can be a pain!
`checkify` does this rewrite for you: that includes plumbing the error value through the function, rewriting checks to boolean operations and merging the result with the tracked error value, and returning the final error value as an output to the checkified function:
```
def f(x):
checkify.check(x > 0., "{} must be positive!", x) # convenient but effectful API
return jnp.log(x)
f_checked = checkify.checkify(f)
err, x = jax.jit(f_checked)(-1.)
err.throw()
# ValueError: -1. must be positive! (check failed at <...>:2 (f))
```
We call this functionalizing or discharging the effect introduced by calling check. (In the “manual” example above the error value is just a boolean. checkify’s error values are conceptually similar but also track error messages and expose throw and get methods; see [`jax.experimental.checkify`](index.html#module-jax.experimental.checkify)). `checkify.check` also allows you to add run-time values to your error message by providing them as format arguments to the error message.
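As a small sketch of the two accessors (reusing `f_checked` from the snippet above):

```
err, y = jax.jit(f_checked)(3.)  # no check fails here
print(err.get())                 # None: get() only returns a message on failure
err.throw()                      # no-op when there is no error

err, y = jax.jit(f_checked)(-1.)
print(err.get())                 # the formatted failure message
```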
You could now manually instrument your code with run-time checks, but `checkify` can also automatically add checks for common errors!
Consider these error cases:
```
jnp.arange(3)[5]                # out of bounds
jnp.sin(jnp.inf)                # NaN generated
jnp.ones((5,)) / jnp.arange(5)  # division by zero
```
By default `checkify` only discharges `checkify.check`s, and won’t do anything to catch errors like the above. But if you ask it to, `checkify` will also instrument your code with checks automatically.
```
def f(x, i):
y = x[i] # i could be out of bounds.
z = jnp.sin(y) # z could become NaN
return z
errors = checkify.user_checks | checkify.index_checks | checkify.float_checks
checked_f = checkify.checkify(f, errors=errors)
err, z = checked_f(jnp.ones((5,)), 100)
err.throw()
# ValueError: out-of-bounds indexing at <..>:7 (f)
err, z = checked_f(jnp.array([jnp.inf, 1]), 0)
err.throw()
# ValueError: nan generated by primitive sin at <...>:8 (f)
```
The API for selecting which automatic checks to enable is based on Sets. See [`jax.experimental.checkify`](index.html#module-jax.experimental.checkify) for more details.
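For instance, a sketch composing a custom set out of the sets exported by `checkify`:

```
# available sets include user_checks, nan_checks, div_checks, index_checks,
# float_checks (NaN and division checks) and all_checks
errors = checkify.user_checks | checkify.nan_checks
checked_f = checkify.checkify(f, errors=errors)
err, z = checked_f(jnp.array([jnp.inf, 1.]), 0)
err.throw()  # nan generated by primitive sin
```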
###### `checkify` under JAX transformations.[#](#checkify-under-jax-transformations)
As demonstrated in the examples above, a checkified function can be happily jitted. Here’s a few more examples of `checkify` with other JAX transformations. Note that checkified functions are functionally pure, and should trivially compose with all JAX transformations!
###### `jit`[#](#jit)
You can safely add `jax.jit` to a checkified function, or `checkify` a jitted function; both will work.
```
def f(x, i):
return x[i]
checkify_of_jit = checkify.checkify(jax.jit(f))
jit_of_checkify = jax.jit(checkify.checkify(f))
err, _ = checkify_of_jit(jnp.ones((5,)), 100)
err.get()
# out-of-bounds indexing at <..>:2 (f)
err, _ = jit_of_checkify(jnp.ones((5,)), 100)
# out-of-bounds indexing at <..>:2 (f)
```
###### `vmap`/`pmap`[#](#vmap-pmap)
You can `vmap` and `pmap` checkified functions (or `checkify` mapped functions).
Mapping a checkified function will give you a mapped error, which can contain different errors for every element of the mapped dimension.
```
def f(x, i):
checkify.check(i >= 0, "index needs to be non-negative!")
return x[i]
checked_f = checkify.checkify(f, errors=checkify.all_checks)
errs, out = jax.vmap(checked_f)(jnp.ones((3, 5)), jnp.array([-1, 2, 100]))
errs.throw()
"""
ValueError:
at mapped index 0: index needs to be non-negative! (check failed at <...>:2 (f))
at mapped index 2: out-of-bounds indexing at <...>:3 (f)
"""
```
However, a checkify-of-vmap will produce a single (unmapped) error!
```
@jax.vmap
def f(x, i):
  checkify.check(i >= 0, "index needs to be non-negative!")
  return x[i]
checked_f = checkify.checkify(f, errors=checkify.all_checks)
err, out = checked_f(jnp.ones((3, 5)), jnp.array([-1, 2, 100]))
err.throw()
# ValueError: index needs to be non-negative! (check failed at <...>:2 (f))
```
###### `pjit`[#](#pjit)
`pjit` of a checkified function *just works*; you only need to specify an additional output sharding of `None` for the error value output.
```
from jax.experimental.pjit import pjit
from jax.sharding import PartitionSpec

def f(x):
  return x / x

f = checkify.checkify(f, errors=checkify.float_checks)
f = pjit(
    f,
    in_shardings=PartitionSpec('x', None),
    out_shardings=(None, PartitionSpec('x', None)))

# `mesh` and `input_data` are assumed to be defined elsewhere
with jax.sharding.Mesh(mesh.devices, mesh.axis_names):
  err, data = f(input_data)
err.throw()
# ValueError: divided by zero at <...>:4 (f)
```
###### `grad`[#](#grad)
Your gradient computation will also be instrumented if you apply `checkify` over `jax.grad` (i.e. checkify-of-grad):
```
def f(x):
return x / (1 + jnp.sqrt(x))
grad_f = jax.grad(f)
err, _ = checkify.checkify(grad_f, errors=checkify.nan_checks)(0.)
print(err.get())
>> nan generated by primitive mul at <...>:3 (f)
```
Note that there’s no multiply in `f`, but there is a multiply in its gradient computation (and this is where the NaN is generated!). So use checkify-of-grad to add automatic checks to both forward and backward pass operations.
`checkify.check`s will only be applied to the primal value of your function. If you want to use a `check` on a gradient value, use a `custom_vjp`:
```
@jax.custom_vjp
def assert_gradient_negative(x):
  return x
def fwd(x):
return assert_gradient_negative(x), None
def bwd(_, grad):
checkify.check(grad < 0, "gradient needs to be negative!")
return (grad,)
assert_gradient_negative.defvjp(fwd, bwd)
jax.grad(assert_gradient_negative)(-1.)
# ValueError: gradient needs to be negative!
```
###### Strengths and limitations of `jax.experimental.checkify`[#](#strengths-and-limitations-of-jax-experimental-checkify)
###### Strengths[#](#strengths)
* You can use it everywhere (errors are “just values” and behave intuitively under transformations like other values)
* Automatic instrumentation: you don’t need to make local modifications to your code. Instead, `checkify` can instrument all of it!
###### Limitations[#](#limitations)
* Adding a lot of runtime checks can be expensive (e.g. adding a NaN check to every primitive will add a lot of operations to your computation)
* Requires threading error values out of functions and manually throwing the error. If the error is not explicitly thrown, you might miss out on errors!
* Throwing an error value will materialize that error value on the host, meaning it’s a blocking operation which defeats JAX’s async run-ahead.
##### JAX debugging flags[#](#jax-debugging-flags)
JAX offers flags and context managers that enable catching errors more easily.
###### `jax_debug_nans` configuration option and context manager[#](#jax-debug-nans-configuration-option-and-context-manager)
**TL;DR** Enable the `jax_debug_nans` flag to automatically detect when NaNs are produced in `jax.jit`-compiled code (but not in `jax.pmap` or `jax.pjit`-compiled code).
`jax_debug_nans` is a JAX flag that, when enabled, automatically raises an error when a NaN is detected. It has special handling for JIT-compiled code: when a NaN output is detected from a JIT-ted function, the function is re-run eagerly (i.e. without compilation) and will throw an error at the specific primitive that produced the NaN.
###### Usage[#](#usage)
If you want to trace where NaNs are occurring in your functions or gradients, you can turn on the NaN-checker by:
* setting the `JAX_DEBUG_NANS=True` environment variable;
* adding `from jax import config` and `config.update("jax_debug_nans", True)` near the top of your main file;
* adding `from jax.config import config` and calling `config.parse_flags_with_absl()` in your main file, then setting the option with a command-line flag like `--jax_debug_nans=True`.
###### Example(s)[#](#example-s)
```
from jax import config
config.update("jax_debug_nans", True)

def f(x, y):
  return x / y

jax.jit(f)(0., 0.)  # ==> raises FloatingPointError exception!
```
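The same flag is also exposed as a context manager, `jax.debug_nans`, which is convenient for scoping the check to a suspect region; a sketch (assuming the global `config.update` above is left off):

```
with jax.debug_nans(True):
  jax.jit(f)(0., 0.)  # ==> raises FloatingPointError inside this block only
```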
###### Strengths and limitations of `jax_debug_nans`[#](#strengths-and-limitations-of-jax-debug-nans)
###### Strengths[#](#strengths)
* Easy to apply
* Precisely detects where NaNs were produced
* Throws a standard Python exception and is compatible with PDB postmortem
###### Limitations[#](#limitations)
* Not compatible with `jax.pmap` or `jax.pjit`
* Re-running functions eagerly can be slow
* Errors on false positives (e.g. intentionally created NaNs)
###### `jax_disable_jit` configuration option and context manager[#](#jax-disable-jit-configuration-option-and-context-manager)
**TL;DR** Enable the `jax_disable_jit` flag to disable JIT-compilation, enabling use of traditional Python debugging tools like `print` and `pdb`.
`jax_disable_jit` is a JAX flag that when enabled, disables JIT-compilation throughout JAX (including in control flow functions like `jax.lax.cond` and `jax.lax.scan`).
###### Usage[#](#id1)
You can disable JIT-compilation by:
* setting the `JAX_DISABLE_JIT=True` environment variable;
* adding `from jax import config` and `config.update("jax_disable_jit", True)` near the top of your main file;
* adding `from jax.config import config` and calling `config.parse_flags_with_absl()` in your main file, then setting the option with a command-line flag like `--jax_disable_jit=True`.
###### Examples[#](#examples)
```
from jax import config
config.update("jax_disable_jit", True)

def f(x):
  y = jnp.log(x)
  if jnp.isnan(y):
    breakpoint()
  return y

jax.jit(f)(-2.)  # ==> Enters PDB breakpoint!
```
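The context-manager form, `jax.disable_jit()`, limits the effect to a block; a sketch:

```
with jax.disable_jit():
  jax.jit(f)(-2.)  # runs eagerly, so breakpoint() and print work as usual
```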
###### Strengths and limitations of `jax_disable_jit`[#](#strengths-and-limitations-of-jax-disable-jit)
###### Strengths[#](#id2)
* Easy to apply
* Enables use of Python’s built-in `breakpoint` and `print`
* Throws standard Python exceptions and is compatible with PDB postmortem
###### Limitations[#](#id3)
* Not compatible with `jax.pmap` or `jax.pjit`
* Running functions without JIT-compilation can be slow
### Understanding Jaxprs[#](#understanding-jaxprs)
Updated: May 3, 2020 (for commit f1a46fe).
Conceptually, one can think of JAX transformations as first trace-specializing the Python function to be transformed into a small and well-behaved intermediate form that is then interpreted with transformation-specific interpretation rules. One of the reasons JAX can pack so much power into such a small software package is that it starts with a familiar and flexible programming interface (Python with NumPy) and it uses the actual Python interpreter to do most of the heavy lifting to distill the essence of the computation into a simple statically-typed expression language with limited higher-order features. That language is the jaxpr language.
Not all Python programs can be processed this way, but it turns out that many scientific computing and machine learning programs can.
Before we proceed, it is important to point out that not all JAX transformations literally materialize a jaxpr as described above; some, e.g.,
differentiation or batching, will apply transformations incrementally during tracing. Nevertheless, if one wants to understand how JAX works internally, or to make use of the result of JAX tracing, it is useful to understand jaxprs.
A jaxpr instance represents a function with one or more typed parameters (input variables) and one or more typed results. The results depend only on the input variables; there are no free variables captured from enclosing scopes. The inputs and outputs have types, which in JAX are represented as abstract values.
There are two related representations in the code for jaxprs,
[`jax.core.Jaxpr`](index.html#jax.core.Jaxpr) and [`jax.core.ClosedJaxpr`](index.html#jax.core.ClosedJaxpr). A
[`jax.core.ClosedJaxpr`](index.html#jax.core.ClosedJaxpr) represents a partially-applied
[`jax.core.Jaxpr`](index.html#jax.core.Jaxpr), and is what you obtain when you use
[`jax.make_jaxpr()`](index.html#jax.make_jaxpr) to inspect jaxprs. It has the following fields:
> * `jaxpr`: is a [`jax.core.Jaxpr`](index.html#jax.core.Jaxpr) representing the actual
> computation content of the function (described below).
> * `consts` is a list of constants.
The most interesting part of the ClosedJaxpr is the actual execution content,
represented as a [`jax.core.Jaxpr`](index.html#jax.core.Jaxpr) as printed using the following grammar:
```
jaxpr ::= { lambda Var* ; Var+.
let Eqn*
in [Expr+] }
```
where:

* The parameters of the jaxpr are shown as two lists of variables separated by `;`. The first set of variables are the ones that have been introduced to stand for constants that have been hoisted out. These are called the `constvars`, and in a [`jax.core.ClosedJaxpr`](index.html#jax.core.ClosedJaxpr) the `consts` field holds corresponding values. The second list of variables, called `invars`, correspond to the inputs of the traced Python function.
* `Eqn*` is a list of equations, defining intermediate variables referring to intermediate expressions. Each equation defines one or more variables as the result of applying a primitive on some atomic expressions. Each equation uses only input variables and intermediate variables defined by previous equations.
* `Expr+` is a list of output atomic expressions (literals or variables) for the jaxpr.
Equations are printed as follows:
```
Eqn ::= let Var+ = Primitive [ Param* ] Expr+
```
where:

* `Var+` are one or more intermediate variables to be defined as the output of a primitive invocation (some primitives can return multiple values).
* `Expr+` are one or more atomic expressions, each either a variable or a literal constant. A special variable `unitvar` or literal `unit`, printed as `*`, represents a value that is not needed in the rest of the computation and has been elided. That is, units are just placeholders.
* `Param*` are zero or more named parameters to the primitive, printed in square brackets. Each parameter is shown as `Name = Value`.
Most jaxpr primitives are first-order (they take just one or more Expr as arguments):
```
Primitive := add | sub | sin | mul | ...
```
The jaxpr primitives are documented in the [`jax.lax`](index.html#module-jax.lax) module.
For example, here is the jaxpr produced for the function `func1` below
```
>>> from jax import make_jaxpr
>>> import jax.numpy as jnp
>>> def func1(first, second):
... temp = first + jnp.sin(second) * 3.
... return jnp.sum(temp)
...
>>> print(make_jaxpr(func1)(jnp.zeros(8), jnp.ones(8)))
{ lambda ; a:f32[8] b:f32[8]. let
c:f32[8] = sin b
d:f32[8] = mul c 3.0
e:f32[8] = add a d
f:f32[] = reduce_sum[axes=(0,)] e
in (f,) }
```
Here there are no constvars, `a` and `b` are the input variables and they correspond respectively to
`first` and `second` function parameters. The scalar literal `3.0` is kept inline.
The `reduce_sum` primitive has the named parameter `axes`, in addition to the operand `e`.
Note that even though execution of a program that calls into JAX builds a jaxpr,
Python-level control-flow and Python-level functions execute normally.
This means that just because a Python program contains functions and control-flow,
the resulting jaxpr does not have to contain control-flow or higher-order features.
For example, when tracing the function `func3` JAX will inline the call to
`inner` and the conditional `if second.shape[0] > 4`, and will produce the same jaxpr as before
```
>>> def func2(inner, first, second):
... temp = first + inner(second) * 3.
... return jnp.sum(temp)
...
>>> def inner(second):
... if second.shape[0] > 4:
... return jnp.sin(second)
... else:
... assert False
...
>>> def func3(first, second):
... return func2(inner, first, second)
...
>>> print(make_jaxpr(func3)(jnp.zeros(8), jnp.ones(8)))
{ lambda ; a:f32[8] b:f32[8]. let
c:f32[8] = sin b
d:f32[8] = mul c 3.0
e:f32[8] = add a d
f:f32[] = reduce_sum[axes=(0,)] e
in (f,) }
```
#### Handling PyTrees[#](#handling-pytrees)
In jaxpr there are no tuple types; instead primitives take multiple inputs and produce multiple outputs. When processing a function that has structured inputs or outputs, JAX will flatten those and in jaxpr they will appear as lists of inputs and outputs. For more details, please see the documentation for PyTrees ([Pytrees](index.html#pytrees)).
For example, the following code produces an identical jaxpr to what we saw before (with two input vars, one for each element of the input tuple)
```
>>> def func4(arg): # Arg is a pair
... temp = arg[0] + jnp.sin(arg[1]) * 3.
... return jnp.sum(temp)
...
>>> print(make_jaxpr(func4)((jnp.zeros(8), jnp.ones(8))))
{ lambda ; a:f32[8] b:f32[8]. let
c:f32[8] = sin b
d:f32[8] = mul c 3.0
e:f32[8] = add a d
f:f32[] = reduce_sum[axes=(0,)] e
in (f,) }
```
#### Constant Vars[#](#constant-vars)
Some values in jaxprs are constants, in that their value does not depend on the jaxpr’s arguments. When these values are scalars they are represented directly in the jaxpr equations; non-scalar array constants are instead hoisted out to the top-level jaxpr, where they correspond to constant variables (“constvars”).
These constvars differ from the other jaxpr parameters (“invars”) only as a bookkeeping convention.
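For example, a non-scalar constant like `jnp.ones(3)` shows up as a constvar (the `func_const` below is only for illustration, and the exact printed jaxpr varies by JAX version):

```
def func_const(x):
  return x + jnp.ones(3) * 2.  # non-scalar constant gets hoisted

print(make_jaxpr(func_const)(jnp.zeros(3)))
# The printed jaxpr has a constvar before the `;` in the lambda binder,
# and the corresponding array appears in the ClosedJaxpr's `consts` list.
```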
#### Higher-order primitives[#](#higher-order-primitives)
jaxpr includes several higher-order primitives. They are more complicated because they include sub-jaxprs.
##### Conditionals[#](#conditionals)
JAX traces through normal Python conditionals. To capture a conditional expression for dynamic execution, one must use the
[`jax.lax.switch()`](index.html#jax.lax.switch) and [`jax.lax.cond()`](index.html#jax.lax.cond) constructors,
which have the signatures:
```
lax.switch(index: int, branches: Sequence[A -> B], operand: A) -> B
lax.cond(pred: bool, true_body: A -> B, false_body: A -> B, operand: A) -> B
```
Both of these will bind a primitive called `cond` internally. The
`cond` primitive in jaxprs reflects the more general signature of
`lax.switch()`: it takes an integer denoting the index of the branch to execute (clamped into valid indexing range).
For example:
```
>>> from jax import lax
>>>
>>> def one_of_three(index, arg):
... return lax.switch(index, [lambda x: x + 1.,
... lambda x: x - 2.,
... lambda x: x + 3.],
... arg)
...
>>> print(make_jaxpr(one_of_three)(1, 5.))
{ lambda ; a:i32[] b:f32[]. let
c:i32[] = convert_element_type[new_dtype=int32 weak_type=False] a
d:i32[] = clamp 0 c 2
e:f32[] = cond[
branches=(
{ lambda ; f:f32[]. let g:f32[] = add f 1.0 in (g,) }
{ lambda ; h:f32[]. let i:f32[] = sub h 2.0 in (i,) }
{ lambda ; j:f32[]. let k:f32[] = add j 3.0 in (k,) }
)
linear=(False,)
] d b
in (e,) }
```
The cond primitive has a number of parameters:
> * branches are jaxprs that correspond to the branch
> functionals. In this example, those functionals each take one
> input variable, corresponding to `x`.
> * linear is a tuple of booleans that is used internally by the
> auto-differentiation machinery to encode which of the input
> parameters are used linearly in the conditional.
The above instance of the cond primitive takes two operands. The first one (`d`) is the branch index, then `b` is the operand (`arg`) to be passed to whichever jaxpr in `branches` is selected by the branch index.
Another example, using `lax.cond()`:
```
>>> from jax import lax
>>>
>>> def func7(arg):
... return lax.cond(arg >= 0.,
... lambda xtrue: xtrue + 3.,
... lambda xfalse: xfalse - 3.,
... arg)
...
>>> print(make_jaxpr(func7)(5.))
{ lambda ; a:f32[]. let
b:bool[] = ge a 0.0
c:i32[] = convert_element_type[new_dtype=int32 weak_type=False] b
d:f32[] = cond[
branches=(
{ lambda ; e:f32[]. let f:f32[] = sub e 3.0 in (f,) }
{ lambda ; g:f32[]. let h:f32[] = add g 3.0 in (h,) }
)
linear=(False,)
] c a
in (d,) }
```
In this case, the boolean predicate is converted to an integer index
(0 or 1), and `branches` are jaxprs that correspond to the false and true branch functionals, in that order. Again, each functional takes one input variable, corresponding to `xfalse` and `xtrue`
respectively.
The following example shows a more complicated situation when the input to the branch functionals is a tuple, and the false branch functional contains a constant `jnp.ones(1)` that is hoisted as a constvar
```
>>> def func8(arg1, arg2): # arg2 is a pair
... return lax.cond(arg1 >= 0.,
... lambda xtrue: xtrue[0],
... lambda xfalse: jnp.array([1]) + xfalse[1],
... arg2)
...
>>> print(make_jaxpr(func8)(5., (jnp.zeros(1), 2.)))
{ lambda a:i32[1]; b:f32[] c:f32[1] d:f32[]. let
e:bool[] = ge b 0.0
f:i32[] = convert_element_type[new_dtype=int32 weak_type=False] e
g:f32[1] = cond[
branches=(
{ lambda ; h:i32[1] i:f32[1] j:f32[]. let
k:f32[1] = convert_element_type[new_dtype=float32 weak_type=True] h
l:f32[1] = add k j
in (l,) }
{ lambda ; m_:i32[1] n:f32[1] o:f32[]. let in (n,) }
)
linear=(False, False, False)
] f a c d
in (g,) }
```
##### While[#](#while)
Just like for conditionals, Python loops are inlined during tracing.
If you want to capture a loop for dynamic execution, you must use one of several special operations, [`jax.lax.while_loop()`](index.html#jax.lax.while_loop) (a primitive)
and [`jax.lax.fori_loop()`](index.html#jax.lax.fori_loop)
(a helper that generates a while_loop primitive):
```
lax.while_loop(cond_fun: (C -> bool), body_fun: (C -> C), init: C) -> C
lax.fori_loop(start: int, end: int, body: (int -> C -> C), init: C) -> C
```
In the above signature, “C” stands for the type of the loop “carry” value.
For example, here is a fori loop
```
>>> import numpy as np
>>>
>>> def func10(arg, n):
... ones = jnp.ones(arg.shape) # A constant
... return lax.fori_loop(0, n,
... lambda i, carry: carry + ones * 3. + arg,
... arg + ones)
...
>>> print(make_jaxpr(func10)(np.ones(16), 5))
{ lambda ; a:f32[16] b:i32[]. let
c:f32[16] = broadcast_in_dim[broadcast_dimensions=() shape=(16,)] 1.0
d:f32[16] = add a c
_:i32[] _:i32[] e:f32[16] = while[
body_jaxpr={ lambda ; f:f32[16] g:f32[16] h:i32[] i:i32[] j:f32[16]. let
k:i32[] = add h 1
l:f32[16] = mul f 3.0
m:f32[16] = add j l
n:f32[16] = add m g
in (k, i, n) }
body_nconsts=2
cond_jaxpr={ lambda ; o:i32[] p:i32[] q:f32[16]. let
r:bool[] = lt o p
in (r,) }
cond_nconsts=0
] c a 0 b d
in (e,) }
```
The while primitive takes 5 arguments: `c a 0 b d`, as follows:
> * 0 constants for `cond_jaxpr` (since `cond_nconsts` is 0)
> * 2 constants for `body_jaxpr` (`c`, and `a`)
> * 3 parameters for the initial value of carry
##### Scan[#](#scan)
JAX supports a special form of loop over the elements of an array (with statically known shape). The fact that there are a fixed number of iterations makes this form of looping easily reverse-differentiable. Such loops are constructed with the [`jax.lax.scan()`](index.html#jax.lax.scan) function:
```
lax.scan(body_fun: (C -> A -> (C, B)), init_carry: C, in_arr: Array[A]) -> (C, Array[B])
```
This is written in terms of a [Haskell-like type signature](https://wiki.haskell.org/Type_signature):
`C` is the type of the scan carry, `A` is the element type of the input array(s), and `B` is the element type of the output array(s).
As an example, consider the function `func11` below
```
>>> def func11(arr, extra):
... ones = jnp.ones(arr.shape) # A constant
... def body(carry, aelems):
... # carry: running dot-product of the two arrays
... # aelems: a pair with corresponding elements from the two arrays
... ae1, ae2 = aelems
... return (carry + ae1 * ae2 + extra, carry)
... return lax.scan(body, 0., (arr, ones))
...
>>> print(make_jaxpr(func11)(np.ones(16), 5.))
{ lambda ; a:f32[16] b:f32[]. let
c:f32[16] = broadcast_in_dim[broadcast_dimensions=() shape=(16,)] 1.0
d:f32[] e:f32[16] = scan[
jaxpr={ lambda ; f:f32[] g:f32[] h:f32[] i:f32[]. let
j:f32[] = mul h i
k:f32[] = convert_element_type[new_dtype=float32 weak_type=False] g
l:f32[] = add k j
m:f32[] = convert_element_type[new_dtype=float32 weak_type=False] f
n:f32[] = add l m
in (n, g) }
length=16
linear=(False, False, False, False)
num_carry=1
num_consts=1
reverse=False
unroll=1
] b 0.0 a c
in (d, e) }
```
The `linear` parameter describes for each of the input variables whether they are guaranteed to be used linearly in the body. Once the scan goes through linearization, more arguments will be linear.
The scan primitive takes 4 arguments: `b 0.0 a c`, of which:
> * one is the free variable for the body
> * one is the initial value of the carry
> * the next 2 are the arrays over which the scan operates.
##### XLA_call[#](#xla-call)
The call primitive arises from JIT compilation, and it encapsulates a sub-jaxpr along with parameters that specify the backend and the device on which the computation should run. For example
```
>>> from jax import jit
>>>
>>> def func12(arg):
... @jit
... def inner(x):
... return x + arg * jnp.ones(1) # Include a constant in the inner function
... return arg + inner(arg - 2.)
...
>>> print(make_jaxpr(func12)(1.))
{ lambda ; a:f32[]. let
b:f32[] = sub a 2.0
c:f32[1] = pjit[
jaxpr={ lambda ; d:f32[] e:f32[]. let
f:f32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] 1.0
g:f32[] = convert_element_type[new_dtype=float32 weak_type=False] d
h:f32[1] = mul g f
i:f32[] = convert_element_type[new_dtype=float32 weak_type=False] e
j:f32[1] = add i h
in (j,) }
name=inner
] a b
k:f32[] = convert_element_type[new_dtype=float32 weak_type=False] a
l:f32[1] = add k c
in (l,) }
```
##### XLA_pmap[#](#xla-pmap)
If you use the [`jax.pmap()`](index.html#jax.pmap) transformation, the function to be mapped is captured using the `xla_pmap` primitive. Consider this example
```
>>> from jax import pmap
>>>
>>> def func13(arr, extra):
... def inner(x):
... # use a free variable "extra" and a constant jnp.ones(1)
... return (x + extra + jnp.ones(1)) / lax.psum(x, axis_name='rows')
... return pmap(inner, axis_name='rows')(arr)
...
>>> print(make_jaxpr(func13)(jnp.ones((1, 3)), 5.))
{ lambda ; a:f32[1,3] b:f32[]. let
c:f32[1,3] = xla_pmap[
axis_name=rows
axis_size=1
backend=None
call_jaxpr={ lambda ; d:f32[] e:f32[3]. let
f:f32[] = convert_element_type[new_dtype=float32 weak_type=False] d
g:f32[3] = add e f
h:f32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] 1.0
i:f32[3] = add g h
j:f32[3] = psum[axes=('rows',) axis_index_groups=None] e
k:f32[3] = div i j
in (k,) }
devices=None
donated_invars=(False, False)
global_axis_size=1
in_axes=(None, 0)
is_explicit_global_axis_size=False
name=inner
out_axes=(0,)
] b a
in (c,) }
```
The `xla_pmap` primitive specifies the name of the axis (parameter
`axis_name`) and the body of the function to be mapped as the `call_jaxpr`
parameter. The value of this parameter is a Jaxpr with 2 input variables.
The parameter `in_axes` specifies which of the input variables should be mapped and which should be broadcast. In our example, the value of `extra`
is broadcast and the value of `arr` is mapped.
### External Callbacks in JAX[#](#external-callbacks-in-jax)
This guide outlines the uses of various callback functions, which allow JAX runtimes to execute Python code on the host, even while running under `jit`, `vmap`, `grad`, or another transformation.
#### Why callbacks?[#](#why-callbacks)
A callback routine is a way to perform **host-side** execution of code at runtime.
As a simple example, suppose you’d like to print the *value* of some variable during the course of a computation.
Using a simple Python `print` statement, it looks like this:
```
import jax

@jax.jit
def f(x):
  y = x + 1
  print("intermediate value: {}".format(y))
  return y * 2

result = f(2)
```
```
intermediate value: Traced<ShapedArray(int32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>
```
What is printed is not the runtime value, but the trace-time abstract value (if you’re not familiar with *tracing* in JAX, a good primer can be found in [How To Think In JAX](https://jax.readthedocs.io/en/latest/notebooks/thinking_in_jax.html)).
To print the value at runtime we need a callback, for example `jax.debug.print`:
```
@jax.jit
def f(x):
  y = x + 1
  jax.debug.print("intermediate value: {}", y)
  return y * 2

result = f(2)
```
```
intermediate value: 3
```
This works by passing the runtime value represented by `y` back to the host process, where the host can print the value.
#### Flavors of Callback[#](#flavors-of-callback)
In earlier versions of JAX, there was only one kind of callback available, implemented in `jax.experimental.host_callback`. The `host_callback` routines had some deficiencies, and are now deprecated in favor of several callbacks designed for different situations:
* [`jax.pure_callback()`](index.html#jax.pure_callback): appropriate for pure functions: i.e. functions with no side effect.
* [`jax.experimental.io_callback()`](index.html#jax.experimental.io_callback): appropriate for impure functions: e.g. functions which read or write data to disk.
* [`jax.debug.callback()`](index.html#jax.debug.callback): appropriate for functions that should reflect the execution behavior of the compiler.
(The [`jax.debug.print()`](index.html#jax.debug.print) function we used above is a wrapper around [`jax.debug.callback()`](index.html#jax.debug.callback)).
From the user perspective, these three flavors of callback are mainly distinguished by what transformations and compiler optimizations they allow.
| callback function | supports return value | `jit` | `vmap` | `grad` | `scan`/`while_loop` | guaranteed execution |
| --- | --- | --- | --- | --- | --- | --- |
| `jax.pure_callback` | ✅ | ✅ | ✅ | ❌¹ | ✅ | ❌ |
| `jax.experimental.io_callback` | ✅ | ✅ | ✅/❌² | ❌ | ✅³ | ✅ |
| `jax.debug.callback` | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ |
¹ `jax.pure_callback` can be used with `custom_jvp` to make it compatible with autodiff
² `jax.experimental.io_callback` is compatible with `vmap` only if `ordered=False`.
³ Note that `vmap` of `scan`/`while_loop` of `io_callback` has complicated semantics, and its behavior may change in future releases.
##### Exploring `jax.pure_callback`[#](#exploring-jax-pure-callback)
`jax.pure_callback` is generally the callback function you should reach for when you want host-side execution of a pure function: i.e. a function that has no side-effects (such as printing values, reading data from disk, updating a global state, etc.).
The function you pass to `jax.pure_callback` need not actually be pure, but it will be assumed pure by JAX’s transformations and higher-order functions, which means that it may be silently elided or called multiple times.
```
import jax
import jax.numpy as jnp
import numpy as np
def f_host(x):
# call a numpy (not jax.numpy) operation:
return np.sin(x).astype(x.dtype)
def f(x):
result_shape = jax.ShapeDtypeStruct(x.shape, x.dtype)
return jax.pure_callback(f_host, result_shape, x)
x = jnp.arange(5.0)
f(x)
```
```
Array([ 0. , 0.841471 , 0.9092974, 0.14112 , -0.7568025], dtype=float32)
```
Because `pure_callback` can be elided or duplicated, it is compatible out-of-the-box with transformations like `jit` and `vmap`, as well as higher-order primitives like `scan` and `while_loop`:
```
jax.jit(f)(x)
```
```
Array([ 0. , 0.841471 , 0.9092974, 0.14112 , -0.7568025], dtype=float32)
```
```
jax.vmap(f)(x)
```
```
Array([ 0. , 0.841471 , 0.9092974, 0.14112 , -0.7568025], dtype=float32)
```
```
def body_fun(_, x):
return _, f(x)
jax.lax.scan(body_fun, None, jnp.arange(5.0))[1]
```
```
Array([ 0. , 0.841471 , 0.9092974, 0.14112 , -0.7568025], dtype=float32)
```
However, because there is no way for JAX to introspect the content of the callback, `pure_callback` has undefined autodiff semantics:
```
%xmode minimal
```
```
Exception reporting mode: Minimal
```
```
jax.grad(f)(x)
```
```
ValueError: Pure callbacks do not support JVP. Please use `jax.custom_jvp` to use callbacks while taking gradients.
```
For an example of using `pure_callback` with `jax.custom_jvp`, see *Example: `pure_callback` with `custom_jvp`* below.
By design functions passed to `pure_callback` are treated as if they have no side-effects: one consequence of this is that if the output of the function is not used, the compiler may eliminate the callback entirely:
```
def print_something():
print('printing something')
return np.int32(0)
@jax.jit
def f1():
  return jax.pure_callback(print_something, np.int32(0))

f1();
```
```
printing something
```
```
@jax.jit
def f2():
  jax.pure_callback(print_something, np.int32(0))
  return 1.0

f2();
```
In `f1`, the output of the callback is used in the return value of the function, so the callback is executed and we see the printed output.
In `f2` on the other hand, the output of the callback is unused, and so the compiler notices this and eliminates the function call. These are the correct semantics for a callback to a function with no side-effects.
##### Exploring `jax.experimental.io_callback`[#](#exploring-jax-experimental-io-callback)
In contrast to [`jax.pure_callback()`](index.html#jax.pure_callback), [`jax.experimental.io_callback()`](index.html#jax.experimental.io_callback) is explicitly meant to be used with impure functions, i.e. functions that do have side-effects.
As an example, here is a callback to a global host-side numpy random generator. This is an impure operation because a side-effect of generating a random number in numpy is that the random state is updated (Please note that this is meant as a toy example of `io_callback` and not necessarily a recommended way of generating random numbers in JAX!).
```
from jax.experimental import io_callback
from functools import partial
global_rng = np.random.default_rng(0)
def host_side_random_like(x):
"""Generate a random array like x using the global_rng state"""
# We have two side-effects here:
# - printing the shape and dtype
# - calling global_rng, thus updating its state
print(f'generating {x.dtype}{list(x.shape)}')
return global_rng.uniform(size=x.shape).astype(x.dtype)
@jax.jit
def numpy_random_like(x):
  return io_callback(host_side_random_like, x, x)
x = jnp.zeros(5)
numpy_random_like(x)
```
```
generating float32[5]
```
```
Array([0.6369617 , 0.26978672, 0.04097353, 0.01652764, 0.8132702 ], dtype=float32)
```
The `io_callback` is compatible with `vmap` by default:
```
jax.vmap(numpy_random_like)(x)
```
```
generating float32[]
generating float32[]
generating float32[]
generating float32[]
generating float32[]
```
```
Array([0.91275555, 0.60663575, 0.72949654, 0.543625 , 0.9350724 ], dtype=float32)
```
Note, however, that this may execute the mapped callbacks in any order. So, for example, if you ran this on a GPU, the order of the mapped outputs might differ from run to run.
If it is important that the order of callbacks be preserved, you can set `ordered=True`, in which case attempting to `vmap` will raise an error:
```
@jax.jit
def numpy_random_like_ordered(x):
  return io_callback(host_side_random_like, x, x, ordered=True)

jax.vmap(numpy_random_like_ordered)(x)
```
```
JaxStackTraceBeforeTransformation: ValueError: Cannot `vmap` ordered IO callback.
The preceding stack trace is the source of the JAX operation that, once transformed by JAX, triggered the following exception.
---
The above exception was the direct cause of the following exception:
ValueError: Cannot `vmap` ordered IO callback.
```
On the other hand, `scan` and `while_loop` work with `io_callback` regardless of whether ordering is enforced:
```
def body_fun(_, x):
return _, numpy_random_like_ordered(x)
jax.lax.scan(body_fun, None, jnp.arange(5.0))[1]
```
```
generating float32[]
generating float32[]
generating float32[]
generating float32[]
generating float32[]
```
```
Array([0.81585354, 0.0027385 , 0.8574043 , 0.03358557, 0.72965544], dtype=float32)
```
Like `pure_callback`, `io_callback` fails under automatic differentiation if it is passed a differentiated variable:
```
jax.grad(numpy_random_like)(x)
```
```
JaxStackTraceBeforeTransformation: ValueError: IO callbacks do not support JVP.
The preceding stack trace is the source of the JAX operation that, once transformed by JAX, triggered the following exception.
---
The above exception was the direct cause of the following exception:
ValueError: IO callbacks do not support JVP.
```
However, if the callback is not dependent on a differentiated variable, it will execute:
```
@jax.jit
def f(x):
  io_callback(lambda: print('hello'), None)
  return x

jax.grad(f)(1.0);
```
```
hello
```
Unlike `pure_callback`, the compiler will not remove the callback execution in this case, even though the output of the callback is unused in the subsequent computation.
##### Exploring `debug.callback`[#](#exploring-debug-callback)
Both `pure_callback` and `io_callback` enforce some assumptions about the purity of the function they’re calling, and limit in various ways what JAX transforms and compilation machinery may do. `debug.callback` essentially assumes *nothing* about the callback function, such that the action of the callback reflects exactly what JAX is doing during the course of a program. Further, `debug.callback` *cannot* return any value to the program.
```
from jax import debug
def log_value(x):
# This could be an actual logging call; we'll use
# print() for demonstration
print("log:", x)
@jax.jit
def f(x):
  debug.callback(log_value, x)
  return x

f(1.0);
```
```
log: 1.0
```
The debug callback is compatible with `vmap`:
```
x = jnp.arange(5.0)
jax.vmap(f)(x);
```
```
log: 0.0
log: 1.0
log: 2.0
log: 3.0
log: 4.0
```
And it is also compatible with `grad` and other autodiff transformations:
```
jax.grad(f)(1.0);
```
```
log: 1.0
```
This can make `debug.callback` more useful for general-purpose debugging than either `pure_callback` or `io_callback`.
#### Example: `pure_callback` with `custom_jvp`[#](#example-pure-callback-with-custom-jvp)
One powerful way to take advantage of [`jax.pure_callback()`](index.html#jax.pure_callback) is to combine it with [`jax.custom_jvp`](index.html#jax.custom_jvp) (see [Custom derivative rules](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html) for more details on `custom_jvp`).
Suppose we want to create a JAX-compatible wrapper for a scipy or numpy function that is not yet available in the `jax.scipy` or `jax.numpy` wrappers.
Here, we’ll consider creating a wrapper for the Bessel function of the first kind, implemented in `scipy.special.jv`.
We can start by defining a straightforward `pure_callback`:
```
import jax
import jax.numpy as jnp
import scipy.special
def jv(v, z):
v, z = jnp.asarray(v), jnp.asarray(z)
# Require the order v to be integer type: this simplifies
# the JVP rule below.
assert jnp.issubdtype(v.dtype, jnp.integer)
# Promote the input to inexact (float/complex).
# Note that jnp.result_type() accounts for the enable_x64 flag.
z = z.astype(jnp.result_type(float, z.dtype))
# Wrap scipy function to return the expected dtype.
_scipy_jv = lambda v, z: scipy.special.jv(v, z).astype(z.dtype)
# Define the expected shape & dtype of output.
result_shape_dtype = jax.ShapeDtypeStruct(
shape=jnp.broadcast_shapes(v.shape, z.shape),
dtype=z.dtype)
# We use vectorize=True because scipy.special.jv handles broadcasted inputs.
return jax.pure_callback(_scipy_jv, result_shape_dtype, v, z, vectorized=True)
```
This lets us call into `scipy.special.jv` from transformed JAX code, including when transformed by `jit` and `vmap`:
```
from functools import partial

j1 = partial(jv, 1)
z = jnp.arange(5.0)
```
```
print(j1(z))
```
```
[ 0. 0.44005057 0.5767248 0.33905897 -0.06604332]
```
Here is the same result with `jit`:
```
print(jax.jit(j1)(z))
```
```
[ 0. 0.44005057 0.5767248 0.33905897 -0.06604332]
```
And here is the same result again with `vmap`:
```
print(jax.vmap(j1)(z))
```
```
[ 0. 0.44005057 0.5767248 0.33905897 -0.06604332]
```
However, if we call `jax.grad`, we see an error because there is no autodiff rule defined for this function:
```
jax.grad(j1)(z)
```
```
ValueError: Pure callbacks do not support JVP. Please use `jax.custom_jvp` to use callbacks while taking gradients.
```
Let’s define a custom gradient rule for this. Looking at the definition of the [Bessel Function of the First Kind](https://en.wikipedia.org/?title=Bessel_function_of_the_first_kind), we find that there is a relatively straightforward recurrence relationship for the derivative with respect to the argument `z`:
\[\frac{d J_\nu(z)}{dz} = \begin{cases}
-J_1(z), & \nu = 0 \\
\left[J_{\nu - 1}(z) - J_{\nu + 1}(z)\right]/2, & \nu \neq 0
\end{cases}\]
The gradient with respect to \(\nu\) is more complicated, but since we’ve restricted the `v` argument to integer types we don’t need to worry about its gradient for the sake of this example.
We can use `jax.custom_jvp` to define this automatic differentiation rule for our callback function:
```
jv = jax.custom_jvp(jv)
@jv.defjvp
def _jv_jvp(primals, tangents):
v, z = primals
_, z_dot = tangents # Note: v_dot is always 0 because v is integer.
jv_minus_1, jv_plus_1 = jv(v - 1, z), jv(v + 1, z)
djv_dz = jnp.where(v == 0, -jv_plus_1, 0.5 * (jv_minus_1 - jv_plus_1))
return jv(v, z), z_dot * djv_dz
```
Now computing the gradient of our function will work correctly:
```
j1 = partial(jv, 1)
print(jax.grad(j1)(2.0))
```
```
-0.06447162
```
Further, since we’ve defined our gradient in terms of `jv` itself, JAX’s architecture means that we get second-order and higher derivatives for free:
```
jax.hessian(j1)(2.0)
```
```
Array(-0.4003078, dtype=float32, weak_type=True)
```
Keep in mind that although this all works correctly with JAX, each call to our callback-based `jv` function will result in passing the input data from the device to the host, and passing the output of `scipy.special.jv` from the host back to the device.
When running on accelerators like GPU or TPU, this data movement and host synchronization can lead to significant overhead each time `jv` is called.
However, if you are running JAX on a single CPU (where the “host” and “device” are on the same hardware), JAX will generally do this data transfer in a fast, zero-copy fashion, making this pattern a relatively straightforward way to extend JAX’s capabilities.
### Type promotion semantics[#](#type-promotion-semantics)
This document describes JAX’s type promotion rules, i.e., the result of [`jax.numpy.promote_types()`](index.html#jax.numpy.promote_types) for each pair of types.
For some background on the considerations that went into the design of what is described below, see [Design of Type Promotion Semantics for JAX](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html).
JAX’s type promotion behavior is determined via a type promotion lattice (rendered as a diagram in the online documentation) where, for example:
* `b1` means `np.bool_`,
* `i2` means `np.int16`,
* `u4` means `np.uint32`,
* `bf` means `np.bfloat16`,
* `f2` means `np.float16`,
* `c8` means `np.complex64`,
* `i*` means Python `int` or weakly-typed `int`,
* `f*` means Python `float` or weakly-typed `float`, and
* `c*` means Python `complex` or weakly-typed `complex`.
(for more about weak types, see [Weakly-typed values in JAX](#weak-types) below).
Promotion between any two types is given by their [join](https://en.wikipedia.org/wiki/Join_and_meet)
on this lattice, which generates the following binary promotion table:
| | b1 | u1 | u2 | u4 | u8 | i1 | i2 | i4 | i8 | bf | f2 | f4 | f8 | c8 | c16 | i* | f* | c* |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| b1 | b1 | u1 | u2 | u4 | u8 | i1 | i2 | i4 | i8 | bf | f2 | f4 | f8 | c8 | c16 | i* | f* | c* |
| u1 | u1 | u1 | u2 | u4 | u8 | i2 | i2 | i4 | i8 | bf | f2 | f4 | f8 | c8 | c16 | u1 | f* | c* |
| u2 | u2 | u2 | u2 | u4 | u8 | i4 | i4 | i4 | i8 | bf | f2 | f4 | f8 | c8 | c16 | u2 | f* | c* |
| u4 | u4 | u4 | u4 | u4 | u8 | i8 | i8 | i8 | i8 | bf | f2 | f4 | f8 | c8 | c16 | u4 | f* | c* |
| u8 | u8 | u8 | u8 | u8 | u8 | f* | f* | f* | f* | bf | f2 | f4 | f8 | c8 | c16 | u8 | f* | c* |
| i1 | i1 | i2 | i4 | i8 | f* | i1 | i2 | i4 | i8 | bf | f2 | f4 | f8 | c8 | c16 | i1 | f* | c* |
| i2 | i2 | i2 | i4 | i8 | f* | i2 | i2 | i4 | i8 | bf | f2 | f4 | f8 | c8 | c16 | i2 | f* | c* |
| i4 | i4 | i4 | i4 | i8 | f* | i4 | i4 | i4 | i8 | bf | f2 | f4 | f8 | c8 | c16 | i4 | f* | c* |
| i8 | i8 | i8 | i8 | i8 | f* | i8 | i8 | i8 | i8 | bf | f2 | f4 | f8 | c8 | c16 | i8 | f* | c* |
| bf | bf | bf | bf | bf | bf | bf | bf | bf | bf | bf | f4 | f4 | f8 | c8 | c16 | bf | bf | c8 |
| f2 | f2 | f2 | f2 | f2 | f2 | f2 | f2 | f2 | f2 | f4 | f2 | f4 | f8 | c8 | c16 | f2 | f2 | c8 |
| f4 | f4 | f4 | f4 | f4 | f4 | f4 | f4 | f4 | f4 | f4 | f4 | f4 | f8 | c8 | c16 | f4 | f4 | c8 |
| f8 | f8 | f8 | f8 | f8 | f8 | f8 | f8 | f8 | f8 | f8 | f8 | f8 | f8 | c16 | c16 | f8 | f8 | c16 |
| c8 | c8 | c8 | c8 | c8 | c8 | c8 | c8 | c8 | c8 | c8 | c8 | c8 | c16 | c8 | c16 | c8 | c8 | c8 |
| c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 | c16 |
| i* | i* | u1 | u2 | u4 | u8 | i1 | i2 | i4 | i8 | bf | f2 | f4 | f8 | c8 | c16 | i* | f* | c* |
| f* | f* | f* | f* | f* | f* | f* | f* | f* | f* | bf | f2 | f4 | f8 | c8 | c16 | f* | f* | c* |
| c* | c* | c* | c* | c* | c* | c* | c* | c* | c* | c8 | c8 | c8 | c16 | c8 | c16 | c* | c* | c* |
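Any cell of this table can be queried directly with [`jax.numpy.promote_types()`](index.html#jax.numpy.promote_types), for example:

```
>>> import jax.numpy as jnp
>>> jnp.promote_types('uint8', 'int8')        # u1 join i1 -> i2
dtype('int16')
>>> jnp.promote_types('bfloat16', 'float16')  # bf join f2 -> f4
dtype('float32')
```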
JAX’s type promotion rules differ from those of NumPy, as given by
[`numpy.promote_types()`](https://numpy.org/doc/stable/reference/generated/numpy.promote_types.html#numpy.promote_types), in several cells of the table above (highlighted in the rendered documentation). There are three key classes of differences:
* When promoting a weakly typed value against a typed JAX value of the same category,
JAX always prefers the precision of the JAX value. For example, `jnp.int16(1) + 1`
will return `int16` rather than promoting to `int64` as in NumPy.
Note that this applies only to Python scalar values; if the constant is a NumPy array then the above lattice is used for type promotion.
For example, `jnp.int16(1) + np.array(1)` will return `int64`. (A short demonstration follows this list.)
* When promoting an integer or boolean type against a floating-point or complex type, JAX always prefers the type of the floating-point or complex type.
* JAX supports the
[bfloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format)
non-standard 16-bit floating point type
(`jax.numpy.bfloat16`), which is useful for neural network training.
The only notable promotion behavior is with respect to IEEE-754
`float16`, with which `bfloat16` promotes to a `float32`.
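Here is a short demonstration of the first difference (a sketch; exact widths depend on the x64 configuration):

```
>>> import numpy as np
>>> (jnp.int16(1) + 1).dtype            # Python scalar is weakly typed
dtype('int16')
>>> (jnp.int16(1) + np.int32(1)).dtype  # NumPy scalar is strongly typed
dtype('int32')
```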
The differences between NumPy and JAX are motivated by the fact that accelerator devices, such as GPUs and TPUs, either pay a significant performance penalty to use 64-bit floating point types (GPUs) or do not support 64-bit floating point types at all (TPUs). Classic NumPy’s promotion rules are too willing to overpromote to 64-bit types, which is problematic for a system designed to run on accelerators.
JAX uses floating point promotion rules that are more suited to modern accelerator devices and are less aggressive about promoting floating point types. The promotion rules used by JAX for floating-point types are similar to those used by PyTorch.
#### Effects of Python operator dispatch[#](#effects-of-python-operator-dispatch)
Keep in mind that Python operators like + will dispatch based on the Python type of the two values being added. This means that, for example, `np.int16(1) + 1` will promote using NumPy rules, whereas `jnp.int16(1) + 1` will promote using JAX rules.
This can lead to potentially confusing non-associative promotion semantics when the two types of promotion are combined;
for example with `np.int16(1) + 1 + jnp.int16(1)`.
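To see why this is non-associative, compare the two groupings explicitly (a sketch; the exact dtypes depend on your NumPy version and x64 configuration):
```
import numpy as np
import jax.numpy as jnp

# Left-to-right, NumPy promotes np.int16(1) + 1 before JAX ever sees it:
lhs_first = (np.int16(1) + 1) + jnp.int16(1)

# With the other grouping, JAX's weak-type rules handle 1 + jnp.int16(1) first:
rhs_first = np.int16(1) + (1 + jnp.int16(1))

print(lhs_first.dtype, rhs_first.dtype)  # the two results can differ
```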
#### Weakly-typed values in JAX[#](#weakly-typed-values-in-jax)
*Weakly-typed* values in JAX can in most cases be thought of as having promotion behavior equivalent to that of Python scalars, such as the integer scalar `2` in the following:
```
>>> x = jnp.arange(5, dtype='int8')
>>> 2 * x
Array([0, 2, 4, 6, 8], dtype=int8)
```
JAX’s weak type framework is designed to prevent unwanted type promotion within binary operations between JAX values and values with no explicitly user-specified type,
such as Python scalar literals. For example, if `2` were not treated as weakly-typed,
the expression above would lead to an implicit type promotion:
```
>>> jnp.int32(2) * x
Array([0, 2, 4, 6, 8], dtype=int32)
```
When used in JAX, Python scalars are sometimes promoted to `DeviceArray`
objects, for example during JIT compilation. To maintain the desired promotion semantics in this case, `DeviceArray` objects carry a `weak_type` flag that can be seen in an array’s string representation:
```
>>> jnp.asarray(2)
Array(2, dtype=int32, weak_type=True)
```
If the `dtype` is specified explicitly, it will instead result in a standard strongly-typed array value:
```
>>> jnp.asarray(2, dtype='int32')
Array(2, dtype=int32)
```
### Pytrees[#](#pytrees)
#### What is a pytree?[#](#what-is-a-pytree)
In JAX, we use the term *pytree* to refer to a tree-like structure built out of container-like Python objects. Classes are considered container-like if they are in the pytree registry, which by default includes lists, tuples, and dicts.
That is:
1. any object whose type is *not* in the pytree container registry is considered a *leaf* pytree;
2. any object whose type is in the pytree container registry, and which contains pytrees, is considered a pytree.
For each entry in the pytree container registry, a container-like type is registered with a pair of functions that specify how to convert an instance of the container type to a `(children, metadata)` pair and how to convert such a pair back to an instance of the container type. Using these functions, JAX can canonicalize any tree of registered container objects into tuples.
Example pytrees:
```
[1, "a", object()] # 3 leaves
(1, (2, 3), ()) # 3 leaves
[1, {"k1": 2, "k2": (3, 4)}, 5] # 5 leaves
```
JAX can be extended to consider other container types as pytrees; see
[Extending pytrees](#extending-pytrees) below.
#### Pytrees and JAX functions[#](#pytrees-and-jax-functions)
Many JAX functions, like [`jax.lax.scan()`](index.html#jax.lax.scan), operate over pytrees of arrays.
JAX function transformations can be applied to functions that accept as input and produce as output pytrees of arrays.
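For instance, `jax.tree_util.tree_map()` applies a function to every leaf and rebuilds the same structure; a minimal sketch (the `params` pytree is illustrative):
```
import jax
import jax.numpy as jnp

params = {"w": jnp.ones((2, 3)), "b": jnp.zeros(3)}  # a pytree of arrays

# The result has the same tree structure, with each leaf transformed:
scaled = jax.tree_util.tree_map(lambda p: 0.1 * p, params)
print(jax.tree_util.tree_structure(scaled))
```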
#### Applying optional parameters to pytrees[#](#applying-optional-parameters-to-pytrees)
Some JAX function transformations take optional parameters that specify how certain input or output values should be treated (e.g. the `in_axes` and
`out_axes` arguments to [`vmap()`](index.html#jax.vmap)). These parameters can also be pytrees,
and their structure must correspond to the pytree structure of the corresponding arguments. In particular, to be able to “match up” leaves in these parameter pytrees with values in the argument pytrees, the parameter pytrees are often constrained to be tree prefixes of the argument pytrees.
For example, if we pass the following input to [`vmap()`](index.html#jax.vmap) (note that the input arguments to a function are considered a tuple):
```
(a1, {"k1": a2, "k2": a3})
```
We can use the following `in_axes` pytree to specify that only the `k2`
argument is mapped (`axis=0`) and the rest aren’t mapped over
(`axis=None`):
```
(None, {"k1": None, "k2": 0})
```
The optional parameter pytree structure must match that of the main input pytree. However, the optional parameters can optionally be specified as a
“prefix” pytree, meaning that a single leaf value can be applied to an entire sub-pytree. For example, if we have the same [`vmap()`](index.html#jax.vmap) input as above,
but wish to only map over the dictionary argument, we can use:
```
(None, 0) # equivalent to (None, {"k1": 0, "k2": 0})
```
Or, if we want every argument to be mapped, we can simply write a single leaf value that is applied over the entire argument tuple pytree:
```
0
```
This happens to be the default `in_axes` value for [`vmap()`](index.html#jax.vmap)!
The same logic applies to other optional parameters that refer to specific input or output values of a transformed function, e.g. `vmap`’s `out_axes`.
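As a runnable sketch of the prefix behavior described above (the arrays and function here are illustrative):
```
import jax
import jax.numpy as jnp

a1 = jnp.ones(())    # not mapped
a2 = jnp.ones(())    # "k1": not mapped
a3 = jnp.ones((4,))  # "k2": mapped over axis 0

def f(x, d):
    return x + d["k1"] + d["k2"]

out = jax.vmap(f, in_axes=(None, {"k1": None, "k2": 0}))(a1, {"k1": a2, "k2": a3})
print(out.shape)  # (4,)
```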
#### Viewing the pytree definition of an object[#](#viewing-the-pytree-definition-of-an-object)
To view the pytree definition of an arbitrary `object` for debugging purposes, you can use:
```
from jax.tree_util import tree_structure
print(tree_structure(object))
```
#### Developer information[#](#developer-information)
*This is primarily JAX internal documentation, end-users are not supposed to need to understand this to use JAX, except when registering new user-defined container types with JAX. Some of these details may change.*
##### Internal pytree handling[#](#internal-pytree-handling)
JAX flattens pytrees into lists of leaves at the `api.py` boundary (and also in control flow primitives). This keeps downstream JAX internals simpler:
transformations like [`grad()`](index.html#jax.grad), [`jit()`](index.html#jax.jit), and [`vmap()`](index.html#jax.vmap)
can handle user functions that accept and return the myriad different Python containers, while all the other parts of the system can operate on functions that only take (multiple) array arguments and always return a flat list of arrays.
When JAX flattens a pytree it will produce a list of leaves and a `treedef`
object that encodes the structure of the original value. The `treedef` can then be used to construct a matching structured value after transforming the leaves. Pytrees are tree-like, rather than DAG-like or graph-like, in that we handle them assuming referential transparency and that they can’t contain reference cycles.
Here is a simple example:
```
from jax.tree_util import tree_flatten, tree_unflatten
import jax.numpy as jnp

# The structured value to be transformed
value_structured = [1., (2., 3.)]
# The leaves in value_flat correspond to the `*` markers in value_tree
value_flat, value_tree = tree_flatten(value_structured)
print(f"{value_flat=}\n{value_tree=}")
# Transform the flat value list using an element-wise numeric transformer
transformed_flat = list(map(lambda v: v * 2., value_flat))
print(f"{transformed_flat=}")
# Reconstruct the structured output, using the original treedef
transformed_structured = tree_unflatten(value_tree, transformed_flat)
print(f"{transformed_structured=}")
```
```
value_flat=[1.0, 2.0, 3.0]
value_tree=PyTreeDef([*, (*, *)])
transformed_flat=[2.0, 4.0, 6.0]
transformed_structured=[2.0, (4.0, 6.0)]
```
By default, pytree containers can be lists, tuples, dicts, namedtuples, `None`, and
`OrderedDict`s. Other types of values, including numeric and ndarray values, are treated as leaves:
```
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
example_containers = [
(1., [2., 3.]),
(1., {'b': 2., 'a': 3.}),
1.,
None,
jnp.zeros(2),
Point(1., 2.)
]
def show_example(structured):
flat, tree = tree_flatten(structured)
unflattened = tree_unflatten(tree, flat)
print(f"{structured=}\n {flat=}\n {tree=}\n {unflattened=}")
for structured in example_containers:
show_example(structured)
```
```
structured=(1.0, [2.0, 3.0])
flat=[1.0, 2.0, 3.0]
tree=PyTreeDef((*, [*, *]))
unflattened=(1.0, [2.0, 3.0])
structured=(1.0, {'b': 2.0, 'a': 3.0})
flat=[1.0, 3.0, 2.0]
tree=PyTreeDef((*, {'a': *, 'b': *}))
unflattened=(1.0, {'a': 3.0, 'b': 2.0})
structured=1.0
flat=[1.0]
tree=PyTreeDef(*)
unflattened=1.0
structured=None
flat=[]
tree=PyTreeDef(None)
unflattened=None
structured=Array([0., 0.], dtype=float32)
flat=[Array([0., 0.], dtype=float32)]
tree=PyTreeDef(*)
unflattened=Array([0., 0.], dtype=float32)
structured=Point(x=1.0, y=2.0)
flat=[1.0, 2.0]
tree=PyTreeDef(CustomNode(namedtuple[Point], [*, *]))
unflattened=Point(x=1.0, y=2.0)
```
##### Extending pytrees[#](#extending-pytrees)
By default, any part of a structured value that is not recognized as an internal pytree node (i.e. container-like) is treated as a leaf:
```
class Special(object):
def __init__(self, x, y):
self.x = x
self.y = y
def __repr__(self):
return "Special(x={}, y={})".format(self.x, self.y)
show_example(Special(1., 2.))
```
```
structured=Special(x=1.0, y=2.0)
flat=[Special(x=1.0, y=2.0)]
tree=PyTreeDef(*)
unflattened=Special(x=1.0, y=2.0)
```
The set of Python types that are considered internal pytree nodes is extensible,
through a global registry of types, and values of registered types are traversed recursively. To register a new type, you can use
[`register_pytree_node()`](index.html#jax.tree_util.register_pytree_node):
```
from jax.tree_util import register_pytree_node
class RegisteredSpecial(Special):
def __repr__(self):
return "RegisteredSpecial(x={}, y={})".format(self.x, self.y)
def special_flatten(v):
"""Specifies a flattening recipe.
Params:
v: the value of registered type to flatten.
Returns:
a pair of an iterable with the children to be flattened recursively,
and some opaque auxiliary data to pass back to the unflattening recipe.
The auxiliary data is stored in the treedef for use during unflattening.
The auxiliary data could be used, e.g., for dictionary keys.
"""
children = (v.x, v.y)
aux_data = None
return (children, aux_data)
def special_unflatten(aux_data, children):
"""Specifies an unflattening recipe.
Params:
aux_data: the opaque data that was specified during flattening of the
current treedef.
children: the unflattened children
Returns:
a re-constructed object of the registered type, using the specified
children and auxiliary data.
"""
return RegisteredSpecial(*children)
# Global registration
register_pytree_node(
RegisteredSpecial,
special_flatten, # tell JAX what are the children nodes
special_unflatten # tell JAX how to pack back into a RegisteredSpecial
)
show_example(RegisteredSpecial(1., 2.))
```
```
structured=RegisteredSpecial(x=1.0, y=2.0)
flat=[1.0, 2.0]
tree=PyTreeDef(CustomNode(RegisteredSpecial[None], [*, *]))
unflattened=RegisteredSpecial(x=1.0, y=2.0)
```
Alternatively, you can define appropriate `tree_flatten` and `tree_unflatten` methods on your class and decorate it with [`register_pytree_node_class()`](index.html#jax.tree_util.register_pytree_node_class):
```
from jax.tree_util import register_pytree_node_class
@register_pytree_node_class
class RegisteredSpecial2(Special):
def __repr__(self):
return "RegisteredSpecial2(x={}, y={})".format(self.x, self.y)
def tree_flatten(self):
children = (self.x, self.y)
aux_data = None
return (children, aux_data)
@classmethod
def tree_unflatten(cls, aux_data, children):
return cls(*children)
show_example(RegisteredSpecial2(1., 2.))
```
```
structured=RegisteredSpecial2(x=1.0, y=2.0)
flat=[1.0, 2.0]
tree=PyTreeDef(CustomNode(RegisteredSpecial2[None], [*, *]))
unflattened=RegisteredSpecial2(x=1.0, y=2.0)
```
When defining unflattening functions, in general `children` should contain all the dynamic elements of the data structure (arrays, dynamic scalars, and pytrees), while
`aux_data` should contain all the static elements that will be rolled into the `treedef`
structure. JAX sometimes needs to compare `treedef` for equality, or compute its hash for use in the JIT cache, and so care must be taken to ensure that the auxiliary data specified in the flattening recipe supports meaningful hashing and equality comparisons.
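To illustrate, here is a hypothetical container (not part of JAX) whose static metadata rides in `aux_data`, so that it participates in `treedef` hashing and equality:
```
from jax.tree_util import register_pytree_node, tree_flatten

class Tagged:
    def __init__(self, tag, value):
        self.tag = tag      # static: hashable metadata, stored in the treedef
        self.value = value  # dynamic: traversed as a child leaf

register_pytree_node(
    Tagged,
    lambda t: ((t.value,), t.tag),                 # children, aux_data
    lambda tag, children: Tagged(tag, *children),  # rebuild from both parts
)

# Trees with different tags flatten to distinct treedefs:
_, td1 = tree_flatten(Tagged("a", 1.0))
_, td2 = tree_flatten(Tagged("b", 1.0))
print(td1 == td2)  # False: aux_data participates in treedef equality
```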
The whole set of functions for operating on pytrees are in [`jax.tree_util`](index.html#module-jax.tree_util).
##### Custom PyTrees and Initialization[#](#custom-pytrees-and-initialization)
One common gotcha with user-defined PyTree objects is that JAX transformations occasionally initialize them with unexpected values, so that any input validation done at initialization may fail. For example:
```
class MyTree:
def __init__(self, a):
self.a = jnp.asarray(a)
register_pytree_node(MyTree, lambda tree: ((tree.a,), None),
lambda _, args: MyTree(*args))
tree = MyTree(jnp.arange(5.0))
jax.vmap(lambda x: x)(tree) # Error because object() is passed to MyTree.
jax.jacobian(lambda x: x)(tree) # Error because MyTree(...) is passed to MyTree
```
In the first case, JAX’s internals use arrays of `object()` values to infer the structure of the tree; in the second case, the jacobian of a function mapping a tree to a tree is defined as a tree of trees.
For this reason, the `__init__` and `__new__` methods of custom PyTree classes should generally avoid doing any array conversion or other input validation, or else anticipate and handle these special cases. For example:
```
class MyTree:
def __init__(self, a):
if not (type(a) is object or a is None or isinstance(a, MyTree)):
a = jnp.asarray(a)
self.a = a
```
Another possibility is to structure your `tree_unflatten` function so that it avoids calling `__init__`; for example:
```
def tree_unflatten(aux_data, children):
del aux_data # unused in this class
obj = object.__new__(MyTree)
    obj.a, = children  # assign the single child directly, bypassing __init__
return obj
```
If you go this route, make sure that your `tree_unflatten` function stays in-sync with
`__init__` if and when the code is updated.
### Ahead-of-time lowering and compilation[#](#ahead-of-time-lowering-and-compilation)
JAX offers several transformations, such as `jax.jit` and `jax.pmap`, returning a function that is compiled and runs on accelerators or the CPU. As the JIT acronym indicates, all compilation happens *just-in-time* for execution.
Some situations call for *ahead-of-time* (AOT) compilation instead. When you want to fully compile prior to execution time, or you want control over when different parts of the compilation process take place, JAX has some options for you.
First, let’s review the stages of compilation. Suppose that `f` is a function/callable output by [`jax.jit()`](index.html#jax.jit), say `f = jax.jit(F)` for some input callable `F`. When it is invoked with arguments, say `f(x, y)` where `x` and `y`
are arrays, JAX does the following in order:
1. **Stage out** a specialized version of the original Python callable `F` to an internal representation. The specialization reflects a restriction of `F` to input types inferred from properties of the arguments `x` and `y` (usually their shape and element type).
2. **Lower** this specialized, staged-out computation to the XLA compiler’s input language, StableHLO.
3. **Compile** the lowered HLO program to produce an optimized executable for the target device (CPU, GPU, or TPU).
4. **Execute** the compiled executable with the arrays `x` and `y` as arguments.
JAX’s AOT API gives you direct control over steps #2, #3, and #4 (but [not
#1](#inspecting-staged-out-computations)), plus some other features along the way. An example:
```
>>> import jax
>>> import jax.numpy as jnp
>>> import numpy as np
>>> def f(x, y): return 2 * x + y
>>> x, y = 3, 4
>>> lowered = jax.jit(f).lower(x, y)
>>> # Print lowered HLO
>>> print(lowered.as_text())
module @jit_f.0 {
func.func public @main(%arg0: tensor<i32>, %arg1: tensor<i32>) -> tensor<i32> {
%0 = stablehlo.constant dense<2> : tensor<i32>
%1 = stablehlo.multiply %0, %arg0 : tensor<i32>
%2 = stablehlo.add %1, %arg1 : tensor<i32>
return %2 : tensor<i32>
}
}
>>> compiled = lowered.compile()
>>> # Query for cost analysis, print FLOP estimate
>>> compiled.cost_analysis()[0]['flops']
2.0
>>> # Execute the compiled function!
>>> compiled(x, y)
DeviceArray(10, dtype=int32)
```
See the [`jax.stages`](index.html#module-jax.stages) documentation for more details on what functionality the lowering and compiled functions provide.
In place of `jax.jit` above, you can also `lower(...)` the result of
[`jax.pmap()`](index.html#jax.pmap), as well as `pjit` and `xmap` (from
[`jax.experimental.pjit`](index.html#module-jax.experimental.pjit) and [`jax.experimental.maps`](index.html#module-jax.experimental.maps) respectively). In each case, you can `compile()` the result similarly.
All optional arguments to `jit`—such as `static_argnums`—are respected in the corresponding lowering, compilation, and execution. Again the same goes for
`pmap`, `pjit`, and `xmap`.
In the example above, we can replace the arguments to `lower` with any objects that have `shape` and `dtype` attributes:
```
>>> i32_scalar = jax.ShapeDtypeStruct((), jnp.dtype('int32'))
>>> jax.jit(f).lower(i32_scalar, i32_scalar).compile()(x, y)
DeviceArray(10, dtype=int32)
```
More generally, `lower` only needs its arguments to structurally supply what JAX must know for specialization and lowering. For typical array arguments like the ones above, this means `shape` and `dtype` fields. For static arguments, by contrast, JAX needs actual array values (more on this
[below](#lowering-with-static-arguments)).
Invoking an AOT-compiled function with arguments that are incompatible with its lowering raises an error:
```
>>> x_1d = y_1d = jnp.arange(3)
>>> jax.jit(f).lower(i32_scalar, i32_scalar).compile()(x_1d, y_1d)
...
TypeError: Argument types differ from the types for which this computation was compiled. The mismatches are:
Argument 'x' compiled with int32[] and called with int32[3]
Argument 'y' compiled with int32[] and called with int32[3]
>>> x_f = y_f = jnp.float32(72.)
>>> jax.jit(f).lower(i32_scalar, i32_scalar).compile()(x_f, y_f)
...
TypeError: Argument types differ from the types for which this computation was compiled. The mismatches are:
Argument 'x' compiled with int32[] and called with float32[]
Argument 'y' compiled with int32[] and called with float32[]
```
Relatedly, AOT-compiled functions [cannot be transformed by JAX’s just-in-time transformations](#aot-compiled-functions-cannot-be-transformed) such as
`jax.jit`, [`jax.grad()`](index.html#jax.grad), and [`jax.vmap()`](index.html#jax.vmap).
#### Lowering with static arguments[#](#lowering-with-static-arguments)
Lowering with static arguments underscores the interaction between options passed to `jax.jit`, the arguments passed to `lower`, and the arguments needed to invoke the resulting compiled function. Continuing with our example above:
```
>>> lowered_with_x = jax.jit(f, static_argnums=0).lower(7, 8)
>>> # Lowered HLO, specialized to the *value* of the first argument (7)
>>> print(lowered_with_x.as_text())
module @jit_f.1 {
func.func public @main(%arg0: tensor<i32>) -> tensor<i32> {
%0 = stablehlo.constant dense<14> : tensor<i32>
%1 = stablehlo.add %0, %arg0 : tensor<i32>
return %1 : tensor<i32>
}
}
>>> lowered_with_x.compile()(5)
DeviceArray(19, dtype=int32)
```
Note that `lower` here takes two arguments as usual, but the subsequent compiled function accepts only the remaining non-static second argument. The static first argument (value 7) is taken as a constant at lowering time and built into the lowered computation, where it is possibly folded in with other constants. In this case, its multiplication by 2 is simplified, resulting in the constant 14.
Although the second argument to `lower` above can be replaced by a hollow shape/dtype structure, it is necessary that the static first argument be a concrete value. Otherwise, lowering would err:
```
>>> jax.jit(f, static_argnums=0).lower(i32_scalar, i32_scalar)
TypeError: unsupported operand type(s) for *: 'int' and 'ShapeDtypeStruct'
>>> jax.jit(f, static_argnums=0).lower(10, i32_scalar).compile()(5)
DeviceArray(25, dtype=int32)
```
#### AOT-compiled functions cannot be transformed[#](#aot-compiled-functions-cannot-be-transformed)
Compiled functions are specialized to a particular set of argument “types,” such as arrays with a specific shape and element type in our running example. From JAX’s internal point of view, transformations such as [`jax.vmap()`](index.html#jax.vmap) alter the type signature of functions in a way that invalidates the compiled-for type signature. As a policy, JAX simply disallows compiled functions to be involved in transformations. Example:
```
>>> def g(x):
... assert x.shape == (3, 2)
... return x @ jnp.ones(2)
>>> def make_z(*shape):
... return jnp.arange(np.prod(shape)).reshape(shape)
>>> z, zs = make_z(3, 2), make_z(4, 3, 2)
>>> g_jit = jax.jit(g)
>>> g_aot = jax.jit(g).lower(z).compile()
>>> jax.vmap(g_jit)(zs)
DeviceArray([[ 1., 5., 9.],
[13., 17., 21.],
[25., 29., 33.],
[37., 41., 45.]], dtype=float32)
>>> jax.vmap(g_aot)(zs)
TypeError: Cannot apply JAX transformations to a function lowered and compiled for a particular signature. Detected argument of Tracer type <class 'jax.interpreters.batching.BatchTracer'>.
```
A similar error is raised when `g_aot` is involved in autodiff
(e.g. [`jax.grad()`](index.html#jax.grad)). For consistency, transformation by `jax.jit` is disallowed as well, even though `jit` does not meaningfully modify its argument’s type signature.
#### Debug information and analyses, when available[#](#debug-information-and-analyses-when-available)
In addition to the primary AOT functionality (separate and explicit lowering,
compilation, and execution), JAX’s various AOT stages also offer some additional features to help with debugging and gathering compiler feedback.
For instance, as the initial example above shows, lowered functions often offer a text representation. Compiled functions do the same, and also offer cost and memory analyses from the compiler. All of these are provided via methods on the
[`jax.stages.Lowered`](index.html#jax.stages.Lowered) and [`jax.stages.Compiled`](index.html#jax.stages.Compiled) objects (e.g.,
`lowered.as_text()` and `compiled.cost_analysis()` above).
These methods are meant as an aid for manual inspection and debugging, not as a reliably programmable API. Their availability and output vary by compiler,
platform, and runtime. This makes for two important caveats:
1. If some functionality is unavailable on JAX’s current backend, then the method for it returns something trivial (and `False`-like). For example, if the compiler underlying JAX does not provide a cost analysis, then
`compiled.cost_analysis()` will be `None`.
2. If some functionality is available, there are still very limited guarantees on what the corresponding method provides. The return value is not required to be consistent—in type, structure, or value—across JAX configurations,
backends/platforms, versions, or even invocations of the method. JAX cannot guarantee that the output of `compiled.cost_analysis()` on one day will remain the same on the following day.
When in doubt, see the package API documentation for [`jax.stages`](index.html#module-jax.stages).
#### Inspecting staged-out computations[#](#inspecting-staged-out-computations)
Stage #1 in the list at the top of this note mentions specialization and staging, prior to lowering. JAX’s internal notion of a function specialized to the types of its arguments is not always a reified data structure in memory. To explicitly construct a view of JAX’s specialization of a function in the internal [Jaxpr intermediate language](https://jax.readthedocs.io/en/latest/jaxpr.html), see
[`jax.make_jaxpr()`](index.html#jax.make_jaxpr).
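For example, a minimal sketch:
```
import jax

def f(x, y):
    return 2 * x + y

# make_jaxpr stages f out and shows its specialization to these argument types:
print(jax.make_jaxpr(f)(3, 4))
```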
### JAX Errors[#](#jax-errors)
This page lists a few of the errors you might encounter when using JAX,
along with representative examples of how one might fix them.
*class* jax.errors.ConcretizationTypeError(*tracer*, *context=''*)[#](#jax.errors.ConcretizationTypeError)
This error occurs when a JAX Tracer object is used in a context where a concrete value is required (see [Different kinds of JAX values](index.html#faq-different-kinds-of-jax-values)
for more on what a Tracer is). In some situations, it can be easily fixed by marking problematic values as static; in others, it may indicate that your program is doing operations that are not directly supported by JAX’s JIT compilation model.
Examples:
**Traced value where static value is expected**

One common cause of this error is using a traced value where a static value is required. For example:
```
>>> from functools import partial
>>> from jax import jit
>>> import jax.numpy as jnp
>>> @jit
... def func(x, axis):
... return x.min(axis)
```
```
>>> func(jnp.arange(4), 0)
Traceback (most recent call last):
...
ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: axis argument to jnp.min().
```
This can often be fixed by marking the problematic argument as static:
```
>>> @partial(jit, static_argnums=1)
... def func(x, axis):
... return x.min(axis)
>>> func(jnp.arange(4), 0)
Array(0, dtype=int32)
```
**Shape depends on Traced Value**

Such an error may also arise when a shape in your JIT-compiled computation depends on the values within a traced quantity. For example:
```
>>> @jit
... def func(x):
... return jnp.where(x < 0)
>>> func(jnp.arange(4))
Traceback (most recent call last):
...
ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected:
The error arose in jnp.nonzero.
```
This is an example of an operation that is incompatible with JAX’s JIT compilation model, which requires array sizes to be known at compile-time.
Here the size of the returned array depends on the contents of x, and such code cannot be JIT compiled.
In many cases it is possible to work around this by modifying the logic used in the function; for example here is code with a similar issue:
```
>>> @jit
... def func(x):
... indices = jnp.where(x > 1)
... return x[indices].sum()
>>> func(jnp.arange(4))
Traceback (most recent call last):
...
ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: The error arose in jnp.nonzero.
```
And here is how you might express the same operation in a way that avoids creation of a dynamically-sized index array:
```
>>> @jit
... def func(x):
... return jnp.where(x > 1, x, 0).sum()
>>> func(jnp.arange(4))
Array(5, dtype=int32)
```
To understand more subtleties having to do with tracers vs. regular values,
and concrete vs. abstract values, you may want to read
[Different kinds of JAX values](index.html#faq-different-kinds-of-jax-values).
Parameters:
* **tracer** (`Tracer`) –
* **context** ([`str`](https://docs.python.org/3/library/stdtypes.html#str)) –
*class* jax.errors.NonConcreteBooleanIndexError(*tracer*)[#](#jax.errors.NonConcreteBooleanIndexError)
This error occurs when a program attempts to use non-concrete boolean indices in a traced indexing operation. Under JIT compilation, JAX arrays must have static shapes (i.e. shapes that are known at compile-time) and so boolean masks must be used carefully. Some logic implemented via boolean masking is simply not possible in a [`jax.jit()`](index.html#jax.jit) function; in other cases, the logic can be re-expressed in a JIT-compatible way, often using the three-argument version of [`where()`](index.html#jax.numpy.where).
Following are a few examples of when this error might arise.
**Constructing arrays via boolean masking**

This most commonly arises when attempting to create an array via a boolean mask within a JIT context. For example:
```
>>> import jax
>>> import jax.numpy as jnp
>>> @jax.jit
... def positive_values(x):
... return x[x > 0]
>>> positive_values(jnp.arange(-5, 5))
Traceback (most recent call last):
...
NonConcreteBooleanIndexError: Array boolean indices must be concrete: ShapedArray(bool[10])
```
This function is attempting to return only the positive values in the input array; the size of this returned array cannot be determined at compile-time unless x is marked as static, and so operations like this cannot be performed under JIT compilation.
**Reexpressible Boolean Logic**

Although creating dynamically sized arrays is not supported directly, in many cases it is possible to re-express the logic of the computation in terms of a JIT-compatible operation. For example, here is another function that fails under JIT for the same reason:
```
>>> @jax.jit
... def sum_of_positive(x):
... return x[x > 0].sum()
>>> sum_of_positive(jnp.arange(-5, 5))
Traceback (most recent call last):
...
NonConcreteBooleanIndexError: Array boolean indices must be concrete: ShapedArray(bool[10])
```
In this case, however, the problematic array is only an intermediate value,
and we can instead express the same logic in terms of the JIT-compatible three-argument version of [`jax.numpy.where()`](index.html#jax.numpy.where):
```
>>> @jax.jit
... def sum_of_positive(x):
... return jnp.where(x > 0, x, 0).sum()
>>> sum_of_positive(jnp.arange(-5, 5))
Array(10, dtype=int32)
```
This pattern of replacing boolean masking with three-argument
[`where()`](index.html#jax.numpy.where) is a common solution to this sort of problem.
**Boolean indexing into JAX arrays**

The other situation where this error often arises is when using boolean indices, such as with `.at[...].set(...)`. Here is a simple example:
```
>>> @jax.jit
... def manual_clip(x):
... return x.at[x < 0].set(0)
>>> manual_clip(jnp.arange(-2, 2))
Traceback (most recent call last):
...
NonConcreteBooleanIndexError: Array boolean indices must be concrete: ShapedArray(bool[4])
```
This function is attempting to set values smaller than zero to a scalar fill value. As above, this can be addressed by re-expressing the logic in terms of [`where()`](index.html#jax.numpy.where):
```
>>> @jax.jit
... def manual_clip(x):
... return jnp.where(x < 0, 0, x)
>>> manual_clip(jnp.arange(-2, 2))
Array([0, 0, 0, 1], dtype=int32)
```
Parameters:
**tracer** (`Tracer`) –
*class* jax.errors.TracerArrayConversionError(*tracer*)[#](#jax.errors.TracerArrayConversionError)
This error occurs when a program attempts to convert a JAX Tracer object into a standard NumPy array (see [Different kinds of JAX values](index.html#faq-different-kinds-of-jax-values) for more on what a Tracer is). It typically occurs in one of a few situations.
**Using non-JAX functions in JAX transformations**

This error can occur if you attempt to use a non-JAX library like `numpy`
or `scipy` inside a JAX transformation ([`jit()`](index.html#jax.jit), [`grad()`](index.html#jax.grad),
[`jax.vmap()`](index.html#jax.vmap), etc.). For example:
```
>>> from jax import jit
>>> import numpy as np
>>> @jit
... def func(x):
... return np.sin(x)
>>> func(np.arange(4))
Traceback (most recent call last):
...
TracerArrayConversionError: The numpy.ndarray conversion method
__array__() was called on traced array with shape int32[4]
```
In this case, you can fix the issue by using [`jax.numpy.sin()`](index.html#jax.numpy.sin) in place of
`numpy.sin()`:
```
>>> import jax.numpy as jnp
>>> @jit
... def func(x):
... return jnp.sin(x)
>>> func(jnp.arange(4))
Array([0. , 0.84147096, 0.9092974 , 0.14112 ], dtype=float32)
```
See also [External Callbacks](https://jax.readthedocs.io/en/latest/notebooks/external_callbacks.html) for options for calling back to host-side computations from transformed JAX code.
**Indexing a numpy array with a tracer**

If this error arises on a line that involves array indexing, it may be that the array being indexed `x` is a standard numpy.ndarray while the indices
`idx` are traced JAX arrays. For example:
```
>>> x = np.arange(10)
>>> @jit
... def func(i):
... return x[i]
>>> func(0)
Traceback (most recent call last):
...
TracerArrayConversionError: The numpy.ndarray conversion method
__array__() was called on traced array with shape int32[0]
```
Depending on the context, you may fix this by converting the numpy array into a JAX array:
```
>>> @jit
... def func(i):
... return jnp.asarray(x)[i]
>>> func(0)
Array(0, dtype=int32)
```
or by declaring the index as a static argument:
```
>>> from functools import partial
>>> @partial(jit, static_argnums=(0,))
... def func(i):
... return x[i]
>>> func(0)
Array(0, dtype=int32)
```
To understand more subtleties having to do with tracers vs. regular values,
and concrete vs. abstract values, you may want to read
[Different kinds of JAX values](index.html#faq-different-kinds-of-jax-values).
Parameters:
**tracer** (`Tracer`) –
*class* jax.errors.TracerBoolConversionError(*tracer*)[#](#jax.errors.TracerBoolConversionError)
This error occurs when a traced value in JAX is used in a context where a boolean value is expected (see [Different kinds of JAX values](index.html#faq-different-kinds-of-jax-values)
for more on what a Tracer is).
The boolean cast may be an explicit (e.g. `bool(x)`) or implicit, through use of control flow (e.g. `if x > 0` or `while x`), use of Python boolean operators (e.g. `z = x and y`, `z = x or y`, `z = not x`) or functions that use them (e.g. `z = max(x, y)`, `z = min(x, y)` etc.).
In some situations, this problem can be easily fixed by marking traced values as static; in others, it may indicate that your program is doing operations that are not directly supported by JAX’s JIT compilation model.
Examples:
**Traced value used in control flow**

One case where this often arises is when a traced value is used in Python control flow. For example:
```
>>> from jax import jit
>>> import jax.numpy as jnp
>>> @jit
... def func(x, y):
... return x if x.sum() < y.sum() else y
>>> func(jnp.ones(4), jnp.zeros(4))
Traceback (most recent call last):
...
TracerBoolConversionError: Attempted boolean conversion of JAX Tracer [...]
```
We could mark both inputs `x` and `y` as static, but that would defeat the purpose of using [`jax.jit()`](index.html#jax.jit) here. Another option is to re-express the if statement in terms of the three-term [`jax.numpy.where()`](index.html#jax.numpy.where):
```
>>> @jit
... def func(x, y):
... return jnp.where(x.sum() < y.sum(), x, y)
>>> func(jnp.ones(4), jnp.zeros(4))
Array([0., 0., 0., 0.], dtype=float32)
```
For more complicated control flow including loops, see
[Control flow operators](index.html#lax-control-flow).
**Control flow on traced values**

Another common cause of this error is if you inadvertently trace over a boolean flag. For example:
```
>>> @jit
... def func(x, normalize=True):
... if normalize:
... return x / x.sum()
... return x
>>> func(jnp.arange(5), True)
Traceback (most recent call last):
...
TracerBoolConversionError: Attempted boolean conversion of JAX Tracer ...
```
Here because the flag `normalize` is traced, it cannot be used in Python control flow. In this situation, the best solution is probably to mark this value as static:
```
>>> from functools import partial
>>> @partial(jit, static_argnames=['normalize'])
... def func(x, normalize=True):
... if normalize:
... return x / x.sum()
... return x
>>> func(jnp.arange(5), True)
Array([0. , 0.1, 0.2, 0.3, 0.4], dtype=float32)
```
For more on `static_argnums`, see the documentation of [`jax.jit()`](index.html#jax.jit).
**Using non-JAX aware functions**

Another common cause of this error is using non-JAX aware functions within JAX code. For example:
```
>>> @jit
... def func(x):
... return min(x, 0)
```
```
>>> func(2)
Traceback (most recent call last):
...
TracerBoolConversionError: Attempted boolean conversion of JAX Tracer ...
```
In this case, the error occurs because Python’s built-in `min` function is not compatible with JAX transforms. This can be fixed by replacing it with
`jnp.minimum`:
```
>>> @jit
... def func(x):
... return jnp.minimum(x, 0)
```
```
>>> print(func(2))
0
```
To understand more subtleties having to do with tracers vs. regular values,
and concrete vs. abstract values, you may want to read
[Different kinds of JAX values](index.html#faq-different-kinds-of-jax-values).
Parameters:
**tracer** (`Tracer`) –
*class* jax.errors.TracerIntegerConversionError(*tracer*)[#](#jax.errors.TracerIntegerConversionError)
This error can occur when a JAX Tracer object is used in a context where a Python integer is expected (see [Different kinds of JAX values](index.html#faq-different-kinds-of-jax-values) for more on what a Tracer is). It typically occurs in a few situations.
**Passing a tracer in place of an integer**

This error can occur if you attempt to pass a traced value to a function that requires a static integer argument; for example:
```
>>> from jax import jit
>>> import numpy as np
>>> @jit
... def func(x, axis):
... return np.split(x, 2, axis)
>>> func(np.arange(4), 0)
Traceback (most recent call last):
...
TracerIntegerConversionError: The __index__() method was called on traced array with shape int32[0]
```
When this happens, the solution is often to mark the problematic argument as static:
```
>>> from functools import partial
>>> @partial(jit, static_argnums=1)
... def func(x, axis):
... return np.split(x, 2, axis)
>>> func(np.arange(10), 0)
[Array([0, 1, 2, 3, 4], dtype=int32),
Array([5, 6, 7, 8, 9], dtype=int32)]
```
An alternative is to apply the transformation to a closure that encapsulates the arguments to be protected, either manually as below or by using
[`functools.partial()`](https://docs.python.org/3/library/functools.html#functools.partial):
```
>>> jit(lambda arr: np.split(arr, 2, 0))(np.arange(4))
[Array([0, 1], dtype=int32), Array([2, 3], dtype=int32)]
```
**Note that a new closure is created at every invocation, which defeats the compilation caching mechanism; this is why `static_argnums` is preferred.**
**Indexing a list with a Tracer**

This error can occur if you attempt to index a Python list with a traced quantity.
For example:
```
>>> import jax.numpy as jnp
>>> from jax import jit
>>> L = [1, 2, 3]
>>> @jit
... def func(i):
... return L[i]
>>> func(0)
Traceback (most recent call last):
...
TracerIntegerConversionError: The __index__() method was called on traced array with shape int32[0]
```
Depending on the context, you can generally fix this either by converting the list to a JAX array:
```
>>> @jit
... def func(i):
... return jnp.array(L)[i]
>>> func(0)
Array(1, dtype=int32)
```
or by declaring the index as a static argument:
```
>>> from functools import partial
>>> @partial(jit, static_argnums=0)
... def func(i):
... return L[i]
>>> func(0)
Array(1, dtype=int32, weak_type=True)
```
To understand more subtleties having to do with tracers vs. regular values,
and concrete vs. abstract values, you may want to read
[Different kinds of JAX values](index.html#faq-different-kinds-of-jax-values).
Parameters:
**tracer** (`Tracer`) –
*class* jax.errors.UnexpectedTracerError(*msg*)[#](#jax.errors.UnexpectedTracerError)
This error occurs when you use a JAX value that has leaked out of a function.
What does it mean to leak a value? If you use a JAX transformation on a function `f` that stores, in some scope outside of `f`, a reference to an intermediate value, that value is considered to have been leaked.
Leaking values is a side effect. (Read more about avoiding side effects in
[Pure Functions](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#pure-functions))
JAX detects leaks when you then use the leaked value in another operation later on, at which point it raises an `UnexpectedTracerError`.
To fix this, avoid side effects: if a function computes a value needed in an outer scope, return that value from the transformed function explicitly.
Specifically, a `Tracer` is JAX’s internal representation of a function’s intermediate values during transformations, e.g. within [`jit()`](index.html#jax.jit),
[`pmap()`](index.html#jax.pmap), [`vmap()`](index.html#jax.vmap), etc. Encountering a `Tracer` outside of a transformation implies a leak.
**Life-cycle of a leaked value**

Consider the following example of a transformed function which leaks a value to an outer scope:
```
>>> from jax import jit
>>> import jax.numpy as jnp
>>> outs = []
>>> @jit # 1
... def side_effecting(x):
... y = x + 1 # 3
... outs.append(y) # 4
>>> x = 1
>>> side_effecting(x) # 2
>>> outs[0] + 1 # 5
Traceback (most recent call last):
...
UnexpectedTracerError: Encountered an unexpected tracer.
```
In this example we leak a Traced value from an inner transformed scope to an outer scope. We get an `UnexpectedTracerError` when the leaked value is used, not when the value is leaked.
This example also demonstrates the life-cycle of a leaked value:
> 1. A function is transformed (in this case, by [`jit()`](index.html#jax.jit))
> 2. The transformed function is called (initiating an abstract trace of the
> function and turning `x` into a `Tracer`)
> 3. The intermediate value `y`, which will later be leaked, is created
> (an intermediate value of a traced function is also a `Tracer`)
> 4. The value is leaked (appended to a list in an outer scope, escaping
> the function through a side-channel)
> 5. The leaked value is used, and an UnexpectedTracerError is raised.
The UnexpectedTracerError message tries to point to these locations in your code by including information about each stage. Respectively:
> 1. The name of the transformed function (`side_effecting`) and which
> transform kicked off the trace ([`jit()`](index.html#jax.jit)).
> 2. A reconstructed stack trace of where the leaked Tracer was created,
> which includes where the transformed function was called.
> (`When the Tracer was created, the final 5 stack frames were...`).
> 3. From the reconstructed stack trace, the line of code that created
> the leaked Tracer.
> 4. The leak location is not included in the error message because it is
> difficult to pin down! JAX can only tell you what the leaked value
> looks like (what shape it has and where it was created) and what
> boundary it was leaked over (the name of the transformation and the
> name of the transformed function).
> 5. The current error’s stack trace points to where the value is used.
The error can be fixed by returning the value from the transformed function:
```
>>> from jax import jit
>>> import jax.numpy as jnp
>>> outs = []
>>> @jit
... def not_side_effecting(x):
... y = x+1
... return y
>>> x = 1
>>> y = not_side_effecting(x)
>>> outs.append(y)
>>> outs[0] + 1 # all good! no longer a leaked value.
Array(3, dtype=int32, weak_type=True)
```
**Leak checker**

As discussed in points 2 and 3 above, JAX shows a reconstructed stack trace which points to where the leaked value was created. This is because JAX only raises an error when the leaked value is used, not when the value is leaked. This is not the most useful place to raise this error,
because you need to know the location where the Tracer was leaked to fix the error.
To make this location easier to track down, you can use the leak checker.
When the leak checker is enabled, an error is raised as soon as a `Tracer`
is leaked. (To be more exact, it will raise an error when the transformed function from which the `Tracer` is leaked returns.)
To enable the leak checker you can use the `JAX_CHECK_TRACER_LEAKS`
environment variable or the `with jax.checking_leaks()` context manager.
Note
Note that this tool is experimental and may report false positives. It works by disabling some JAX caches, so it will have a negative effect on performance and should only be used when debugging.
Example usage:
```
>>> from jax import jit
>>> import jax.numpy as jnp
>>> outs = []
>>> @jit
... def side_effecting(x):
... y = x+1
... outs.append(y)
>>> x = 1
>>> with jax.checking_leaks():
... y = side_effecting(x)
Traceback (most recent call last):
...
Exception: Leaked Trace
```
Parameters:
**msg** ([`str`](https://docs.python.org/3/library/stdtypes.html#str)) –
### Transfer guard[#](#transfer-guard)
JAX may transfer data between the host and devices and between devices during type conversion and input sharding. To log or disallow any unintended transfers, the user may configure a JAX transfer guard.
JAX transfer guards distinguish between two types of transfers:
* Explicit transfers: `jax.device_put*()` and `jax.device_get()` calls.
* Implicit transfers: Other transfers (e.g., printing a `DeviceArray`).
A transfer guard can take an action based on its guard level:
* `"allow"`: Silently allow all transfers (default).
* `"log"`: Log and allow implicit transfers. Silently allow explicit transfers.
* `"disallow"`: Disallow implicit transfers. Silently allow explicit transfers.
* `"log_explicit"`: Log and allow all transfers.
* `"disallow_explicit"`: Disallow all transfers.
JAX will raise a `RuntimeError` when disallowing a transfer.
The transfer guards use the standard JAX configuration system:
* A `--jax_transfer_guard=GUARD_LEVEL` command-line flag and
`jax.config.update("jax_transfer_guard", GUARD_LEVEL)` will set the global option.
* A `with jax.transfer_guard(GUARD_LEVEL): ...` context manager will set the thread-local option within the scope of the context manager.
Note that similar to other JAX configuration options, a newly spawned thread will use the global option instead of any active thread-local option of the scope where the thread was spawned.
The transfer guards can also be applied more selectively, based on the direction of transfer. The flag and context manager name is suffixed with a corresponding transfer direction (e.g., `--jax_transfer_guard_host_to_device`
and `jax.config.transfer_guard_host_to_device`):
* `"host_to_device"`: Converting a Python value or NumPy array into a JAX on-device buffer.
* `"device_to_device"`: Copying a JAX on-device buffer to a different device.
* `"device_to_host"`: Fetching a JAX on-device buffer.
Fetching a buffer on a CPU device is always allowed regardless of the transfer guard level.
The following shows an example of using the transfer guard.
```
>>> jax.config.update("jax_transfer_guard", "allow") # This is default.
>>>
>>> x = jnp.array(1)
>>> y = jnp.array(2)
>>> z = jnp.array(3)
>>>
>>> print("x", x) # All transfers are allowed.
x 1
>>> with jax.transfer_guard("disallow"):
... print("x", x) # x has already been fetched into the host.
... print("y", jax.device_get(y)) # Explicit transfers are allowed.
... try:
... print("z", z) # Implicit transfers are disallowed.
... assert False, "This line is expected to be unreachable."
... except:
... print("z could not be fetched")
x 1
y 2
z could not be fetched
```
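The direction-specific guards work the same way; a minimal sketch, assuming the direction-suffixed context manager is exposed at the top level like `jax.transfer_guard` above:
```
import jax
import jax.numpy as jnp

# Log only host-to-device transfers in this scope; other directions keep
# their current guard levels.
with jax.transfer_guard_host_to_device("log"):
    x = jnp.asarray(1)  # the implicit host-to-device transfer is logged
```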
### Pallas: a JAX kernel language[#](#pallas-a-jax-kernel-language)
Pallas is an extension to JAX that enables writing custom kernels for GPU and TPU.
This section contains tutorials, guides and examples for using Pallas.
#### Pallas Design[#](#pallas-design)
In this document, we explain the initial Pallas design. This is a snapshot of some of the earlier design decisions, and Pallas’s specific APIs might have changed since.
##### Introduction[#](#introduction)
JAX is being used for a diverse set of workloads, from large scale machine learning to scientific computing. JAX’s success story is as much a success story for XLA, the primary compiler that JAX targets – XLA compiles JAX programs for accelerators and has enabled JAX to scale to the largest ML models. JAX describes logical computations in XLA’s representation, HLO. HLO describes how computations happen logically but not physically. Given a logical HLO computation, XLA decides how that computation is to be executed physically. For a wide variety of ML applications, XLA does a good job of compiling user programs but inevitably some users hit XLA’s limitations. In these cases, we need to provide an “escape hatch” to allow experts to write hand-tuned kernels that outperform XLA at that point in time. Furthermore, advances in ML systems research take some time to be incorporated into XLA and users often want to run ahead with them. Over time, the compiler can incorporate the optimizations that were proven out experimentally through hand-tuned kernels.
XLA does offer the `CustomCall` mechanism as an escape hatch, but it requires users to write C++ and on GPU it requires users to learn the CUDA programming model. The CUDA programming model is arguably too low-level for many machine learning GPU kernels, like matrix multiplication, and even expert users will have trouble using CUDA to implement efficient matrix multiplication or multi-headed attention. Not only this, JAX users are usually familiar with Python and NumPy-style array programming which doesn’t involve writing any C++ or thinking about GPU parallelism. All popular machine learning frameworks share this idea: manipulating (usually) arrays with high level operations like `matmul` or `convolution`. Unfortunately, this means implementing a custom operation via `CustomCall` is a big investment, involving potentially learning C++ and/or GPU programming.
[Triton](https://triton-lang.org/main/index.html), a GPU compiler built and maintained by OpenAI, has taken the ML compiler world by storm. Triton offers the best of both worlds: an array-based programming model for GPU kernels. Triton is the primary code generation route for `torch.compile` in PyTorch 2.0, via the Torch Inductor library. Triton actively hides some aspects of GPU programming in the name of a more accessible programming model that can be used from Python and to generate optimized code from a higher-level representation. While GPUs are more flexible than what Triton offers, in the ML domain, Triton seems to be expressive enough for many applications.
In this document, we describe Pallas, an extension to JAX that enables kernel programming for both GPUs and TPUs using a Triton-like model. A JAX-based kernel language offers several advantages:
* Although Triton exposes a TPU-like programming model to users, i.e. writing programs for tiles of arrays in L1-cache, it is specialized enough to GPU that we cannot directly compile Triton for TPU. For example, Triton offers atomic operations specifically meant to handle parallel writes that don’t necessarily make sense on TPU. A higher level front end can abstract away details of the platform while surfacing just that tile-based programming model. The kernels will thus be portable across different hardware platforms.
* JAX as a tracing-based frontend for numerical computing is both mature and well-used. By embedding the kernel programming language in JAX itself, we can re-use JAX’s tracing infrastructure and provide a NumPy-like frontend that’s already familiar to users.
* JAX transformations are key to its success, allowing users to express simple programs but transform them to achieve complex functionality. We can leverage the same transformations (vmap, jvp, etc.) to transform user-written kernels.
The open question is: is JAX a good fit for a kernel language at all? We think so. Triton demonstrates that an array programming language can be practical for writing GPU kernels and JAX is just that. JAX has also proven to be a flexible front-end for compilers and for program transformations.
We describe Pallas as follows: we first describe the ways in which we extend JAX to support writing custom kernels. We then show how we can lower Pallas to both Triton and Mosaic. We conclude by describing existing and potential ways to transform Pallas kernels via JAX transformations.
*Figure: visualization of Pallas lowering paths.*
##### Pallas: Extending JAX for kernels[#](#pallas-extending-jax-for-kernels)
The key point we’d like to make is that Pallas is just JAX, with some extensions:
1. Users now use reference types called `Ref`s in their JAX code. This gives users more precise control over memory access, and layout in JAX will more closely resemble physical layout.
2. Users write their JAX programs using a subset of JAX primitives, along with a set of Pallas-specific primitives.
3. Users embed their Pallas kernels in an outer JAX program via a special `pallas_call` higher-order function that executes the kernel in a map. It is analogous to `pmap` or `shard_map`, except with references to shared memory.
We’ll go over these three extensions one at a time, by example.
Note that these APIs are still experimental and subject to change.
###### Reference types[#](#reference-types)
Let’s look at an example Pallas program for adding two vectors:
```
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

def add_kernel(x_ref, y_ref, o_ref):
    # In this code, `x_ref`, `y_ref` and `o_ref` are (8,)-shaped `Ref`s
    x = x_ref[:]
    y = y_ref[:]
    o_ref[:] = x + y

x, y = jnp.arange(8), jnp.arange(8, 16)
add = pl.pallas_call(add_kernel, out_shape=jax.ShapeDtypeStruct((8,), jnp.int32))
add(x, y)
```
Unlike a regular JAX program, `add_kernel` does not receive immutable array arguments. Instead, it’s provided with references that can be read from and updated in-place using NumPy-like syntax. `Ref`s are not a Pallas-specific concept – they were introduced to JAX to represent stateful computations. However, we can leverage them when writing kernels that operate on mutable memory too.
Pallas kernels not only receive `Ref`s corresponding to the inputs to the kernel, but also receive `Ref`s for the outputs as well (specified in `pallas_call` via `out_shape`). `Ref`s are special types that cannot be passed into the usual set of JAX primitives without being read from first. When you read from a `Ref` you get a JAX `Array` type out, and you must write an `Array` into a `Ref`.
###### Reading from/writing into Refs[#](#reading-from-writing-into-refs)
Reading from a `Ref` corresponds to loading an array into the lowest level of the memory hierarchy (L1-cache on GPU and vector registers on TPU). Writing into a `Ref` is analogous.
```
def f(x_ref, o_ref):
    # Using vanilla Python indexing
    x = x_ref[0, 2:5, :]
    # Or via NumPy advanced int indexing
    o_ref[jnp.arange(3), :] = x
```

Note that in order to use NumPy advanced int indexing, you need to broadcast the indices against each other into the desired multidimensional shape:

```
def f(x_ref):
    # Assume x_ref is (8, 4) and we want to read out a (2, 3) slice
    x = x_ref[jnp.arange(2)[..., None], jnp.arange(3)[None, ...]]
```
Writing to `Ref`s can be done via analogous `__setitem__` style indexing.
Other forms of indexing (for example, dynamic slicing) can be done via `pallas.load` and `pallas.store`, new JAX primitives designed to make loading from/storing into memory easier. We’ll discuss these new primitives later.
###### Extending JAX with new Pallas primitives[#](#extending-jax-with-new-pallas-primitives)
Because JAX was designed with HLO in mind, the set of JAX primitives closely mirrors the set of HLO operations. Targeting a new compiler (e.g. Triton or Mosaic) means we might need to supplement JAX’s primitives with new ones specific to the new compiler. At the same time, we may not be able to lower all JAX primitives, so we need to restrict it to a subset.
Because Pallas was initially designed with Triton in mind, we offer a set of new primitives targeting the Triton programming model. As we’ll show later, we can lower these primitives to Mosaic as well.
###### `pallas.load` and `pallas.store`[#](#pallas-load-and-pallas-store)
`pallas.load` and `pallas.store` are primitives that allow loading from memory and storing into memory. Unlike `__getitem__` and `__setitem__` they are more flexible at the cost of being more verbose. Specifically, you can use the `pallas.dynamic_slice` (`pallas.ds` for short) construct (which should maybe be upstreamed into JAX to be used with Ref `__getitem__` and `__setitem__`).
```
def f(x_ref, o_ref):
# Reading from memory via pallas.load
x = pl.load(x_ref, (0, slice(2, 5), slice(None)))
# Using integer indexing automatically broadcasts
x = pl.load(x_ref, (0, 2 + jnp.arange(3), slice(None)))
# You can also use `pl.dynamic_slice` (`pl.ds` for short) objects as well
pl.store(o_ref, (0, pl.ds(start=2, size=3), slice(None)), x)
```
`pallas.load` and `pallas.store` also support masking via the mask argument.
```
def f(x_ref, o_ref):
# Reading from memory via pallas.load
idx = jnp.arange(8)
mask = idx < 5
x = pl.load(x_ref, (idx,), mask=mask, other=float('-inf'))
```
Masking is important when doing out-of-bounds loads/stores. The operational semantics of masking can be compiler-determined (if we understand the documentation properly, Triton avoids the read from/write to memory if it’s masked).
###### `pallas.program_id` and `pallas.num_programs`[#](#pallas-program-id-and-pallas-num-programs)
As we’ll soon see, we’ll be executing the same Pallas kernels many times (either in parallel or in a pipeline depending on the backend). These new primitives tell us “where” we are in the execution of the kernel.
`pallas.program_id` takes in an axis argument, which tells us which index in an axis of a multidimensional grid this kernel is currently executing in (analogous to `threadIdx` from CUDA programming or `lax.axis_index` in `jax.pmap`). Note that we are currently borrowing the “program” terminology from Triton and in the future we might want to change it to something more familiar to JAX users.
```
def f(x_ref, o_ref):
i = pl.program_id(axis=0) # execution index in the first axis of the grid
o_ref[i] = jnp.exp(x_ref[i])
```
`pallas.num_programs` also takes in an axis and returns the grid size for that axis.
Note that while `program_id` and `num_programs` are Triton-specific terminology they are easily generalized to make sense on TPU as well.
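For illustration, here is a hypothetical kernel (not from the original design doc) that uses both primitives together, writing its input out in reverse order across the grid:
```
def reverse_kernel(x_ref, o_ref):
  i = pl.program_id(axis=0)
  n = pl.num_programs(axis=0)  # total grid size along axis 0
  # Program i reads element i and writes it to the mirrored position.
  o_ref[n - 1 - i] = x_ref[i]
```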
###### Using a subset of JAX primitives in Pallas[#](#using-a-subset-of-jax-primitives-in-pallas)
Because we’re writing kernels, not high-level HLO programs, some JAX primitives may not be representable efficiently in our underlying substrate. However, we know we can support most elementwise operations, simple dot products, and JAX control flow.
While we haven’t yet mapped out exactly all the JAX primitives that we can support in Pallas kernels, we can certainly identify some that are not easy to lower or are unlikely to be useful:
* `conv_general` - convolution usually isn’t offered as a primitive in the underlying hardware.
* `gather/scatter` - the underlying compiler may not support noncontiguous memory reads and writes.
###### Executing Pallas kernels with `pallas_call`[#](#executing-pallas-kernels-with-pallas-call)
Now that we’ve written our Pallas kernels (a.k.a. JAX with `Ref`s and the extra Pallas primitives), how do we execute them on a GPU or TPU? We use `pallas_call`, a higher order function (akin to `jax.jit` and `jax.pmap`) that executes the kernel.
The signature of `pallas_call` is as follows:
```
def pallas_call(
kernel: Callable,
in_specs: Sequence[Spec],
    out_specs: Sequence[Spec],
out_shapes: Sequence[jax.ShapeDtypeStruct],
grid: Optional[Tuple[int, ...]] = None) -> Callable:
...
```
When we provide a kernel to `pallas_call` we provide additional information. The first is `out_shapes`, which tells the kernel what the outputs look like (`pallas_call` will pass `Ref`s corresponding to these into the kernel to be written to). The rest of the information (`in_specs`, `out_specs`, and `grid`) describes how the kernel will be scheduled on the accelerator.
The (rough) semantics for `pallas_call` are as follows:
```
def pallas_call(kernel, in_specs, out_specs, out_shapes, grid):
def execute(*args):
    outputs = map(empty_ref, out_shapes)
grid_indices = map(range, grid)
for indices in itertools.product(*grid_indices): # Could run in parallel!
local_inputs = [in_spec.transform(arg, indices) for arg, in_spec in
zip(in_specs, args)]
local_outputs = [out_spec.transform(arg, indices) for arg, out_spec in
zip(out_specs, outputs)]
kernel(*local_inputs, *local_outputs) # writes to outputs
return execute
```
Specifically, `pallas_call` will “loop” over grid iteration space, applying a transformation to the inputs and outputs specified via the `in_specs` and `out_specs`. In each iteration, the kernel will be called on the transformed inputs and outputs. Note that the “loop” over the iteration space could be executed in parallel (e.g. on GPU). `pallas_call` also provides no guarantees on the order of loop iterations over the iteration space, just that every member of the iteration space will be looped over. Compilers like Triton and Mosaic will have more specific operational semantics associated with the grid.
###### Transformation functions[#](#transformation-functions)
The `in_specs` and `out_specs` arguments to `pallas_call` allow inputs and outputs to be transformed in some way. The two options that Pallas offers right now are an identity transformation (where inputs and outputs are left unchanged), and `BlockSpec`s, which take fixed-size slices of `Ref`s determined by the loop index.
A `BlockSpec` takes an `index_map` function and a `block_shape`. Logically, it takes an array and slices it along each axis into `block_shape`-sized blocks. The `index_map` function takes loop indices (from the grid index set) and maps them to block indices. The transform function converts `Ref`s into logical views of the `Ref` at the corresponding block. When we specify `None` for an entry in `block_shape`, that corresponds to “mapping” over that dimension, removing it from the block within the kernel.
```
class BlockSpec:
index_map: Callable[[Tuple[Int, ...]], Tuple[Int, ...]]
block_shape: Tuple[Optional[int], ...]
def transform(self, ref, *loop_indices):
    block_indices = self.index_map(*loop_indices)
# Returns a view of `ref` starting at `block_indices` of shape self.block_shape
...
```
We could also imagine other `Spec`s that are used with `pallas_call`, for example a `Spec` that corresponds to overlapping windows to, say, implement convolutions.
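As a small sketch of the `None` convention described above (illustrative, not from the original): with an `(8, 128)`-shaped input and `grid=(8,)`, the following `BlockSpec` hands each program a `(128,)`-shaped view, since the leading axis is mapped over:
```
# Each program i sees row i of the (8, 128) input as a (128,)-shaped Ref.
spec = pl.BlockSpec(lambda i: (i, 0), (None, 128))

def row_kernel(x_ref, o_ref):
  o_ref[:] = x_ref[:] * 2  # x_ref has shape (128,) inside the kernel
```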
###### Immediate benefits of Pallas as a front-end[#](#immediate-benefits-of-pallas-as-a-front-end)
By offering a JAX front-end for kernel writing, we can immediately reap some benefits.
###### More flexible front end[#](#more-flexible-front-end)
The first is that JAX users are already accustomed to the benefits (and limitations) of programming with JAX and its tracing-based transformations. This means users can use closures and other familiar Python constructs when writing Pallas kernels. This is unlike the existing AST-parsing-based Triton front end or the MLIR builders for Mosaic. For example, this makes Pallas far more amenable to templating than Triton.
See this example of how we can use higher-order functions in Python to template a kernel.
```
def make_kernel(eltwise_kernel):
def add(x_ref, y_ref, o_ref):
x = pl.load(x_ref, ())
y = pl.load(y_ref, ())
pl.store(o_ref, (), eltwise_kernel(x + y))
return add
kernel1 = make_kernel(lambda x: x * 2)
kernel2 = make_kernel(jnp.exp)
out_shape = jax.ShapeDtypeStruct((), jnp.float32)
pl.pallas_call(kernel1, out_shape=out_shape, grid=1)(1., 1.)
pl.pallas_call(kernel2, out_shape=out_shape, grid=1)(1., 1.)
```
###### Emulation mode[#](#emulation-mode)
By representing kernels as programs with JAX primitives and some new Pallas primitives, we can also lower Pallas programs to MHLO directly and compile/execute them with XLA. Specifically, a `pallas_call` can be implemented as a `lax.scan` over the grid. This enables us to develop GPU or TPU kernels on any XLA-supported platform (even CPU!) and debug them using JAX/XLA debugging tools (like `jax.debug.print`). We can also use the more reliable and better tested XLA numerics to verify the correctness of the Triton and Mosaic compilers. One could also imagine perturbing the `scan` ordering to simulate the parallel reads and writes that happen on GPU.
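In the current implementation this mode is exposed via the `interpret` flag on `pallas_call` (the same flag referenced in the TPU notes later in this document). A minimal sketch:
```
def copy_kernel(x_ref, o_ref):
  o_ref[...] = x_ref[...]

# interpret=True executes the kernel via XLA, so it runs on CPU too and
# can be debugged with tools like jax.debug.print.
out = pl.pallas_call(
    copy_kernel,
    out_shape=jax.ShapeDtypeStruct((8,), jnp.float32),
    interpret=True,
)(jnp.ones(8))
```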
###### Examples[#](#examples)
###### `add`[#](#add)
We modify our `add_kernel` example to operate over (2,)-sized blocks using `BlockSpec`s.
```
def add_kernel(x_ref, y_ref, o_ref):
  # In this code, `x_ref`, `y_ref` and `o_ref` are (2,)-shaped `Ref`s
  x = x_ref[:]
  y = y_ref[:]
  o_ref[:] = x + y

x, y = jnp.arange(8), jnp.arange(8, 16)
add = pl.pallas_call(
    add_kernel,
    out_shape=jax.ShapeDtypeStruct((8,), jnp.int32),
    grid=(4,),
    in_specs=[
        pl.BlockSpec(lambda i: i, (2,)),
        pl.BlockSpec(lambda i: i, (2,))
    ],
    out_specs=pl.BlockSpec(lambda i: i, (2,)))
add(x, y)
```
###### Templated matmul[#](#templated-matmul)
In this example, we compute tiles of the output by doing an unrolled accumulation over blocks of rows and columns from our input arrays. We inline an activation function into the body of the kernel using a higher order function so we can emit a fused kernel.
```
from functools import partial

def matmul_kernel(x_ref, y_ref, o_ref, *, activation, block_k):
  acc = jnp.zeros((x_ref.shape[0], y_ref.shape[1]), jnp.float32)
  for k in range(x_ref.shape[1] // block_k):
    x = x_ref[:, k*block_k:(k+1)*block_k]
    y = y_ref[k*block_k:(k+1)*block_k, :]
    acc += x @ y
  o_ref[:, :] = activation(acc).astype(o_ref.dtype)

x, y = jnp.ones((512, 256)), jnp.ones((256, 1024))
block_shape = 256, 256, 128

@partial(jax.jit, static_argnames=["block_shape", "activation"])
def matmul(x, y, *, block_shape, activation):
  block_m, block_n, block_k = block_shape
  fused_matmul = pl.pallas_call(
      partial(matmul_kernel, block_k=block_k, activation=activation),
      out_shape=jax.ShapeDtypeStruct((x.shape[0], y.shape[1],), jnp.float32),
      grid=(x.shape[0] // block_m, y.shape[1] // block_n),
      in_specs=[
          pl.BlockSpec(lambda i, j: (i, 0), (block_m, x.shape[1])),
          pl.BlockSpec(lambda i, j: (0, j), (y.shape[0], block_n))
      ],
      out_specs=pl.BlockSpec(lambda i, j: (i, j), (block_m, block_n)))
  return fused_matmul(x, y)

z = matmul(x, y, block_shape=block_shape, activation=jax.nn.gelu)
```
###### Lowering Pallas[#](#lowering-pallas)
After users express their Pallas kernels, we lower them to different representations depending on the target backend. On GPUs, we lower Pallas to Triton IR, and on TPU we lower Pallas to Mosaic.
###### Lowering Pallas to Triton for GPU[#](#lowering-pallas-to-triton-for-gpu)
Lowering Pallas to Triton is easy because Pallas was designed with Triton as a target language in mind. The main differences between Pallas and Triton are that Triton doesn’t have a notion of `BlockSpec`s and that it uses pointers rather than indices when doing memory loads and stores.
Triton supports pointers as an array element type in its language, and you can load from and store to arrays of pointers. In Pallas, given a `(4, 5)`-shaped `Ref` `x_ref`, an expression like `x_ref[3, 2]` needs to be lowered to computing a Triton pointer to the appropriate row-major position in `x_ref` (that is, doing `5 * 3 + 2 * 1`). Similarly, when we lower slices to Triton, e.g. `x_ref[3, :]`, we need to produce an array of pointers `5 * 3 + jnp.arange(5)`.
Other than that, lowering to Triton is fairly straightforward. JAX dot products can be lowered to Triton dot products and JAX unary primitives are lowered to their Triton equivalents. Triton’s atomic operations are lowered via new Pallas atomic primitives.
###### Lowering Pallas to Mosaic for TPU[#](#lowering-pallas-to-mosaic-for-tpu)
Mosaic consumes (mostly) standard dialect MLIR and emits LLO to be compiled for TPU. Pallas can be lowered to Mosaic via translating JAX primitives to mostly MLIR `vector` and `arith` dialect. The `BlockSpec`s can be converted into pipeline schedules (i.e. the `transform_func`s in Mosaic).
###### Transforming Pallas[#](#transforming-pallas)
A natural question is how do JAX transformations interact with Pallas kernels? There are two main ways: transformations inside Pallas kernels and transformations outside Pallas kernels.
Transformations inside Pallas kernels should actually “just work”, so long as we are able to lower the transformed code. For example, we could use `jax.grad(jnp.sin)(...)` inside of a Pallas kernel because we can lower a `cos` to both Triton and Mosaic. However, we might not be able to lower a `jax.vmap(lax.dynamic_slice)` because it could turn into a gather that we cannot lower.
Transformations of Pallas kernels from the outer JAX programs is perhaps the more interesting case. How do we handle things like `vmap(pallas_call)` and `grad(pallas_call)`?
###### `vmap-of-pallas_call`[#](#vmap-of-pallas-call)
vmap automatically vectorizes JAX programs. While kernel writers might want precise control over how a batched kernel will behave differently from its unbatched variant, we can offer a reasonable default `vmap` rule for `pallas_call` while offering the `jax.custom_vmap` customization mechanism. When `pallas_call` is `vmap`-ed, we augment the `pallas_call` to have an extra grid dimension corresponding to the new batch dimension and transform the `BlockSpec`s to handle indexing along that dimension.
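From the user’s perspective (a sketch reusing the `add` function from the earlier example), the batched call looks like any other vmapped JAX function:
```
xs = jnp.stack([jnp.arange(8), jnp.arange(8)])
ys = jnp.stack([jnp.arange(8, 16), jnp.arange(8, 16)])
# The vmap rule adds a grid dimension for the batch axis behind the scenes.
zs = jax.vmap(add)(xs, ys)  # shape (2, 8)
```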
###### `grad-of-pallas_call`[#](#grad-of-pallas-call)
`grad` of `pallas_call` enables automatic differentiation of kernels. `jax.grad` breaks down into applications of three distinct transforms: `jvp`, `partial_eval` and `transpose`. In principle, we can re-use most of JAX’s infrastructure when implementing these rules for `pallas_call` (since it behaves much like existing JAX higher order primitives).
However, automatic differentiation of kernels can result in a performance hit due to how memory access is transposed. If we write a GPU kernel with overlapping-and-parallel reads and disjoint-but-parallel writes, we automatically transpose it into a kernel that has overlapping-but-parallel writes (which are slow when done atomically) and disjoint-and-parallel reads. To emit a kernel that better uses parallelism with shared memory, we would need to reorder loops and change how the kernel is vectorized. Unfortunately, we do not have a program representation amenable to that in Pallas. A potential direction to automatically differentiating kernels efficiently is to explore a different representation, perhaps one like that in Dex. We could also look at how Enzyme approaches this problem. However, AD of Pallas kernels may still be useful for a class of kernels that does transpose efficiently (for example elementwise kernels).
In general, though, `jax.custom_vjp` is a viable escape hatch to express Pallas kernels that work with `jax.grad`.
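A minimal sketch of that escape hatch, assuming hypothetical `pallas_call`-wrapped forward and backward kernels `fwd_call` and `bwd_call` (the names are illustrative, not part of the Pallas API):
```
@jax.custom_vjp
def my_op(x):
  return fwd_call(x)  # hypothetical pallas_call-wrapped forward kernel

def my_op_fwd(x):
  return my_op(x), x  # save x as the residual for the backward pass

def my_op_bwd(x, g):
  return (bwd_call(x, g),)  # hypothetical hand-written backward kernel

my_op.defvjp(my_op_fwd, my_op_bwd)
```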
###### Other transformations[#](#other-transformations)
We could imagine other JAX transformations applying to Pallas kernels that we haven’t explicitly explored yet. For example, `checkify` is a JAX transformation that does functional error handling. We could imagine using `checkify` with pallas_call to allow plumbing out error codes from GPU kernels that indicate if OOB access or NaNs were produced.
Another potential transformation to integrate with is custom_partitioning to enable automatically partitionable kernels to be used with pjit.
#### Pallas Quickstart[#](#pallas-quickstart)
Pallas is an extension to JAX that enables writing custom kernels for GPU and TPU. Pallas allows you to use the same JAX functions and APIs but operates at a *lower* level of abstraction.
Specifically, Pallas requires users to think about memory access and how to divide up computations across multiple compute units in a hardware accelerator. On GPUs, Pallas lowers to Triton and on TPUs, Pallas lowers to Mosaic.
Let’s dive into some examples.
> Note: Pallas is still an experimental API and you may be broken by changes!
##### Hello world in Pallas[#](#hello-world-in-pallas)
```
from functools import partial
import jax
from jax.experimental import pallas as pl
import jax.numpy as jnp
import numpy as np
```
We’ll first write the “hello world” in Pallas, a kernel that adds two vectors.
```
def add_vectors_kernel(x_ref, y_ref, o_ref):
x, y = x_ref[...], y_ref[...]
o_ref[...] = x + y
```
**`Ref` types**
Let’s dissect this function a bit. Unlike most JAX functions you’ve probably written, it does not take in `jax.Array`s as inputs and doesn’t return any values. Instead it takes in *`Ref`* objects as inputs. Note that we also don’t have any outputs but we are given an `o_ref`, which corresponds to the desired output.
**Reading from `Ref`s**
In the body, we are first reading from `x_ref` and `y_ref`, indicated by the `[...]` (the ellipsis means we are reading the whole `Ref`; alternatively we also could have used `x_ref[:]`). Reading from a `Ref` like this returns a `jax.Array`.
**Writing to `Ref`s**
We then write `x + y` to `o_ref`. Mutation has not historically been supported in JAX – `jax.Array`s are immutable! `Ref`s are new (experimental) types that allow mutation under certain circumstances. We can interpret writing to a `Ref` as mutating its underlying buffer.
So we’ve written what we call a “kernel”, which we define as a program that will run as an atomic unit of execution on an accelerator, without any interaction with the host. How do we invoke it from a JAX computation? We use the `pallas_call` higher-order function.
```
@jax.jit
def add_vectors(x: jax.Array, y: jax.Array) -> jax.Array:
return pl.pallas_call(add_vectors_kernel,
out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype)
)(x, y)
add_vectors(jnp.arange(8), jnp.arange(8))
```
```
Array([ 0, 2, 4, 6, 8, 10, 12, 14], dtype=int32)
```
`pallas_call` lifts the Pallas kernel function into an operation that can be called as part of a larger JAX program. But, to do so, it needs a few more details. Here we specify `out_shape`, an object that has a `.shape` and `.dtype` (or a list thereof).
`out_shape` determines the shape/dtype of `o_ref` in our `add_vectors_kernel`.
`pallas_call` returns a function that takes in and returns `jax.Array`s.
**What’s actually happening here?**
Thus far we’ve described how to think about Pallas kernels, but what we’ve actually accomplished is writing a function that’s executed very close to the compute units.
On GPU, `x_ref` corresponds to a value in high-bandwidth memory (HBM) and when we do `x_ref[...]` we are copying the value from HBM into static RAM (SRAM) (this is a costly operation generally speaking!). We then use GPU vector compute to execute the addition, then copy the resulting value in SRAM back to HBM.
On TPU, we do something slightly different. Before the kernel is ever executed, we fetch the value from HBM into SRAM. `x_ref` therefore corresponds to a value in SRAM and when we do `x_ref[...]` we are copying the value from SRAM into a register. We then use TPU vector compute to execute the addition, then copy the resulting value back into SRAM. After the kernel is executed, the SRAM value is copied back into HBM.
We are in the process of writing backend-specific Pallas guides. Coming soon!
##### Pallas programming model[#](#pallas-programming-model)
In our “hello world” example, we wrote a very simple kernel. It takes advantage of the fact that our 8-sized arrays can comfortably fit inside the SRAM of hardware accelerators. In most real-world applications, this will not be the case!
Part of writing Pallas kernels is thinking about how to take big arrays that live in high-bandwidth memory (HBM, also known as DRAM) and expressing computations that operate on “blocks” of those arrays that can fit in SRAM.
###### Grids[#](#grids)
To automatically “carve” up the inputs and outputs, you provide a `grid` and `BlockSpec`s to `pallas_call`.
A `grid` is a tuple of integers (e.g. `()`, `(2, 3, 4)`, or `(8,)`) that specifies an iteration space.
For example, a grid `(4, 5)` would have 20 elements: `(0, 0), (0, 1), ..., (0, 4), (1, 0), ..., (3, 4)`.
We run the kernel function once for each element, a style of single-program multiple-data (SPMD) programming.
A 2D grid
When we provide a `grid` to `pallas_call`, the kernel is executed as many times as `prod(grid)`. Each of these invocations is referred to as a “program”. To access which program (i.e. which element of the grid) the kernel is currently executing, we use `program_id(axis=...)`. For example, for invocation `(1, 2)`, `program_id(axis=0)` returns `1` and `program_id(axis=1)` returns `2`.
Here’s an example kernel that uses a `grid` and `program_id`.
```
def iota_kernel(o_ref):
i = pl.program_id(0)
o_ref[i] = i
```
We now execute it using `pallas_call` with an additional `grid` argument.
```
def iota(size: int):
  return pl.pallas_call(iota_kernel,
    out_shape=jax.ShapeDtypeStruct((size,), jnp.int32),
    grid=(size,))()
iota(8)
```
```
Array([0, 1, 2, 3, 4, 5, 6, 7], dtype=int32)
```
On GPUs, each program is executed in parallel on separate threads. Thus, we need to think about race conditions on writes to HBM. A reasonable approach is to write our kernels in such a way that different programs write to disjoint places in HBM to avoid these parallel writes. On the other hand, parallelizing the computation is how we can execute operations like matrix multiplications really quickly.
On TPUs, programs are executed in a combination of parallel and sequential (depending on the architecture) so there are slightly different considerations.
###### Block specs[#](#block-specs)
With `grid` and `program_id` in mind, Pallas provides an abstraction that takes care of some common indexing patterns seen in a lot of kernels.
To build intuition, let’s try to implement a matrix multiplication.
A simple strategy for implementing a matrix multiplication in Pallas is to implement it recursively. We know our underlying hardware has support for small matrix multiplications (using GPU and TPU tensorcores), so we just express a big matrix multiplication in terms of smaller ones.
Suppose we have input matrices \(X\) and \(Y\) and are computing \(Z = XY\). We first express \(X\) and \(Y\) as block matrices. \(X\) will have “row” blocks and \(Y\) will have “column” blocks.
\[\begin{split}
\begin{align*}
X = \begin{bmatrix}
X_0 \\ X_1
\end{bmatrix}
\end{align*}
\end{split}\]
\[
\begin{align*}
Y = \begin{bmatrix}
Y_0 & Y_1
\end{bmatrix}
\end{align*}
\]
\[\begin{split}
\begin{align*}
Z &=
\begin{bmatrix}
X_0 \\ X_1
\end{bmatrix}
\begin{bmatrix}
Y_0 & Y_1
\end{bmatrix} \\
&=
\begin{bmatrix}
X_0 Y_0 & X_0 Y_1 \\
X_1 Y_0 & X_1 Y_1
\end{bmatrix}
\end{align*}
\end{split}\]
Our strategy is that because \(Z\) is also a block matrix, we can assign each of the programs in our Pallas kernel one of the output blocks. Computing each output block corresponds to doing a smaller matrix multiply between a “row” block of \(X\) and a “column” block of \(Y\).
To express this pattern, we use `BlockSpec`s. A `BlockSpec` specifies a block shape for each input and output, and an “index map” function, that maps a set of program indices to a block index.
A visualization of a `BlockSpec`
For a concrete example, let’s say we’d like to multiply two `(1024, 1024)` matrices, `x` and `y`, together to produce `z`, and we’d like to parallelize the computation 4 ways. We split up `z` into 4 `(512, 512)` blocks where each block is computed with a `(512, 1024) x (1024, 512)` matrix multiplication. To express this, we’d first use a `(2, 2)` grid (one block for each program).
For `x`, we use `BlockSpec(lambda i, j: (i, 0), (512, 1024))` – this carves `x` up into “row” blocks. To see this, note how both program instances `(1, 0)` and `(1, 1)` pick the `(1, 0)` block in `x`. For `y`, we use a transposed version `BlockSpec(lambda i, j: (0, j), (1024, 512))`. Finally, for `z` we use `BlockSpec(lambda i, j: (i, j), (512, 512))`.
These `BlockSpec`s are passed into `pallas_call` via `in_specs` and `out_specs`.
Underneath the hood, `pallas_call` will automatically carve up your inputs and outputs into `Ref`s for each block that will be passed into the kernel.
```
def matmul_kernel(x_ref, y_ref, z_ref):
z_ref[...] = x_ref[...] @ y_ref[...]
def matmul(x: jax.Array, y: jax.Array):
return pl.pallas_call(
matmul_kernel,
out_shape=jax.ShapeDtypeStruct((x.shape[0], y.shape[1]), x.dtype),
grid=(2, 2),
in_specs=[
pl.BlockSpec(lambda i, j: (i, 0), (x.shape[0] // 2, x.shape[1])),
pl.BlockSpec(lambda i, j: (0, j), (y.shape[0], y.shape[1] // 2))
],
out_specs=pl.BlockSpec(
lambda i, j: (i, j), (x.shape[0] // 2, y.shape[1] // 2)
)
)(x, y)
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(k1, (1024, 1024))
y = jax.random.normal(k2, (1024, 1024))
z = matmul(x, y)
np.testing.assert_allclose(z, x @ y)
```
Note that this is a very naive implementation of a matrix multiplication but consider it a starting point for various types of optimizations.
Let’s add an additional feature to our matrix multiply: fused activation. It’s actually really easy! Just pass a higher-order activation function into the kernel.
```
def matmul_kernel(x_ref, y_ref, z_ref, *, activation):
z_ref[...] = activation(x_ref[...] @ y_ref[...])
def matmul(x: jax.Array, y: jax.Array, *, activation):
return pl.pallas_call(
partial(matmul_kernel, activation=activation),
out_shape=jax.ShapeDtypeStruct((x.shape[0], y.shape[1]), x.dtype),
grid=(2, 2),
in_specs=[
pl.BlockSpec(lambda i, j: (i, 0), (x.shape[0] // 2, x.shape[1])),
pl.BlockSpec(lambda i, j: (0, j), (y.shape[0], y.shape[1] // 2))
],
    out_specs=pl.BlockSpec(
      lambda i, j: (i, j), (x.shape[0] // 2, y.shape[1] // 2)
    ),
)(x, y)
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(k1, (1024, 1024))
y = jax.random.normal(k2, (1024, 1024))
z = matmul(x, y, activation=jax.nn.relu)
np.testing.assert_allclose(z, jax.nn.relu(x @ y))
```
To conclude, let’s highlight a cool feature of Pallas: it composes with `jax.vmap`! To turn this matrix multiplication into a batched version, we just need to `vmap` it.
```
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(k1, (4, 1024, 1024))
y = jax.random.normal(k2, (4, 1024, 1024))
z = jax.vmap(partial(matmul, activation=jax.nn.relu))(x, y)
np.testing.assert_allclose(z, jax.nn.relu(jax.vmap(jnp.matmul)(x, y)))
```
#### Writing TPU kernels with Pallas[#](#writing-tpu-kernels-with-pallas)
This page focuses on the details that are important when attempting to run Pallas kernels on Google TPUs. For one, the TPU backend is still in an experimental phase, and only a subset of JAX NumPy will be accepted.
Furthermore, writing performant code for TPUs might require thinking carefully about the native capabilities of the hardware. While many patterns that are unnatural to the hardware will be accepted, they might end up requiring software emulation, and can slow down the computation.
Warning
This feature should still be considered experimental as work is still in progress (in particular on improving the error messages).
Note
While all the features described here are experimental, we remain very serious about maintaining their correctness. As such, it might not be uncommon to see a “not implemented” error while attempting to write TPU kernels. But, if a kernel is accepted by the compiler, it *must* return the expected results.
If you see unexpected outputs, please compare them against a kernel run with
`interpret=True` passed in to `pallas_call`. If the results diverge,
please file a [bug report](https://github.com/google/jax/issues/new/choose).
##### What is a TPU?[#](#what-is-a-tpu)
TPU is a hardware accelerator developed at Google. You can think of TPUs as GPUs, but specialized for machine learning workloads. As such,
their architecture differs quite significantly. However, we believe that Pallas can make it easy to start writing TPU kernels, even without having a full understanding of the underlying hardware. Having said that, understanding the hardware well will certainly make it easier to write performant kernels.
In a nutshell, the main difference between TPUs and GPUs is that TPUs are sequential machines with a very wide vector register (kind of like a CPU!).
At the same time, they allow the software to schedule certain operations in the background, making them execute asynchronously with respect to the main instruction stream. This includes things like HBM memory accesses
(which cannot be addressed directly, but instead has to be prefetched to lower levels of memory hierarchy by the DMA subunits), matrix multiplies
(supported by the MXU unit) or matrix transpositions and permutes (supported by the XLU unit).
If you’re interested in learning more about the TPU architecture in detail, we recommend reading a collection of papers published over the years. While many of them talk about specific TPU generations, many of the ideas described transfer to future generations as well.
* [A Domain-Specific Supercomputer for Training Deep Neural Networks](https://dl.acm.org/doi/10.1145/3360307)
* [The Design Process for Google’s Training Chips: TPUv2 and TPUv3](https://ieeexplore.ieee.org/document/9351692)
* [Ten Lessons From Three Generations Shaped Google’s TPUv4i : Industrial Product](https://ieeexplore.ieee.org/document/9499913)
* [TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings](https://dl.acm.org/doi/abs/10.1145/3579371.3589350)
##### Noteworthy properties and restrictions[#](#noteworthy-properties-and-restrictions)
###### `BlockSpec`s and grid iteration[#](#blockspecs-and-grid-iteration)
`BlockSpec`s generally behave as expected in Pallas — every invocation of the kernel body gets access to slices of the inputs and is meant to initialize a slice of the output.
Warning
Not all window shapes are supported. If the last two dimensions of your input are larger than 8 and 128 respectively, the window shape in those dimensions must be a multiple of the respective factor. If the input dimension is smaller,
the window should span the full dimension.
One interesting aspect of Pallas TPU kernels is the way they handle memory spaces:
While the inputs to `pallas_call` will often reside in HBM (the main TPU memory), the references passed in to the kernel body will point to buffers in lower levels of memory hierarchy (VMEM or SMEM). This enables the kernel body to write and read them at very high speeds, while all the communication with HBM (which has very high latency) is handled by the compiler and overlapped with compute.
What’s more, compared to GPUs, TPUs are actually highly sequential machines.
That’s why the grid is generally not processed in parallel, but sequentially,
in lexicographic order (though see the [Multicore TPU configurations](#multicore-tpu-configurations) section for exceptions). This unlocks some interesting capabilities:
* When two (lexicographically) consecutive grid indices use the same slice of an input, the HBM transfer in the second iteration is skipped, as the data is already available.
* Multiple invocations of the kernel body can write to the same slice of the output, without any risk of race conditions. However, we do require that all invocations that write to a particular slice are consecutive.
The “consecutive” restriction on the output usually means that some prefix of the grid dimensions always varies the slice of the output an invocation needs to access, while the output window remains constant for the remaining suffix.
For example, when implementing a Pallas TPU kernel for matrix multiplication,
one would generally use a 3 dimensional grid: the first two dimensions would correspond to slicing along the first axis of the left operand and the second axis of the second operand. The third and *last* grid axis would tile the reduction dimension. The grid axis corresponding to the reduction dimension has to be the last one, since the output window does not vary along this axis.
The output reference can then be used as an accumulator for partial results.
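A sketch of this accumulation pattern (assuming the reduction axis is the last of three grid axes, and using `pl.when` for the initialization step):
```
def matmul_kernel(x_ref, y_ref, o_ref):
  k = pl.program_id(2)  # the reduction axis is the last grid dimension

  @pl.when(k == 0)
  def _():
    o_ref[...] = jnp.zeros_like(o_ref)  # zero the output window once

  # The output window is constant along k, so o_ref acts as an accumulator.
  o_ref[...] += x_ref[...] @ y_ref[...]
```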
Note
VMEM is fairly large for such a low-level memory hierarchy (16MB+), making it possible to use large window sizes. And, oftentimes, the larger the window size, the better the eventual hardware utilization will be. However, if you do end up specifying a window size that (together with the space necessary to hold spilled vector registers) exceeds the size of VMEM, you will likely see a low-level compiler error message complaining about an out-of-memory error.
###### Dimension ordering is meaningful[#](#dimension-ordering-is-meaningful)
In JAX programs, the ordering of intermediate arrays inside `jax.jit` usually has no impact on performance, as the compiler is free to rearrange them.
However, as Pallas is meant to expose lower-level capabilities, the dimension order can have great impact on the quality of generated code.
Recall that TPUs perform the bulk of the computation on 2D vector registers.
Pallas TPU will only ever consider mapping the last two dimensions of intermediate arrays to those vector register dimensions (sublanes and lanes respectively). An array of shape `(n, 1, 1)` is guaranteed to require at least
`n` vector registers to represent. If `n` becomes too large, this can lead to spills, and potential VMEM OOM errors due to overly large memory footprint.
But it also might not — the low-level compiler is free to rearrange the instructions to lower the register pressure, and is in fact very good at it.
Still, it is a good rule of thumb to keep the last two dimensions large
(especially the last dimension), while keeping the leading dimensions small.
###### Multicore TPU configurations[#](#multicore-tpu-configurations)
In newer TPU generations, the two cores on a chip are often abstracted as a single device. To take advantage of multiple cores, Pallas has to break the sequential grid execution guarantees, and will need to parallelize one of the grid axes over cores. This is an opt-in procedure. To allow that,
`pallas_call` requires an extra parameter named `dimension_semantics`:
That parameter is a list with as many entries as there are axes in the grid. Only `parallel` dimensions can be partitioned over cores. As a rule of thumb, a dimension is parallel unless the output window does not vary along it.
As such, `dimension_semantics` is always a number of `parallel` axes followed by a number of `arbitrary` axes.
While partitioning a kernel over a 2-core TPU device often leads to a 2x speedup, it can in fact be significantly smaller. This is especially true if different instances of the body have highly varying cost: if all of the expensive steps get mapped to one core while all the cheap steps are assigned to the other, the second core will sit idle until the first one completes its tasks.
Pallas TPU generally favors partitioning axes of size that is a multiple of the number of TPU cores, and prefers to partition leading grid axes.
###### Placing operands in SMEM[#](#placing-operands-in-smem)
Most of the compute on the TPU will happen on the vector unit. Still, there are many cases where it is useful to perform a number of scalar operations e.g. to perform control-flow operations. For that reason, TPUs come with a separate scalar unit, and a separate scalar memory (SMEM) attached to it.
As a rule of thumb, any data used to perform control-flow decisions should be placed in SMEM.
SMEM is a low-latency memory that supports random access, but lets you only read and write 32-bit values within a single instruction (very small compared to the 4KiB granularity of VMEM transactions, but much more flexible thanks to the lack of alignment requirements!).
The scalar memory is also very useful when implementing kernels that do not access the tiles of inputs in a regular pattern, such as when writing block-sparse kernels. In Pallas, this can be achieved by replacing the
`grid` argument to `pallas_call` with a `grid_spec` of
`PrefetchScalarGridSpec` with a non-zero `num_scalar_prefetch` argument.
If `num_scalar_prefetch` is `n`, then the first `n` arguments to
`pallas_call` will be placed in SMEM. No `BlockSpec`s should be specified for those arguments. But, the `BlockSpec`s for all subsequent arguments will receive not only the grid indices, but also the SMEM references to the leading operands.
Note
We are working on implementing examples for this feature. Stay tuned!
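In the meantime, here is a rough, unverified sketch of the shape of this API (the spelling below follows `jax.experimental.pallas.tpu` at the time of writing; treat the details as assumptions that may change):
```
from jax.experimental.pallas import tpu as pltpu

def gather_rows_kernel(block_idx_ref, x_ref, o_ref):
  # block_idx_ref lives in SMEM; x_ref and o_ref are regular VMEM blocks.
  o_ref[...] = x_ref[...]

grid_spec = pltpu.PrefetchScalarGridSpec(
    num_scalar_prefetch=1,  # the first kernel argument is placed in SMEM
    grid=(8,),
    in_specs=[pl.BlockSpec(
        # Index maps now also receive the SMEM refs, enabling
        # data-dependent block selection (e.g. block-sparse kernels).
        lambda i, block_idx_ref: (block_idx_ref[i], 0),
        (128, 128))],
    out_specs=pl.BlockSpec(lambda i, block_idx_ref: (i, 0), (128, 128)),
)
f = pl.pallas_call(
    gather_rows_kernel,
    grid_spec=grid_spec,
    out_shape=jax.ShapeDtypeStruct((1024, 128), jnp.float32))
# Usage would look like f(block_indices, x), where block_indices is an
# int32 array selecting which block of x each grid step reads.
```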
###### Supported data types[#](#supported-data-types)
At the moment Pallas TPU only supports the following data types:
* `jnp.float32`
* `jnp.bfloat16`
* `jnp.int*` (all precisions, except for `jnp.int4`)
* `jnp.uint*` (all precisions)
###### Computation placement[#](#computation-placement)
All scalar (i.e. 0D) arrays will be stored in scalar registers, and operations on them will be executed on the scalar core. All other operations (even on single-element, but 1D+ arrays) will be executed on the vector core.
##### Supported operations[#](#supported-operations)
###### Matrix multiplication[#](#matrix-multiplication)
Matrix multiplication always produces results in the float32 format.
If your inputs are not float32, we recommend using `lax.dot` with
`preferred_element_type` set to `jnp.float32`.
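For example (a small sketch with `bfloat16` inputs):
```
x = jnp.ones((256, 256), jnp.bfloat16)
y = jnp.ones((256, 256), jnp.bfloat16)
# Accumulate in float32 even though the operands are bfloat16.
z = jax.lax.dot(x, y, preferred_element_type=jnp.float32)
```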
When using `lax.dot_general`, it is possible to fuse transpositions of the last two dimensions of matrix multiplication operands into the operation,
which can improve the overall kernel performance.
###### Precision control[#](#precision-control)
Pallas TPU lowering is aware of `jax.default_matmul_precision`. For best performance (and lowest precision), request `bfloat16`. If you care about numerical accuracy, you might want to set the precision to `float32`.
Warning
Even if you pass in 32-bit operands to a matrix multiplication, they will be rounded to `bfloat16` unless `float32` precision is requested.
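For example, the precision can be requested locally with the `jax.default_matmul_precision` context manager:
```
with jax.default_matmul_precision("float32"):
  z = x @ y  # matmuls lowered from this block use full float32 precision
```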
###### Transposition[#](#transposition)
If the value has at least 4 dimensions, arbitrary transpositions of all but the last two axes are free.
Otherwise, only the transposition of the last two axes is implemented.
Note that some transpositions of the last two dimensions can be fused into matrix multiplication.
###### Accessing memory[#](#accessing-memory)
Arbitrary slices of references can be read or updated, subject to implementation constraints. Currently, no restrictions are placed on inputs that are 32-bit wide,
but only some slicing patterns are supported for narrower types. Reads and writes that are aligned to multiples of 8 and 128 in the last two dimensions respectively, and whose lengths are also multiples of those values, are always supported.
Reads and writes to vector memory generally happen on tiles of shape `(8, 128)`.
As such, when reading or writing to references that have at least two dimensions,
the best performance is achieved when the base offset of the memory access has indices divisible by the tiling, and the size of the read region is a multiple of the tile size.
###### Elementwise operations[#](#elementwise-operations)
Many elementwise operations are supported. It is worth noting that the hardware generally only supports elementwise compute using 32-bit types. When loading operands that use lower-precision types, they should generally be upcast to a 32-bit type before applying elementwise ops.
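For example (a sketch of that upcast pattern, assuming a `bfloat16` input block):
```
def exp_kernel(x_ref, o_ref):
  x = x_ref[...].astype(jnp.float32)           # upcast before elementwise ops
  o_ref[...] = jnp.exp(x).astype(o_ref.dtype)  # downcast when writing out
```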
It is worth noting that they can vary *significantly* in their cost. As such, we outline three categories of supported operations: cheap (🟢), medium (🌕) and expensive (🔴).
| Operation | Cost |
| --- | --- |
| `jnp.add`, `+` | 🟢 |
| `jnp.sub`, `-` | 🟢 |
| `jnp.mul`, `*` | 🟢 |
| `/`, `//`, `%` | 🌕 |
| `jnp.max`, `jnp.min` | 🟢 |
| `jnp.where` (select) | 🟢 |
| `jnp.abs` | 🟢 |
| `|`, `^`, `&`, `~` | 🟢 |
| `<<`, `>>` | 🟢 |
| Comparisons (`==`, …) | 🟢 |
| Type casts (`.astype`) | 🟢 |
| `jnp.exp` | 🌕 |
| `jnp.tanh` | 🌕 |
| `jnp.pow` | 🌕 |
| `jnp.sin` | 🔴 |
| `jnp.cos` | 🔴 |
Many JAX functions are implemented in terms of other JAX primitives, so this list might not be comprehensive. For example, `jax.nn.relu` is implemented in terms of comparisons and `jnp.where` and will work in Pallas kernels too.
###### Array constructors[#](#array-constructors)
All constant array constructors are supported (`jnp.ones`, `jnp.zeros`,
`jnp.full`). Notably, the `jax.random` module is **not** compatible with Pallas as of today.
###### Reductions[#](#reductions)
Sum, maximum and minimum reductions are supported, but only on a single array axis at a time.
Reductions over the last array dimension are generally the slowest.
Reductions over the second last dimension are faster, but still slower than over the leading dimensions.
###### Broadcasting[#](#broadcasting)
The performance characteristics of broadcasting are very similar to those of reductions. Broadcasting along all but the two trailing dimensions is always supported and free. Broadcasting along the second to last dimension is slower, while broadcasting along the last dimension is the slowest.
###### Reshapes[#](#reshapes)
As usual, reshapes in all but the last two dimensions are supported and free.
The only two supported cases in which a reshape can modify the last two dimensions of an array are when (1) some leading dimensions are flattened onto the second to last dimension, or (2) it adds a dimension that was just removed by a reduction.
###### Control flow[#](#control-flow)
The TPU backend features limited support for control flow at the moment. The currently supported functions are `cond`, `fori_loop` and `for_loop`.
However, loop primitives get fully unrolled during the compilation at the moment, so try to keep the loop trip count reasonably small.
Overusing control flow can lead to significant regressions in low-level code generation, and it is recommended to try to squeeze as many computationally expensive operations into a single basic block as possible.
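As a small sketch (illustrative, with a static trip count that will be fully unrolled per the note above):
```
def sum_rows_kernel(x_ref, o_ref):
  # x_ref has shape (8, 128); o_ref has shape (128,).
  def body(i, acc):
    return acc + x_ref[i, :]  # accumulate one row per (unrolled) step
  o_ref[...] = jax.lax.fori_loop(0, 8, body, jnp.zeros(128, jnp.float32))
```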
Advanced Tutorials[#](#advanced-tutorials)
---
This section contains examples and tutorials on more advanced topics, such as multi-core computation, custom operations, and more in-depth applications.
**Copyright 2018 The JAX Authors.**
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
### Training a Simple Neural Network, with tensorflow/datasets Data Loading[#](#training-a-simple-neural-network-with-tensorflow-datasets-data-loading)
*Forked from* `neural_network_and_data_loading.ipynb`
Let’s combine everything we showed in the [quickstart notebook](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html) to train a simple neural network. We will first specify and train a simple MLP on MNIST using JAX for the computation. We will use `tensorflow/datasets` data loading API to load images and labels (because it’s pretty great, and the world doesn’t need yet another data loading library :P).
Of course, you can use JAX with any API that is compatible with NumPy to make specifying the model a bit more plug-and-play. Here, just for explanatory purposes, we won’t use any neural network libraries or special APIs for building our model.
```
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
```
#### Hyperparameters[#](#hyperparameters)
Let’s get a few bookkeeping items out of the way.
```
# A helper function to randomly initialize weights and biases
# for a dense neural network layer
def random_layer_params(m, n, key, scale=1e-2):
w_key, b_key = random.split(key)
return scale * random.normal(w_key, (n, m)), scale * random.normal(b_key, (n,))
# Initialize all layers for a fully-connected neural network with sizes "sizes"
def init_network_params(sizes, key):
keys = random.split(key, len(sizes))
return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)]
layer_sizes = [784, 512, 512, 10]
step_size = 0.01
num_epochs = 10
batch_size = 128
n_targets = 10
params = init_network_params(layer_sizes, random.PRNGKey(0))
```
#### Auto-batching predictions[#](#auto-batching-predictions)
Let us first define our prediction function. Note that we’re defining this for a *single* image example. We’re going to use JAX’s `vmap` function to automatically handle mini-batches, with no performance penalty.
```
from jax.scipy.special import logsumexp
def relu(x):
return jnp.maximum(0, x)
def predict(params, image):
# per-example predictions
activations = image
for w, b in params[:-1]:
outputs = jnp.dot(w, activations) + b
activations = relu(outputs)
final_w, final_b = params[-1]
logits = jnp.dot(final_w, activations) + final_b
return logits - logsumexp(logits)
```
Let’s check that our prediction function only works on single images.
```
# This works on single examples
random_flattened_image = random.normal(random.PRNGKey(1), (28 * 28,))
preds = predict(params, random_flattened_image)
print(preds.shape)
```
```
(10,)
```
```
# Doesn't work with a batch
random_flattened_images = random.normal(random.PRNGKey(1), (10, 28 * 28))
try:
preds = predict(params, random_flattened_images)
except TypeError:
print('Invalid shapes!')
```
```
Invalid shapes!
```
```
# Let's upgrade it to handle batches using `vmap`
# Make a batched version of the `predict` function
batched_predict = vmap(predict, in_axes=(None, 0))
# `batched_predict` has the same call signature as `predict`
batched_preds = batched_predict(params, random_flattened_images)
print(batched_preds.shape)
```
```
(10, 10)
```
At this point, we have all the ingredients we need to define our neural network and train it. We’ve built an auto-batched version of `predict`, which we should be able to use in a loss function. We should be able to use `grad` to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use `jit` to speed up everything.
#### Utility and loss functions[#](#utility-and-loss-functions)
```
def one_hot(x, k, dtype=jnp.float32):
"""Create a one-hot encoding of x of size k."""
return jnp.array(x[:, None] == jnp.arange(k), dtype)
def accuracy(params, images, targets):
target_class = jnp.argmax(targets, axis=1)
predicted_class = jnp.argmax(batched_predict(params, images), axis=1)
return jnp.mean(predicted_class == target_class)
def loss(params, images, targets):
preds = batched_predict(params, images)
return -jnp.mean(preds * targets)
@jit
def update(params, x, y):
grads = grad(loss)(params, x, y)
return [(w - step_size * dw, b - step_size * db)
for (w, b), (dw, db) in zip(params, grads)]
```
#### Data Loading with `tensorflow/datasets`[#](#data-loading-with-tensorflow-datasets)
JAX is laser-focused on program transformations and accelerator-backed NumPy, so we don’t include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let’s just use them instead of reinventing anything. We’ll use the `tensorflow/datasets` data loader.
```
import tensorflow as tf
# Ensure TF does not see GPU and grab all GPU memory.
tf.config.set_visible_devices([], device_type='GPU')
import tensorflow_datasets as tfds
data_dir = '/tmp/tfds'
# Fetch full datasets for evaluation
# tfds.load returns tf.Tensors (or tf.data.Datasets if batch_size != -1)
# You can convert them to NumPy arrays (or iterables of NumPy arrays) with tfds.dataset_as_numpy
mnist_data, info = tfds.load(name="mnist", batch_size=-1, data_dir=data_dir, with_info=True)
mnist_data = tfds.as_numpy(mnist_data)
train_data, test_data = mnist_data['train'], mnist_data['test']
num_labels = info.features['label'].num_classes
h, w, c = info.features['image'].shape
num_pixels = h * w * c
# Full train set
train_images, train_labels = train_data['image'], train_data['label']
train_images = jnp.reshape(train_images, (len(train_images), num_pixels))
train_labels = one_hot(train_labels, num_labels)
# Full test set
test_images, test_labels = test_data['image'], test_data['label']
test_images = jnp.reshape(test_images, (len(test_images), num_pixels))
test_labels = one_hot(test_labels, num_labels)
```
```
print('Train:', train_images.shape, train_labels.shape)
print('Test:', test_images.shape, test_labels.shape)
```
```
Train: (60000, 784) (60000, 10)
Test: (10000, 784) (10000, 10)
```
#### Training Loop[#](#training-loop)
```
import time
def get_train_batches():
# as_supervised=True gives us the (image, label) as a tuple instead of a dict
ds = tfds.load(name='mnist', split='train', as_supervised=True, data_dir=data_dir)
# You can build up an arbitrary tf.data input pipeline
ds = ds.batch(batch_size).prefetch(1)
# tfds.dataset_as_numpy converts the tf.data.Dataset into an iterable of NumPy arrays
return tfds.as_numpy(ds)
for epoch in range(num_epochs):
start_time = time.time()
for x, y in get_train_batches():
x = jnp.reshape(x, (len(x), num_pixels))
y = one_hot(y, num_labels)
params = update(params, x, y)
epoch_time = time.time() - start_time
train_acc = accuracy(params, train_images, train_labels)
test_acc = accuracy(params, test_images, test_labels)
print("Epoch {} in {:0.2f} sec".format(epoch, epoch_time))
print("Training set accuracy {}".format(train_acc))
print("Test set accuracy {}".format(test_acc))
```
```
Epoch 0 in 28.30 sec
Training set accuracy 0.8400499820709229
Test set accuracy 0.8469000458717346
Epoch 1 in 14.74 sec
Training set accuracy 0.8743667006492615
Test set accuracy 0.8803000450134277
Epoch 2 in 14.57 sec
Training set accuracy 0.8901500105857849
Test set accuracy 0.8957000374794006
Epoch 3 in 14.36 sec
Training set accuracy 0.8991333246231079
Test set accuracy 0.903700053691864
Epoch 4 in 14.20 sec
Training set accuracy 0.9061833620071411
Test set accuracy 0.9087000489234924
Epoch 5 in 14.89 sec
Training set accuracy 0.9113333225250244
Test set accuracy 0.912600040435791
Epoch 6 in 13.95 sec
Training set accuracy 0.9156833291053772
Test set accuracy 0.9176000356674194
Epoch 7 in 13.32 sec
Training set accuracy 0.9192000031471252
Test set accuracy 0.9214000701904297
Epoch 8 in 13.55 sec
Training set accuracy 0.9222500324249268
Test set accuracy 0.9241000413894653
Epoch 9 in 13.40 sec
Training set accuracy 0.9253666996955872
Test set accuracy 0.9269000291824341
```
We’ve now used the whole of the JAX API: `grad` for derivatives, `jit` for speedups and `vmap` for auto-vectorization.
We used NumPy to specify all of our computation, and borrowed the great data loaders from `tensorflow/datasets`, and ran the whole thing on the GPU.
### Training a Simple Neural Network, with PyTorch Data Loading[#](#training-a-simple-neural-network-with-pytorch-data-loading)
**Copyright 2018 The JAX Authors.**
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
Let’s combine everything we showed in the [quickstart notebook](https://colab.research.google.com/github/google/jax/blob/main/docs/notebooks/quickstart.ipynb) to train a simple neural network. We will first specify and train a simple MLP on MNIST using JAX for the computation. We will use PyTorch’s data loading API to load images and labels (because it’s pretty great, and the world doesn’t need yet another data loading library).
Of course, you can use JAX with any API that is compatible with NumPy to make specifying the model a bit more plug-and-play. Here, just for explanatory purposes, we won’t use any neural network libraries or special APIs for building our model.
```
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
```
#### Hyperparameters[#](#hyperparameters)
Let’s get a few bookkeeping items out of the way.
```
# A helper function to randomly initialize weights and biases
# for a dense neural network layer
def random_layer_params(m, n, key, scale=1e-2):
w_key, b_key = random.split(key)
return scale * random.normal(w_key, (n, m)), scale * random.normal(b_key, (n,))
# Initialize all layers for a fully-connected neural network with sizes "sizes"
def init_network_params(sizes, key):
keys = random.split(key, len(sizes))
return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)]
layer_sizes = [784, 512, 512, 10]
step_size = 0.01
num_epochs = 8
batch_size = 128
n_targets = 10
params = init_network_params(layer_sizes, random.PRNGKey(0))
```
#### Auto-batching predictions[#](#auto-batching-predictions)
Let us first define our prediction function. Note that we’re defining this for a *single* image example. We’re going to use JAX’s `vmap` function to automatically handle mini-batches, with no performance penalty.
```
from jax.scipy.special import logsumexp
def relu(x):
return jnp.maximum(0, x)
def predict(params, image):
# per-example predictions
activations = image
for w, b in params[:-1]:
outputs = jnp.dot(w, activations) + b
activations = relu(outputs)
final_w, final_b = params[-1]
logits = jnp.dot(final_w, activations) + final_b
return logits - logsumexp(logits)
```
Let’s check that our prediction function only works on single images.
```
# This works on single examples
random_flattened_image = random.normal(random.PRNGKey(1), (28 * 28,))
preds = predict(params, random_flattened_image)
print(preds.shape)
```
```
(10,)
```
```
# Doesn't work with a batch
random_flattened_images = random.normal(random.PRNGKey(1), (10, 28 * 28))
try:
preds = predict(params, random_flattened_images)
except TypeError:
print('Invalid shapes!')
```
```
Invalid shapes!
```
```
# Let's upgrade it to handle batches using `vmap`
# Make a batched version of the `predict` function
batched_predict = vmap(predict, in_axes=(None, 0))
# `batched_predict` has the same call signature as `predict`
batched_preds = batched_predict(params, random_flattened_images)
print(batched_preds.shape)
```
```
(10, 10)
```
At this point, we have all the ingredients we need to define our neural network and train it. We’ve built an auto-batched version of `predict`, which we should be able to use in a loss function. We should be able to use `grad` to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use `jit` to speed up everything.
#### Utility and loss functions[#](#utility-and-loss-functions)
```
def one_hot(x, k, dtype=jnp.float32):
"""Create a one-hot encoding of x of size k."""
return jnp.array(x[:, None] == jnp.arange(k), dtype)
def accuracy(params, images, targets):
target_class = jnp.argmax(targets, axis=1)
predicted_class = jnp.argmax(batched_predict(params, images), axis=1)
return jnp.mean(predicted_class == target_class)
def loss(params, images, targets):
preds = batched_predict(params, images)
return -jnp.mean(preds * targets)
@jit
def update(params, x, y):
grads = grad(loss)(params, x, y)
return [(w - step_size * dw, b - step_size * db)
for (w, b), (dw, db) in zip(params, grads)]
```
#### Data Loading with PyTorch[#](#data-loading-with-pytorch)
JAX is laser-focused on program transformations and accelerator-backed NumPy, so we don’t include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let’s just use them instead of reinventing anything. We’ll grab PyTorch’s data loader, and make a tiny shim to make it work with NumPy arrays.
```
!pip install torch torchvision
```
```
Requirement already satisfied: torch in /opt/anaconda3/lib/python3.7/site-packages (1.4.0)
Requirement already satisfied: torchvision in /opt/anaconda3/lib/python3.7/site-packages (0.5.0)
Requirement already satisfied: numpy in /opt/anaconda3/lib/python3.7/site-packages (from torchvision) (1.17.2)
Requirement already satisfied: six in /opt/anaconda3/lib/python3.7/site-packages (from torchvision) (1.12.0)
Requirement already satisfied: pillow>=4.1.1 in /opt/anaconda3/lib/python3.7/site-packages (from torchvision) (6.2.0)
```
```
import numpy as np
from jax.tree_util import tree_map
from torch.utils import data
from torchvision.datasets import MNIST
def numpy_collate(batch):
return tree_map(np.asarray, data.default_collate(batch))
class NumpyLoader(data.DataLoader):
def __init__(self, dataset, batch_size=1,
shuffle=False, sampler=None,
batch_sampler=None, num_workers=0,
pin_memory=False, drop_last=False,
timeout=0, worker_init_fn=None):
super(self.__class__, self).__init__(dataset,
batch_size=batch_size,
shuffle=shuffle,
sampler=sampler,
batch_sampler=batch_sampler,
num_workers=num_workers,
collate_fn=numpy_collate,
pin_memory=pin_memory,
drop_last=drop_last,
timeout=timeout,
worker_init_fn=worker_init_fn)
class FlattenAndCast(object):
def __call__(self, pic):
return np.ravel(np.array(pic, dtype=jnp.float32))
```
```
# Define our dataset, using torch datasets
mnist_dataset = MNIST('/tmp/mnist/', download=True, transform=FlattenAndCast())
training_generator = NumpyLoader(mnist_dataset, batch_size=batch_size, num_workers=0)
```
```
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /tmp/mnist/MNIST/raw/train-images-idx3-ubyte.gz
Extracting /tmp/mnist/MNIST/raw/train-images-idx3-ubyte.gz to /tmp/mnist/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to /tmp/mnist/MNIST/raw/train-labels-idx1-ubyte.gz
Extracting /tmp/mnist/MNIST/raw/train-labels-idx1-ubyte.gz to /tmp/mnist/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to /tmp/mnist/MNIST/raw/t10k-images-idx3-ubyte.gz
Extracting /tmp/mnist/MNIST/raw/t10k-images-idx3-ubyte.gz to /tmp/mnist/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to /tmp/mnist/MNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting /tmp/mnist/MNIST/raw/t10k-labels-idx1-ubyte.gz to /tmp/mnist/MNIST/raw
Processing...
Done!
```
```
# Get the full train dataset (for checking accuracy while training)
train_images = np.array(mnist_dataset.train_data).reshape(len(mnist_dataset.train_data), -1)
train_labels = one_hot(np.array(mnist_dataset.train_labels), n_targets)
# Get full test dataset
mnist_dataset_test = MNIST('/tmp/mnist/', download=True, train=False)
test_images = jnp.array(mnist_dataset_test.test_data.numpy().reshape(len(mnist_dataset_test.test_data), -1), dtype=jnp.float32)
test_labels = one_hot(np.array(mnist_dataset_test.test_labels), n_targets)
```
```
/opt/anaconda3/lib/python3.7/site-packages/torchvision/datasets/mnist.py:55: UserWarning: train_data has been renamed data
warnings.warn("train_data has been renamed data")
/opt/anaconda3/lib/python3.7/site-packages/torchvision/datasets/mnist.py:45: UserWarning: train_labels has been renamed targets
warnings.warn("train_labels has been renamed targets")
/opt/anaconda3/lib/python3.7/site-packages/torchvision/datasets/mnist.py:60: UserWarning: test_data has been renamed data
warnings.warn("test_data has been renamed data")
/opt/anaconda3/lib/python3.7/site-packages/torchvision/datasets/mnist.py:50: UserWarning: test_labels has been renamed targets
warnings.warn("test_labels has been renamed targets")
```
#### Training Loop[#](#training-loop)
```
import time
for epoch in range(num_epochs):
start_time = time.time()
for x, y in training_generator:
y = one_hot(y, n_targets)
params = update(params, x, y)
epoch_time = time.time() - start_time
train_acc = accuracy(params, train_images, train_labels)
test_acc = accuracy(params, test_images, test_labels)
print("Epoch {} in {:0.2f} sec".format(epoch, epoch_time))
print("Training set accuracy {}".format(train_acc))
print("Test set accuracy {}".format(test_acc))
```
```
Epoch 0 in 55.15 sec
Training set accuracy 0.9157500267028809
Test set accuracy 0.9195000529289246
Epoch 1 in 42.26 sec
Training set accuracy 0.9372166991233826
Test set accuracy 0.9384000301361084
Epoch 2 in 44.37 sec
Training set accuracy 0.9491666555404663
Test set accuracy 0.9469000697135925
Epoch 3 in 41.75 sec
Training set accuracy 0.9568166732788086
Test set accuracy 0.9534000158309937
Epoch 4 in 41.16 sec
Training set accuracy 0.9631333351135254
Test set accuracy 0.9577000737190247
Epoch 5 in 38.89 sec
Training set accuracy 0.9675000309944153
Test set accuracy 0.9616000652313232
Epoch 6 in 40.68 sec
Training set accuracy 0.9708333611488342
Test set accuracy 0.9650000333786011
Epoch 7 in 41.50 sec
Training set accuracy 0.973716676235199
Test set accuracy 0.9672000408172607
```
We’ve now used the whole of the JAX API: `grad` for derivatives, `jit` for speedups and `vmap` for auto-vectorization.
We used NumPy to specify all of our computation, borrowed the great data loaders from PyTorch, and ran the whole thing on the GPU.
### Autobatching for Bayesian Inference[#](#autobatching-for-bayesian-inference)
This notebook demonstrates a simple Bayesian inference example where autobatching makes user code easier to write, easier to read, and less likely to include bugs.
Inspired by a notebook by @davmre.
```
import functools
import itertools
import re
import sys
import time
from matplotlib.pyplot import *
import jax
from jax import lax
import jax.numpy as jnp
import jax.scipy as jsp
from jax import random
import numpy as np
import scipy as sp
```
#### Generate a fake binary classification dataset[#](#generate-a-fake-binary-classification-dataset)
```
np.random.seed(10009)
num_features = 10
num_points = 100
true_beta = np.random.randn(num_features).astype(jnp.float32)
all_x = np.random.randn(num_points, num_features).astype(jnp.float32)
y = (np.random.rand(num_points) < sp.special.expit(all_x.dot(true_beta))).astype(jnp.int32)
```
```
y
```
```
array([0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0,
1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0,
1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0,
0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1,
1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0], dtype=int32)
```
#### Write the log-joint function for the model[#](#write-the-log-joint-function-for-the-model)
We’ll write a non-batched version, a manually batched version, and an autobatched version.
##### Non-batched[#](#non-batched)
```
def log_joint(beta):
result = 0.
# Note that no `axis` parameter is provided to `jnp.sum`.
result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.))
result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta))))
return result
```
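In symbols, this is the log-joint density of a standard-normal prior on $\beta$ and a Bernoulli likelihood with a logistic link; the second term in the code uses the identity $-\log(1 + e^{-z}) = \log \sigma(z)$:

$$\log p(\beta, y) = \sum_j \log \mathcal{N}(\beta_j \mid 0, 1) + \sum_n \log \sigma\!\big((2 y_n - 1)\, x_n^\top \beta\big), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}.$$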
```
log_joint(np.random.randn(num_features))
```
```
Array(-213.2356, dtype=float32)
```
```
# This doesn't work, because we didn't write `log_prob()` to handle batching.
try:
batch_size = 10
batched_test_beta = np.random.randn(batch_size, num_features)
log_joint(np.random.randn(batch_size, num_features))
except ValueError as e:
print("Caught expected exception " + str(e))
```
```
Caught expected exception Incompatible shapes for broadcasting: shapes=[(100,), (100, 10)]
```
##### Manually batched[#](#manually-batched)
```
def batched_log_joint(beta):
result = 0.
# Here (and below) `sum` needs an `axis` parameter. At best, forgetting to set axis
# or setting it incorrectly yields an error; at worst, it silently changes the
# semantics of the model.
result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.),
axis=-1)
# Note the multiple transposes. Getting this right is not rocket science,
# but it's also not totally mindless. (I didn't get it right on the first
# try.)
result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta.T).T)),
axis=-1)
return result
```
```
batch_size = 10
batched_test_beta = np.random.randn(batch_size, num_features)
batched_log_joint(batched_test_beta)
```
```
Array([-147.84033 , -207.02205 , -109.26075 , -243.80833 , -163.0291 ,
-143.84848 , -160.28773 , -113.771706, -126.60544 , -190.81992 ], dtype=float32)
```
##### Autobatched with vmap[#](#autobatched-with-vmap)
It just works.
```
vmap_batched_log_joint = jax.vmap(log_joint)
vmap_batched_log_joint(batched_test_beta)
```
```
Array([-147.84033 , -207.02205 , -109.26075 , -243.80833 , -163.0291 ,
-143.84848 , -160.28773 , -113.771706, -126.60544 , -190.81992 ], dtype=float32)
```
#### Self-contained variational inference example[#](#self-contained-variational-inference-example)
A little code is copied from above.
##### Set up the (batched) log-joint function[#](#set-up-the-batched-log-joint-function)
```
@jax.jit
def log_joint(beta):
result = 0.
# Note that no `axis` parameter is provided to `jnp.sum`.
result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=10.))
result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta))))
return result
batched_log_joint = jax.jit(jax.vmap(log_joint))
```
##### Define the ELBO and its gradient[#](#define-the-elbo-and-its-gradient)
```
def elbo(beta_loc, beta_log_scale, epsilon):
beta_sample = beta_loc + jnp.exp(beta_log_scale) * epsilon
return jnp.mean(batched_log_joint(beta_sample), 0) + jnp.sum(beta_log_scale - 0.5 * np.log(2*np.pi))
elbo = jax.jit(elbo)
elbo_val_and_grad = jax.jit(jax.value_and_grad(elbo, argnums=(0, 1)))
```
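For reference, `elbo` implements the standard reparameterization-trick estimator (a sketch, stated up to an additive constant that does not affect the gradients): with $\beta = \mu + e^{s} \odot \epsilon$ and $\epsilon \sim \mathcal{N}(0, I)$,

$$\mathrm{ELBO}(\mu, s) \approx \frac{1}{B} \sum_{b=1}^{B} \log p\big(y, \mu + e^{s} \odot \epsilon^{(b)}\big) + \sum_j s_j + \mathrm{const},$$

where $B$ is the number of $\epsilon$ samples, $\mu$ and $s$ correspond to `beta_loc` and `beta_log_scale` in the code, and the $\sum_j s_j$ term is the entropy of the diagonal-Gaussian posterior up to constants.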
##### Optimize the ELBO using SGD[#](#optimize-the-elbo-using-sgd)
```
def normal_sample(key, shape):
"""Convenience function for quasi-stateful RNG."""
new_key, sub_key = random.split(key)
return new_key, random.normal(sub_key, shape)
normal_sample = jax.jit(normal_sample, static_argnums=(1,))
key = random.PRNGKey(10003)
beta_loc = jnp.zeros(num_features, jnp.float32)
beta_log_scale = jnp.zeros(num_features, jnp.float32)
step_size = 0.01
batch_size = 128
epsilon_shape = (batch_size, num_features)
for i in range(1000):
key, epsilon = normal_sample(key, epsilon_shape)
elbo_val, (beta_loc_grad, beta_log_scale_grad) = elbo_val_and_grad(
beta_loc, beta_log_scale, epsilon)
beta_loc += step_size * beta_loc_grad
beta_log_scale += step_size * beta_log_scale_grad
if i % 10 == 0:
print('{}\t{}'.format(i, elbo_val))
```
```
0	-180.8538818359375
10	-113.06045532226562
20	-102.73727416992188
30	-99.787353515625
40	-98.90898132324219
50	-98.29745483398438
60	-98.18632507324219
70	-97.57972717285156
80	-97.28599548339844
90	-97.46996307373047
100	-97.4771728515625
110	-97.5806655883789
120	-97.4943618774414
130	-97.50271606445312
140	-96.86396026611328
150	-97.44197845458984
160	-97.06941223144531
170	-96.84028625488281
180	-97.21336364746094
190	-97.56503295898438
200	-97.26397705078125
210	-97.11979675292969
220	-97.39595031738281
230	-97.16831970214844
240	-97.118408203125
250	-97.24345397949219
260	-97.29788970947266
270	-96.69286346435547
280	-96.96438598632812
290	-97.30055236816406
300	-96.63591766357422
310	-97.0351791381836
320	-97.52909088134766
330	-97.28811645507812
340	-97.07321166992188
350	-97.15619659423828
360	-97.25881958007812
370	-97.19515228271484
380	-97.13092041015625
390	-97.11726379394531
400	-96.938720703125
410	-97.26676940917969
420	-97.35322570800781
430	-97.21007537841797
440	-97.28434753417969
450	-97.1630859375
460	-97.2612533569336
470	-97.21343994140625
480	-97.23997497558594
490	-97.14913940429688
500	-97.23527526855469
510	-96.93419647216797
520	-97.21209716796875
530	-96.82575988769531
540	-97.01284790039062
550	-96.94175720214844
560	-97.16520690917969
570	-97.29165649414062
580	-97.42941284179688
590	-97.24370574951172
600	-97.15222930908203
610	-97.49844360351562
620	-96.9906997680664
630	-96.88956451416016
640	-96.89968872070312
650	-97.13793182373047
660	-97.43705749511719
670	-96.99235534667969
680	-97.15623474121094
690	-97.1869125366211
700	-97.11160278320312
710	-97.78105163574219
720	-97.23226165771484
730	-97.16206359863281
740	-96.99581909179688
750	-96.6672134399414
760	-97.16795349121094
770	-97.51435089111328
780	-97.28900146484375
790	-96.91226196289062
800	-97.17100524902344
810	-97.29047393798828
820	-97.16242980957031
830	-97.19107055664062
840	-97.56382751464844
850	-97.00194549560547
860	-96.86555480957031
870	-96.76338195800781
880	-96.83660888671875
890	-97.12178039550781
900	-97.09554290771484
910	-97.0682373046875
920	-97.11947631835938
930	-96.87930297851562
940	-97.45624542236328
950	-96.69279479980469
960	-97.29376220703125
970	-97.3353042602539
980	-97.34962463378906
990	-97.09675598144531
```
##### Display the results[#](#display-the-results)
Coverage isn’t quite as good as we might like, but it’s not bad, and nobody said variational inference was exact.
```
figure(figsize=(7, 7))
plot(true_beta, beta_loc, '.', label='Approximated Posterior Means')
plot(true_beta, beta_loc + 2*jnp.exp(beta_log_scale), 'r.', label='Approximated Posterior $2\sigma$ Error Bars')
plot(true_beta, beta_loc - 2*jnp.exp(beta_log_scale), 'r.')
plot_scale = 3
plot([-plot_scale, plot_scale], [-plot_scale, plot_scale], 'k')
xlabel('True beta')
ylabel('Estimated beta')
legend(loc='best')
```
```
<matplotlib.legend.Legend at 0x7f310e01e070>
```
### Using JAX in multi-host and multi-process environments[#](#using-jax-in-multi-host-and-multi-process-environments)
#### Introduction[#](#introduction)
This guide explains how to use JAX in environments such as GPU clusters and [Cloud TPU](https://cloud.google.com/tpu) pods where accelerators are spread across multiple CPU hosts or JAX processes. We’ll refer to these as “multi-process” environments.
This guide specifically focuses on how to use collective communication operations (e.g. [`jax.lax.psum()`](index.html#jax.lax.psum)) in multi-process settings, although other communication methods may be useful too depending on your use case (e.g.
RPC, [mpi4jax](https://github.com/mpi4jax/mpi4jax)). If you’re not already familiar with JAX’s collective operations, we recommend starting with the
[Parallel Evaluation in JAX](index.html#document-jax-101/06-parallelism) notebook. An important requirement of multi-process environments in JAX is direct communication links between accelerators, e.g. the high-speed interconnects for Cloud TPUs or
[NCCL](https://developer.nvidia.com/nccl) for GPUs. These links allow collective operations to run across multiple processes’ worth of accelerators with high performance.
#### Multi-process programming model[#](#multi-process-programming-model)
Key concepts:
* You must run at least one JAX process per host.
* You should initialize the cluster with [`jax.distributed.initialize()`](index.html#jax.distributed.initialize).
* Each process has a distinct set of *local* devices it can address. The *global* devices are the set of all devices across all processes.
* Use standard JAX parallelism APIs like [`pmap()`](index.html#jax.pmap) and
[`xmap()`](index.html#jax.experimental.maps.xmap). Each process “sees” *local* input and output to parallelized functions, but communication inside the computations is *global*.
* Make sure all processes run the same parallel computations in the same order.
##### Launching JAX processes[#](#launching-jax-processes)
Unlike other distributed systems where a single controller node manages many worker nodes, JAX uses a “multi-controller” programming model where each JAX Python process runs independently, sometimes referred to as a [Single Program, Multiple Data (SPMD)](index.html#term-SPMD) model. Generally, the same JAX Python program is run in each process, with only slight differences between each process’s execution (e.g. different processes will load different input data).
Furthermore, **you must manually run your JAX program on each host!** JAX doesn’t automatically start multiple processes from a single program invocation.
(The requirement for multiple processes is why this guide isn’t offered as a notebook – we don’t currently have a good way to manage multiple Python processes from a single notebook.)
##### Initializing the cluster[#](#initializing-the-cluster)
To initialize the cluster, you should call [`jax.distributed.initialize()`](index.html#jax.distributed.initialize) at the start of each process. [`jax.distributed.initialize()`](index.html#jax.distributed.initialize) must be called early in the program, before any JAX computations are executed.
The API [`jax.distributed.initialize()`](index.html#jax.distributed.initialize) takes several arguments, namely:
* `coordinator_address`: the IP address of process 0 in your cluster, together with a port available on that process. Process 0 will start a JAX service exposed via that IP address and port, to which the other processes in the cluster will connect.
* `num_processes`: the number of processes in the cluster
* `process_id`: the ID number of this process, in the range `[0 .. num_processes)`.
* `local_device_ids`: Restricts the visible devices of the current process to
`local_device_ids`.
For example on GPU, a typical usage is:
```
import jax
jax.distributed.initialize(coordinator_address="192.168.0.1:1234",
num_processes=2,
process_id=0)
```
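In practice each process must pass its own `process_id` while agreeing on `coordinator_address` and `num_processes`. A common pattern (a sketch, assuming your launcher exports a rank variable such as Slurm's `SLURM_PROCID`; substitute whatever your setup provides) is:

```
import os
import jax

# The rank variable below is an assumption for illustration; use whatever
# rank/worker index your launcher exports.
jax.distributed.initialize(coordinator_address="192.168.0.1:1234",
                           num_processes=2,
                           process_id=int(os.environ["SLURM_PROCID"]))
```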
On Cloud TPU, Slurm and Open MPI environments, you can simply call [`jax.distributed.initialize()`](index.html#jax.distributed.initialize) with no arguments. Default values for the arguments will be chosen automatically.
When running on GPUs with Slurm and Open MPI, it is assumed that one process is started per GPU, i.e. each process will be assigned only one visible local device. Otherwise it is assumed that one process is started per host,
i.e. each process will be assigned all local devices.
The Open MPI auto-initialization is only used when the JAX processes are launched via `mpirun`/`mpiexec`.
```
import jax
jax.distributed.initialize()
```
On TPU at present calling [`jax.distributed.initialize()`](index.html#jax.distributed.initialize) is optional, but recommended since it enables additional checkpointing and health checking features.
##### Local vs. global devices[#](#local-vs-global-devices)
Before we get to running multi-process computations from your program, it’s important to understand the distinction between *local* and *global* devices.
**A process’s *local* devices are those that it can directly address and launch computations on.** For example, on a GPU cluster, each host can only launch computations on the directly attached GPUs. On a Cloud TPU pod, each host can only launch computations on the 8 TPU cores attached directly to that host (see the
[Cloud TPU System Architecture](https://cloud.google.com/tpu/docs/system-architecture)
documentation for more details). You can see a process’s local devices via
[`jax.local_devices()`](index.html#jax.local_devices).
**The *global* devices are the devices across all processes.** A computation can span devices across processes and perform collective operations via the direct communication links between devices, as long as each process launches the computation on its local devices. You can see all available global devices via
[`jax.devices()`](index.html#jax.devices). A process’s local devices are always a subset of the global devices.
##### Running multi-process computations[#](#running-multi-process-computations)
So how do you actually run a computation involving cross-process communication?
**Use the same parallel evaluation APIs that you would in a single process!**
For example, [`pmap()`](index.html#jax.pmap) can be used to run a parallel computation across multiple processes. (If you’re not already familiar with how to use
[`pmap()`](index.html#jax.pmap) to run across multiple devices within a single process, check out the [Parallel Evaluation in JAX](index.html#document-jax-101/06-parallelism) notebook.) Each process should call the same pmapped function and pass in arguments to be mapped across its *local*
devices (i.e., the pmapped axis size is equal to the number of local devices).
Similarly, the function will return outputs sharded across *local* devices only.
Inside the function, however, collective communication operations are run across all *global* devices, across all processes. Conceptually, this can be thought of as running a pmap over a single array sharded across hosts, where each host
“sees” only its local shard of the input and output.
Here’s an example of multi-process pmap in action:
```
# The following is run in parallel on each host on a GPU cluster or TPU pod slice.
>>> import jax
>>> jax.distributed.initialize() # On GPU, see above for the necessary arguments.
>>> jax.device_count()  # total number of accelerator devices in the cluster
32
>>> jax.local_device_count()  # number of accelerator devices attached to this host
8
# The psum is performed over all mapped devices across the pod slice
>>> xs = jax.numpy.ones(jax.local_device_count())
>>> jax.pmap(lambda x: jax.lax.psum(x, 'i'), axis_name='i')(xs)
ShardedDeviceArray([32., 32., 32., 32., 32., 32., 32., 32.], dtype=float32)
```
[`xmap()`](index.html#jax.experimental.maps.xmap) works similarly when using a physical hardware mesh (see the [xmap tutorial](index.html#document-notebooks/xmap_tutorial) if you’re not familiar with the single-process version). Like [`pmap()`](index.html#jax.pmap), the inputs and outputs are local and any parallel communication inside the xmapped function is global. The mesh is also global.
**It’s very important that all processes run the same cross-process computations in the same order.** Running the same JAX Python program in each process is usually sufficient. Some common pitfalls to look out for that may cause differently-ordered computations despite running the same program:
* Processes passing differently-shaped inputs to the same parallel function can cause hangs or incorrect return values. Differently-shaped inputs are safe so long as they result in identically-shaped per-device data shards across processes; e.g. passing in different leading batch sizes in order to run on different numbers of local devices per process is ok, but having each process pad its batch to a different max example length is not. (A sketch of the safe pattern appears after this list.)
* “Last batch” issues where a parallel function is called in a (training)
loop, and one or more processes exit the loop earlier than the rest. This will cause the rest to hang waiting for the already-finished processes to start the computation.
* Conditions based on non-deterministic ordering of collections can cause processes to hang. For example, iterating over
`set` on current Python versions or `dict` [before Python 3.7](https://mail.python.org/pipermail/python-dev/2017-December/151283.html)
may result in a different ordering on different processes, even with the same insertion order.
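Here is a minimal sketch of the safe pattern from the first pitfall above: processes may differ in their number of local devices, but every device must receive an identically-shaped shard (the per-device batch of 16 and feature size of 128 are assumed values for illustration):

```
import jax
import jax.numpy as jnp

per_device_batch = 16               # must be identical on every process
n_local = jax.local_device_count()  # may differ between processes

# The leading axis equals the local device count, so the *global* batch size
# can differ per process while each device still sees a (16, 128) shard.
xs = jnp.ones((n_local, per_device_batch, 128))
totals = jax.pmap(lambda x: jax.lax.psum(jnp.sum(x), 'i'), axis_name='i')(xs)
```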
### Distributed arrays and automatic parallelization[#](#distributed-arrays-and-automatic-parallelization)
This tutorial discusses parallelism via `jax.Array`, the unified array object model available in JAX v0.4.1 and newer.
Refer to the [`jax.Array migration`](https://jax.readthedocs.io/en/latest/jax_array_migration.html#jax-array-migration) guide to learn how to migrate the existing JAX pre-v0.4.1 codebases to `jax.Array`.
**Note:** The features required by `jax.Array` are not supported by the Colab TPU runtime at this time, but are available on Google Cloud TPU and Kaggle TPU VMs.
```
import os
import functools
from typing import Optional
import numpy as np
import jax
import jax.numpy as jnp
```
⚠️ WARNING: The notebook requires 8 devices to run.
```
if len(jax.local_devices()) < 8:
raise Exception("Notebook requires 8 devices to run")
```
#### Intro and a quick example[#](#intro-and-a-quick-example)
By reading this tutorial notebook, you’ll learn about `jax.Array`, a unified datatype for representing arrays, even with physical storage spanning multiple devices. You’ll also learn about how using `jax.Array`s together with `jax.jit`
can provide automatic compiler-based parallelization.
Before we think step by step, here’s a quick example.
First, we’ll create a `jax.Array` sharded across multiple devices:
```
from jax.experimental import mesh_utils
from jax.sharding import PositionalSharding
```
```
# Create a Sharding object to distribute a value across devices:
sharding = PositionalSharding(mesh_utils.create_device_mesh((8,)))
```
```
# Create an array of random values:
x = jax.random.normal(jax.random.PRNGKey(0), (8192, 8192))
# and use jax.device_put to distribute it across devices:
y = jax.device_put(x, sharding.reshape(4, 2))
jax.debug.visualize_array_sharding(y)
```
```
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
```
Next, we’ll apply a computation to it and visualize how the result values are stored across multiple devices too:
```
z = jnp.sin(y)
jax.debug.visualize_array_sharding(z)
```
```
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
```
The evaluation of the `jnp.sin` application was automatically parallelized across the devices on which the input values (and output values) are stored:
```
# `x` is present on a single device
%timeit -n 5 -r 5 jnp.sin(x).block_until_ready()
```
```
The slowest run took 13.32 times longer than the fastest. This could mean that an intermediate result is being cached
5 loops, best of 5: 9.69 ms per loop
```
```
# `y` is sharded across 8 devices.
%timeit -n 5 -r 5 jnp.sin(y).block_until_ready()
```
```
5 loops, best of 5: 1.86 ms per loop
```
Now let’s look at each of these pieces in more detail!
#### `Sharding` describes how array values are laid out in memory across devices[#](#sharding-describes-how-array-values-are-laid-out-in-memory-across-devices)
##### Sharding basics, and the `PositionalSharding` subclass[#](#sharding-basics-and-the-positionalsharding-subclass)
To parallelize computation across multiple devices, we first must lay out input data across multiple devices.
In JAX, `Sharding` objects describe distributed memory layouts. They can be used with `jax.device_put` to produce a value with distributed layout.
For example, here’s a value with a single-device `Sharding`:
```
import jax

x = jax.random.normal(jax.random.PRNGKey(0), (8192, 8192))
```
```
jax.debug.visualize_array_sharding(x)
```
```
┌───────────────────────┐
│ │
│ │
│ │
│ │
│ TPU 0 │
│ │
│ │
│ │
│ │
└───────────────────────┘
```
Here, we’re using the `jax.debug.visualize_array_sharding` function to show where the value `x` is stored in memory. All of `x` is stored on a single device, so the visualization is pretty boring!
But we can shard `x` across multiple devices by using `jax.device_put` and a `Sharding` object. First, we make a `numpy.ndarray` of `Devices` using `mesh_utils.create_device_mesh`, which takes hardware topology into account for the `Device` order:
```
from jax.experimental import mesh_utils

devices = mesh_utils.create_device_mesh((8,))
```
Then, we create a `PositionalSharding` and use it with `device_put`:
```
from jax.sharding import PositionalSharding
sharding = PositionalSharding(devices)
x = jax.device_put(x, sharding.reshape(8, 1))
jax.debug.visualize_array_sharding(x)
```
```
┌───────────────────────┐
│ TPU 0 │
├───────────────────────┤
│ TPU 1 │
├───────────────────────┤
│ TPU 2 │
├───────────────────────┤
│ TPU 3 │
├───────────────────────┤
│ TPU 6 │
├───────────────────────┤
│ TPU 7 │
├───────────────────────┤
│ TPU 4 │
├───────────────────────┤
│ TPU 5 │
└───────────────────────┘
```
Here `sharding` is a `PositionalSharding` which acts like an array with sets of devices as elements:
```
sharding
```
```
PositionalSharding([{TPU 0} {TPU 1} {TPU 2} {TPU 3} {TPU 6} {TPU 7} {TPU 4} {TPU 5}])
```
By writing `PositionalSharding(ndarray_of_devices)`, we fix the device order and the initial shape. Then we can reshape it:
```
sharding.reshape(8, 1)
```
```
PositionalSharding([[{TPU 0}]
[{TPU 1}]
[{TPU 2}]
[{TPU 3}]
[{TPU 6}]
[{TPU 7}]
[{TPU 4}]
[{TPU 5}]])
```
```
sharding.reshape(4, 2)
```
```
PositionalSharding([[{TPU 0} {TPU 1}]
[{TPU 2} {TPU 3}]
[{TPU 6} {TPU 7}]
[{TPU 4} {TPU 5}]])
```
To use `device_put` with a data array `x`, we can reshape the `sharding` into a shape that is *congruent* with `x.shape`, meaning a shape with the same length as `x.shape` and where each element evenly divides the corresponding element of `x.shape`:
```
from typing import Sequence

def is_congruent(x_shape: Sequence[int], sharding_shape: Sequence[int]) -> bool:
return (len(x_shape) == len(sharding_shape) and
all(d1 % d2 == 0 for d1, d2 in zip(x_shape, sharding_shape)))
```
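For instance, with the `(8192, 8192)` array used in this notebook:

```
assert is_congruent((8192, 8192), (4, 2))      # 4 and 2 both divide 8192
assert not is_congruent((8192, 8192), (4, 3))  # 3 does not divide 8192
assert not is_congruent((8192, 8192), (8,))    # lengths differ
```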
For example, we can reshape `sharding` to have shape `(4, 2)`, then use it in a `device_put`:
```
sharding = sharding.reshape(4, 2)
print(sharding)
```
```
PositionalSharding([[{TPU 0} {TPU 1}]
[{TPU 2} {TPU 3}]
[{TPU 6} {TPU 7}]
[{TPU 4} {TPU 5}]])
```
```
y = jax.device_put(x, sharding)
jax.debug.visualize_array_sharding(y)
```
```
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
```
Here `y` represents the same *value* as `x`, but its shards (i.e. slices) are stored in different devices’ memories.
Different `PositionalSharding` shapes result in different distributed layouts (i.e. shardings) of the result:
```
sharding = sharding.reshape(1, 8)
print(sharding)
```
```
PositionalSharding([[{TPU 0} {TPU 1} {TPU 2} {TPU 3} {TPU 6} {TPU 7} {TPU 4} {TPU 5}]])
```
```
y = jax.device_put(x, sharding)
jax.debug.visualize_array_sharding(y)
```
```
┌───────┬───────┬───────┬───────┬───────┬───────┬───────┬───────┐
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ TPU 0 │ TPU 1 │ TPU 2 │ TPU 3 │ TPU 6 │ TPU 7 │ TPU 4 │ TPU 5 │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
└───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┘
```
In some cases, we don’t just want to store each slice of `x` in a single device’s memory; we might want to *replicate* some slices, meaning storing copies of a slice’s values in multiple devices’ memories.
With `PositionalSharding`, we can express replication by calling the reducer method `replicate`:
```
sharding = sharding.reshape(4, 2)
print(sharding.replicate(axis=0, keepdims=True))
```
```
PositionalSharding([[{TPU 0, 2, 4, 6} {TPU 1, 3, 5, 7}]])
```
```
y = jax.device_put(x, sharding.replicate(axis=0, keepdims=True))
jax.debug.visualize_array_sharding(y)
```
```
┌───────────┬───────────┐
│ │ │
│ │ │
│ │ │
│ │ │
│TPU 0,2,4,6│TPU 1,3,5,7│
│ │ │
│ │ │
│ │ │
│ │ │
└───────────┴───────────┘
```
Here the visualization shows that `x` is sharded two ways along its second dimension (and not sharded along the first dimension), and each of those shards is replicated four ways (i.e. stored in four device memories).
The `replicate` method is analogous to the familiar NumPy array reduction methods like `.sum()` and `.prod()`. It operates along an axis performing a set union. So if `sharding` has shape `(4, 2)`, then `sharding.replicate(0, keepdims=True)` has shape `(1, 2)`, and `sharding.replicate(1, keepdims=True)` has shape `(4, 1)`. Unlike analogous NumPy methods, `keepdims=True` is actually the default, so reduced-over axes aren’t squeezed:
```
print(sharding.replicate(0).shape)
print(sharding.replicate(1).shape)
```
```
(1, 2)
(4, 1)
```
```
y = jax.device_put(x, sharding.replicate(1))
jax.debug.visualize_array_sharding(y)
```
```
┌───────────────────────┐
│ TPU 0,1 │
├───────────────────────┤
│ TPU 2,3 │
├───────────────────────┤
│ TPU 6,7 │
├───────────────────────┤
│ TPU 4,5 │
└───────────────────────┘
```
##### `NamedSharding` gives a way to express shardings with names[#](#namedsharding-gives-a-way-to-express-shardings-with-names)
So far we’ve worked with `PositionalSharding`, but there are alternative ways to express shardings. In fact, `Sharding` is an interface, and any class that implements that interface can be used with functions like `device_put`.
Another convenient way to express sharding is with the `NamedSharding`:
```
from jax.sharding import Mesh
from jax.sharding import PartitionSpec
from jax.sharding import NamedSharding
from jax.experimental import mesh_utils
P = PartitionSpec
devices = mesh_utils.create_device_mesh((4, 2))
mesh = Mesh(devices, axis_names=('a', 'b'))
y = jax.device_put(x, NamedSharding(mesh, P('a', 'b')))
jax.debug.visualize_array_sharding(y)
```
```
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
```
We can define a helper function to make things simpler:
```
devices = mesh_utils.create_device_mesh((4, 2))
default_mesh = Mesh(devices, axis_names=('a', 'b'))
def mesh_sharding(
pspec: PartitionSpec, mesh: Optional[Mesh] = None,
) -> NamedSharding:
if mesh is None:
mesh = default_mesh
return NamedSharding(mesh, pspec)
```
```
y = jax.device_put(x, mesh_sharding(P('a', 'b')))
jax.debug.visualize_array_sharding(y)
```
```
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
```
Here, we use `P('a', 'b')` to express that the first and second axes of `x` should be sharded over the device mesh axes `'a'` and `'b'`, respectively. We can easily switch to `P('b', 'a')` to shard the axes of `x` over different devices:
```
y = jax.device_put(x, mesh_sharding(P('b', 'a')))
jax.debug.visualize_array_sharding(y)
```
```
┌───────┬───────┬───────┬───────┐
│ │ │ │ │
│ TPU 0 │ TPU 2 │ TPU 6 │ TPU 4 │
│ │ │ │ │
│ │ │ │ │
├───────┼───────┼───────┼───────┤
│ │ │ │ │
│ TPU 1 │ TPU 3 │ TPU 7 │ TPU 5 │
│ │ │ │ │
│ │ │ │ │
└───────┴───────┴───────┴───────┘
```
```
# This `None` means that `x` is not sharded on its second dimension,
# and since the Mesh axis name 'b' is not mentioned, shards are
# replicated across it.
y = jax.device_put(x, mesh_sharding(P('a', None)))
jax.debug.visualize_array_sharding(y)
```
```
┌───────────────────────┐
│ TPU 0,1 │
├───────────────────────┤
│ TPU 2,3 │
├───────────────────────┤
│ TPU 6,7 │
├───────────────────────┤
│ TPU 4,5 │
└───────────────────────┘
```
Here, because `P('a', None)` doesn’t mention the `Mesh` axis name `'b'`, we get replication over the axis `'b'`. The `None` here is just acting as a placeholder to line up against the second axis of the value `x`, without expressing sharding over any mesh axis. (As a shorthand, trailing `None`s can be omitted, so that `P('a', None)` means the same thing as `P('a')`. But it doesn’t hurt to be explicit!)
To shard only over the second axis of `x`, we can use a `None` placeholder in the `PartitionSpec`:
```
y = jax.device_put(x, mesh_sharding(P(None, 'b')))
jax.debug.visualize_array_sharding(y)
```
```
┌───────────┬───────────┐
│ │ │
│ │ │
│ │ │
│ │ │
│TPU 0,2,4,6│TPU 1,3,5,7│
│ │ │
│ │ │
│ │ │
│ │ │
└───────────┴───────────┘
```
```
y = jax.device_put(x, mesh_sharding(P(None, 'a')))
jax.debug.visualize_array_sharding(y)
```
```
┌───────┬───────┬───────┬───────┐
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
│TPU 0,1│TPU 2,3│TPU 6,7│TPU 4,5│
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
└───────┴───────┴───────┴───────┘
```
For a fixed mesh, we can even partition one logical axis of `x` over multiple device mesh axes:
```
y = jax.device_put(x, mesh_sharding(P(('a', 'b'), None)))
jax.debug.visualize_array_sharding(y)
```
```
┌───────────────────────┐
│ TPU 0 │
├───────────────────────┤
│ TPU 1 │
├───────────────────────┤
│ TPU 2 │
├───────────────────────┤
│ TPU 3 │
├───────────────────────┤
│ TPU 6 │
├───────────────────────┤
│ TPU 7 │
├───────────────────────┤
│ TPU 4 │
├───────────────────────┤
│ TPU 5 │
└───────────────────────┘
```
Using `NamedSharding` makes it easy to define a device mesh once and give its axes names, then just refer to those names in `PartitionSpec`s for each `device_put` as needed.
#### Computation follows data sharding and is automatically parallelized[#](#computation-follows-data-sharding-and-is-automatically-parallelized)
With sharded input data, the compiler can give us parallel computation. In particular, functions decorated with `jax.jit` can operate over sharded arrays without copying data onto a single device. Instead, computation follows sharding: based on the sharding of the input data, the compiler decides shardings for intermediates and output values, and parallelizes their evaluation, even inserting communication operations as necessary.
For example, the simplest computation is an elementwise one:
```
from jax.experimental import mesh_utils
from jax.sharding import PositionalSharding

sharding = PositionalSharding(mesh_utils.create_device_mesh((8,)))
```
```
x = jax.device_put(x, sharding.reshape(4, 2))
print('input sharding:')
jax.debug.visualize_array_sharding(x)
y = jnp.sin(x)
print('output sharding:')
jax.debug.visualize_array_sharding(y)
```
```
input sharding:
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
output sharding:
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
```
Here for the elementwise operation `jnp.sin` the compiler chose the output sharding to be the same as the input. Moreover, the compiler automatically parallelized the computation, so that each device computed its output shard from its input shard in parallel.
In other words, even though we wrote the `jnp.sin` computation as if a single machine were to execute it, the compiler splits up the computation for us and executes it on multiple devices.
We can do the same for more than just elementwise operations too. Consider a matrix multiplication with sharded inputs:
```
y = jax.device_put(x, sharding.reshape(4, 2).replicate(1))
z = jax.device_put(x, sharding.reshape(4, 2).replicate(0))
print('lhs sharding:')
jax.debug.visualize_array_sharding(y)
print('rhs sharding:')
jax.debug.visualize_array_sharding(z)
w = jnp.dot(y, z)
print('out sharding:')
jax.debug.visualize_array_sharding(w)
```
```
lhs sharding:
┌───────────────────────┐
│ TPU 0,1 │
├───────────────────────┤
│ TPU 2,3 │
├───────────────────────┤
│ TPU 6,7 │
├───────────────────────┤
│ TPU 4,5 │
└───────────────────────┘
rhs sharding:
┌───────────┬───────────┐
│ │ │
│ │ │
│ │ │
│ │ │
│TPU 0,2,4,6│TPU 1,3,5,7│
│ │ │
│ │ │
│ │ │
│ │ │
└───────────┴───────────┘
out sharding:
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
```
Here the compiler chose the output sharding so that it could maximally parallelize the computation: without needing communication, each device already has the input shards it needs to compute its output shard.
How can we be sure it’s actually running in parallel? We can do a simple timing experiment:
```
x_single = jax.device_put(x, jax.devices()[0])
jax.debug.visualize_array_sharding(x_single)
```
```
┌───────────────────────┐
│ │
│ │
│ │
│ │
│ TPU 0 │
│ │
│ │
│ │
│ │
└───────────────────────┘
```
```
np.allclose(jnp.dot(x_single, x_single),
jnp.dot(y, z))
```
```
True
```
```
%timeit -n 5 -r 5 jnp.dot(x_single, x_single).block_until_ready()
```
```
5 loops, best of 5: 19.3 ms per loop
```
```
%timeit -n 5 -r 5 jnp.dot(y, z).block_until_ready()
```
```
5 loops, best of 5: 3.25 ms per loop
```
Even copying a sharded `Array` produces a result with the sharding of the input:
```
w_copy = jnp.copy(w)
jax.debug.visualize_array_sharding(w_copy)
```
```
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
```
So computation follows data placement: when we explicitly shard data with `jax.device_put`, and apply functions to that data, the compiler attempts to parallelize the computation and decide the output sharding. This policy for sharded data is a generalization of [JAX’s policy of following explicit device placement](https://jax.readthedocs.io/en/latest/faq.html#controlling-data-and-computation-placement-on-devices).
##### When explicit shardings disagree, JAX errors[#](#when-explicit-shardings-disagree-jax-errors)
But what if two arguments to a computation are explicitly placed on different sets of devices, or with incompatible device orders?
In these ambiguous cases, an error is raised:
```
import textwrap
from termcolor import colored
def print_exception(e):
name = colored(f'{type(e).__name__}', 'red')
print(textwrap.fill(f'{name}: {str(e)}'))
```
```
sharding1 = PositionalSharding(jax.devices()[:4])
sharding2 = PositionalSharding(jax.devices()[4:])
y = jax.device_put(x, sharding1.reshape(2, 2))
z = jax.device_put(x, sharding2.reshape(2, 2))
try:
    y + z
except ValueError as e:
    print_exception(e)
```
```
ValueError: Devices of all `Array` inputs and outputs should be the same. Got array device ids [0, 1, 2, 3] on platform TPU and another array's device ids [4, 5, 6, 7] on platform TPU
```
```
devices = jax.devices()
permuted_devices = [devices[i] for i in [0, 1, 2, 3, 6, 7, 4, 5]]
sharding1 = PositionalSharding(devices)
sharding2 = PositionalSharding(permuted_devices)
y = jax.device_put(x, sharding1.reshape(4, 2))
z = jax.device_put(x, sharding2.reshape(4, 2))
try:
    y + z
except ValueError as e:
    print_exception(e)
```
```
ValueError: Devices of all `Array` inputs and outputs should be the same. Got array device ids [0, 1, 2, 3, 4, 5, 6, 7] on platform TPU and another array's device ids [0, 1, 2, 3, 6, 7, 4, 5] on platform TPU
```
We say arrays that have been explicitly placed or sharded with `jax.device_put` are *committed* to their device(s), and so won’t be automatically moved. See the [device placement FAQ](https://jax.readthedocs.io/en/latest/faq.html#controlling-data-and-computation-placement-on-devices) for more information.
When arrays are *not* explicitly placed or sharded with `jax.device_put`, they are placed *uncommitted* on the default device.
Unlike committed arrays, uncommitted arrays can be moved and resharded automatically: that is, uncommitted arrays can be arguments to a computation even if other arguments are explicitly placed on different devices.
For example, the output of `jnp.zeros`, `jnp.arange`, and `jnp.array` are uncommitted:
```
y = jax.device_put(x, sharding1.reshape(4, 2))
y + jnp.ones_like(y)
y + jnp.arange(y.size).reshape(y.shape)
print('no error!')
```
```
no error!
```
#### Constraining shardings of intermediates in `jit`ted code[#](#constraining-shardings-of-intermediates-in-jitted-code)
While the compiler will attempt to decide how a function’s intermediate values and outputs should be sharded, we can also give it hints using `jax.lax.with_sharding_constraint`. Using `jax.lax.with_sharding_constraint` is much like `jax.device_put`, except we use it inside staged-out (i.e. `jit`-decorated) functions:
```
sharding = PositionalSharding(mesh_utils.create_device_mesh((8,)))
```
```
x = jax.random.normal(jax.random.PRNGKey(0), (8192, 8192))
x = jax.device_put(x, sharding.reshape(4, 2))
```
```
@jax.jit
def f(x):
x = x + 1
y = jax.lax.with_sharding_constraint(x, sharding.reshape(2, 4))
return y
```
```
jax.debug.visualize_array_sharding(x)
y = f(x)
jax.debug.visualize_array_sharding(y)
```
```
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
┌───────┬───────┬───────┬───────┐
│ │ │ │ │
│ TPU 0 │ TPU 1 │ TPU 2 │ TPU 3 │
│ │ │ │ │
│ │ │ │ │
├───────┼───────┼───────┼───────┤
│ │ │ │ │
│ TPU 6 │ TPU 7 │ TPU 4 │ TPU 5 │
│ │ │ │ │
│ │ │ │ │
└───────┴───────┴───────┴───────┘
```
```
@jax.jit
def f(x):
x = x + 1
y = jax.lax.with_sharding_constraint(x, sharding.replicate())
return y
```
```
jax.debug.visualize_array_sharding(x)
y = f(x)
jax.debug.visualize_array_sharding(y)
```
```
┌──────────┬──────────┐
│ TPU 0 │ TPU 1 │
├──────────┼──────────┤
│ TPU 2 │ TPU 3 │
├──────────┼──────────┤
│ TPU 6 │ TPU 7 │
├──────────┼──────────┤
│ TPU 4 │ TPU 5 │
└──────────┴──────────┘
┌───────────────────────┐
│ │
│ │
│ │
│ │
│ TPU 0,1,2,3,4,5,6,7 │
│ │
│ │
│ │
│ │
└───────────────────────┘
```
By adding `with_sharding_constraint`, we’ve constrained the sharding of the output. In addition to respecting the annotation on a particular intermediate, the compiler will use annotations to decide shardings for other values.
It’s often a good practice to annotate the outputs of computations, for example based on how the values are ultimately consumed.
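For example, here is a minimal sketch of annotating a function’s output (reusing the `sharding` object defined above): the compiler remains free to choose shardings for intermediates, but the result comes back in the requested layout.

```
@jax.jit
def g(x):
    y = jnp.sin(x) + 1
    # Constrain only the output layout; intermediate shardings are left to the compiler.
    return jax.lax.with_sharding_constraint(y, sharding.reshape(4, 2))
```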
#### Examples: neural networks[#](#examples-neural-networks)
**⚠️ WARNING: The following is meant to be a simple demonstration of automatic sharding propagation with `jax.Array`, but it may not reflect best practices for real examples.** For instance, real examples may require more use of `with_sharding_constraint`.
We can use `jax.device_put` and `jax.jit`’s computation-follows-sharding features to parallelize computation in neural networks. Here are some simple examples, based on this basic neural network:
```
import jax
import jax.numpy as jnp
```
```
def predict(params, inputs):
for W, b in params:
outputs = jnp.dot(inputs, W) + b
inputs = jnp.maximum(outputs, 0)
return outputs
def loss(params, batch):
inputs, targets = batch
predictions = predict(params, inputs)
return jnp.mean(jnp.sum((predictions - targets)**2, axis=-1))
```
```
loss_jit = jax.jit(loss)
gradfun = jax.jit(jax.grad(loss))
```
```
def init_layer(key, n_in, n_out):
k1, k2 = jax.random.split(key)
W = jax.random.normal(k1, (n_in, n_out)) / jnp.sqrt(n_in)
b = jax.random.normal(k2, (n_out,))
return W, b
def init_model(key, layer_sizes, batch_size):
key, *keys = jax.random.split(key, len(layer_sizes))
params = list(map(init_layer, keys, layer_sizes[:-1], layer_sizes[1:]))
key, *keys = jax.random.split(key, 3)
inputs = jax.random.normal(keys[0], (batch_size, layer_sizes[0]))
targets = jax.random.normal(keys[1], (batch_size, layer_sizes[-1]))
return params, (inputs, targets)
layer_sizes = [784, 8192, 8192, 8192, 10]
batch_size = 8192
params, batch = init_model(jax.random.PRNGKey(0), layer_sizes, batch_size)
```
##### 8-way batch data parallelism[#](#way-batch-data-parallelism)
```
sharding = PositionalSharding(jax.devices()).reshape(8, 1)
```
```
batch = jax.device_put(batch, sharding)
params = jax.device_put(params, sharding.replicate())
```
```
loss_jit(params, batch)
```
```
Array(23.469475, dtype=float32)
```
```
step_size = 1e-5
for _ in range(30):
grads = gradfun(params, batch)
params = [(W - step_size * dW, b - step_size * db)
for (W, b), (dW, db) in zip(params, grads)]
print(loss_jit(params, batch))
```
```
10.760101
```
```
%timeit -n 5 -r 5 gradfun(params, batch)[0][0].block_until_ready()
```
```
5 loops, best of 5: 26.3 ms per loop
```
```
batch_single = jax.device_put(batch, jax.devices()[0])
params_single = jax.device_put(params, jax.devices()[0])
```
```
%timeit -n 5 -r 5 gradfun(params_single, batch_single)[0][0].block_until_ready()
```
```
5 loops, best of 5: 122 ms per loop
```
##### 4-way batch data parallelism and 2-way model tensor parallelism[#](#way-batch-data-parallelism-and-2-way-model-tensor-parallelism)
```
sharding = sharding.reshape(4, 2)
```
```
batch = jax.device_put(batch, sharding.replicate(1))
jax.debug.visualize_array_sharding(batch[0])
jax.debug.visualize_array_sharding(batch[1])
```
```
┌───────┐
│TPU 0,1│
├───────┤
│TPU 2,3│
├───────┤
│TPU 4,5│
├───────┤
│TPU 6,7│
└───────┘
┌───────┐
│TPU 0,1│
├───────┤
│TPU 2,3│
├───────┤
│TPU 4,5│
├───────┤
│TPU 6,7│
└───────┘
```
```
(W1, b1), (W2, b2), (W3, b3), (W4, b4) = params
W1 = jax.device_put(W1, sharding.replicate())
b1 = jax.device_put(b1, sharding.replicate())
W2 = jax.device_put(W2, sharding.replicate(0))
b2 = jax.device_put(b2, sharding.replicate(0))
W3 = jax.device_put(W3, sharding.replicate(0).T)
b3 = jax.device_put(b3, sharding.replicate())
W4 = jax.device_put(W4, sharding.replicate())
b4 = jax.device_put(b4, sharding.replicate())
params = (W1, b1), (W2, b2), (W3, b3), (W4, b4)
```
```
jax.debug.visualize_array_sharding(W2)
```
```
┌───────────┬───────────┐
│ │ │
│ │ │
│ │ │
│ │ │
│TPU 0,2,4,6│TPU 1,3,5,7│
│ │ │
│ │ │
│ │ │
│ │ │
└───────────┴───────────┘
```
```
jax.debug.visualize_array_sharding(W3)
```
```
┌───────────────────────┐
│ │
│ TPU 0,2,4,6 │
│ │
│ │
├───────────────────────┤
│ │
│ TPU 1,3,5,7 │
│ │
│ │
└───────────────────────┘
```
```
print(loss_jit(params, batch))
```
```
10.760103
```
```
step_size = 1e-5
for _ in range(30):
grads = gradfun(params, batch)
params = [(W - step_size * dW, b - step_size * db)
for (W, b), (dW, db) in zip(params, grads)]
```
```
print(loss_jit(params, batch))
```
```
10.752466
```
```
(W1, b1), (W2, b2), (W3, b3), (W4, b4) = params
jax.debug.visualize_array_sharding(W2)
jax.debug.visualize_array_sharding(W3)
```
```
┌───────────┬───────────┐
│ │ │
│ │ │
│ │ │
│ │ │
│TPU 0,2,4,6│TPU 1,3,5,7│
│ │ │
│ │ │
│ │ │
│ │ │
└───────────┴───────────┘
┌───────────────────────┐
│ │
│ TPU 0,2,4,6 │
│ │
│ │
├───────────────────────┤
│ │
│ TPU 1,3,5,7 │
│ │
│ │
└───────────────────────┘
```
```
%timeit -n 10 -r 10 gradfun(params, batch)[0][0].block_until_ready()
```
```
10 loops, best of 10: 30.5 ms per loop
```
#### Sharp bits[#](#sharp-bits)
##### Generating random numbers[#](#generating-random-numbers)
JAX comes with a functional, deterministic [random number generator](https://jax.readthedocs.io/en/latest/jep/263-prng.html). It underlies the various sampling functions in the [`jax.random` module](https://jax.readthedocs.io/en/latest/jax.random.html), such as `jax.random.uniform`.
JAX’s random numbers are produced by a counter-based PRNG, so in principle, random number generation should be a pure map over counter values. A pure map is trivially partitionable: it should require no cross-device communication, nor any redundant computation across devices.
However, the existing stable RNG implementation is not automatically partitionable, for historical reasons.
Consider the following example, where a function draws random uniform numbers and adds them to the input, elementwise:
```
@jax.jit
def f(key, x):
numbers = jax.random.uniform(key, x.shape)
return x + numbers
key = jax.random.PRNGKey(42)
x_sharding = jax.sharding.PositionalSharding(jax.devices())
x = jax.device_put(jnp.arange(24), x_sharding)
```
On a partitioned input, the function `f` produces output that is also partitioned:
```
jax.debug.visualize_array_sharding(f(key, x))
```
```
┌───────┬───────┬───────┬───────┬───────┬───────┬───────┬───────┐
│ TPU 0 │ TPU 1 │ TPU 2 │ TPU 3 │ TPU 4 │ TPU 5 │ TPU 6 │ TPU 7 │
└───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┘
```
But if we inspect the compiled computation for `f` on this partitioned input, we see that it does involve some communication:
```
f_exe = f.lower(key, x).compile()
print('Communicating?', 'collective-permute' in f_exe.as_text())
```
```
Communicating? True
```
One way to work around this is to configure JAX with the experimental upgrade flag `jax_threefry_partitionable`. With the flag on, the “collective permute” operation is now gone from the compiled computation:
```
jax.config.update('jax_threefry_partitionable', True)
f_exe = f.lower(key, x).compile()
print('Communicating?', 'collective-permute' in f_exe.as_text())
```
```
Communicating? False
```
The output is still partitioned:
```
jax.debug.visualize_array_sharding(f(key, x))
```
```
┌───────┬───────┬───────┬───────┬───────┬───────┬───────┬───────┐
│ TPU 0 │ TPU 1 │ TPU 2 │ TPU 3 │ TPU 4 │ TPU 5 │ TPU 6 │ TPU 7 │
└───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┘
```
One caveat to the `jax_threefry_partitionable` option, however, is that *the random values produced may be different than without the flag set*, even though they were generated by the same random key:
```
jax.config.update('jax_threefry_partitionable', False)
print('Stable:')
print(f(key, x))
print()
jax.config.update('jax_threefry_partitionable', True)
print('Partitionable:')
print(f(key, x))
```
```
Stable:
[ 0.72503686 1.8532515 2.983416 3.083253 4.0332246 5.4782867
6.1720605 7.6900277 8.602836 9.810046 10.861367 11.907651
12.330483 13.456195 14.808557 15.960099 16.067581 17.739723
18.335474 19.46401 20.390276 21.116539 22.858128 23.223194 ]
Partitionable:
[ 0.48870957 1.6797972 2.6162715 3.561016 4.4506445 5.585866
6.0748096 7.775133 8.698959 9.818634 10.350306 11.87282
12.925881 13.86013 14.477554 15.818481 16.711355 17.586697
18.073738 19.777622 20.404566 21.119123 22.026257 23.63918 ]
```
In `jax_threefry_partitionable` mode, the JAX PRNG remains deterministic, but its implementation is new (and under development). The random values generated for a given key will be the same at a given JAX version (or a given commit on the `main` branch), but may vary across releases.
### Named axes and easy-to-revise parallelism with `xmap`[#](#named-axes-and-easy-to-revise-parallelism-with-xmap)
***UPDATE:*** The recommended ways to do multi-device programming in JAX are using: 1) [`jit` (automatic partitioning of computation and `jax.Array` sharding)](https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html); and/or 2) [`shard_map` (manual data sharding)](https://jax.readthedocs.io/en/latest/jep/14273-shard-map.html). Learn more in [Why don’t `pmap` or `xmap` already solve this?](https://jax.readthedocs.io/en/latest/jep/14273-shard-map.html#why-don-t-pmap-or-xmap-already-solve-this) in the [`shard_map` JEP document](https://jax.readthedocs.io/en/latest/jep/14273-shard-map.html).
This tutorial introduces JAX `xmap` (`jax.experimental.maps.xmap`) and the named-axis programming model that comes with it. By reading this, you’ll learn how to write error-avoiding, self-documenting functions using named axes, then control how they’re executed on hardware at any scale, from your laptop CPU to the largest TPU supercomputer.
We start with a toy neural network example.
#### From positions to names in a toy neural network[#](#from-positions-to-names-in-a-toy-neural-network)
Presentations on JAX often start with a simple neural network prediction function and loss, written in pure NumPy. Here’s a simple network with one hidden layer:
```
import os
os.environ["XLA_FLAGS"] = '--xla_force_host_platform_device_count=8'  # Use 8 CPU devices
```
```
import jax.numpy as jnp
from jax import lax
from jax.nn import one_hot, relu
from jax.scipy.special import logsumexp
def predict(w1, w2, images):
hiddens = relu(jnp.dot(images, w1))
logits = jnp.dot(hiddens, w2)
return logits - logsumexp(logits, axis=1, keepdims=True)
def loss(w1, w2, images, labels):
predictions = predict(w1, w2, images)
targets = one_hot(labels, predictions.shape[-1])
losses = jnp.sum(targets * predictions, axis=1)
return -jnp.mean(losses, axis=0)
```
We can then initialize inputs with the right shapes and compute the loss value:
```
w1 = jnp.zeros((784, 512))
w2 = jnp.zeros((512, 10))
images = jnp.zeros((128, 784))
labels = jnp.zeros(128, dtype=jnp.int32)
print(loss(w1, w2, images, labels))
```
Here’s how we might write the same function using named axes. Don’t worry if you can’t follow the API details. They are not important now and we will explain everything step-by-step afterwards. This is just to show you what you can do with xmap before you learn the details!
```
def named_predict(w1, w2, image):
hidden = relu(lax.pdot(image, w1, 'inputs'))
logits = lax.pdot(hidden, w2, 'hidden')
return logits - logsumexp(logits, 'classes')
def named_loss(w1, w2, images, labels):
predictions = named_predict(w1, w2, images)
num_classes = lax.psum(1, 'classes')
targets = one_hot(labels, num_classes, axis='classes')
losses = lax.psum(targets * predictions, 'classes')
return -lax.pmean(losses, 'batch')
```
This code is simpler: we don’t need to worry about axis order when calling functions like `jnp.dot`, or remember which axis position to reduce over with `logsumexp`, `jnp.sum`, or `jnp.mean`.
But the real win is that names let us use `xmap` to control our function’s execution. At its simplest, `xmap` will just vectorize over all named axes, so that the function is executed just like its positional-axis counterpart:
```
from jax.experimental.maps import xmap
in_axes = [['inputs', 'hidden', ...],
['hidden', 'classes', ...],
['batch', 'inputs', ...],
['batch', ...]]
loss = xmap(named_loss, in_axes=in_axes, out_axes=[...])
print(loss(w1, w2, images, labels))
```
But on a whim we can decide to parallelize over the batch axis:
```
import jax
import numpy as np
from jax.sharding import Mesh
loss = xmap(named_loss, in_axes=in_axes, out_axes=[...],
axis_resources={'batch': 'x'})
devices = np.array(jax.local_devices())
with Mesh(devices, ('x',)):
print(loss(w1, w2, images, labels))
```
Or we might want to perform model parallelism over the hidden axis:
```
loss = xmap(named_loss, in_axes=in_axes, out_axes=[...],
axis_resources={'hidden': 'x'})
devices = np.array(jax.local_devices())
with Mesh(devices, ('x',)):
print(loss(w1, w2, images, labels))
```
Or we might want to do both model and batch data parallelism at once:
```
loss = xmap(named_loss, in_axes=in_axes, out_axes=[...],
axis_resources={'batch': 'x', 'hidden': 'y'})
devices = np.array(jax.local_devices()).reshape((4, 2))
with Mesh(devices, ('x', 'y')):
print(loss(w1, w2, images, labels))
```
With `xmap`, we can revise our parallelism strategy on a dime, without needing to rewrite our neural network function.
#### Preliminaries[#](#preliminaries)
```
import jax.numpy as jnp
from jax import lax
from functools import partial
import jax
import numpy as np
```
To better illustrate the new programming model, we make extensive use of custom type annotations in this notebook. The annotations have no effect on how the code evaluates and will be unchecked for now.
```
from typing import Any, Callable
class ArrayType:
def __getitem__(self, idx):
        return Any

f32 = ArrayType()
i32 = ArrayType()
```
#### Tensors with named axes[#](#tensors-with-named-axes)
The NumPy programming model is based around nd-arrays. Each nd-array can be associated with a two-component type:
* the element type (accessible via the `.dtype` attribute)
* shape (a tuple of integers given by `.shape`).
Using our little type annotation language, we will write these types as `dtype[shape_tuple]`.
> For example, a 5x7x4 array of 32-bit floating point numbers will be denoted as `f32[(5, 7, 4)]`.
Here is a small example that shows how the annotations can demonstrate the way shapes propagate through a simple NumPy program:
```
x: f32[(2, 3)] = np.ones((2, 3), dtype=np.float32)
y: f32[(3, 5)] = np.ones((3, 5), dtype=np.float32)
z: f32[(2, 5)] = x.dot(y)  # matrix multiplication
w: f32[(7, 1, 5)] = np.ones((7, 1, 5), dtype=np.float32)
q: f32[(7, 2, 5)] = z + w # broadcasting
```
The extension we propose is to add another component of array type: a `named_shape`, mapping axis names (arbitrary hashable objects, with strings being a common choice) to integer sizes. Most importantly, because each axis has a name, their order has no meaning. That is, a named shape of `{'a': 2, 'b': 5}` is indistinguishable from a named shape of `{'b': 5, 'a': 2}`.
> This is not an entirely new idea. Some good examples of where using named axes has been proposed in the past are: [Mesh TensorFlow](https://github.com/tensorflow/mesh), [Tensor Considered Harmful](http://nlp.seas.harvard.edu/NamedTensor) manifesto as well as the [xarray](http://xarray.pydata.org/en/stable/) and [einops](http://einops.rocks/) packages. Keep in mind that many of those are slightly different in that they do assign an order to the named axes, but they are unordered in JAX.
From now on we will allow the type annotations to have two components, the first one still being the value’s `.shape`, while the second one will be the `.named_shape`.
```
e: f32[(5, 7), {'batch': 20, 'sequence': 30}]
# e.shape == (5, 7)
# e.named_shape == {'batch': 20, 'sequence': 30} == {'sequence': 30, 'batch': 20}
```
While we don’t modify the meaning of `.ndim` (which is always equal to `len(shape)`) or `.size` (equal to the product of `shape`), we keep them that way solely for backward-compatibility reasons. The true rank of an array that has non-empty named axes is `len(shape) + len(named_shape)`. The true number of elements stored in such an array is equal to the product of sizes of all dimensions, both positional and named.
#### Introducing and eliminating named axes[#](#introducing-and-eliminating-named-axes)
But how does one create such arrays, if all top-level JAX operations work in the NumPy model with purely positional axes? While this constraint could be lifted at some point, for the time being the only way to introduce named axes is to use `xmap`.
`xmap` can be thought of as an adapter that takes in arrays with positional axes, makes some of them named (as specified by `in_axes`), and calls the function that it wraps. Once the wrapped function returns arrays, all named axes appearing in those are converted back to positional axes (as specified by `out_axes`).
`in_axes` should have a structure that matches the signature of the `xmap`ped function arguments, except with all places where array arguments would be replaced by an *axis mapping*. There are two ways in which axis mappings can be specified:
* as dictionaries mapping positional axes to axis names (e.g. `{0: 'x', 2: 'y'}`); and
* as lists of axis names terminated by the ellipsis object (e.g. `['a', 'b', ...]`), indicating that a prefix of positional dimensions are to be mapped to given names.
`out_axes` are similar, except that their structure has to match the return signature of the `xmap`ped function (but again, with all arrays replaced by axes mappings).
For each array argument, all positional axes mentioned in its respective `in_axes` axis mapping are converted to named axes. For each array result, all named axes are inserted in the positions indicated by its respective `out_axes`.
```
from jax.experimental.maps import xmap
def my_func(x: f32[(5,), {'batch': 20}]) -> f32[(5,), {'batch': 20}]:
assert x.shape == (5,)
# assert x.named_shape == {'batch': 20} # TODO: Implement named_shape
return x
x: f32[(20, 5)] = jnp.zeros((20, 5), dtype=np.float32)
f = xmap(my_func,
in_axes={0: 'batch'}, # Name the first axis of the only argument 'batch'
         out_axes={1: 'batch'})  # Place the 'batch' named axis of the output as the second positional axis

y: f32[(5, 20)] = f(x)
assert (y == x.T).all() # The first dimension was removed from x and then re-inserted as the last dim
```
While this might seem like a handful at first, if you’ve seen code that uses `jnp.einsum` you are already familiar with this approach. The `einsum` function interprets an expression such as `nk,km->nm` assigning names (each letter is considered a separate name) to positional axes, performing necessary broadcasts and reductions, and finally putting back the results in positional axes, according to the order given by the right-hand side of the `->` separator. While `einsum` never lets you interact with named axes directly, they do appear naturally in its implementation. `xmap` is a *generalized einsum* because named axes are now first-class and you get to implement the function that can manipulate them.
Continuing this analogy, `xmap(my_func, ...)` from the above example is equivalent to `jnp.einsum('bx->xb')`. But of course not every `xmap`ped function will have an equivalent `einsum`.
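As a quick sanity check of that analogy, here is a minimal sketch (the identity function and test array below are illustrative, not part of the running example):
```
import numpy as np
import jax.numpy as jnp
from jax.experimental.maps import xmap

# Naming the first axis 'b' and emitting it as the second axis is a transpose:
a = jnp.arange(100, dtype=np.float32).reshape((20, 5))
transpose_xmap = xmap(lambda v: v, in_axes={0: 'b'}, out_axes={1: 'b'})
np.testing.assert_allclose(transpose_xmap(a), jnp.einsum('bx->xb', a))
```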
One more similarity with `einsum` is that whenever a name is reused for multiple axes, they do have to have the same size:
```
x = jnp.arange(5)
y = jnp.arange(7)
try:
jnp.einsum('i,i->i', x, y)
except Exception as e:
print('einsum:', e)
try:
xmap(lambda x, y: x * y,
in_axes=(['i', ...], ['i', ...]),
out_axes=['i', ...])(x, y)
except Exception as e:
print('xmap:', e)
```
#### Named axis propagation[#](#named-axis-propagation)
We now know how named axes are introduced and eliminated, but what are they good for? How do they propagate throughout the program? Let’s explore a few examples.
##### Interactions with positional axes[#](#interactions-with-positional-axes)
First rule: named axes never implicitly interact with positional axes. Any function that’s written without named axes in mind can always be invoked with inputs that have named dimensions. The result is the same as if `vmap` was applied on a per-named-axis basis.
```
from jax.scipy.linalg import expm_frechet
# Any other function that does not assume existence of any named axes would do too,
# at least as long as it matches this type signature:
expm_frechet: Callable[[f32[(3, 3)], f32[(3, 3)]], f32[(3, 3)]]
f = partial(expm_frechet, compute_expm=False)
# Each A with each E
batch_A = jnp.ones((5, 3, 3), dtype=np.float32)
batch_E = jnp.ones((5, 3, 3), dtype=np.float32)
batch_AE = xmap(f,
in_axes=(['b', ...], ['b', ...]), # Map first axes of both inputs to 'b'
                out_axes=['b', ...])(batch_A, batch_E)  # Place 'b' as the first positional axis in the result

for i in range(5):
np.testing.assert_allclose(batch_AE[i], f(batch_A[i], batch_E[i]))

# All-pairs of As and Es
batch_A = jnp.ones((7, 3, 3), dtype=np.float32)
batch_E = jnp.ones((5, 3, 3), dtype=np.float32)
batch_AE = xmap(f,
in_axes=(['ba', ...], ['be', ...]), # Map first axes of inputs to 'ba' and 'be' respectively
out_axes=['ba', 'be', ...])(batch_A, batch_E) # Prefix all positional dimensions of output with 'ba' and 'be'
for i in range(7):
for j in range(5):
np.testing.assert_allclose(batch_AE[i,j], f(batch_A[i], batch_E[j]))
```
##### Broadcasting[#](#broadcasting)
Secondly, named axes are broadcast *by name*, and every existing NumPy (and almost every JAX) operator implicitly broadcasts the named dimensions. Whenever a standard NumPy function is called with arrays with named axes, the NumPy function determines the positional shape of the result array, while the named shape becomes a union of all named shapes of its inputs. Analyze the following example to understand how the axes propagate:
```
def named_broadcasting(
x: f32[(2, 1, 1), {'a': 2}],
y: f32[(1, 3, 1), {'b': 3}],
z: f32[(1, 1, 5), {'c': 5}]) \
-> f32[(2, 3, 5), {'a': 2, 'b': 3, 'c': 5}]:
i: f32[(2, 3, 1), {'a': 2, 'b': 3}] = x + y
j: f32[(1, 3, 5), {'b': 3, 'c': 5}] = y + z
k: f32[(2, 3, 5), {'a': 2, 'b': 3, 'c': 5}] = i + j
return k
x = jnp.ones((2, 2, 1, 1), dtype=np.float32)
y = jnp.ones((3, 1, 3, 1), dtype=np.float32)
z = jnp.ones((5, 1, 1, 5), dtype=np.float32)
k = xmap(named_broadcasting,
in_axes=(['a', ...], ['b', ...], ['c', ...]),
out_axes=['a', 'b', 'c', ...])(x, y, z)
assert k.shape == (2, 3, 5, 2, 3, 5)
```
To recap, the named shape of the result of an expression such as `i + j` with `i` having a named shape of `{'a': 2, 'b': 3}` and `j` of `{'b': 3, 'c': 5}` is `{'a': 2, 'b': 3, 'c': 5}`. The `'b'` axis is present in both inputs, so no broadcasting is necessary, while `'a'` and `'c'` occur in only one of the two inputs, causing the other one to get broadcast along the axis missing in its named shape.
No shape errors can occur when operating over named axes, because `xmap` enforces that a single name is associated with a single size inside its body.
> While the rule for broadcasting named axes might seem like an arbitrary extension of the NumPy model, it is actually consistent with it.
> Broadcasting first looks for pairs of dimensions it considers as equivalent in both operands. For all matched pairs, it asserts that both sizes are equal or one of them is 1. All unpaired dimensions are carried over to the result.
> Now, in the positional world the way NumPy broadcasting chooses to form the pairs is by right-aligning the shapes. But our axes are named, so there is a straightforward way of finding equivalent axes: just check their names for equality!
##### Reductions[#](#reductions)
But named axes are not only good for batching! In fact, our goal is that named axes should be equivalent to positional axes. In particular, every NumPy function that takes in positional axes as arguments should also accept named axes.
> The paragraph above is aspirational and the set of NumPy functions that do accept named axes is relatively limited. At the moment named axes are only supported in:
> * `jnp.sum`, `jnp.max`, `jnp.min`
Reductions are a good example:
```
def named_broadcast_and_reduce(
x: f32[(), {'x': 2}],
y: f32[(5,), {'y': 4}]) \
-> f32[()]:
z: f32[(5,), {'x': 2, 'y': 4}] = x + y
w: f32[()] = jnp.sum(z, axis=(0, 'x', 'y'))
# We could also reduce in steps:
# w0 : f32[(), {'x': 2, 'y': 4}] = jnp.sum(z, 0) # eliminate the positional axis
# w0x: f32[(), {'y': 4}] = jnp.sum(w0, 'x') # eliminate the `x` axis
# w : f32[()] = jnp.sum(w0x, 'y') # eliminate the `y` axis
return w
positional_broadcast_and_reduce: Callable[[f32[(2,)], f32[(5, 4)]], f32[()]]
positional_broadcast_and_reduce = \
xmap(named_broadcast_and_reduce,
in_axes=({0: 'x'}, {1: 'y'}),
out_axes={})
positional_broadcast_and_reduce(jnp.arange(2, dtype=np.float32),
jnp.arange(20, dtype=np.float32).reshape((5, 4)))
```
##### `einsum`[#](#einsum)
Similarly to how we have extended reductions with support for named axes, we’ve also made it possible to contract over named axes using `jnp.einsum`.
Operands and results still use a convention of one letter per positional axis, but now it is also possible to mention named axes in curly braces. For example, `n{b,k}` implies that a value will have a single positional dimension `n` and named dimensions `b` and `k` (their order doesn’t matter). Following the usual einsum semantics, any named axes that appear in inputs, but do not appear in an output will be contracted (summed after all multiplications are performed).
It is acceptable to omit a named dimension from *all arguments and the result* in which case it will be treated according to the usual broadcasting semantics. However, it is not acceptable to mention a named axis in one argument that has it in its named shape and skip it in another argument that also has it in its named shape. Of course, skipping it in the arguments that don’t have it is required.
> NOTE: This invariant is **unchecked** at the moment (it is still work-in-progress). Such axis skipping will result in undefined behavior.
> At the moment `jnp.einsum` with named axes only supports two inputs and a single result.
```
def named_batch_matrix_single_matrix(
x: f32[(5,), {'b': 20, 'k': 7}],
y: f32[(), {'k': 7, 'm': 11}]) \
-> f32[(5,), {'b': 20, 'm': 11}]:
return jnp.einsum('n{b,k},{k,m}->n{b,m}', x, y)
x = jnp.ones((20, 5, 7))
y = jnp.ones((7, 11))
z = jnp.einsum('bnk,km->bnm', x, y)
zx = xmap(named_batch_matrix_single_matrix,
in_axes=[{0: 'b', 2: 'k'}, ['k', 'm', ...]],
out_axes={0: 'b', 2: 'm'})(x, y)
np.testing.assert_allclose(z, zx)
```
The example above is admittedly no clearer than using `jnp.einsum` directly. But contractions over named axes are a crucial component of larger applications such as Transformer models and this is only meant to be an exercise to show you how the names propagate.
##### Collectives[#](#collectives)
Finally, all collectives that could have been used with `pmap`ped functions also work with named axes. As we’ll show later, `xmap` can be used as a drop-in replacement for `pmap` that makes programming for multi-dimensional hardware meshes much easier.
```
x = jnp.arange(8)
xmap(lambda x: lax.pshuffle(x, 'i', list(reversed(range(8)))),
in_axes=['i', ...], out_axes=['i', ...])(x)
```
#### Parallelism support[#](#parallelism-support)
While the new programming paradigm can be nice at times, the killer feature of `xmap` is its ability to parallelize code over supercomputer-scale hardware meshes!
> Named axes are the secret sauce that makes all this possible, thanks to the carefully tuned rules that describe their propagation. Good support for partitioning in a purely positional programming model is notoriously difficult. Positional axes are usually disposable and it is hard to keep track of the way axis partitioning propagates through the program. As you’ll see below, named axes enable us to define a straightforward correspondence between their names and hardware resources, making it easy to reason about the way different values end up partitioned.
In all the previous examples, we haven’t said a word about parallelism and for a good reason. By default `xmap` doesn’t perform any parallelization and vectorizes the computation in the same way `vmap` does (i.e. it still executes on a single device). To partition the computation over multiple accelerators we have to introduce one more concept: *resource axes*.
The basic idea is that logical axes (the ones that appear in named shapes) assume that we have abundant hardware and memory, but before the program is to be executed, they have to be placed somewhere. The default (`vmap`-like) evaluation style pays a high memory cost on the default JAX device. By mapping logical axes to (one or more) resource axes through the `axis_resources` argument, we can control how `xmap` evaluates the computation.
```
x = jnp.ones((2048, 2048))
local_matmul = xmap(jnp.vdot,
in_axes=({0: 'left'}, {1: 'right'}),
out_axes=['left', 'right', ...])
distr_matmul = xmap(jnp.vdot,
in_axes=({0: 'left'}, {1: 'right'}),
out_axes=['left', 'right', ...],
axis_resources={'left': 'x', 'right': 'y'})
```
Both `local_matmul` and `distr_matmul` implement matrix multiplication, but `distr_matmul` will additionally partition the `left` and `right` logical axes over the `x` and `y` resource axes.
##### But… where do those resource names come from?[#](#but-where-do-those-resource-names-come-from)
Well, it depends, but one good choice is… a hardware mesh!
For our purposes a mesh is an nd-array of devices with named axes. But, because NumPy doesn’t support named axes (that’s our extension!), the meshes are represented by a pair of an nd-array of JAX device objects (as obtained from `jax.devices()` or `jax.local_devices()`) and a tuple of resource axis names of length matching the rank of the array.
```
axis_names = ('x', 'y')
mesh_devices = np.array(jax.devices()).reshape((2, 4))
assert len(axis_names) == mesh_devices.ndim
mesh_def = (mesh_devices, axis_names)
mesh_def
```
The mesh axis names are exactly the names of resources that named axes can be mapped to. But just creating a mesh definition won’t make the resource names visible to `distr_matmul`:
```
try:
distr_matmul(x, x)
except Exception as e:
print(e)
```
To introduce the resources in a scope, use the `with Mesh` context manager:
```
from jax.sharding import Mesh
local = local_matmul(x, x)  # The local function doesn't require the mesh definition

with Mesh(*mesh_def):  # Makes the mesh axis names available as resources
distr = distr_matmul(x, x)
np.testing.assert_allclose(local, distr)
```
Anyway, the best part of it is that specifying `axis_resources` **never changes program semantics**. You are free to experiment with different ways of partitioning your computation (just change the assignment of resources to named axes!) and even how the physical devices are organized in the mesh (by changing the construction of the NumPy array of devices). None of those things should have any significant influence on the results you get back (up to, for example, floating point inaccuracy), though of course some of them will achieve significantly better performance than the others.
`xmap` doesn’t provide any automatic scheduling options at the moment, because the best schedule often has to be somewhat carefully matched to your program. We’re considering adding support for that in the future, but it will take time.
> Once you map a logical axis to a mesh dimension, the size of that logical axis has to be divisible by the mesh dimension size.
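To make that flexibility concrete, here is a minimal sketch (reusing `x`, `local_matmul`, and `mesh_def` from the cells above) that swaps the resource assignment without touching the function:
```
# Same named axes, different resource assignment -- the results must agree:
distr_matmul_swapped = xmap(jnp.vdot,
                            in_axes=({0: 'left'}, {1: 'right'}),
                            out_axes=['left', 'right', ...],
                            axis_resources={'left': 'y', 'right': 'x'})
with Mesh(*mesh_def):
    np.testing.assert_allclose(local_matmul(x, x), distr_matmul_swapped(x, x))
```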
##### Is my data replicated? Or partitioned? Where is it?[#](#is-my-data-replicated-or-partitioned-where-is-it)
Named axes also give us a neat way of reasoning about partitioning and replication. A value is partitioned over a mesh axis if and only if it has a named axis that has been mapped to that mesh axis in its shape. Otherwise, it will be replicated over all slices along that axis.
For example, assume that we’re in an `xmap` that had `axis_resources={'a': 'x', 'b': 'y'}` specified (i.e. we are running the computation over a 2D mesh with `x` and `y` axes with sizes 2 and 3 respectively). Then:
* An array of type `f32[(5, 5), {}]` is completely replicated over the whole mesh. All devices store a local copy of the value.
* An array of type `f32[(6,), {'a': 8}]` is partitioned over mesh axis `x`, because it has `'a'` in its named shape, and `'a'` is mapped to `x`. It is replicated over mesh axis `y`. To put it differently, all devices in a slice of the mesh with the same `x` coordinate will store a local copy of a chunk of this array. But, mesh slices with different `x` coordinates will store different chunks of the data.
* An array of type `f32[(), {'a': 8, 'c': 7}]` is partitioned just like in the previous case: split over the `x` mesh axis and replicated over the `y` axis. Named dimensions with no resources specified are no different than positional dimensions when considering partitioning, so `'c'` has no influence on it.
* An array of type `f32[(), {'a': 8, 'b': 12}]` is completely partitioned over the whole mesh. Every device holds a distinct chunk of the data.
This also highlights one restriction: `xmap` won’t complain if you specify `axis_resources={'a': 'x', 'b': 'x'}`, but consider how an array of type `f32[(2, 8), {'a': 4, 'b': 12}]` would be partitioned. If the size of the `x` mesh axis is 2, then we only have 2 devices, but we have 4 chunks to place (2 along `'a'` and 2 along `'b'`)! Now we can state it in full: **named axes mapped to the same resources can never both appear in the named shape of a single array**. But they can appear in named shapes of two distinct arrays, such as in this program:
```
def sum_two_args(x: f32[(), {'a': 4}], y: f32[(), {'b': 12}]) -> f32[()]:
return jnp.sum(x, axis='a') + jnp.sum(y, axis='b')
q = jnp.ones((4,), dtype=np.float32)
u = jnp.ones((12,), dtype=np.float32)
with Mesh(np.array(jax.devices()[:4]), ('x',)):
v = xmap(sum_two_args,
in_axes=(['a', ...], ['b', ...]),
out_axes=[...],
axis_resources={'a': 'x', 'b': 'x'})(q, u)
print(v)
```
This program is valid, because `jnp.sum` eliminates the axes that cannot co-occur before the values are added.
> While the final release of `xmap` will ensure that you don’t accidentally end up doing so, the current implementation *doesn’t verify it*. Violating this restriction will result in *undefined behavior*.
##### Why `axis_resources` and not a more direct mapping to hardware?[#](#why-axis-resources-and-not-a-more-direct-mapping-to-hardware)
At this point you might wonder why go through the detour of introducing yet another concept of resource axes in the mix. For as long as you’re interested in partitioning your computations over hardware, there is no good reason, but this mental framework is more flexible than that!
For example, there is one additional resource we all deal with: time! Just like a computation can be partitioned over multiple hardware devices, e.g. to lower its memory usage, the same thing can be achieved with a single accelerator that evaluates a chunk of the computation in multiple steps.
So, while hardware meshes are the only source of resource axes in JAX programs at the moment, we are planning to extend the whole system with other sources.
#### Porting positional code to named code[#](#porting-positional-code-to-named-code)
In this section we will go over a few more real examples to show how `xmap` can help you implement and distribute various models.
> **This section is a work in progress**
### The Autodiff Cookbook[#](#the-autodiff-cookbook)
*alexbw@, mattjj@*
JAX has a pretty general automatic differentiation system. In this notebook, we’ll go through a whole bunch of neat autodiff ideas that you can cherry pick for your own work, starting with the basics.
```
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
key = random.PRNGKey(0)
```
#### Gradients[#](#gradients)
##### Starting with `grad`[#](#starting-with-grad)
You can differentiate a function with `grad`:
```
grad_tanh = grad(jnp.tanh)
print(grad_tanh(2.0))
```
```
0.070650816
```
`grad` takes a function and returns a function. If you have a Python function `f` that evaluates the mathematical function \(f\), then `grad(f)` is a Python function that evaluates the mathematical function \(\nabla f\). That means `grad(f)(x)` represents the value \(\nabla f(x)\).
Since `grad` operates on functions, you can apply it to its own output to differentiate as many times as you like:
```
print(grad(grad(jnp.tanh))(2.0))
print(grad(grad(grad(jnp.tanh)))(2.0))
```
```
-0.13621868
0.25265405
```
Let’s look at computing gradients with `grad` in a linear logistic regression model. First, the setup:
```
def sigmoid(x):
return 0.5 * (jnp.tanh(x / 2) + 1)
# Outputs probability of a label being true.
def predict(W, b, inputs):
return sigmoid(jnp.dot(inputs, W) + b)
# Build a toy dataset.
inputs = jnp.array([[0.52, 1.12, 0.77],
[0.88, -1.08, 0.15],
[0.52, 0.06, -1.30],
[0.74, -2.49, 1.39]])
targets = jnp.array([True, True, False, True])
# Training loss is the negative log-likelihood of the training examples.
def loss(W, b):
preds = predict(W, b, inputs)
label_probs = preds * targets + (1 - preds) * (1 - targets)
return -jnp.sum(jnp.log(label_probs))
# Initialize random model coefficients
key, W_key, b_key = random.split(key, 3)
W = random.normal(W_key, (3,))
b = random.normal(b_key, ())
```
Use the `grad` function with its `argnums` argument to differentiate a function with respect to positional arguments.
```
# Differentiate `loss` with respect to the first positional argument:
W_grad = grad(loss, argnums=0)(W, b)
print('W_grad', W_grad)
# Since argnums=0 is the default, this does the same thing:
W_grad = grad(loss)(W, b)
print('W_grad', W_grad)
# But we can choose different values too, and drop the keyword:
b_grad = grad(loss, 1)(W, b)
print('b_grad', b_grad)
# Including tuple values
W_grad, b_grad = grad(loss, (0, 1))(W, b)
print('W_grad', W_grad)
print('b_grad', b_grad)
```
```
W_grad [-0.16965583 -0.8774644 -1.4901346 ]
W_grad [-0.16965583 -0.8774644 -1.4901346 ]
b_grad -0.29227245
W_grad [-0.16965583 -0.8774644  -1.4901346 ]
b_grad -0.29227245
```
This `grad` API has a direct correspondence to the excellent notation in Spivak’s classic *Calculus on Manifolds* (1965), also used in Sussman and Wisdom’s [*Structure and Interpretation of Classical Mechanics*](https://mitpress.mit.edu/9780262028967/structure-and-interpretation-of-classical-mechanics) (2015) and their [*Functional Differential Geometry*](https://mitpress.mit.edu/9780262019347/functional-differential-geometry) (2013). Both books are open-access. See in particular the “Prologue” section of *Functional Differential Geometry* for a defense of this notation.
Essentially, when using the `argnums` argument, if `f` is a Python function for evaluating the mathematical function \(f\), then the Python expression `grad(f, i)` evaluates to a Python function for evaluating \(\partial_i f\).
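As a tiny illustration of that correspondence (with a hypothetical two-argument function, not part of the model above):
```
from jax import grad

# grad(f, i) evaluates the partial derivative with respect to argument i.
f2 = lambda x, y: x ** 2 + 3. * y
print(grad(f2, 0)(1., 2.))  # partial_0 f2 = 2x -> 2.0
print(grad(f2, 1)(1., 2.))  # partial_1 f2 = 3  -> 3.0
```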
##### Differentiating with respect to nested lists, tuples, and dicts[#](#differentiating-with-respect-to-nested-lists-tuples-and-dicts)
Differentiating with respect to standard Python containers just works, so use tuples, lists, and dicts (and arbitrary nesting) however you like.
```
def loss2(params_dict):
preds = predict(params_dict['W'], params_dict['b'], inputs)
label_probs = preds * targets + (1 - preds) * (1 - targets)
return -jnp.sum(jnp.log(label_probs))
print(grad(loss2)({'W': W, 'b': b}))
```
```
{'W': Array([-0.16965583, -0.8774644 , -1.4901346 ], dtype=float32), 'b': Array(-0.29227245, dtype=float32)}
```
You can [register your own container types](https://github.com/google/jax/issues/446#issuecomment-467105048) to work with not just `grad` but all the JAX transformations (`jit`, `vmap`, etc.).
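For example, here is a minimal sketch of registering a custom container as a pytree (the `Point` class is hypothetical, for illustration only):
```
from jax import grad, tree_util

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

tree_util.register_pytree_node(
    Point,
    lambda p: ((p.x, p.y), None),            # flatten: children and aux data
    lambda aux, children: Point(*children))  # unflatten

point_loss = lambda p: p.x ** 2 + p.y ** 2
print(grad(point_loss)(Point(1., 2.)).x)  # 2.0
```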
##### Evaluate a function and its gradient using `value_and_grad`[#](#evaluate-a-function-and-its-gradient-using-value-and-grad)
Another convenient function is `value_and_grad` for efficiently computing both a function’s value as well as its gradient’s value:
```
from jax import value_and_grad

loss_value, Wb_grad = value_and_grad(loss, (0, 1))(W, b)
print('loss value', loss_value)
print('loss value', loss(W, b))
```
```
loss value 3.0519385
loss value 3.0519385
```
##### Checking against numerical differences[#](#checking-against-numerical-differences)
A great thing about derivatives is that they’re straightforward to check with finite differences:
```
# Set a step size for finite differences calculations
eps = 1e-4

# Check b_grad with scalar finite differences
b_grad_numerical = (loss(W, b + eps / 2.) - loss(W, b - eps / 2.)) / eps
print('b_grad_numerical', b_grad_numerical)
print('b_grad_autodiff', grad(loss, 1)(W, b))

# Check W_grad with finite differences in a random direction
key, subkey = random.split(key)
vec = random.normal(subkey, W.shape)
unitvec = vec / jnp.sqrt(jnp.vdot(vec, vec))
W_grad_numerical = (loss(W + eps / 2. * unitvec, b) - loss(W - eps / 2. * unitvec, b)) / eps
print('W_dirderiv_numerical', W_grad_numerical)
print('W_dirderiv_autodiff', jnp.vdot(grad(loss)(W, b), unitvec))
```
```
b_grad_numerical -0.29325485
b_grad_autodiff -0.29227245
W_dirderiv_numerical -0.2002716
W_dirderiv_autodiff -0.19909117
```
JAX provides a simple convenience function that does essentially the same thing, but checks up to any order of differentiation that you like:
```
from jax.test_util import check_grads

check_grads(loss, (W, b), order=2)  # check up to 2nd order derivatives
```
##### Hessian-vector products with `grad`-of-`grad`[#](#hessian-vector-products-with-grad-of-grad)
One thing we can do with higher-order `grad` is build a Hessian-vector product function. (Later on we’ll write an even more efficient implementation that mixes both forward- and reverse-mode, but this one will use pure reverse-mode.)
A Hessian-vector product function can be useful in a [truncated Newton Conjugate-Gradient algorithm](https://en.wikipedia.org/wiki/Truncated_Newton_method) for minimizing smooth convex functions, or for studying the curvature of neural network training objectives (e.g. [1](https://arxiv.org/abs/1406.2572), [2](https://arxiv.org/abs/1811.07062), [3](https://arxiv.org/abs/1706.04454), [4](https://arxiv.org/abs/1802.03451)).
For a scalar-valued function \(f : \mathbb{R}^n \to \mathbb{R}\) with continuous second derivatives (so that the Hessian matrix is symmetric), the Hessian at a point \(x \in \mathbb{R}^n\) is written as \(\partial^2 f(x)\). A Hessian-vector product function is then able to evaluate
\(\qquad v \mapsto \partial^2 f(x) \cdot v\)
for any \(v \in \mathbb{R}^n\).
The trick is not to instantiate the full Hessian matrix: if \(n\) is large, perhaps in the millions or billions in the context of neural networks, then that might be impossible to store.
Luckily, `grad` already gives us a way to write an efficient Hessian-vector product function. We just have to use the identity
\(\qquad \partial^2 f (x) v = \partial [x \mapsto \partial f(x) \cdot v] = \partial g(x)\),
where \(g(x) = \partial f(x) \cdot v\) is a new scalar-valued function that dots the gradient of \(f\) at \(x\) with the vector \(v\). Notice that we’re only ever differentiating scalar-valued functions of vector-valued arguments, which is exactly where we know `grad` is efficient.
In JAX code, we can just write this:
```
def hvp(f, x, v):
return grad(lambda x: jnp.vdot(grad(f)(x), v))(x)
```
This example shows that you can freely use lexical closure, and JAX will never get perturbed or confused.
We’ll check this implementation a few cells down, once we see how to compute dense Hessian matrices. We’ll also write an even better version that uses both forward-mode and reverse-mode.
##### Jacobians and Hessians using `jacfwd` and `jacrev`[#](#jacobians-and-hessians-using-jacfwd-and-jacrev)
You can compute full Jacobian matrices using the `jacfwd` and `jacrev` functions:
```
from jax import jacfwd, jacrev
# Isolate the function from the weight matrix to the predictions
f = lambda W: predict(W, b, inputs)
J = jacfwd(f)(W)
print("jacfwd result, with shape", J.shape)
print(J)
J = jacrev(f)(W)
print("jacrev result, with shape", J.shape)
print(J)
```
```
jacfwd result, with shape (4, 3)
[[ 0.05981758 0.12883787 0.08857603]
[ 0.04015916 -0.04928625 0.00684531]
[ 0.12188288 0.01406341 -0.3047072 ]
[ 0.00140431 -0.00472531 0.00263782]]
jacrev result, with shape (4, 3)
[[ 0.05981757 0.12883787 0.08857603]
[ 0.04015916 -0.04928625 0.00684531]
[ 0.12188289 0.01406341 -0.3047072 ]
[ 0.00140431 -0.00472531 0.00263782]]
```
These two functions compute the same values (up to machine numerics), but differ in their implementation: `jacfwd` uses forward-mode automatic differentiation, which is more efficient for “tall” Jacobian matrices (more outputs than inputs), while `jacrev` uses reverse-mode, which is more efficient for “wide” Jacobian matrices (more inputs than outputs). For matrices that are near-square, `jacfwd` probably has an edge over `jacrev`.
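A rough illustration of that rule of thumb, with hypothetical functions chosen only for their shapes:
```
import jax.numpy as jnp
from jax import jacfwd, jacrev

# "Tall" Jacobian (many outputs, few inputs): jacfwd needs one pass per input.
tall_f = lambda x: jnp.outer(x, x).ravel()       # R^3 -> R^9
print(jacfwd(tall_f)(jnp.ones(3)).shape)         # (9, 3)

# "Wide" Jacobian (few outputs, many inputs): jacrev needs one pass per output.
wide_f = lambda x: jnp.array([jnp.sum(x ** 2)])  # R^9 -> R^1
print(jacrev(wide_f)(jnp.ones(9)).shape)         # (1, 9)
```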
You can also use `jacfwd` and `jacrev` with container types:
```
def predict_dict(params, inputs):
return predict(params['W'], params['b'], inputs)
J_dict = jacrev(predict_dict)({'W': W, 'b': b}, inputs)
for k, v in J_dict.items():
print("Jacobian from {} to logits is".format(k))
print(v)
```
```
Jacobian from W to logits is
[[ 0.05981757 0.12883787 0.08857603]
[ 0.04015916 -0.04928625 0.00684531]
[ 0.12188289 0.01406341 -0.3047072 ]
[ 0.00140431 -0.00472531 0.00263782]]
Jacobian from b to logits is
[0.11503381 0.04563541 0.23439017 0.00189771]
```
For more details on forward- and reverse-mode, as well as how to implement `jacfwd` and `jacrev` as efficiently as possible, read on!
Using a composition of two of these functions gives us a way to compute dense Hessian matrices:
```
def hessian(f):
return jacfwd(jacrev(f))
H = hessian(f)(W)
print("hessian, with shape", H.shape)
print(H)
```
```
hessian, with shape (4, 3, 3)
[[[ 0.02285465 0.04922541 0.03384247]
[ 0.04922541 0.10602397 0.07289147]
[ 0.03384247 0.07289147 0.05011288]]
[[-0.03195215 0.03921401 -0.00544639]
[ 0.03921401 -0.04812629 0.00668421]
[-0.00544639 0.00668421 -0.00092836]]
[[-0.01583708 -0.00182736 0.03959271]
[-0.00182736 -0.00021085 0.00456839]
[ 0.03959271 0.00456839 -0.09898177]]
[[-0.00103524 0.00348343 -0.00194457]
[ 0.00348343 -0.01172127 0.0065432 ]
[-0.00194457 0.0065432 -0.00365263]]]
```
This shape makes sense: if we start with a function \(f : \mathbb{R}^n \to \mathbb{R}^m\), then at a point \(x \in \mathbb{R}^n\) we expect to get the shapes
* \(f(x) \in \mathbb{R}^m\), the value of \(f\) at \(x\),
* \(\partial f(x) \in \mathbb{R}^{m \times n}\), the Jacobian matrix at \(x\),
* \(\partial^2 f(x) \in \mathbb{R}^{m \times n \times n}\), the Hessian at \(x\),
and so on.
To implement `hessian`, we could have used `jacfwd(jacrev(f))` or `jacrev(jacfwd(f))` or any other composition of the two. But forward-over-reverse is typically the most efficient. That’s because in the inner Jacobian computation we’re often differentiating a function with a wide Jacobian (maybe like a loss function \(f : \mathbb{R}^n \to \mathbb{R}\)), while in the outer Jacobian computation we’re differentiating a function with a square Jacobian (since \(\nabla f : \mathbb{R}^n \to \mathbb{R}^n\)), which is where forward-mode wins out.
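As a quick numerical check of that claim (a sketch reusing `f` and `W` from the cells above), both compositions agree even though their cost profiles differ:
```
H_fwd_rev = jacfwd(jacrev(f))(W)
H_rev_fwd = jacrev(jacfwd(f))(W)
print(jnp.allclose(H_fwd_rev, H_rev_fwd, atol=1e-5))  # True
```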
#### How it’s made: two foundational autodiff functions[#](#how-it-s-made-two-foundational-autodiff-functions)
##### Jacobian-Vector products (JVPs, aka forward-mode autodiff)[#](#jacobian-vector-products-jvps-aka-forward-mode-autodiff)
JAX includes efficient and general implementations of both forward- and reverse-mode automatic differentiation. The familiar `grad` function is built on reverse-mode, but to explain the difference in the two modes, and when each can be useful, we need a bit of math background.
###### JVPs in math[#](#jvps-in-math)
Mathematically, given a function \(f : \mathbb{R}^n \to \mathbb{R}^m\), the Jacobian of \(f\) evaluated at an input point \(x \in \mathbb{R}^n\), denoted \(\partial f(x)\), is often thought of as a matrix in \(\mathbb{R}^m \times \mathbb{R}^n\):
\(\qquad \partial f(x) \in \mathbb{R}^{m \times n}\).
But we can also think of \(\partial f(x)\) as a linear map, which maps the tangent space of the domain of \(f\) at the point \(x\) (which is just another copy of \(\mathbb{R}^n\)) to the tangent space of the codomain of \(f\) at the point \(f(x)\) (a copy of \(\mathbb{R}^m\)):
\(\qquad \partial f(x) : \mathbb{R}^n \to \mathbb{R}^m\).
This map is called the [pushforward map](https://en.wikipedia.org/wiki/Pushforward_(differential)) of \(f\) at \(x\). The Jacobian matrix is just the matrix for this linear map in a standard basis.
If we don’t commit to one specific input point \(x\), then we can think of the function \(\partial f\) as first taking an input point and returning the Jacobian linear map at that input point:
\(\qquad \partial f : \mathbb{R}^n \to \mathbb{R}^n \to \mathbb{R}^m\).
In particular, we can uncurry things so that given input point \(x \in \mathbb{R}^n\) and a tangent vector \(v \in \mathbb{R}^n\), we get back an output tangent vector in \(\mathbb{R}^m\). We call that mapping, from \((x, v)\) pairs to output tangent vectors, the *Jacobian-vector product*, and write it as
\(\qquad (x, v) \mapsto \partial f(x) v\)
###### JVPs in JAX code[#](#jvps-in-jax-code)
Back in Python code, JAX’s `jvp` function models this transformation. Given a Python function that evaluates \(f\), JAX’s `jvp` is a way to get a Python function for evaluating \((x, v) \mapsto (f(x), \partial f(x) v)\).
```
from jax import jvp
# Isolate the function from the weight matrix to the predictions
f = lambda W: predict(W, b, inputs)
key, subkey = random.split(key)
v = random.normal(subkey, W.shape)
# Push forward the vector `v` along `f` evaluated at `W`
y, u = jvp(f, (W,), (v,))
```
In terms of [Haskell-like type signatures](https://wiki.haskell.org/Type_signature),
we could write
```
jvp :: (a -> b) -> a -> T a -> (b, T b)
```
where we use `T a` to denote the type of the tangent space for `a`. In words, `jvp` takes as arguments a function of type `a -> b`, a value of type `a`, and a tangent vector value of type `T a`. It gives back a pair consisting of a value of type `b` and an output tangent vector of type `T b`.
The `jvp`-transformed function is evaluated much like the original function, but paired up with each primal value of type `a` it pushes along tangent values of type `T a`. For each primitive numerical operation that the original function would have applied, the `jvp`-transformed function executes a “JVP rule” for that primitive that both evaluates the primitive on the primals and applies the primitive’s JVP at those primal values.
That evaluation strategy has some immediate implications about computational complexity: since we evaluate JVPs as we go, we don’t need to store anything for later, and so the memory cost is independent of the depth of the computation. In addition, the FLOP cost of the `jvp`-transformed function is about 3x the cost of just evaluating the function (one unit of work for evaluating the original function, for example `sin(x)`; one unit for linearizing, like `cos(x)`; and one unit for applying the linearized function to a vector, like `cos_x * v`). Put another way, for a fixed primal point \(x\), we can evaluate \(v \mapsto \partial f(x) \cdot v\) for about the same marginal cost as evaluating \(f\).
That memory complexity sounds pretty compelling! So why don’t we see forward-mode very often in machine learning?
To answer that, first think about how you could use a JVP to build a full Jacobian matrix. If we apply a JVP to a one-hot tangent vector, it reveals one column of the Jacobian matrix, corresponding to the nonzero entry we fed in. So we can build a full Jacobian one column at a time, and to get each column costs about the same as one function evaluation. That will be efficient for functions with “tall” Jacobians, but inefficient for “wide” Jacobians.
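Here is a minimal sketch of that column-by-column construction (the small function `g` below is hypothetical):
```
import jax.numpy as jnp
from jax import jvp

# One JVP with a one-hot tangent reveals one column of the Jacobian.
g = lambda x: jnp.array([x[0] ** 2, x[0] * x[1], jnp.sin(x[1])])  # R^2 -> R^3
x0 = jnp.array([1., 2.])
col0 = jvp(g, (x0,), (jnp.array([1., 0.]),))[1]  # first Jacobian column
col1 = jvp(g, (x0,), (jnp.array([0., 1.]),))[1]  # second Jacobian column
J = jnp.stack([col0, col1], axis=1)              # full (3, 2) Jacobian, one column per call
```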
If you’re doing gradient-based optimization in machine learning, you probably want to minimize a loss function from parameters in \(\mathbb{R}^n\) to a scalar loss value in \(\mathbb{R}\). That means the Jacobian of this function is a very wide matrix: \(\partial f(x) \in \mathbb{R}^{1 \times n}\), which we often identify with the gradient vector \(\nabla f(x) \in \mathbb{R}^n\). Building that matrix one column at a time, with each call taking a similar number of FLOPs to evaluate the original function, sure seems inefficient! In particular, for training neural networks, where \(f\) is a training loss function and \(n\) can be in the millions or billions, this approach just won’t scale.
To do better for functions like this, we just need to use reverse-mode.
##### Vector-Jacobian products (VJPs, aka reverse-mode autodiff)[#](#vector-jacobian-products-vjps-aka-reverse-mode-autodiff)
Where forward-mode gives us back a function for evaluating Jacobian-vector products, which we can then use to build Jacobian matrices one column at a time, reverse-mode is a way to get back a function for evaluating vector-Jacobian products (equivalently Jacobian-transpose-vector products), which we can use to build Jacobian matrices one row at a time.
###### VJPs in math[#](#vjps-in-math)
Let’s again consider a function \(f : \mathbb{R}^n \to \mathbb{R}^m\).
Starting from our notation for JVPs, the notation for VJPs is pretty simple:
\(\qquad (x, v) \mapsto v \partial f(x)\),
where \(v\) is an element of the cotangent space of \(f\) at \(x\) (isomorphic to another copy of \(\mathbb{R}^m\)). When being rigorous, we should think of \(v\) as a linear map \(v : \mathbb{R}^m \to \mathbb{R}\), and when we write \(v \partial f(x)\) we mean function composition \(v \circ \partial f(x)\), where the types work out because \(\partial f(x) : \mathbb{R}^n \to \mathbb{R}^m\). But in the common case we can identify \(v\) with a vector in \(\mathbb{R}^m\) and use the two almost interchangeably, just like we might sometimes flip between “column vectors” and “row vectors” without much comment.
With that identification, we can alternatively think of the linear part of a VJP as the transpose (or adjoint conjugate) of the linear part of a JVP:
\(\qquad (x, v) \mapsto \partial f(x)^\mathsf{T} v\).
For a given point \(x\), we can write the signature as
\(\qquad \partial f(x)^\mathsf{T} : \mathbb{R}^m \to \mathbb{R}^n\).
The corresponding map on cotangent spaces is often called the [pullback](https://en.wikipedia.org/wiki/Pullback_(differential_geometry))
of \(f\) at \(x\). The key for our purposes is that it goes from something that looks like the output of \(f\) to something that looks like the input of \(f\), just like we might expect from a transposed linear function.
###### VJPs in JAX code[#](#vjps-in-jax-code)
Switching from math back to Python, the JAX function `vjp` can take a Python function for evaluating \(f\) and give us back a Python function for evaluating the VJP \((x, v) \mapsto (f(x), v^\mathsf{T} \partial f(x))\).
```
from jax import vjp
# Isolate the function from the weight matrix to the predictions
f = lambda W: predict(W, b, inputs)
y, vjp_fun = vjp(f, W)
key, subkey = random.split(key)
u = random.normal(subkey, y.shape)
# Pull back the covector `u` along `f` evaluated at `W`
v = vjp_fun(u)
```
In terms of [Haskell-like type signatures](https://wiki.haskell.org/Type_signature),
we could write
```
vjp :: (a -> b) -> a -> (b, CT b -> CT a)
```
where we use `CT a` to denote the type for the cotangent space for `a`. In words, `vjp` takes as arguments a function of type `a -> b` and a point of type `a`, and gives back a pair consisting of a value of type `b` and a linear map of type `CT b -> CT a`.
This is great because it lets us build Jacobian matrices one row at a time, and the FLOP cost for evaluating \((x, v) \mapsto (f(x), v^\mathsf{T} \partial f(x))\) is only about three times the cost of evaluating \(f\). In particular, if we want the gradient of a function \(f : \mathbb{R}^n \to \mathbb{R}\), we can do it in just one call. That’s how `grad` is efficient for gradient-based optimization, even for objectives like neural network training loss functions on millions or billions of parameters.
There’s a cost, though: though the FLOPs are friendly, memory scales with the depth of the computation. Also, the implementation is traditionally more complex than that of forward-mode, though JAX has some tricks up its sleeve (that’s a story for a future notebook!).
For more on how reverse-mode works, see [this tutorial video from the Deep Learning Summer School in 2017](http://videolectures.net/deeplearning2017_johnson_automatic_differentiation/).
##### Vector-valued gradients with VJPs[#](#vector-valued-gradients-with-vjps)
If you’re interested in taking vector-valued gradients (like `tf.gradients`):
```
from jax import vjp
def vgrad(f, x):
y, vjp_fn = vjp(f, x)
return vjp_fn(jnp.ones(y.shape))[0]
print(vgrad(lambda x: 3*x**2, jnp.ones((2, 2))))
```
```
[[6. 6.]
[6. 6.]]
```
##### Hessian-vector products using both forward- and reverse-mode[#](#hessian-vector-products-using-both-forward-and-reverse-mode)
In a previous section, we implemented a Hessian-vector product function just using reverse-mode (assuming continuous second derivatives):
```
def hvp(f, x, v):
return grad(lambda x: jnp.vdot(grad(f)(x), v))(x)
```
That’s efficient, but we can do even better and save some memory by using forward-mode together with reverse-mode.
Mathematically, given a function \(f : \mathbb{R}^n \to \mathbb{R}\) to differentiate, a point \(x \in \mathbb{R}^n\) at which to linearize the function, and a vector \(v \in \mathbb{R}^n\), the Hessian-vector product function we want is
\((x, v) \mapsto \partial^2 f(x) v\)
Consider the helper function \(g : \mathbb{R}^n \to \mathbb{R}^n\) defined to be the derivative (or gradient) of \(f\), namely \(g(x) = \partial f(x)\). All we need is its JVP, since that will give us
\((x, v) \mapsto \partial g(x) v = \partial^2 f(x) v\).
We can translate that almost directly into code:
```
from jax import jvp, grad
# forward-over-reverse
def hvp(f, primals, tangents):
return jvp(grad(f), primals, tangents)[1]
```
Even better, since we didn’t have to call `jnp.dot` directly, this `hvp` function works with arrays of any shape and with arbitrary container types (like vectors stored as nested lists/dicts/tuples), and doesn’t even have a dependence on `jax.numpy`.
Here’s an example of how to use it:
```
def f(X):
return jnp.sum(jnp.tanh(X)**2)
key, subkey1, subkey2 = random.split(key, 3)
X = random.normal(subkey1, (30, 40))
V = random.normal(subkey2, (30, 40))
ans1 = hvp(f, (X,), (V,))
ans2 = jnp.tensordot(hessian(f)(X), V, 2)
print(jnp.allclose(ans1, ans2, 1e-4, 1e-4))
```
```
True
```
Another way you might consider writing this is using reverse-over-forward:
```
# reverse-over-forward
def hvp_revfwd(f, primals, tangents):
g = lambda primals: jvp(f, primals, tangents)[1]
return grad(g)(primals)
```
That’s not quite as good, though, because forward-mode has less overhead than reverse-mode, and since the outer differentiation operator here has to differentiate a larger computation than the inner one, keeping forward-mode on the outside works best:
```
# reverse-over-reverse, only works for single arguments
def hvp_revrev(f, primals, tangents):
x, = primals
v, = tangents
return grad(lambda x: jnp.vdot(grad(f)(x), v))(x)
print("Forward over reverse")
%timeit -n10 -r3 hvp(f, (X,), (V,))
print("Reverse over forward")
%timeit -n10 -r3 hvp_revfwd(f, (X,), (V,))
print("Reverse over reverse")
%timeit -n10 -r3 hvp_revrev(f, (X,), (V,))
print("Naive full Hessian materialization")
%timeit -n10 -r3 jnp.tensordot(hessian(f)(X), V, 2)
```
```
Forward over reverse
6.69 ms ± 155 µs per loop (mean ± std. dev. of 3 runs, 10 loops each)
Reverse over forward
10.8 ms ± 4.32 ms per loop (mean ± std. dev. of 3 runs, 10 loops each)
Reverse over reverse
15.8 ms ± 7.23 ms per loop (mean ± std. dev. of 3 runs, 10 loops each)
Naive full Hessian materialization
65.5 ms ± 640 µs per loop (mean ± std. dev. of 3 runs, 10 loops each)
```
#### Composing VJPs, JVPs, and `vmap`[#](#composing-vjps-jvps-and-vmap)
##### Jacobian-Matrix and Matrix-Jacobian products[#](#jacobian-matrix-and-matrix-jacobian-products)
Now that we have `jvp` and `vjp` transformations that give us functions to push-forward or pull-back single vectors at a time, we can use JAX’s `vmap` [transformation](https://github.com/google/jax#auto-vectorization-with-vmap) to push and pull entire bases at once. In particular, we can use that to write fast matrix-Jacobian and Jacobian-matrix products.
```
# Isolate the function from the weight matrix to the predictions
f = lambda W: predict(W, b, inputs)
# Pull back the covectors `m_i` along `f`, evaluated at `W`, for all `i`.
# First, use a list comprehension to loop over rows in the matrix M.
def loop_mjp(f, x, M):
y, vjp_fun = vjp(f, x)
return jnp.vstack([vjp_fun(mi) for mi in M])
# Now, use vmap to build a computation that does a single fast matrix-matrix
# multiply, rather than an outer loop over vector-matrix multiplies.
def vmap_mjp(f, x, M):
y, vjp_fun = vjp(f, x)
outs, = vmap(vjp_fun)(M)
return outs
key = random.PRNGKey(0)
num_covecs = 128
U = random.normal(key, (num_covecs,) + y.shape)
loop_vs = loop_mjp(f, W, M=U)
print('Non-vmapped Matrix-Jacobian product')
%timeit -n10 -r3 loop_mjp(f, W, M=U)
print('\nVmapped Matrix-Jacobian product')
vmap_vs = vmap_mjp(f, W, M=U)
%timeit -n10 -r3 vmap_mjp(f, W, M=U)
assert jnp.allclose(loop_vs, vmap_vs), 'Vmap and non-vmapped Matrix-Jacobian Products should be identical'
```
```
Non-vmapped Matrix-Jacobian product
160 ms ± 358 µs per loop (mean ± std. dev. of 3 runs, 10 loops each)

Vmapped Matrix-Jacobian product
7.88 ms ± 99.4 µs per loop (mean ± std. dev. of 3 runs, 10 loops each)
```
```
/tmp/ipykernel_1412/2820373052.py:8: DeprecationWarning: vstack requires ndarray or scalar arguments, got <class 'tuple'> at position 0.In a future JAX release this will be an error.
return jnp.vstack([vjp_fun(mi) for mi in M])
```
```
def loop_jmp(f, W, M):
# jvp immediately returns the primal and tangent values as a tuple,
# so we'll compute and select the tangents in a list comprehension
return jnp.vstack([jvp(f, (W,), (mi,))[1] for mi in M])
def vmap_jmp(f, W, M):
_jvp = lambda s: jvp(f, (W,), (s,))[1]
return vmap(_jvp)(M)
num_vecs = 128
S = random.normal(key, (num_vecs,) + W.shape)
loop_vs = loop_jmp(f, W, M=S)
print('Non-vmapped Jacobian-Matrix product')
%timeit -n10 -r3 loop_jmp(f, W, M=S)
vmap_vs = vmap_jmp(f, W, M=S)
print('\nVmapped Jacobian-Matrix product')
%timeit -n10 -r3 vmap_jmp(f, W, M=S)
assert jnp.allclose(loop_vs, vmap_vs), 'Vmap and non-vmapped Jacobian-Matrix products should be identical'
```
```
Non-vmapped Jacobian-Matrix product
400 ms ± 1.13 ms per loop (mean ± std. dev. of 3 runs, 10 loops each)

Vmapped Jacobian-Matrix product
4.85 ms ± 69.4 µs per loop (mean ± std. dev. of 3 runs, 10 loops each)
```
##### The implementation of `jacfwd` and `jacrev`[#](#the-implementation-of-jacfwd-and-jacrev)
Now that we’ve seen fast Jacobian-matrix and matrix-Jacobian products, it’s not hard to guess how to write `jacfwd` and `jacrev`. We just use the same technique to push-forward or pull-back an entire standard basis (isomorphic to an identity matrix) at once.
```
from jax import jacrev as builtin_jacrev
def our_jacrev(f):
def jacfun(x):
y, vjp_fun = vjp(f, x)
# Use vmap to do a matrix-Jacobian product.
# Here, the matrix is the Euclidean basis, so we get all
# entries in the Jacobian at once.
J, = vmap(vjp_fun, in_axes=0)(jnp.eye(len(y)))
return J
return jacfun
assert jnp.allclose(builtin_jacrev(f)(W), our_jacrev(f)(W)), 'Incorrect reverse-mode Jacobian results!'
```
```
from jax import jacfwd as builtin_jacfwd
def our_jacfwd(f):
def jacfun(x):
_jvp = lambda s: jvp(f, (x,), (s,))[1]
Jt = vmap(_jvp, in_axes=1)(jnp.eye(len(x)))
return jnp.transpose(Jt)
return jacfun
assert jnp.allclose(builtin_jacfwd(f)(W), our_jacfwd(f)(W)), 'Incorrect forward-mode Jacobian results!'
```
Interestingly, [Autograd](https://github.com/hips/autograd) couldn’t do this. Our [implementation](https://github.com/HIPS/autograd/blob/96a03f44da43cd7044c61ac945c483955deba957/autograd/differential_operators.py#L60) of reverse-mode `jacobian` in Autograd had to pull back one vector at a time with an outer-loop `map`. Pushing one vector at a time through the computation is much less efficient than batching it all together with `vmap`.
Another thing that Autograd couldn’t do is `jit`. Interestingly, no matter how much Python dynamism you use in your function to be differentiated, we can always use `jit` on the linear part of the computation. For example:
```
def f(x):
try:
if x < 3:
return 2 * x ** 3
else:
raise ValueError
except ValueError:
return jnp.pi * x
y, f_vjp = vjp(f, 4.)
print(jit(f_vjp)(1.))
```
```
(Array(3.1415927, dtype=float32, weak_type=True),)
```
#### Complex numbers and differentiation[#](#complex-numbers-and-differentiation)
JAX is great at complex numbers and differentiation. To support both [holomorphic and non-holomorphic differentiation](https://en.wikipedia.org/wiki/Holomorphic_function), it helps to think in terms of JVPs and VJPs.
Consider a complex-to-complex function \(f: \mathbb{C} \to \mathbb{C}\) and identify it with a corresponding function \(g: \mathbb{R}^2 \to \mathbb{R}^2\),
```
def f(z):
x, y = jnp.real(z), jnp.imag(z)
return u(x, y) + v(x, y) * 1j
def g(x, y):
return (u(x, y), v(x, y))
```
That is, we’ve decomposed \(f(z) = u(x, y) + v(x, y) i\) where \(z = x + y i\), and identified \(\mathbb{C}\) with \(\mathbb{R}^2\) to get \(g\).
Since \(g\) only involves real inputs and outputs, we already know how to write a Jacobian-vector product for it, say given a tangent vector \((c, d) \in \mathbb{R}^2\), namely
\(\begin{bmatrix} \partial_0 u(x, y) & \partial_1 u(x, y) \\ \partial_0 v(x, y) & \partial_1 v(x, y) \end{bmatrix}
\begin{bmatrix} c \\ d \end{bmatrix}\).
To get a JVP for the original function \(f\) applied to a tangent vector \(c + di \in \mathbb{C}\), we just use the same definition and identify the result as another complex number,
\(\partial f(x + y i)(c + d i) =
\begin{matrix} \begin{bmatrix} 1 & i \end{bmatrix} \\ ~ \end{matrix}
\begin{bmatrix} \partial_0 u(x, y) & \partial_1 u(x, y) \\ \partial_0 v(x, y) & \partial_1 v(x, y) \end{bmatrix}
\begin{bmatrix} c \\ d \end{bmatrix}\).
That’s our definition of the JVP of a \(\mathbb{C} \to \mathbb{C}\) function! Notice it doesn’t matter whether or not \(f\) is holomorphic: the JVP is unambiguous.
Here’s a check:
```
def check(seed):
key = random.PRNGKey(seed)
# random coeffs for u and v
key, subkey = random.split(key)
a, b, c, d = random.uniform(subkey, (4,))
def fun(z):
x, y = jnp.real(z), jnp.imag(z)
return u(x, y) + v(x, y) * 1j
def u(x, y):
return a * x + b * y
def v(x, y):
return c * x + d * y
# primal point
key, subkey = random.split(key)
x, y = random.uniform(subkey, (2,))
z = x + y * 1j
# tangent vector
key, subkey = random.split(key)
c, d = random.uniform(subkey, (2,))
z_dot = c + d * 1j
# check jvp
_, ans = jvp(fun, (z,), (z_dot,))
expected = (grad(u, 0)(x, y) * c +
grad(u, 1)(x, y) * d +
grad(v, 0)(x, y) * c * 1j+
grad(v, 1)(x, y) * d * 1j)
print(jnp.allclose(ans, expected))
```
```
check(0)
check(1)
check(2)
```
```
True
True
True
```
What about VJPs? We do something pretty similar: for a cotangent vector \(c + di \in \mathbb{C}\) we define the VJP of \(f\) as
\((c + di)^* \; \partial f(x + y i) =
\begin{matrix} \begin{bmatrix} c & -d \end{bmatrix} \\ ~ \end{matrix}
\begin{bmatrix} \partial_0 u(x, y) & \partial_1 u(x, y) \\ \partial_0 v(x, y) & \partial_1 v(x, y) \end{bmatrix}
\begin{bmatrix} 1 \\ -i \end{bmatrix}\).
What’s with the negatives? They’re just to take care of complex conjugation, and the fact that we’re working with covectors.
Here’s a check of the VJP rules:
```
def check(seed):
key = random.PRNGKey(seed)
# random coeffs for u and v
key, subkey = random.split(key)
a, b, c, d = random.uniform(subkey, (4,))
def fun(z):
x, y = jnp.real(z), jnp.imag(z)
return u(x, y) + v(x, y) * 1j
def u(x, y):
return a * x + b * y
def v(x, y):
return c * x + d * y
# primal point
key, subkey = random.split(key)
x, y = random.uniform(subkey, (2,))
z = x + y * 1j
# cotangent vector
key, subkey = random.split(key)
c, d = random.uniform(subkey, (2,))
z_bar = jnp.array(c + d * 1j) # for dtype control
# check vjp
_, fun_vjp = vjp(fun, z)
ans, = fun_vjp(z_bar)
expected = (grad(u, 0)(x, y) * c +
grad(v, 0)(x, y) * (-d) +
grad(u, 1)(x, y) * c * (-1j) +
grad(v, 1)(x, y) * (-d) * (-1j))
assert jnp.allclose(ans, expected, atol=1e-5, rtol=1e-5)
```
```
check(0)
check(1)
check(2)
```
What about convenience wrappers like `grad`, `jacfwd`, and `jacrev`?
For \(\mathbb{R} \to \mathbb{R}\) functions, recall we defined `grad(f)(x)` as being `vjp(f, x)[1](1.0)`, which works because applying a VJP to a `1.0` value reveals the gradient (i.e. Jacobian, or derivative). We can do the same thing for \(\mathbb{C} \to \mathbb{R}\) functions: we can still use `1.0` as the cotangent vector, and we just get out a complex number result summarizing the full Jacobian:
```
def f(z):
x, y = jnp.real(z), jnp.imag(z)
return x**2 + y**2
z = 3. + 4j
grad(f)(z)
```
```
Array(6.-8.j, dtype=complex64)
```
For general \(\mathbb{C} \to \mathbb{C}\) functions, the Jacobian has 4 real-valued degrees of freedom (as in the 2x2 Jacobian matrices above), so we can’t hope to represent all of them within a complex number. But we can for holomorphic functions! A holomorphic function is precisely a \(\mathbb{C} \to \mathbb{C}\) function with the special property that its derivative can be represented as a single complex number. (The [Cauchy-Riemann equations](https://en.wikipedia.org/wiki/Cauchy%E2%80%93Riemann_equations) ensure that the above 2x2 Jacobians have the special form of a scale-and-rotate matrix in the complex plane, i.e. the action of a single complex number under multiplication.) And we can reveal that one complex number using a single call to `vjp` with a covector of `1.0`.
Because this only works for holomorphic functions, to use this trick we need to promise JAX that our function is holomorphic; otherwise, JAX will raise an error when `grad` is used for a complex-output function:
```
def f(z):
return jnp.sin(z)
z = 3. + 4j
grad(f, holomorphic=True)(z)
```
```
Array(-27.034946-3.8511534j, dtype=complex64, weak_type=True)
```
All the `holomorphic=True` promise does is disable the error when the output is complex-valued. We can still write `holomorphic=True` when the function isn’t holomorphic, but the answer we get out won’t represent the full Jacobian. Instead, it’ll be the Jacobian of the function where we just discard the imaginary part of the output:
```
def f(z):
return jnp.conjugate(z)
z = 3. + 4j
grad(f, holomorphic=True)(z)  # f is not actually holomorphic!
```
```
Array(1.-0.j, dtype=complex64, weak_type=True)
```
There are some useful upshots for how `grad` works here:
1. We can use `grad` on holomorphic \(\mathbb{C} \to \mathbb{C}\) functions.
2. We can use `grad` to optimize \(f : \mathbb{C} \to \mathbb{R}\) functions, like real-valued loss functions of complex parameters `x`, by taking steps in the direction of the conjugate of `grad(f)(x)`.
3. If we have an \(\mathbb{R} \to \mathbb{R}\) function that just happens to use some complex-valued operations internally (some of which must be non-holomorphic, e.g. FFTs used in convolutions) then `grad` still works and we get the same result that an implementation using only real values would have given.
In any case, JVPs and VJPs are always unambiguous. And if we wanted to compute the full Jacobian matrix of a non-holomorphic \(\mathbb{C} \to \mathbb{C}\) function, we can do it with JVPs or VJPs!
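For instance, here is a minimal sketch that recovers the full 2x2 real Jacobian of a non-holomorphic function with two JVP calls (the function below is illustrative):
```
import jax.numpy as jnp
from jax import jvp

f_nh = lambda z: jnp.conjugate(z) * z  # |z|^2 as a C -> C map; not holomorphic
z0 = jnp.array(3. + 4j, dtype=jnp.complex64)
_, d_re = jvp(f_nh, (z0,), (jnp.array(1. + 0j, dtype=jnp.complex64),))  # real tangent direction
_, d_im = jvp(f_nh, (z0,), (jnp.array(0. + 1j, dtype=jnp.complex64),))  # imaginary tangent direction
J = jnp.array([[d_re.real, d_im.real],
               [d_re.imag, d_im.imag]])  # [[6., 8.], [0., 0.]] at z0 = 3 + 4j
```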
You should expect complex numbers to work everywhere in JAX. Here’s differentiating through a Cholesky decomposition of a complex matrix:
```
A = jnp.array([[5., 2.+3j, 5j],
[2.-3j, 7., 1.+7j],
[-5j, 1.-7j, 12.]])
def f(X):
L = jnp.linalg.cholesky(X)
return jnp.sum((L - jnp.sin(L))**2)
grad(f, holomorphic=True)(A)
```
```
Array([[-0.7534182 +0.j , -3.0509028 -10.940544j,
5.9896846 +3.542303j],
[-3.0509028 +10.940544j, -8.904491 +0.j ,
-5.1351523 -6.559373j],
[ 5.9896846 -3.542303j, -5.1351523 +6.559373j,
0.01320427 +0.j ]], dtype=complex64)
```
#### More advanced autodiff[#](#more-advanced-autodiff)
In this notebook, we worked through some easy, and then progressively more complicated, applications of automatic differentiation in JAX. We hope you now feel that taking derivatives in JAX is easy and powerful.
There’s a whole world of other autodiff tricks and functionality out there. Topics we didn’t cover, but hope to in an “Advanced Autodiff Cookbook” include:
* Gauss-Newton Vector Products, linearizing once
* Custom VJPs and JVPs
* Efficient derivatives at fixed-points
* Estimating the trace of a Hessian using random Hessian-vector products.
* Forward-mode autodiff using only reverse-mode autodiff.
* Taking derivatives with respect to custom data types.
* Checkpointing (binomial checkpointing for efficient reverse-mode, not model snapshotting).
* Optimizing VJPs with Jacobian pre-accumulation.
### Custom derivative rules for JAX-transformable Python functions[#](#custom-derivative-rules-for-jax-transformable-python-functions)
*mattjj@ Mar 19 2020, last updated Oct 14 2020*
There are two ways to define differentiation rules in JAX:
1. using `jax.custom_jvp` and `jax.custom_vjp` to define custom differentiation rules for Python functions that are already JAX-transformable; and
2. defining new `core.Primitive` instances along with all their transformation rules, for example to call into functions from other systems like solvers, simulators, or general numerical computing systems.
This notebook is about #1. To read instead about #2, see the [notebook on adding primitives](https://jax.readthedocs.io/en/latest/notebooks/How_JAX_primitives_work.html).
For an introduction to JAX’s automatic differentiation API, see [The Autodiff Cookbook](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html). This notebook assumes some familiarity with [jax.jvp](https://jax.readthedocs.io/en/latest/jax.html#jax.jvp) and [jax.grad](https://jax.readthedocs.io/en/latest/jax.html#jax.grad), and the mathematical meaning of JVPs and VJPs.
#### TL;DR[#](#tl-dr)
##### Custom JVPs with `jax.custom_jvp`[#](#custom-jvps-with-jax-custom-jvp)
```
import jax.numpy as jnp
from jax import custom_jvp
@custom_jvp
def f(x, y):
return jnp.sin(x) * y
@f.defjvp
def f_jvp(primals, tangents):
x, y = primals
x_dot, y_dot = tangents
primal_out = f(x, y)
tangent_out = jnp.cos(x) * x_dot * y + jnp.sin(x) * y_dot
return primal_out, tangent_out
```
```
from jax import jvp, grad
print(f(2., 3.))
y, y_dot = jvp(f, (2., 3.), (1., 0.))
print(y)
print(y_dot)
print(grad(f)(2., 3.))
```
```
2.7278922
2.7278922
-1.2484405
-1.2484405
```
```
# Equivalent alternative using the defjvps convenience wrapper
@custom_jvp
def f(x, y):
return jnp.sin(x) * y
f.defjvps(lambda x_dot, primal_out, x, y: jnp.cos(x) * x_dot * y,
lambda y_dot, primal_out, x, y: jnp.sin(x) * y_dot)
```
```
print(f(2., 3.))
y, y_dot = jvp(f, (2., 3.), (1., 0.))
print(y)
print(y_dot)
print(grad(f)(2., 3.))
```
```
2.7278922
2.7278922
-1.2484405
-1.2484405
```
##### Custom VJPs with `jax.custom_vjp`[#](#custom-vjps-with-jax-custom-vjp)
```
from jax import custom_vjp
@custom_vjp
def f(x, y):
return jnp.sin(x) * y
def f_fwd(x, y):
# Returns primal output and residuals to be used in backward pass by f_bwd.
return f(x, y), (jnp.cos(x), jnp.sin(x), y)
def f_bwd(res, g):
cos_x, sin_x, y = res # Gets residuals computed in f_fwd
return (cos_x * g * y, sin_x * g)
f.defvjp(f_fwd, f_bwd)
```
```
print(grad(f)(2., 3.))
```
```
-1.2484405
```
#### Example problems[#](#example-problems)
To get an idea of what problems `jax.custom_jvp` and `jax.custom_vjp` are meant to solve, let’s go over a few examples. A more thorough introduction to the `jax.custom_jvp` and `jax.custom_vjp` APIs is in the next section.
##### Numerical stability[#](#numerical-stability)
One application of `jax.custom_jvp` is to improve the numerical stability of differentiation.
Say we want to write a function called `log1pexp`, which computes \(x \mapsto \log ( 1 + e^x )\). We can write that using `jax.numpy`:
```
import jax.numpy as jnp
def log1pexp(x):
return jnp.log(1. + jnp.exp(x))
log1pexp(3.)
```
```
Array(3.0485873, dtype=float32, weak_type=True)
```
Since it’s written in terms of `jax.numpy`, it’s JAX-transformable:
```
from jax import jit, grad, vmap
print(jit(log1pexp)(3.))
print(jit(grad(log1pexp))(3.))
print(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))
```
```
3.0485873
0.95257413
[0.5 0.7310586 0.8807971]
```
But there’s a numerical stability problem lurking here:
```
print(grad(log1pexp)(100.))
```
```
nan
```
That doesn’t seem right! After all, the derivative of \(x \mapsto \log (1 + e^x)\) is \(x \mapsto \frac{e^x}{1 + e^x}\), and so for large values of \(x\) we’d expect the value to be about 1.
We can get a bit more insight into what’s going on by looking at the jaxpr for the gradient computation:
```
from jax import make_jaxpr
make_jaxpr(grad(log1pexp))(100.)
```
```
{ lambda ; a:f32[]. let
b:f32[] = exp a
c:f32[] = add 1.0 b
_:f32[] = log c
d:f32[] = div 1.0 c
e:f32[] = mul d b
in (e,) }
```
Stepping through how the jaxpr would be evaluated, we can see that the last line would involve multiplying values that floating point math will round to 0 and \(\infty\), respectively, which is never a good idea. That is, we’re effectively evaluating `lambda x: (1 / (1 + jnp.exp(x))) * jnp.exp(x)` for large `x`, which effectively turns into `0. * jnp.inf`.
Instead of generating such large and small values, hoping for a cancellation that floats can't always provide, we'd rather express the derivative as a more numerically stable program. In particular, we can write a program that evaluates the mathematically equivalent expression \(1 - \frac{1}{1 + e^x}\), which involves no such cancellation.
This problem is interesting because even though our definition of `log1pexp` could already be JAX-differentiated (and transformed with `jit`, `vmap`, …), we’re not happy with the result of applying standard autodiff rules to the primitives comprising `log1pexp` and composing the result. Instead, we’d like to specify how the whole function `log1pexp` should be differentiated, as a unit, and thus arrange those exponentials better.
This is one application of custom derivative rules for Python functions that are already JAX transformable: specifying how a composite function should be differentiated, while still using its original Python definition for other transformations (like `jit`, `vmap`, …).
Here’s a solution using `jax.custom_jvp`:
```
from jax import custom_jvp
@custom_jvp
def log1pexp(x):
return jnp.log(1. + jnp.exp(x))
@log1pexp.defjvp
def log1pexp_jvp(primals, tangents):
x, = primals
x_dot, = tangents
ans = log1pexp(x)
ans_dot = (1 - 1/(1 + jnp.exp(x))) * x_dot
return ans, ans_dot
```
```
print(grad(log1pexp)(100.))
```
```
1.0
```
```
print(jit(log1pexp)(3.))
print(jit(grad(log1pexp))(3.))
print(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))
```
```
3.0485873
0.95257413
[0.5 0.7310586 0.8807971]
```
Here’s a `defjvps` convenience wrapper to express the same thing:
```
@custom_jvp
def log1pexp(x):
return jnp.log(1. + jnp.exp(x))
log1pexp.defjvps(lambda t, ans, x: (1 - 1/(1 + jnp.exp(x))) * t)
```
```
print(grad(log1pexp)(100.))
print(jit(log1pexp)(3.))
print(jit(grad(log1pexp))(3.))
print(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))
```
```
1.0
3.0485873
0.95257413
[0.5 0.7310586 0.8807971]
```
##### Enforcing a differentiation convention[#](#enforcing-a-differentiation-convention)
A related application is to enforce a differentiation convention, perhaps at a boundary.
Consider the function \(f : \mathbb{R}_+ \to \mathbb{R}_+\) with \(f(x) = \frac{x}{1 + \sqrt{x}}\), where we take \(\mathbb{R}_+ = [0, \infty)\). We might implement \(f\) as a program like this:
```
def f(x):
return x / (1 + jnp.sqrt(x))
```
As a mathematical function on \(\mathbb{R}\) (the full real line), \(f\) is not differentiable at zero (because the limit defining the derivative doesn’t exist from the left). Correspondingly, autodiff produces a `nan` value:
```
print(grad(f)(0.))
```
```
nan
```
But mathematically if we think of \(f\) as a function on \(\mathbb{R}_+\) then it is differentiable at 0 [Rudin’s Principles of Mathematical Analysis Definition 5.1, or Tao’s Analysis I 3rd ed. Definition 10.1.1 and Example 10.1.6]. Alternatively, we might say as a convention we want to consider the directional derivative from the right. So there is a sensible value for the Python function `grad(f)` to return at `0.0`, namely `1.0`. By default, JAX’s machinery for differentiation assumes all functions are defined over \(\mathbb{R}\) and thus doesn’t produce `1.0` here.
We can use a custom JVP rule! In particular, we can define the JVP rule in terms of the derivative function \(x \mapsto \frac{\sqrt{x} + 2}{2(\sqrt{x} + 1)^2}\) on \(\mathbb{R}_+\),
```
@custom_jvp
def f(x):
return x / (1 + jnp.sqrt(x))
@f.defjvp
def f_jvp(primals, tangents):
x, = primals
x_dot, = tangents
ans = f(x)
ans_dot = ((jnp.sqrt(x) + 2) / (2 * (jnp.sqrt(x) + 1)**2)) * x_dot
return ans, ans_dot
```
```
print(grad(f)(0.))
```
```
1.0
```
Here’s the convenience wrapper version:
```
@custom_jvp
def f(x):
return x / (1 + jnp.sqrt(x))
f.defjvps(lambda t, ans, x: ((jnp.sqrt(x) + 2) / (2 * (jnp.sqrt(x) + 1)**2)) * t)
```
```
print(grad(f)(0.))
```
```
1.0
```
##### Gradient clipping[#](#gradient-clipping)
While in some cases we want to express a mathematical differentiation computation, in other cases we may even want to take a step away from mathematics to adjust the computation autodiff performs. One canonical example is reverse-mode gradient clipping.
For gradient clipping, we can use `jnp.clip` together with a `jax.custom_vjp` reverse-mode-only rule:
```
from functools import partial
from jax import custom_vjp
@custom_vjp
def clip_gradient(lo, hi, x):
return x # identity function
def clip_gradient_fwd(lo, hi, x):
return x, (lo, hi) # save bounds as residuals
def clip_gradient_bwd(res, g):
lo, hi = res
return (None, None, jnp.clip(g, lo, hi)) # use None to indicate zero cotangents for lo and hi
clip_gradient.defvjp(clip_gradient_fwd, clip_gradient_bwd)
```
```
import matplotlib.pyplot as plt
from jax import vmap
t = jnp.linspace(0, 10, 1000)
plt.plot(jnp.sin(t))
plt.plot(vmap(grad(jnp.sin))(t))
```
*[Plot: `jnp.sin(t)` overlaid with its gradient `vmap(grad(jnp.sin))(t)`]*
```
def clip_sin(x):
x = clip_gradient(-0.75, 0.75, x)
return jnp.sin(x)
plt.plot(clip_sin(t))
plt.plot(vmap(grad(clip_sin))(t))
```
*[Plot: `clip_sin(t)` overlaid with its gradient, which is clipped to the range \([-0.75, 0.75]\)]*
##### Python debugging[#](#python-debugging)
Another application that is motivated by development workflow rather than numerics is to set a `pdb` debugger trace in the backward pass of reverse-mode autodiff.
When trying to track down the source of a `nan` runtime error, or just examine carefully the cotangent (gradient) values being propagated, it can be useful to insert a debugger at a point in the backward pass that corresponds to a specific point in the primal computation. You can do that with `jax.custom_vjp`.
We’ll defer an example until the next section.
##### Implicit function differentiation of iterative implementations[#](#implicit-function-differentiation-of-iterative-implementations)
This example gets pretty deep in the mathematical weeds!
Another application for `jax.custom_vjp` is reverse-mode differentiation of functions that are JAX-transformable (by `jit`, `vmap`, …) but not efficiently JAX-differentiable for some reason, perhaps because they involve `lax.while_loop`. (It’s not possible to produce an XLA HLO program that efficiently computes the reverse-mode derivative of an XLA HLO While loop because that would require a program with unbounded memory use, which isn’t possible to express in XLA HLO, at least without side-effecting interactions through infeed/outfeed.)
For example, consider this `fixed_point` routine which computes a fixed point by iteratively applying a function in a `while_loop`:
```
from jax.lax import while_loop
def fixed_point(f, a, x_guess):
def cond_fun(carry):
x_prev, x = carry
return jnp.abs(x_prev - x) > 1e-6
def body_fun(carry):
_, x = carry
return x, f(a, x)
_, x_star = while_loop(cond_fun, body_fun, (x_guess, f(a, x_guess)))
return x_star
```
This is an iterative procedure for numerically solving the equation \(x = f(a, x)\) for \(x\), by iterating \(x_{t+1} = f(a, x_t)\) until \(x_{t+1}\) is sufficiently close to \(x_t\). The result \(x^*\) depends on the parameters \(a\), and so we can think of there being a function \(a \mapsto x^*(a)\) that is implicitly defined by equation \(x = f(a, x)\).
We can use `fixed_point` to run iterative procedures to convergence, for example running Newton’s method to calculate square roots while only executing adds, multiplies, and divides:
```
def newton_sqrt(a):
update = lambda a, x: 0.5 * (x + a / x)
return fixed_point(update, a, a)
```
```
print(newton_sqrt(2.))
```
```
1.4142135
```
We can `vmap` or `jit` the function as well:
```
print(jit(vmap(newton_sqrt))(jnp.array([1., 2., 3., 4.])))
```
```
[1. 1.4142135 1.7320509 2. ]
```
We can’t apply reverse-mode automatic differentiation because of the `while_loop`, but it turns out we wouldn’t want to anyway: instead of differentiating through the implementation of `fixed_point` and all its iterations, we can exploit the mathematical structure to do something that is much more memory-efficient (and FLOP-efficient in this case, too!). We can instead use the implicit function theorem [Prop A.25 of Bertsekas’s Nonlinear Programming, 2nd ed.], which guarantees (under some conditions) the existence of the mathematical objects we’re about to use. In essence, we linearize at the solution and solve those linear equations iteratively to compute the derivatives we want.
Consider again the equation \(x = f(a, x)\) and the function \(x^*\). We want to evaluate vector-Jacobian products like \(v^\mathsf{T} \mapsto v^\mathsf{T} \partial x^*(a_0)\).
At least in an open neighborhood around the point \(a_0\) at which we want to differentiate, let’s assume that the equation \(x^*(a) = f(a, x^*(a))\) holds for all \(a\). Since the two sides are equal as functions of \(a\), their derivatives must be equal as well, so let’s differentiate both sides:
\(\qquad \partial x^*(a) = \partial_0 f(a, x^*(a)) + \partial_1 f(a, x^*(a)) \partial x^*(a)\).
Setting \(A = \partial_1 f(a_0, x^*(a_0))\) and \(B = \partial_0 f(a_0, x^*(a_0))\), we can write the quantity we’re after more simply as
\(\qquad \partial x^*(a_0) = B + A \partial x^*(a_0)\),
or, by rearranging,
\(\qquad \partial x^*(a_0) = (I - A)^{-1} B\).
That means we can evaluate vector-Jacobian products like
\(\qquad v^\mathsf{T} \partial x^*(a_0) = v^\mathsf{T} (I - A)^{-1} B = w^\mathsf{T} B\),
where \(w^\mathsf{T} = v^\mathsf{T} (I - A)^{-1}\), or equivalently \(w^\mathsf{T} = v^\mathsf{T} + w^\mathsf{T} A\), or equivalently \(w^\mathsf{T}\) is the fixed point of the map \(u^\mathsf{T} \mapsto v^\mathsf{T} + u^\mathsf{T} A\). That last characterization gives us a way to write the VJP for `fixed_point` in terms of a call to `fixed_point`! Moreover, after expanding \(A\) and \(B\) back out, we can see we need only to evaluate VJPs of \(f\) at \((a_0, x^*(a_0))\).
Here’s the upshot:
```
from jax import vjp
@partial(custom_vjp, nondiff_argnums=(0,))
def fixed_point(f, a, x_guess):
def cond_fun(carry):
x_prev, x = carry
return jnp.abs(x_prev - x) > 1e-6
def body_fun(carry):
_, x = carry
return x, f(a, x)
_, x_star = while_loop(cond_fun, body_fun, (x_guess, f(a, x_guess)))
return x_star
def fixed_point_fwd(f, a, x_init):
x_star = fixed_point(f, a, x_init)
return x_star, (a, x_star)
def fixed_point_rev(f, res, x_star_bar):
a, x_star = res
_, vjp_a = vjp(lambda a: f(a, x_star), a)
a_bar, = vjp_a(fixed_point(partial(rev_iter, f),
(a, x_star, x_star_bar),
x_star_bar))
return a_bar, jnp.zeros_like(x_star)
def rev_iter(f, packed, u):
a, x_star, x_star_bar = packed
_, vjp_x = vjp(lambda x: f(a, x), x_star)
return x_star_bar + vjp_x(u)[0]
fixed_point.defvjp(fixed_point_fwd, fixed_point_rev)
```
```
print(newton_sqrt(2.))
```
```
1.4142135
```
```
print(grad(newton_sqrt)(2.))
print(grad(grad(newton_sqrt))(2.))
```
```
0.35355338
-0.088388346
```
We can check our answers by differentiating `jnp.sqrt`, which uses a totally different implementation:
```
print(grad(jnp.sqrt)(2.))
print(grad(grad(jnp.sqrt))(2.))
```
```
0.35355338
-0.08838835
```
A limitation to this approach is that the argument `f` can't close over any values involved in differentiation. That is, you might notice that we kept the parameter `a` explicit in the argument list of `fixed_point`. For this use case, consider using the low-level primitive `lax.custom_root`, which allows for differentiation with respect to closed-over variables with custom root-finding functions.
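Here is a minimal sketch of that alternative (ours, assuming the `jax.lax.custom_root(f, initial_guess, solve, tangent_solve)` API): the root-defining function may close over the value we differentiate with respect to, the forward solver need not be differentiable, and for scalars the linear `tangent_solve` can exploit the linearity of its argument:
```
import jax
import jax.numpy as jnp
from jax import lax

def newton_solve(f, x0):
    # forward solver: a few unrolled Newton steps (need not be differentiable)
    x = x0
    for _ in range(20):
        x = x - f(x) / jax.grad(f)(x)
    return x

def scalar_tangent_solve(g, y):
    # g is guaranteed linear, so for scalars g(x) = g(1.0) * x
    return y / g(1.0)

def sqrt_via_custom_root(a):
    f = lambda x: x ** 2 - a  # note: f closes over `a`
    return lax.custom_root(f, 1.0, newton_solve, scalar_tangent_solve)

print(sqrt_via_custom_root(2.))            # ~1.4142135
print(jax.grad(sqrt_via_custom_root)(2.))  # ~0.35355338
```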
#### Basic usage of `jax.custom_jvp` and `jax.custom_vjp` APIs[#](#basic-usage-of-jax-custom-jvp-and-jax-custom-vjp-apis)
##### Use `jax.custom_jvp` to define forward-mode (and, indirectly, reverse-mode) rules[#](#use-jax-custom-jvp-to-define-forward-mode-and-indirectly-reverse-mode-rules)
Here’s a canonical basic example of using `jax.custom_jvp`, where the comments use
[Haskell-like type signatures](https://wiki.haskell.org/Type_signature):
```
from jax import custom_jvp
import jax.numpy as jnp
# f :: a -> b
@custom_jvp
def f(x):
return jnp.sin(x)
# f_jvp :: (a, T a) -> (b, T b)
def f_jvp(primals, tangents):
x, = primals
t, = tangents
return f(x), jnp.cos(x) * t
f.defjvp(f_jvp)
```
```
<function __main__.f_jvp(primals, tangents)>
```
```
from jax import jvp
print(f(3.))
y, y_dot = jvp(f, (3.,), (1.,))
print(y)
print(y_dot)
```
```
0.14112
0.14112
-0.9899925
```
In words, we start with a primal function `f` that takes inputs of type `a` and produces outputs of type `b`. We associate with it a JVP rule function `f_jvp` that takes a pair of inputs representing the primal inputs of type `a` and the corresponding tangent inputs of type `T a`, and produces a pair of outputs representing the primal outputs of type `b` and tangent outputs of type `T b`. The tangent outputs should be a linear function of the tangent inputs.
You can also use `f.defjvp` as a decorator, as in
```
@custom_jvp
def f(x):
...
@f.defjvp
def f_jvp(primals, tangents):
...
```
Even though we defined only a JVP rule and no VJP rule, we can use both forward- and reverse-mode differentiation on `f`. JAX will automatically transpose the linear computation on tangent values from our custom JVP rule, computing the VJP as efficiently as if we had written the rule by hand:
```
from jax import grad
print(grad(f)(3.))
print(grad(grad(f))(3.))
```
```
-0.9899925
-0.14112
```
For automatic transposition to work, the JVP rule’s output tangents must be linear as a function of the input tangents. Otherwise a transposition error is raised.
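As a sketch of that failure mode (an example of ours, not from the original), a rule whose tangent output is quadratic in the input tangent still evaluates fine under `jvp`, but reverse-mode differentiation would fail when JAX tries to transpose it:
```
import jax.numpy as jnp
from jax import custom_jvp, jvp

@custom_jvp
def f_nonlinear(x):
    return jnp.sin(x)

@f_nonlinear.defjvp
def f_nonlinear_jvp(primals, tangents):
    x, = primals
    t, = tangents
    return f_nonlinear(x), t ** 2  # not linear in t!

print(jvp(f_nonlinear, (3.,), (1.,)))  # forward mode works: (0.14112..., 1.0)
# grad(f_nonlinear)(3.)  # reverse mode would raise an error at transposition
```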
Multiple arguments work like this:
```
@custom_jvp
def f(x, y):
return x ** 2 * y
@f.defjvp
def f_jvp(primals, tangents):
x, y = primals
x_dot, y_dot = tangents
primal_out = f(x, y)
tangent_out = 2 * x * y * x_dot + x ** 2 * y_dot
return primal_out, tangent_out
```
```
print(grad(f)(2., 3.))
```
```
12.0
```
The `defjvps` convenience wrapper lets us define a JVP for each argument separately, and the results are computed separately then summed:
```
@custom_jvp
def f(x):
return jnp.sin(x)
f.defjvps(lambda t, ans, x: jnp.cos(x) * t)
```
```
print(grad(f)(3.))
```
```
-0.9899925
```
Here’s a `defjvps` example with multiple arguments:
```
@custom_jvp
def f(x, y):
return x ** 2 * y
f.defjvps(lambda x_dot, primal_out, x, y: 2 * x * y * x_dot,
lambda y_dot, primal_out, x, y: x ** 2 * y_dot)
```
```
print(grad(f)(2., 3.))
print(grad(f, 0)(2., 3.))  # same as above
print(grad(f, 1)(2., 3.))
```
```
12.0
12.0
4.0
```
As a shorthand, with `defjvps` you can pass a `None` value to indicate that the JVP for a particular argument is zero:
```
@custom_jvp
def f(x, y):
return x ** 2 * y
f.defjvps(lambda x_dot, primal_out, x, y: 2 * x * y * x_dot,
None)
```
```
print(grad(f)(2., 3.))
print(grad(f, 0)(2., 3.))  # same as above
print(grad(f, 1)(2., 3.))
```
```
12.0
12.0
0.0
```
Calling a `jax.custom_jvp` function with keyword arguments, or writing a `jax.custom_jvp` function definition with default arguments, are both allowed so long as they can be unambiguously mapped to positional arguments based on the function signature retrieved by the standard library `inspect.signature` mechanism.
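As a quick check (ours, not from the original), the two-argument `f` defined just above can be called with a keyword argument, and differentiation resolves it to the corresponding positional argument:
```
print(grad(f)(2., y=3.))  # 12.0, same as grad(f)(2., 3.)
```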
When you’re not performing differentiation, the function `f` is called just as if it weren’t decorated by `jax.custom_jvp`:
```
@custom_jvp
def f(x):
print('called f!') # a harmless side-effect
return jnp.sin(x)
@f.defjvp
def f_jvp(primals, tangents):
print('called f_jvp!') # a harmless side-effect
x, = primals
t, = tangents
return f(x), jnp.cos(x) * t
```
```
from jax import vmap, jit
print(f(3.))
```
```
called f!
0.14112
```
```
print(vmap(f)(jnp.arange(3.)))
print(jit(f)(3.))
```
```
called f!
[0. 0.84147096 0.9092974 ]
called f!
0.14112
```
The custom JVP rule is invoked during differentiation, whether forward or reverse:
```
y, y_dot = jvp(f, (3.,), (1.,))
print(y_dot)
```
```
called f_jvp!
called f!
-0.9899925
```
```
print(grad(f)(3.))
```
```
called f_jvp!
called f!
-0.9899925
```
Notice that `f_jvp` calls `f` to compute the primal outputs. In the context of higher-order differentiation, each application of a differentiation transform will use the custom JVP rule if and only if the rule calls the original `f` to compute the primal outputs. (This represents a kind of fundamental tradeoff, where we can’t make use of intermediate values from the evaluation of `f` in our rule *and also* have the rule apply in all orders of higher-order differentiation.)
```
grad(grad(f))(3.)
```
```
called f_jvp!
called f_jvp!
called f!
```
```
Array(-0.14112, dtype=float32, weak_type=True)
```
You can use Python control flow with `jax.custom_jvp`:
```
@custom_jvp
def f(x):
if x > 0:
return jnp.sin(x)
else:
return jnp.cos(x)
@f.defjvp
def f_jvp(primals, tangents):
x, = primals
x_dot, = tangents
ans = f(x)
if x > 0:
return ans, 2 * x_dot
else:
return ans, 3 * x_dot
```
```
print(grad(f)(1.))
print(grad(f)(-1.))
```
```
2.0
3.0
```
##### Use `jax.custom_vjp` to define custom reverse-mode-only rules[#](#use-jax-custom-vjp-to-define-custom-reverse-mode-only-rules)
While `jax.custom_jvp` suffices for controlling both forward- and, via JAX’s automatic transposition, reverse-mode differentiation behavior, in some cases we may want to directly control a VJP rule, for example in the latter two example problems presented above. We can do that with `jax.custom_vjp`:
```
from jax import custom_vjp
import jax.numpy as jnp
# f :: a -> b
@custom_vjp
def f(x):
return jnp.sin(x)
# f_fwd :: a -> (b, c)
def f_fwd(x):
return f(x), jnp.cos(x)
# f_bwd :: (c, CT b) -> CT a
def f_bwd(cos_x, y_bar):
return (cos_x * y_bar,)
f.defvjp(f_fwd, f_bwd)
```
```
from jax import grad
print(f(3.))
print(grad(f)(3.))
```
```
0.14112
-0.9899925
```
In words, we again start with a primal function `f` that takes inputs of type `a` and produces outputs of type `b`. We associate with it two functions, `f_fwd` and `f_bwd`, which describe how to perform the forward- and backward-passes of reverse-mode autodiff, respectively.
The function `f_fwd` describes the forward pass, not only the primal computation but also what values to save for use on the backward pass. Its input signature is just like that of the primal function `f`, in that it takes a primal input of type `a`. But as output it produces a pair, where the first element is the primal output `b` and the second element is any “residual” data of type `c` to be stored for use by the backward pass. (This second output is analogous to [PyTorch’s save_for_backward mechanism](https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html).)
The function `f_bwd` describes the backward pass. It takes two inputs, where the first is the residual data of type `c` produced by `f_fwd` and the second is the output cotangents of type `CT b` corresponding to the output of the primal function. It produces an output of type `CT a` representing the cotangents corresponding to the input of the primal function. In particular, the output of `f_bwd` must be a sequence (e.g. a tuple) of length equal to the number of arguments to the primal function.
So multiple arguments work like this:
```
from jax import custom_vjp
@custom_vjp
def f(x, y):
return jnp.sin(x) * y
def f_fwd(x, y):
return f(x, y), (jnp.cos(x), jnp.sin(x), y)
def f_bwd(res, g):
cos_x, sin_x, y = res
return (cos_x * g * y, sin_x * g)
f.defvjp(f_fwd, f_bwd)
```
```
print(grad(f)(2., 3.))
```
```
-1.2484405
```
Calling a `jax.custom_vjp` function with keyword arguments, or writing a `jax.custom_vjp` function definition with default arguments, are both allowed so long as they can be unambiguously mapped to positional arguments based on the function signature retrieved by the standard library `inspect.signature` mechanism.
As with `jax.custom_jvp`, the custom VJP rule composed of `f_fwd` and `f_bwd` is not invoked if differentiation is not applied. If the function is evaluated, or transformed with `jit`, `vmap`, or other non-differentiation transformations, then only `f` is called.
```
@custom_vjp
def f(x):
print("called f!")
return jnp.sin(x)
def f_fwd(x):
print("called f_fwd!")
return f(x), jnp.cos(x)
def f_bwd(cos_x, y_bar):
print("called f_bwd!")
return (cos_x * y_bar,)
f.defvjp(f_fwd, f_bwd)
```
```
print(f(3.))
```
```
called f!
0.14112
```
```
print(grad(f)(3.))
```
```
called f_fwd!
called f!
called f_bwd!
-0.9899925
```
```
from jax import vjp
y, f_vjp = vjp(f, 3.)
print(y)
```
```
called f_fwd!
called f!
0.14112
```
```
print(f_vjp(1.))
```
```
called f_bwd!
(Array(-0.9899925, dtype=float32, weak_type=True),)
```
**Forward-mode autodiff cannot be used on the** `jax.custom_vjp` **function** and will raise an error:
```
from jax import jvp
try:
jvp(f, (3.,), (1.,))
except TypeError as e:
print('ERROR! {}'.format(e))
```
```
called f_fwd!
called f!
ERROR! can't apply forward-mode autodiff (jvp) to a custom_vjp function.
```
If you want to use both forward- and reverse-mode, use `jax.custom_jvp` instead.
We can use `jax.custom_vjp` together with `pdb` to insert a debugger trace in the backward pass:
```
import pdb
@custom_vjp
def debug(x):
return x # acts like identity
def debug_fwd(x):
return x, x
def debug_bwd(x, g):
import pdb; pdb.set_trace()
return g
debug.defvjp(debug_fwd, debug_bwd)
```
```
def foo(x):
y = x ** 2
y = debug(y) # insert pdb in corresponding backward pass step
return jnp.sin(y)
```
```
jax.grad(foo)(3.)
> <ipython-input-113-b19a2dc1abf7>(12)debug_bwd()
-> return g
(Pdb) p x
Array(9., dtype=float32)
(Pdb) p g
Array(-0.91113025, dtype=float32)
(Pdb) q
```
#### More features and details[#](#more-features-and-details)
##### Working with `list` / `tuple` / `dict` containers (and other pytrees)[#](#working-with-list-tuple-dict-containers-and-other-pytrees)
You should expect standard Python containers like lists, tuples, namedtuples, and dicts to just work, along with nested versions of those. In general, any [pytrees](https://jax.readthedocs.io/en/latest/pytrees.html) are permissible, so long as their structures are consistent according to the type constraints.
Here’s a contrived example with `jax.custom_jvp`:
```
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
@custom_jvp
def f(pt):
x, y = pt.x, pt.y
return {'a': x ** 2,
'b': (jnp.sin(x), jnp.cos(y))}
@f.defjvp
def f_jvp(primals, tangents):
pt, = primals
pt_dot, = tangents
ans = f(pt)
ans_dot = {'a': 2 * pt.x * pt_dot.x,
'b': (jnp.cos(pt.x) * pt_dot.x, -jnp.sin(pt.y) * pt_dot.y)}
return ans, ans_dot
def fun(pt):
dct = f(pt)
return dct['a'] + dct['b'][0]
```
```
pt = Point(1., 2.)
print(f(pt))
```
```
{'a': 1.0, 'b': (Array(0.84147096, dtype=float32, weak_type=True), Array(-0.41614684, dtype=float32, weak_type=True))}
```
```
print(grad(fun)(pt))
```
```
Point(x=Array(2.5403023, dtype=float32, weak_type=True), y=Array(0., dtype=float32, weak_type=True))
```
And an analogous contrived example with `jax.custom_vjp`:
```
@custom_vjp
def f(pt):
x, y = pt.x, pt.y
return {'a': x ** 2,
'b': (jnp.sin(x), jnp.cos(y))}
def f_fwd(pt):
return f(pt), pt
def f_bwd(pt, g):
a_bar, (b0_bar, b1_bar) = g['a'], g['b']
x_bar = 2 * pt.x * a_bar + jnp.cos(pt.x) * b0_bar
y_bar = -jnp.sin(pt.y) * b1_bar
return (Point(x_bar, y_bar),)
f.defvjp(f_fwd, f_bwd)
def fun(pt):
dct = f(pt)
return dct['a'] + dct['b'][0]
```
```
pt = Point(1., 2.)
print(f(pt))
```
```
{'a': 1.0, 'b': (Array(0.84147096, dtype=float32, weak_type=True), Array(-0.41614684, dtype=float32, weak_type=True))}
```
```
print(grad(fun)(pt))
```
```
Point(x=Array(2.5403023, dtype=float32, weak_type=True), y=Array(-0., dtype=float32, weak_type=True))
```
##### Handling non-differentiable arguments[#](#handling-non-differentiable-arguments)
Some use cases, like the final example problem, call for non-differentiable arguments like function-valued arguments to be passed to functions with custom differentiation rules, and for those arguments to also be passed to the rules themselves. In the case of `fixed_point`, the function argument `f` was such a non-differentiable argument. A similar situation arises with `jax.experimental.odeint`.
###### `jax.custom_jvp` with `nondiff_argnums`[#](#jax-custom-jvp-with-nondiff-argnums)
Use the optional `nondiff_argnums` parameter to `jax.custom_jvp` to indicate arguments like these. Here’s an example with `jax.custom_jvp`:
```
from functools import partial
@partial(custom_jvp, nondiff_argnums=(0,))
def app(f, x):
return f(x)
@app.defjvp
def app_jvp(f, primals, tangents):
x, = primals
x_dot, = tangents
return f(x), 2. * x_dot
```
```
print(app(lambda x: x ** 3, 3.))
```
```
27.0
```
```
print(grad(app, 1)(lambda x: x ** 3, 3.))
```
```
2.0
```
Notice the gotcha here: no matter where in the argument list these parameters appear, they’re placed at the *start* of the signature of the corresponding JVP rule. Here’s another example:
```
@partial(custom_jvp, nondiff_argnums=(0, 2))
def app2(f, x, g):
    return f(g(x))
@app2.defjvp
def app2_jvp(f, g, primals, tangents):
x, = primals
x_dot, = tangents
return f(g(x)), 3. * x_dot
```
```
print(app2(lambda x: x ** 3, 3., lambda y: 5 * y))
```
```
3375.0
```
```
print(grad(app2, 1)(lambda x: x ** 3, 3., lambda y: 5 * y))
```
```
3.0
```
###### `jax.custom_vjp` with `nondiff_argnums`[#](#jax-custom-vjp-with-nondiff-argnums)
A similar option exists for `jax.custom_vjp`, and, similarly, the convention is that the non-differentiable arguments are passed as the first arguments to the `_bwd` rule, no matter where they appear in the signature of the original function. The signature of the `_fwd` rule remains unchanged - it is the same as the signature of the primal function. Here’s an example:
```
@partial(custom_vjp, nondiff_argnums=(0,))
def app(f, x):
return f(x)
def app_fwd(f, x):
return f(x), x
def app_bwd(f, x, g):
return (5 * g,)
app.defvjp(app_fwd, app_bwd)
```
```
print(app(lambda x: x ** 2, 4.))
```
```
16.0
```
```
print(grad(app, 1)(lambda x: x ** 2, 4.))
```
```
5.0
```
See `fixed_point` above for another usage example.
**You don’t need to use** `nondiff_argnums` **with array-valued arguments**, for example ones with integer dtype. Instead, `nondiff_argnums` should only be used for argument values that don’t correspond to JAX types (essentially don’t correspond to array types), like Python callables or strings. If JAX detects that an argument indicated by `nondiff_argnums` contains a JAX Tracer, then an error is raised. The `clip_gradient` function above is a good example of not using `nondiff_argnums` for integer-dtype array arguments.
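For example, here is a sketch of ours (function names hypothetical) of a `jax.custom_vjp` function that takes an integer index as an ordinary argument and returns a `None` cotangent for it, rather than using `nondiff_argnums`:
```
import jax.numpy as jnp
from jax import custom_vjp, grad

@custom_vjp
def take_scaled(x, i):
    return 2. * x[i]  # i is an integer-dtype argument

def take_scaled_fwd(x, i):
    return take_scaled(x, i), (x, i)

def take_scaled_bwd(res, g):
    x, i = res
    x_bar = jnp.zeros_like(x).at[i].set(2. * g)
    return (x_bar, None)  # None marks a zero cotangent for the integer argument

take_scaled.defvjp(take_scaled_fwd, take_scaled_bwd)

print(grad(take_scaled)(jnp.arange(4.), 2))  # [0. 0. 2. 0.]
```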
### Control autodiff’s saved values with `jax.checkpoint` (aka `jax.remat`)[#](#control-autodiff-s-saved-values-with-jax-checkpoint-aka-jax-remat)
```
import jax
import jax.numpy as jnp
```
#### TL;DR[#](#tl-dr)
Use the `jax.checkpoint` decorator (aliased as `jax.remat`) with `jax.grad` to control which intermediates are saved on the forward pass versus recomputed on the backward pass, trading off memory and FLOPs.
**Don’t miss the [practical notes](#practical-notes) for a discussion about how `jax.checkpoint` interacts with `jax.jit`.**
Without using `jax.checkpoint`, the forward pass of `jax.grad(f)(x)` saves, for use on the backward pass, the values of Jacobian coefficients and other intermediates. We call these saved values *residuals*:
```
def g(W, x):
y = jnp.dot(W, x)
return jnp.sin(y)
def f(W1, W2, W3, x):
x = g(W1, x)
x = g(W2, x)
x = g(W3, x)
return x
W1 = jnp.ones((5, 4))
W2 = jnp.ones((6, 5))
W3 = jnp.ones((7, 6))
x = jnp.ones(4)
# Inspect the 'residual' values to be saved on the forward pass
# if we were to evaluate `jax.grad(f)(W1, W2, W3, x)`
from jax.ad_checkpoint import print_saved_residuals

jax.ad_checkpoint.print_saved_residuals(f, W1, W2, W3, x)
```
```
f32[5,4] from the argument 'W1'
f32[6,5] from the argument 'W2'
f32[7,6] from the argument 'W3'
f32[4] from the argument 'x'
f32[5] output of sin from <ipython-input-4-f510dde58e22>:3 (g)
f32[5] output of cos from <ipython-input-4-f510dde58e22>:3 (g)
f32[6] output of sin from <ipython-input-4-f510dde58e22>:3 (g)
f32[6] output of cos from <ipython-input-4-f510dde58e22>:3 (g)
f32[7] output of cos from <ipython-input-4-f510dde58e22>:3 (g)
```
By applying `jax.checkpoint` to sub-functions, as a decorator or at specific application sites, we force JAX not to save any of that sub-function’s residuals. Instead, only the inputs of a `jax.checkpoint`-decorated function might be saved, and any residuals consumed on the backward pass are re-computed from those inputs as needed:
```
def f2(W1, W2, W3, x):
x = jax.checkpoint(g)(W1, x)
x = jax.checkpoint(g)(W2, x)
x = jax.checkpoint(g)(W3, x)
return x
jax.ad_checkpoint.print_saved_residuals(f2, W1, W2, W3, x)
```
```
f32[5,4] from the argument 'W1'
f32[6,5] from the argument 'W2'
f32[7,6] from the argument 'W3'
f32[4] from the argument 'x'
f32[5] output of sin from <ipython-input-4-f510dde58e22>:3 (g)
f32[6] output of sin from <ipython-input-4-f510dde58e22>:3 (g)
```
Here the values of two `sin` applications are saved because they are arguments in subsequent applications of the `jax.checkpoint`-decorated `g` function, and inputs to a `jax.checkpoint`-decorated function may be saved. But no values of
`cos` applications are saved.
To control which values are saveable without having to edit the definition of the function to be differentiated, you can use a rematerialization *policy*. Here is an example that saves only the results of `dot` operations with no batch dimensions (since they are often FLOP-bound, and hence worth saving rather than recomputing):
```
f3 = jax.checkpoint(f, policy=jax.checkpoint_policies.dots_with_no_batch_dims_saveable)
jax.ad_checkpoint.print_saved_residuals(f3, W1, W2, W3, x)
```
```
f32[5,4] from the argument 'W1'
f32[6,5] from the argument 'W2'
f32[7,6] from the argument 'W3'
f32[4] from the argument 'x'
f32[5] output of dot_general from <ipython-input-4-f510dde58e22>:2 (g)
f32[6] output of dot_general from <ipython-input-4-f510dde58e22>:2 (g)
f32[7] output of dot_general from <ipython-input-4-f510dde58e22>:2 (g)
```
You can also use policies to refer to intermediate values you name using `jax.ad_checkpoint.checkpoint_name`:
```
from jax.ad_checkpoint import checkpoint_name
def f4(W1, W2, W3, x):
x = checkpoint_name(g(W1, x), name='a')
x = checkpoint_name(g(W2, x), name='b')
x = checkpoint_name(g(W3, x), name='c')
return x
f4 = jax.checkpoint(f4, policy=jax.checkpoint_policies.save_only_these_names('a'))
jax.ad_checkpoint.print_saved_residuals(f4, W1, W2, W3, x)
```
```
f32[5,4] from the argument 'W1'
f32[6,5] from the argument 'W2'
f32[7,6] from the argument 'W3'
f32[4] from the argument 'x'
f32[5] named 'a' from <ipython-input-7-fc0ed1c14b8d>:4 (f4)
```
When playing around with these toy examples, we can get a closer look at what’s going on using the `print_fwd_bwd` utility defined in this notebook:
```
from jax.tree_util import tree_flatten, tree_unflatten
from rich.console import Console
from rich.table import Table
import rich.text
def print_fwd_bwd(f, *args, **kwargs) -> None:
args, in_tree = tree_flatten((args, kwargs))
def f_(*args):
args, kwargs = tree_unflatten(in_tree, args)
return f(*args, **kwargs)
fwd = jax.make_jaxpr(lambda *args: jax.vjp(f_, *args))(*args).jaxpr
y, f_vjp = jax.vjp(f_, *args)
res, in_tree = tree_flatten(f_vjp)
def g_(*args):
*res, y = args
f_vjp = tree_unflatten(in_tree, res)
return f_vjp(y)
bwd = jax.make_jaxpr(g_)(*res, y).jaxpr
table = Table(show_header=False, show_lines=True, padding=(1, 2, 0, 2), box=None)
table.add_row("[bold green]forward computation:",
"[bold green]backward computation:")
table.add_row(rich.text.Text.from_ansi(str(fwd)),
rich.text.Text.from_ansi(str(bwd)))
console = Console(width=240, force_jupyter=True)
console.print(table)
def _renderable_repr(self):
    return self.html

rich.jupyter.JupyterRenderable._repr_html_ = _renderable_repr
```
```
# no use of jax.checkpoint:
print_fwd_bwd(f, W1, W2, W3, x)
```
```
forward computation: backward computation:
{ lambda ; a:f32[5,4] b:f32[6,5] c:f32[7,6] d:f32[4]. let { lambda ; a:f32[7] b:f32[6] c:f32[7,6] d:f32[6] e:f32[5] f:f32[6,5] g:f32[5] h:f32[4]
e:f32[5] = dot_general[dimension_numbers=(([1], [0]), ([], []))] a d i:f32[5,4] j:f32[7]. let
f:f32[5] = sin e k:f32[7] = mul j a
g:f32[5] = cos e l:f32[6] = dot_general[dimension_numbers=(([0], [0]), ([], []))] k c
h:f32[6] = dot_general[dimension_numbers=(([1], [0]), ([], []))] b f m:f32[7,6] = dot_general[dimension_numbers=(([], []), ([], []))] k b
i:f32[6] = sin h n:f32[6] = mul l d
j:f32[6] = cos h o:f32[5] = dot_general[dimension_numbers=(([0], [0]), ([], []))] n f
k:f32[7] = dot_general[dimension_numbers=(([1], [0]), ([], []))] c i p:f32[6,5] = dot_general[dimension_numbers=(([], []), ([], []))] n e
l:f32[7] = sin k q:f32[5] = mul o g
m:f32[7] = cos k r:f32[4] = dot_general[dimension_numbers=(([0], [0]), ([], []))] q i
in (l, m, i, c, j, f, b, g, d, a) } s:f32[5,4] = dot_general[dimension_numbers=(([], []), ([], []))] q h
in (s, p, m, r) }
```
```
# using jax.checkpoint with policy=jax.checkpoint_policies.dots_with_no_batch_dims_saveable:
print_fwd_bwd(f3, W1, W2, W3, x)
```
```
forward computation: backward computation:
{ lambda ; a:f32[5,4] b:f32[6,5] c:f32[7,6] d:f32[4]. let { lambda ; a:f32[5] b:f32[6] c:f32[7] d:f32[5,4] e:f32[6,5] f:f32[7,6] g:f32[4] h:f32[7]. let
e:f32[5] = dot_general[dimension_numbers=(([1], [0]), ([], []))] a d i:f32[5,4] j:f32[6,5] k:f32[7,6] l:f32[4] = remat2[
f:f32[5] = sin e differentiated=True
g:f32[6] = dot_general[dimension_numbers=(([1], [0]), ([], []))] b f jaxpr={ lambda ; m:f32[5] n:f32[6] o:f32[7] p:f32[5,4] q:f32[6,5] r:f32[7,6]
h:f32[6] = sin g s:f32[4] t:f32[7]. let
i:f32[7] = dot_general[dimension_numbers=(([1], [0]), ([], []))] c h u:f32[5] = sin m
j:f32[7] = sin i v:f32[5] = cos m
in (j, e, g, i, a, b, c, d) } w:f32[6] = sin n
x:f32[6] = cos n
y:f32[7] = cos o
z:f32[7] = mul t y
ba:f32[6] = dot_general[dimension_numbers=(([0], [0]), ([], []))] z r
bb:f32[6] = mul ba x
bc:f32[5] = dot_general[dimension_numbers=(([0], [0]), ([], []))] bb q
bd:f32[5] = mul bc v
be:f32[4] = dot_general[dimension_numbers=(([0], [0]), ([], []))] bd p
bf:f32[5,4] = dot_general[dimension_numbers=(([], []), ([], []))] bd s
bg:f32[6,5] = dot_general[dimension_numbers=(([], []), ([], []))] bb u
bh:f32[7,6] = dot_general[dimension_numbers=(([], []), ([], []))] z w
in (bf, bg, bh, be) }
policy=<function dot_with_no_batch_dims at 0x7f5e469b1700 prevent_cse=True
] a b c d e f g h
in (i, j, k, l) }
```
#### Let’s think step by step[#](#let-s-think-step-by-step)
You might want to first (re)read [the Autodiff Cookbook Part 1](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html).
##### Fundamentals of `jax.checkpoint`[#](#fundamentals-of-jax-checkpoint)
In both `jax.linearize` and `jax.vjp` there is flexibility in how and when some values are computed. Different choices can trade off memory use against FLOPs. JAX provides control over these choices with `jax.checkpoint`.
One such choice is whether to perform Jacobian coefficient computations on the forward pass, as soon as the inputs are available, or on the backward pass, just before the coefficients are needed. Consider the example of `sin_vjp`:
```
def sin_vjp(x):
y = jnp.sin(x)
cos_x = jnp.cos(x)
return y, lambda y_bar: cos_x * y_bar
```
Another valid implementation would compute the value of `jnp.cos(x)` on the backward pass rather than on the forward pass:
```
def sin_vjp2(x):
y = jnp.sin(x)
return y, lambda y_bar: jnp.cos(x) * y_bar
```
For this particular function, the amount of memory used by the two versions is the same, though we’ve reduced the FLOPs for the primal computation (i.e. the forward pass) and increased the FLOPs for the cotangent computation (i.e. the backward pass).
There’s another choice when it comes to function composition. Recall our VJP rule for a composition of two functions:
```
def f(x):
y = g(x)
z = h(y)
return z
def f_vjp(x):
y, g_vjp = jax.vjp(g, x)
z, h_vjp = jax.vjp(h, y)
def f_bwd(z_bar):
y_bar, = h_vjp(z_bar)
x_bar, = g_vjp(y_bar)
return x_bar
return z, f_bwd
```
An alternative is:
```
def f_vjp_checkpoint(x):
y = g(x)
z, h_vjp = jax.vjp(h, y)
def f_bwd2(z_bar):
y_bar, = h_vjp(z_bar)
_, g_vjp = jax.vjp(g, x)
x_bar, = g_vjp(y_bar)
return x_bar
return z, f_bwd2
```
In words, this alternative implementation doesn’t compute `g_vjp`, or the residual values in its closure, on the forward pass. Instead it only computes them in the backward pass `f_bwd2`. That means `f_vjp_checkpoint` requires less memory: if `g` and `h` each required similar amounts of memory for their residuals, each much larger than `x`, then the function produced by `f_vjp_checkpoint(x)` requires half the memory as that of `f_vjp(x)`!
The cost we pay is redundant work: in `f_bwd2` we must re-evaluate `g(x)` as part of `jax.vjp(g, x)` just to discard its value (in the underscore variable on the line `_, g_vjp = jax.vjp(g, x)`).
We can get this VJP behavior in autodiff, without having to write VJP functions directly, by instead using `jax.checkpoint` in an alternative definition of the original function `f`:
```
def f_checkpoint(x):
y = jax.checkpoint(g)(x)
z = h(y)
return z
```
In other words, we apply `jax.checkpoint` to `g`, the first stage of `f`, rather than to `f` itself. This way, when we evaluate `jax.grad(f_checkpoint)(x)`, we’d get a computation like:
1. run the forward pass of `g`, discarding residual values;
2. run the forward pass of `h`, saving residuals;
3. run the backward pass of `h`, consuming residuals from step 2;
4. re-run the forward pass of `g`, saving residuals;
5. run the backward pass of `g`, consuming residuals from step 4.
That is, by evaluating `jax.grad(f_checkpoint)(x)` we’d get the same computation as:
```
def f_checkpoint_grad(x):
y = g(x) # step 1
_, h_vjp = jax.vjp(h, y) # step 2
y_bar, = h_vjp(1.0) # step 3
_, g_vjp = jax.vjp(g, x) # step 4
x_bar, = g_vjp(y_bar) # step 5
return x_bar
```
In general, `jax.checkpoint(foo)` is a new function which has the same input-output behavior as `foo`, but behaves differently under autodiff, particularly under `jax.linearize` and `jax.vjp` (and their wrappers, like `jax.grad`) but not `jax.jvp`. When differentiated, only the input to a `jax.checkpoint`-differentiated function is stored on the forward pass; on the backward pass, residuals (i.e. intermediates from `foo` and its Jacobian coefficient values needed for the backward pass) are recomputed.
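To see this concretely, here is a small check of ours using `print_saved_residuals` from earlier (the functions are illustrative stand-ins for `g` and `h`):
```
import jax
import jax.numpy as jnp

def g(x):
    return jnp.sin(jnp.sin(x))

def h(y):
    return jnp.sin(jnp.sin(y))

def f(x):
    return h(g(x))

def f_checkpoint(x):
    return h(jax.checkpoint(g)(x))

# the plain version saves cos residuals from both g and h ...
jax.ad_checkpoint.print_saved_residuals(f, 3.)
# ... while the checkpointed version drops g's interior residuals,
# keeping only g's input (g is re-run on the backward pass instead)
jax.ad_checkpoint.print_saved_residuals(f_checkpoint, 3.)
```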
Notice that if `f = lambda x: h(g(x))` is the function we want to differentiate, i.e. if we want to apply `jax.grad(f)`, we don’t get any memory savings by applying `jax.checkpoint` to `f` itself. That’s because evaluating `jax.grad(jax.checkpoint(f))(x)` would lead to a computation like:
1. run the forward pass, discarding all residuals;
2. immediately re-run the forward pass, saving residuals;
3. run the backward pass, consuming residuals from step 2.
That is, in code we’d have something like:
```
def f_grad_bad(x):
_ = f(x) # step 1
_, f_vjp = jax.vjp(f, x) # step 2
x_bar, = f_vjp(1.0) # step 3
return x_bar
```
We also wouldn’t get any memory savings by applying `jax.checkpoint` to `h`, the second stage of `f`. That’s because evaluating `jax.grad(lambda x: jax.checkpoint(h)(g(x)))` would lead to a computation like:
1. run the forward pass of `g`, saving residuals;
2. run the forward pass of `h`, discarding residuals;
3. immediately re-run the forward pass of `h`, saving residuals;
4. run the backward pass of `h`, consuming residuals from step 3;
5. run the backward pass of `g`, consuming residuals from step 1.
That is, in code we’d have something like:
```
def f_grad_bad2(x):
y, g_vjp = jax.vjp(g, x) # step 1
z = h(y) # step 2
_, h_vjp = jax.vjp(h, y) # step 3
y_bar, = h_vjp(1.0) # step 4
x_bar, = g_vjp(y_bar) # step 5
return x_bar
```
Slightly more generally, if we had a chain composition of functions, like `f = lambda x: f3(f2(f1(x)))`, and we were interested in evaluating `jax.grad(f)`, we could say that:
* we shouldn’t apply `jax.checkpoint` to the whole function `f`, since that wouldn’t save any memory (and will perform wasteful recomputation);
* we shouldn’t apply `jax.checkpoint` to the last sub-function `f3`, since that wouldn’t save any memory (and will perform wasteful recomputation);
* we could apply `jax.checkpoint` to `f1`, `f2`, or their composition `lambda x: f2(f1(x))`, since any of those might save memory and would express different memory/recompute tradeoffs.
##### Custom policies for what’s saveable[#](#custom-policies-for-what-s-saveable)
As shown so far, using `jax.checkpoint` switches from one extreme to another:
* without `jax.checkpoint`, JAX’s autodiff tends to compute everything possible on the forward pass and store it for the backward pass;
* with a `jax.checkpoint` decorator, we instead compute as little as possible on the forward pass and recompute values as needed on the backward pass.
To operate between these two extremes, saving some things and not others, we can carefully place `jax.checkpoint` decorators on sub-functions. But that requires editing the function to be differentiated, e.g. model code, which may be inconvenient. It can also be hard to experiment with variations.
So an alternative is to use the `policy` argument to `jax.checkpoint`. A policy is a callable (i.e. a function) which takes as input a type-level specification of a first order primitive application and returns a boolean indicating whether the corresponding output value(s) are allowed to be saved as residuals (or instead must be recomputed in the (co)tangent computation as needed). To write robust code, a policy should be selected from the attributes on `jax.checkpoint_policies`, like `jax.checkpoint_policies.dots_with_no_batch_dims_saveable`, since the API for writing custom policy callables is considered internal.
For example, consider this function to be differentiated:
```
def loss(params, x, y):
return jnp.sum((predict(params, x) - y)**2)
def predict(params, x):
*Ws, Wlast = params
for W in Ws:
x = layer(W, x)
x = jnp.dot(Wlast, x)
return x
def layer(W, x):
return jnp.sin(jnp.dot(W, x))
```
```
W1 = W2 = W3 = jnp.ones((4, 4))
params = [W1, W2, W3]
x = jnp.ones(4)
y = jnp.ones(4)
```
```
print_saved_residuals(loss, params, x, y)
```
```
f32[4,4] from the argument 'params'
f32[4,4] from the argument 'params'
f32[4,4] from the argument 'params'
f32[4] from the argument 'x'
f32[4] output of sin from <ipython-input-18-3808b5023c3d>:12 (layer)
f32[4] output of cos from <ipython-input-18-3808b5023c3d>:12 (layer)
f32[4] output of sin from <ipython-input-18-3808b5023c3d>:12 (layer)
f32[4] output of cos from <ipython-input-18-3808b5023c3d>:12 (layer)
f32[4] output of mul from <ipython-input-18-3808b5023c3d>:2 (loss)
```
Instead of saving so many values on the forward pass, perhaps we only want to save the results of matrix multiplications with no batch dimension (since they may be FLOP- rather than memory-bound). We can do that using the policy `jax.checkpoint_policies.dots_with_no_batch_dims_saveable`:
```
loss_checkpoint = jax.checkpoint(loss, policy=jax.checkpoint_policies.dots_with_no_batch_dims_saveable)
print_saved_residuals(loss_checkpoint, params, x, y)
```
```
f32[4,4] from the argument 'params'
f32[4,4] from the argument 'params'
f32[4,4] from the argument 'params'
f32[4] from the argument 'x'
f32[4] from the argument 'y'
f32[4] output of dot_general from <ipython-input-18-3808b5023c3d>:12 (layer)
f32[4] output of dot_general from <ipython-input-18-3808b5023c3d>:12 (layer)
f32[4] output of dot_general from <ipython-input-18-3808b5023c3d>:8 (predict)
```
Notice also that by providing a policy, we didn’t need to edit the code defining `loss`, `predict`, or `layer`. That is particularly convenient if we want to experiment with policies in calling code (e.g. a training script) without changing library code (e.g. the neural network library).
Some policies can refer to values named with `jax.ad_checkpoint.checkpoint_name`:
```
from jax.ad_checkpoint import checkpoint_name
def predict(params, x):
*Ws, Wlast = params
for i, W in enumerate(Ws):
x = layer(W, x)
x = checkpoint_name(x, name=f'layer{i}_output')
x = jnp.dot(Wlast, x)
return x
```
By itself, `checkpoint_name` is just an identity function. But because some policy functions know to look for them, we can use the names to control whether certain values output by `checkpoint_name` are considered saveable:
```
print_saved_residuals(loss, params, x, y)
```
```
f32[4,4] from the argument 'params'
f32[4,4] from the argument 'params'
f32[4,4] from the argument 'params'
f32[4] from the argument 'x'
f32[4] output of cos from <ipython-input-18-3808b5023c3d>:12 (layer)
f32[4] named 'layer0_output' from <ipython-input-22-e48aedf368ad>:7 (predict)
f32[4] output of cos from <ipython-input-18-3808b5023c3d>:12 (layer)
f32[4] named 'layer1_output' from <ipython-input-22-e48aedf368ad>:7 (predict)
f32[4] output of mul from <ipython-input-18-3808b5023c3d>:2 (loss)
```
```
loss_checkpoint2 = jax.checkpoint(loss, policy=jax.checkpoint_policies.save_any_names_but_these('layer1_output'))
print_saved_residuals(loss_checkpoint2, params, x, y)
```
```
f32[4,4] from the argument 'params'
f32[4,4] from the argument 'params'
f32[4,4] from the argument 'params'
f32[4] from the argument 'x'
f32[4] from the argument 'y'
```
Another policy which refers to names is `jax.checkpoint_policies.save_only_these_names`.
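For example (our own variation on the cells above), keeping only the value named `'layer0_output'` among the named intermediates:
```
loss_checkpoint3 = jax.checkpoint(loss, policy=jax.checkpoint_policies.save_only_these_names('layer0_output'))
print_saved_residuals(loss_checkpoint3, params, x, y)
```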
Some of the policies are:
* `everything_saveable` (the default strategy, as if `jax.checkpoint` were not being used at all)
* `nothing_saveable` (i.e. rematerialize everything, as if a custom policy were not being used at all)
* `dots_saveable` or its alias `checkpoint_dots`
* `dots_with_no_batch_dims_saveable` or its alias `checkpoint_dots_with_no_batch_dims`
* `save_anything_but_these_names` (save any values except for the output of
`checkpoint_name` with any of the names given)
* `save_any_names_but_these` (save only named values, i.e. any outputs of
`checkpoint_name`, except for those with the names given)
* `save_only_these_names` (save only named values, and only among the names given)
Policies only indicate what is saveable; a value is only saved if it’s actually needed by the backward pass.
##### Advanced: recursive `jax.checkpoint`[#](#advanced-recursive-jax-checkpoint)
By applying `jax.checkpoint` in the right way, there are many tradeoffs between memory usage and (re)computation that can be expressed. One surprising example is *recursive* checkpointing, where we apply `jax.checkpoint` to a function which itself calls `jax.checkpoint`-decorated functions in a way so that memory usage from the chain composition of \(D\) functions scales like \(\mathcal{O}(\log_2 D)\) rather than \(\mathcal{O}(D)\).
As a toy example, consider the chain composition of multiple `jnp.sin` functions:
```
def chain_compose(funs):
def f(x):
for fun in funs:
x = fun(x)
return x
return f
f = chain_compose([jnp.sin] * 8)
print_saved_residuals(f, 3.)
```
```
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
```
In general, the number of stored residuals scales linearly with the length of the chain:
```
f = chain_compose([jnp.sin] * 16)
print_saved_residuals(f, 3.)
```
```
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
f32[] output of cos from <ipython-input-25-46b5594773cb>:4 (f)
```
But we can apply `jax.checkpoint` recursively to improve the scaling:
```
def recursive_checkpoint(funs):
if len(funs) == 1:
return funs[0]
elif len(funs) == 2:
f1, f2 = funs
return lambda x: f1(f2(x))
else:
f1 = recursive_checkpoint(funs[:len(funs)//2])
f2 = recursive_checkpoint(funs[len(funs)//2:])
return lambda x: f1(jax.checkpoint(f2)(x))
```
```
f = recursive_checkpoint([jnp.sin] * 8)
print_saved_residuals(f, 3.)
```
```
f32[] from the argument 'x'
f32[] output of sin from <ipython-input-27-86f83c871e81>:6 (<lambda>)
f32[] output of cos from <ipython-input-27-86f83c871e81>:6 (<lambda>)
f32[] output of cos from <ipython-input-27-86f83c871e81>:6 (<lambda>)
```
```
f = recursive_checkpoint([jnp.sin] * 16)
print_saved_residuals(f, 3.)
```
```
f32[] from the argument 'x'
f32[] output of sin from <ipython-input-27-86f83c871e81>:6 (<lambda>)
f32[] output of sin from <ipython-input-27-86f83c871e81>:6 (<lambda>)
f32[] output of cos from <ipython-input-27-86f83c871e81>:6 (<lambda>)
f32[] output of cos from <ipython-input-27-86f83c871e81>:6 (<lambda>)
```
The cost here, as usual, is recomputation: in particular, we end up performing \(\mathcal{O}(\log_2 D)\) times as many FLOPs:
```
f = chain_compose([jnp.sin] * 8)
print_fwd_bwd(f, 3.)
```
```
forward computation: backward computation:
{ lambda ; a:f32[]. let { lambda ; a:f32[] b:f32[] c:f32[] d:f32[] e:f32[] f:f32[] g:f32[] h:f32[] i:f32[]. let
b:f32[] = sin a j:f32[] = mul i a
c:f32[] = cos a k:f32[] = mul j b
d:f32[] = sin b l:f32[] = mul k c
e:f32[] = cos b m:f32[] = mul l d
f:f32[] = sin d n:f32[] = mul m e
g:f32[] = cos d o:f32[] = mul n f
h:f32[] = sin f p:f32[] = mul o g
i:f32[] = cos f q:f32[] = mul p h
j:f32[] = sin h in (q,) }
k:f32[] = cos h
l:f32[] = sin j
m:f32[] = cos j
n:f32[] = sin l
o:f32[] = cos l
p:f32[] = sin n
q:f32[] = cos n
in (p, q, o, m, k, i, g, e, c) }
```
```
f = recursive_checkpoint([jnp.sin] * 8)
print_fwd_bwd(f, 3.)
```
```
forward computation: backward computation:
{ lambda ; a:f32[]. let { lambda ; a:f32[] b:f32[] c:f32[] d:f32[]. let
b:f32[] = remat2[ e:f32[] = mul d a
differentiated=False f:f32[] = mul e b
jaxpr={ lambda ; c:f32[]. let d:f32[] = sin c; e:f32[] = sin d in (e,) } g:f32[] = remat2[
policy=None differentiated=True
prevent_cse=True jaxpr={ lambda ; h:f32[] i:f32[]. let
] a j:f32[] = sin h
f:f32[] = sin b k:f32[] = cos h
g:f32[] = sin f l:f32[] = cos j
h:f32[] = sin g m:f32[] = mul i l
i:f32[] = sin h n:f32[] = mul m k
j:f32[] = sin i in (n,) }
k:f32[] = cos i policy=None
l:f32[] = sin j prevent_cse=True
m:f32[] = cos j ] c f
in (l, m, k, g, a) } o:f32[] = remat2[
differentiated=True
jaxpr={ lambda ; p:f32[] q:f32[]. let
r:f32[] = sin p
s:f32[] = sin r
t:f32[] = sin s
u:f32[] = cos s
v:f32[] = cos t
w:f32[] = mul q v
x:f32[] = mul w u
y:f32[] = remat2[
differentiated=True
jaxpr={ lambda ; z:f32[] ba:f32[]. let
bb:f32[] = sin z
bc:f32[] = cos z
bd:f32[] = cos bb
be:f32[] = mul ba bd
bf:f32[] = mul be bc
in (bf,) }
policy=None
prevent_cse=True
] p x
in (y,) }
policy=None
prevent_cse=True
] 3.0 g
in (o,) }
```
#### Practical notes[#](#practical-notes)
When differentiated functions are staged out to XLA for compilation, for example by applying `jax.jit` to a function which contains a `jax.grad` call, XLA will automatically optimize the computation, including decisions about when to compute or rematerialize values. As a result, **`jax.checkpoint` often isn’t needed for differentiated functions under a `jax.jit`**. XLA will optimize things for you.
One exception is when using staged-out control flow, like `jax.lax.scan`. Automatic compiler optimizations across multiple control flow primitives, e.g. across a forward-pass `scan` and the corresponding backward-pass `scan`, typically aren’t as thorough. As a result, it’s often a good idea to use `jax.checkpoint` on the body function passed to `jax.lax.scan`.
For example, one common pattern in large [Transformer models](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)) is to express the architecture as a `jax.lax.scan` over layers so as to reduce compilation times. That is, using a simple fully-connected network as an analogy, instead of writing something like this:
```
LayerParam = tuple[jnp.ndarray, jnp.ndarray]  # weights, bias pair for a layer
ParamsList = list[LayerParam]
def net(params: ParamsList, x: jnp.ndarray):
for W, b in params:
x = jnp.maximum(jnp.dot(x, W) + b, 0.)
return x
```
We would instead iterate over the layer application with `jax.lax.scan`:
```
StackedWeights = jnp.ndarray  # all weight matrices stacked together
StackedBiases = jnp.ndarray  # all bias vectors stacked together
all_weights = jnp.stack([W for W, _ in params])
all_biases = jnp.stack([b for _, b in params])
def layer(x, W_b_pair):
W, b = W_b_pair
out = jnp.maximum(jnp.dot(x, W) + b, 0.)
return out, None
def net(all_weights, all_biases, x):
x, _ = jax.lax.scan(layer, x, (all_weights, all_biases))
return x
```
This scan-over-layers version reduces compile times, but by foiling some compiler optimizations it can lead to inefficient computation of gradients. To mitigate the issue, we would use `jax.checkpoint` on the scanned function:
```
from functools import partial
@partial(jax.checkpoint,
policy=jax.checkpoint_policies.dots_with_no_batch_dims_saveable)
def layer(x, W_b_pair):
W, b = W_b_pair
out = jnp.maximum(jnp.dot(x, W) + b, 0.)
return out, None
```
By using `jax.checkpoint` this way, we’re manually controlling which values JAX’s autodiff saves between the forward and backward passes, and hence not relying on XLA optimizations to choose for us.
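Putting the pieces together, here is a minimal end-to-end sketch (ours; the shapes, random parameters, and sum-based loss are illustrative assumptions, not part of the original example):
```
import jax
import jax.numpy as jnp
from functools import partial

@partial(jax.checkpoint,
         policy=jax.checkpoint_policies.dots_with_no_batch_dims_saveable)
def layer(x, W_b_pair):
    W, b = W_b_pair
    out = jnp.maximum(jnp.dot(x, W) + b, 0.)
    return out, None

def net(all_weights, all_biases, x):
    x, _ = jax.lax.scan(layer, x, (all_weights, all_biases))
    return x

depth, width = 4, 8  # illustrative sizes
key = jax.random.PRNGKey(0)
all_weights = jax.random.normal(key, (depth, width, width))
all_biases = jnp.zeros((depth, width))
x = jnp.ones(width)

loss = lambda Ws, bs, x: net(Ws, bs, x).sum()  # toy scalar loss
grads = jax.jit(jax.grad(loss, argnums=(0, 1)))(all_weights, all_biases, x)
print(jax.tree_util.tree_map(lambda g: g.shape, grads))
```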
### How JAX primitives work[#](#how-jax-primitives-work)
*<EMAIL>*, October 2019.
JAX implements certain transformations of Python functions, e.g., `jit`, `grad`,
`vmap`, or `pmap`. The Python functions to be transformed must be JAX-traceable,
which means that as the Python function executes the only operations it applies to the data are either inspections of data attributes such as shape or type, or special operations called JAX primitives.
In particular, a JAX-traceable function is sometimes invoked by JAX with abstract arguments. An example of a JAX abstract value is `ShapedArray(float32[2,2])`,
which captures the type and the shape of values, but not the concrete data values.
JAX primitives know how to operate on both concrete data values and on the JAX abstract values.
The JAX-transformed functions must themselves be JAX-traceable functions,
to ensure that these transformations can be composed, e.g., `jit(jacfwd(grad(f)))`.
There are pre-defined JAX primitives corresponding to most XLA operations,
e.g., add, matmul, sin, cos, indexing.
JAX comes with an implementation of numpy functions in terms of JAX primitives, which means that Python programs using JAX’s implementation of numpy are JAX-traceable and therefore transformable.
Other libraries can be made JAX-traceable by implementing them in terms of JAX primitives.
The set of JAX primitives is extensible. Instead of reimplementing a function in terms of pre-defined JAX primitives,
one can define a new primitive that encapsulates the behavior of the function.
**The goal of this document is to explain the interface that a JAX primitive must support in order to allow JAX to perform all its transformations.**
Suppose we want to add support to JAX for a multiply-add function with three arguments, defined mathematically as “multiply_add(x, y, z) = x * y + z”.
This function operates on 3 identically-shaped tensors of floating point values and performs the operations pointwise.
#### Using existing primitives[#](#using-existing-primitives)
The easiest way to define new functions is to write them in terms of JAX primitives, or in terms of other functions that are themselves written using JAX primitives, e.g., those defined in the `jax.lax` module:
```
from jax import lax
from jax._src import api
def multiply_add_lax(x, y, z):
"""Implementation of multiply-add using the jax.lax primitives."""
return lax.add(lax.mul(x, y), z)
def square_add_lax(a, b):
"""A square-add function using the newly defined multiply-add."""
return multiply_add_lax(a, a, b)
print("square_add_lax = ", square_add_lax(2., 10.))
# Differentiate w.r.t. the first argument
print("grad(square_add_lax) = ", api.grad(square_add_lax, argnums=0)(2.0, 10.0))
```
```
square_add_lax =  14.0
grad(square_add_lax) =  4.0
```
In order to understand how JAX is internally using the primitives,
we add some helpers for tracing function calls.
```
#@title Helper functions (execute this cell)
import functools
import traceback

_indentation = 0

def _trace(msg=None):
"""Print a message at current indentation."""
if msg is not None:
print(" " * _indentation + msg)
def _trace_indent(msg=None):
"""Print a message and then indent the rest."""
global _indentation
_trace(msg)
_indentation = 1 + _indentation
def _trace_unindent(msg=None):
"""Unindent then print a message."""
global _indentation
_indentation = _indentation - 1
_trace(msg)
def trace(name):
"""A decorator for functions to trace arguments and results."""
def trace_func(func): # pylint: disable=missing-docstring
def pp(v):
"""Print certain values more succinctly"""
vtype = str(type(v))
if "jax._src.xla_bridge._JaxComputationBuilder" in vtype:
return "<JaxComputationBuilder>"
elif "jaxlib.xla_extension.XlaOp" in vtype:
return "<XlaOp at 0x{:x}>".format(id(v))
elif ("partial_eval.JaxprTracer" in vtype or
"batching.BatchTracer" in vtype or
"ad.JVPTracer" in vtype):
return "Traced<{}>".format(v.aval)
elif isinstance(v, tuple):
return "({})".format(pp_values(v))
else:
return str(v)
def pp_values(args):
return ", ".join([pp(arg) for arg in args])
@functools.wraps(func)
def func_wrapper(*args):
_trace_indent("call {}({})".format(name, pp_values(args)))
res = func(*args)
_trace_unindent("|<- {} = {}".format(name, pp(res)))
return res
return func_wrapper
return trace_func
class expectNotImplementedError(object):
"""Context manager to check for NotImplementedError."""
def __enter__(self): pass
def __exit__(self, type, value, tb):
global _indentation
_indentation = 0
if type is NotImplementedError:
print("\nFound expected exception:")
traceback.print_exc(limit=3)
return True
elif type is None: # No exception
assert False, "Expected NotImplementedError"
else:
return False
```
Instead of using `jax.lax` primitives directly, we can use other functions that are already written in terms of those primitives, such as those in `jax.numpy`:
```
import jax.numpy as jnp
import numpy as np
@trace("multiply_add_numpy")
def multiply_add_numpy(x, y, z):
return jnp.add(jnp.multiply(x, y), z)
@trace("square_add_numpy")
def square_add_numpy(a, b):
return multiply_add_numpy(a, a, b)
print("\nNormal evaluation:")
print("square_add_numpy = ", square_add_numpy(2., 10.))
print("\nGradient evaluation:")
print("grad(square_add_numpy) = ", api.grad(square_add_numpy)(2.0, 10.))
```
```
Normal evaluation:
call square_add_numpy(2.0, 10.0)
call multiply_add_numpy(2.0, 2.0, 10.0)
|<- multiply_add_numpy = 14.0
|<- square_add_numpy = 14.0
square_add_numpy =  14.0
Gradient evaluation:
call square_add_numpy(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, 10.0)
call multiply_add_numpy(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, 10.0)
|<- multiply_add_numpy = Traced<ConcreteArray(14.0, dtype=float32, weak_type=True)>
|<- square_add_numpy = Traced<ConcreteArray(14.0, dtype=float32, weak_type=True)>
grad(square_add_numpy) = 4.0
```
Notice that in the process of computing `grad`, JAX invokes `square_add_numpy` and
`multiply_add_numpy` with special arguments `ConcreteArray(...)` (described further below in this colab).
It is important to remember that a JAX-traceable function must be able to operate not only on concrete arguments but also on special abstract arguments that JAX may use to abstract the function execution.
The JAX traceability property is satisfied as long as the function is written in terms of JAX primitives.
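As a quick illustration (our addition), `jax.make_jaxpr` shows the sequence of primitives that this function reduces to when traced with abstract arguments:
```
import jax

# The printed jaxpr consists of the underlying primitives (mul and add)
# recorded while tracing square_add_numpy abstractly.
print(jax.make_jaxpr(square_add_numpy)(2., 10.))
```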
#### Defining new JAX primitives[#](#defining-new-jax-primitives)
The right way to add support for multiply-add is in terms of existing JAX primitives, as shown above. However, in order to demonstrate how JAX primitives work let us pretend that we want to add a new primitive to JAX for the multiply-add functionality.
```
from jax import core

multiply_add_p = core.Primitive("multiply_add")  # Create the primitive
@trace("multiply_add_prim")
def multiply_add_prim(x, y, z):
"""The JAX-traceable way to use the JAX primitive.
Note that the traced arguments must be passed as positional arguments
to `bind`.
"""
return multiply_add_p.bind(x, y, z)
@trace("square_add_prim")
def square_add_prim(a, b):
"""A square-add function implemented using the new JAX-primitive."""
return multiply_add_prim(a, a, b)
```
If we try to call the newly defined functions we get an error, because we have not yet told JAX anything about the semantics of the new primitive.
```
with expectNotImplementedError():
square_add_prim(2., 10.)
```
```
call square_add_prim(2.0, 10.0)
call multiply_add_prim(2.0, 2.0, 10.0)
Found expected exception:
```
```
Traceback (most recent call last):
File "/tmp/ipykernel_1354/2844449444.py", line 2, in <module>
square_add_prim(2., 10.)
File "/tmp/ipykernel_1354/1393342955.py", line 48, in func_wrapper
res = func(*args)
File "/tmp/ipykernel_1354/1308506715.py", line 16, in square_add_prim
return multiply_add_prim(a, a, b)
NotImplementedError: Evaluation rule for 'multiply_add' not implemented
```
##### Primal evaluation rules[#](#primal-evaluation-rules)
```
@trace("multiply_add_impl")
def multiply_add_impl(x, y, z):
"""Concrete implementation of the primitive.
This function does not need to be JAX traceable.
Args:
x, y, z: the concrete arguments of the primitive. Will only be called with
concrete values.
Returns:
the concrete result of the primitive.
"""
# Note that we can use the original numpy, which is not JAX traceable
return np.add(np.multiply(x, y), z)
# Now we register the primal implementation with JAX
multiply_add_p.def_impl(multiply_add_impl)
```
```
<function __main__.multiply_add_impl(x, y, z)>
```
```
assert square_add_prim(2., 10.) == 14.
```
```
call square_add_prim(2.0, 10.0)
call multiply_add_prim(2.0, 2.0, 10.0)
call multiply_add_impl(2.0, 2.0, 10.0)
|<- multiply_add_impl = 14.0
|<- multiply_add_prim = 14.0
|<- square_add_prim = 14.0
```
##### JIT[#](#jit)
If we now try to use `jit` we get a `NotImplementedError`:
```
with expectNotImplementedError():
api.jit(square_add_prim)(2., 10.)
```
```
call square_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
Found expected exception:
```
```
Traceback (most recent call last):
File "/tmp/ipykernel_1354/1813425700.py", line 2, in <module>
api.jit(square_add_prim)(2., 10.)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/traceback_util.py", line 177, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/pjit.py", line 256, in cache_miss
outs, out_flat, out_tree, args_flat, jaxpr = _python_pjit_helper(
NotImplementedError: Abstract evaluation for 'multiply_add' not implemented
```
###### Abstract evaluation rules[#](#abstract-evaluation-rules)
In order to JIT the function, and for other transformations as well,
JAX first evaluates it abstractly using only the shape and type of the arguments. This abstract evaluation serves multiple purposes:
* Gets the sequence of JAX primitives that are used in the computation. This sequence will be compiled.
* Computes the shape and type of all vectors and operations used in the computation.
For example, the abstraction of a vector with 3 elements may be `ShapedArray(float32[3])`, or `ConcreteArray([1., 2., 3.])`.
In the latter case, JAX uses the actual concrete value wrapped as an abstract value.
```
from jax import core
@trace("multiply_add_abstract_eval")
def multiply_add_abstract_eval(xs, ys, zs):
"""Abstract evaluation of the primitive.
This function does not need to be JAX traceable. It will be invoked with
abstractions of the actual arguments.
Args:
xs, ys, zs: abstractions of the arguments.
Result:
a ShapedArray for the result of the primitive.
"""
assert xs.shape == ys.shape
assert xs.shape == zs.shape
return core.ShapedArray(xs.shape, xs.dtype)
# Now we register the abstract evaluation with JAX
multiply_add_p.def_abstract_eval(multiply_add_abstract_eval)
```
```
<function __main__.multiply_add_abstract_eval(xs, ys, zs)>
```
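As an aside (our addition), with only the abstract-evaluation rule registered we can already query output shapes and dtypes, e.g. with `jax.eval_shape`, which traces abstractly without compiling or executing:
```
import jax
import jax.numpy as jnp

# eval_shape only needs the abstract-eval rule; no impl or lowering is run.
out = jax.eval_shape(square_add_prim, jnp.float32(2.), jnp.float32(10.))
print(out.shape, out.dtype)
```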
If we re-attempt to JIT, we see how the abstract evaluation proceeds, but we get another error, about missing the actual XLA compilation rule:
```
with expectNotImplementedError():
api.jit(square_add_prim)(2., 10.)
```
```
call square_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
|<- square_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
Found expected exception:
```
```
Traceback (most recent call last):
File "/home/docs/.asdf/installs/python/3.9.18/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/docs/.asdf/installs/python/3.9.18/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/ipykernel_launcher.py", line 17, in <module>
app.launch_new_instance()
jax._src.source_info_util.JaxStackTraceBeforeTransformation: NotImplementedError: MLIR translation rule for primitive 'multiply_add' not found for platform cpu
The preceding stack trace is the source of the JAX operation that, once transformed by JAX, triggered the following exception.
---
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/ipykernel_1354/1813425700.py", line 2, in <module>
api.jit(square_add_prim)(2., 10.)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/traceback_util.py", line 177, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/pjit.py", line 256, in cache_miss
outs, out_flat, out_tree, args_flat, jaxpr = _python_pjit_helper(
NotImplementedError: MLIR translation rule for primitive 'multiply_add' not found for platform cpu
```
###### XLA Compilation rules[#](#xla-compilation-rules)
JAX compilation works by compiling each primitive into a graph of XLA operations.
This is the biggest hurdle to adding new functionality to JAX, because the set of XLA operations is limited, and JAX already has pre-defined primitives for most of them. However, XLA includes a `CustomCall` operation that can be used to encapsulate arbitrary functionality defined using C++.
```
from jax._src.lib.mlir.dialects import hlo
@trace("multiply_add_lowering")
def multiply_add_lowering(ctx, xc, yc, zc):
"""The compilation to XLA of the primitive.
Given an mlir.ir.Value for each argument, return the mlir.ir.Values for
the results of the function.
Does not need to be a JAX-traceable function.
"""
return [hlo.AddOp(hlo.MulOp(xc, yc), zc).result]
# Now we register the lowering rule with JAX
# For GPU see the [Custom operations for GPUs](https://jax.readthedocs.io/en/latest/Custom_Operation_for_GPUs.html)
# TODO: TPU?
from jax.interpreters import mlir

mlir.register_lowering(multiply_add_p, multiply_add_lowering, platform='cpu')
```
```
<function __main__.multiply_add_lowering(ctx, xc, yc, zc)>
```
Now JIT succeeds. Notice below that JAX first evaluates the function abstractly, which triggers the `multiply_add_abstract_eval` function, and then compiles the set of primitives it has encountered, including `multiply_add`.
At this point JAX invokes `multiply_add_lowering`.
```
assert api.jit(lambda x, y: square_add_prim(x, y))(2., 10.) == 14.
```
```
call square_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
|<- square_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
call multiply_add_lowering(LoweringRuleContext(module_context=ModuleContext(context=<jaxlib.mlir._mlir_libs._site_initialize.<locals>.Context object at 0x7f25a8891180>, module=<jaxlib.mlir._mlir_libs._mlir.ir.Module object at 0x7f25a8892670>, ip=<jaxlib.mlir._mlir_libs._mlir.ir.InsertionPoint object at 0x7f25a88926f0>, symbol_table=<jaxlib.mlir._mlir_libs._mlir.ir.SymbolTable object at 0x7f25a88926b0>, backend_or_name=<jaxlib.xla_extension.Client object at 0x7f25d0071470>, platform='cpu', axis_context=ShardingContext(device_assignment=(CpuDevice(id=0),)), name_stack=NameStack(stack=(Scope(name='jit(<lambda>)'), Scope(name='jit(main)'))), keepalives=[], channel_iterator=count(1), host_callbacks=[], shape_poly_state=<jax._src.interpreters.mlir.ShapePolyLoweringState object at 0x7f25e0673c70>, cached_primitive_lowerings={}, cached_call_jaxpr_lowerings={}, lowering_parameters=LoweringParameters(override_lowering_rules=None, platforms=None)), primitive=multiply_add, avals_in=[ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True)], avals_out=[ShapedArray(float32[])], tokens_in=<jax._src.interpreters.mlir.TokenSet object at 0x7f25a8896490>, tokens_out=None, axis_size_env=None, dim_var_values=[]), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(<block argument> of type 'tensor<f32>' at index: 1))
|<- multiply_add_lowering = [<jaxlib.mlir._mlir_libs._mlir.ir.OpResult object at 0x7f25a8895f30>]
```
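As an optional peek (our addition, using JAX's ahead-of-time inspection API), we can look at the StableHLO module that the lowering produced:
```
# The lowered module contains the ops emitted by multiply_add_lowering.
lowered = api.jit(square_add_prim).lower(2., 10.)
print(lowered.as_text())
```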
Below is another use of `jit`, where we compile only with respect to the first argument and mark the second argument static. Because the second argument to `square_add_prim` is now a concrete constant, JAX may represent it during tracing as a `ConcreteArray` (an abstract value that wraps the actual constant); depending on the JAX version, the trace below may instead show plain `ShapedArray`s. Either way, `multiply_add_abstract_eval` must be prepared to accept both `ShapedArray` and `ConcreteArray` arguments.
```
assert api.jit(lambda x, y: square_add_prim(x, y),
static_argnums=1)(2., 10.) == 14.
```
```
call square_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, 10.0)
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, 10.0)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
|<- square_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
call multiply_add_lowering(LoweringRuleContext(module_context=ModuleContext(context=<jaxlib.mlir._mlir_libs._site_initialize.<locals>.Context object at 0x7f25a889e400>, module=<jaxlib.mlir._mlir_libs._mlir.ir.Module object at 0x7f25a88997f0>, ip=<jaxlib.mlir._mlir_libs._mlir.ir.InsertionPoint object at 0x7f25a8899870>, symbol_table=<jaxlib.mlir._mlir_libs._mlir.ir.SymbolTable object at 0x7f25a8899830>, backend_or_name=<jaxlib.xla_extension.Client object at 0x7f25d0071470>, platform='cpu', axis_context=ShardingContext(device_assignment=(CpuDevice(id=0),)), name_stack=NameStack(stack=(Scope(name='jit(<lambda>)'), Scope(name='jit(main)'))), keepalives=[], channel_iterator=count(1), host_callbacks=[], shape_poly_state=<jax._src.interpreters.mlir.ShapePolyLoweringState object at 0x7f25a889d730>, cached_primitive_lowerings={}, cached_call_jaxpr_lowerings={}, lowering_parameters=LoweringParameters(override_lowering_rules=None, platforms=None)), primitive=multiply_add, avals_in=[ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True)], avals_out=[ShapedArray(float32[])], tokens_in=<jax._src.interpreters.mlir.TokenSet object at 0x7f25a889ddc0>, tokens_out=None, axis_size_env=None, dim_var_values=[]), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(%0 = "stablehlo.constant"() {value = dense<1.000000e+01> : tensor<f32>} : () -> tensor<f32>))
|<- multiply_add_lowering = [<jaxlib.mlir._mlir_libs._mlir.ir.OpResult object at 0x7f25a88a0a30>]
```
##### Forward differentiation[#](#forward-differentiation)
JAX implements forward differentiation in the form of a Jacobian-vector product (see the [JAX autodiff cookbook](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html#Jacobian-Matrix-and-Matrix-Jacobian-products)).
If we attempt now to compute the `jvp` function we get an error because we have not yet told JAX how to differentiate the `multiply_add` primitive.
```
# The second argument `(2., 10.)` gives the values at which
# we evaluate the Jacobian, and the third `(1., 1.)` gives
# the tangent values for the arguments.
with expectNotImplementedError():
api.jvp(square_add_prim, (2., 10.), (1., 1.))
```
```
call square_add_prim(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(10.0, dtype=float32, weak_type=True)>)
call multiply_add_prim(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(10.0, dtype=float32, weak_type=True)>)
Found expected exception:
```
```
Traceback (most recent call last):
File "/tmp/ipykernel_1354/800067577.py", line 5, in <module>
api.jvp(square_add_prim, (2., 10.), (1., 1.))
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/api.py", line 1960, in jvp
return _jvp(lu.wrap_init(fun), primals, tangents, has_aux=has_aux)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/api.py", line 1989, in _jvp
out_primals, out_tangents = ad.jvp(flat_fun).call_wrapped(ps_flat, ts_flat)
NotImplementedError: Differentiation rule for 'multiply_add' not implemented
```
```
from jax.interpreters import ad
@trace("multiply_add_value_and_jvp")
def multiply_add_value_and_jvp(arg_values, arg_tangents):
"""Evaluates the primal output and the tangents (Jacobian-vector product).
Given values of the arguments and perturbation of the arguments (tangents),
compute the output of the primitive and the perturbation of the output.
This method must be JAX-traceable. JAX may invoke it with abstract values
for the arguments and tangents.
Args:
arg_values: a tuple of arguments
arg_tangents: a tuple with the tangents of the arguments. The tuple has
the same length as the arg_values. Some of the tangents may also be the
special value ad.Zero to specify a zero tangent.
Returns:
a pair of the primal output and the tangent.
"""
x, y, z = arg_values
xt, yt, zt = arg_tangents
_trace("Primal evaluation:")
# Now we have a JAX-traceable computation of the output.
# Normally, we can use the ma primitive itself to compute the primal output.
primal_out = multiply_add_prim(x, y, z)
_trace("Tangent evaluation:")
# We must use a JAX-traceable way to compute the tangent. It turns out that
# the output tangent can be computed as (xt * y + x * yt + zt),
# which we can implement in a JAX-traceable way using the same "multiply_add_prim" primitive.
# We do need to deal specially with Zero. Here we just turn it into a
# proper tensor of 0s (of the same shape as 'x').
# An alternative would be to check for Zero and perform algebraic
# simplification of the output tangent computation.
def make_zero(tan):
return lax.zeros_like_array(x) if type(tan) is ad.Zero else tan
output_tangent = multiply_add_prim(make_zero(xt), y, multiply_add_prim(x, make_zero(yt), make_zero(zt)))
return (primal_out, output_tangent)
# Register the forward differentiation rule with JAX
ad.primitive_jvps[multiply_add_p] = multiply_add_value_and_jvp
```
```
# Tangent is: xt*y + x*yt + zt = 1.*2. + 2.*1. + 1. = 5.
assert api.jvp(square_add_prim, (2., 10.), (1., 1.)) == (14., 5.)
```
```
call square_add_prim(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(10.0, dtype=float32, weak_type=True)>)
call multiply_add_prim(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(10.0, dtype=float32, weak_type=True)>)
call multiply_add_value_and_jvp((2.0, 2.0, 10.0), (1.0, 1.0, 1.0))
Primal evaluation:
call multiply_add_prim(2.0, 2.0, 10.0)
call multiply_add_impl(2.0, 2.0, 10.0)
|<- multiply_add_impl = 14.0
|<- multiply_add_prim = 14.0
Tangent evaluation:
call multiply_add_prim(2.0, 1.0, 1.0)
call multiply_add_impl(2.0, 1.0, 1.0)
|<- multiply_add_impl = 3.0
|<- multiply_add_prim = 3.0
call multiply_add_prim(1.0, 2.0, 3.0)
call multiply_add_impl(1.0, 2.0, 3.0)
|<- multiply_add_impl = 5.0
|<- multiply_add_prim = 5.0
|<- multiply_add_value_and_jvp = (14.0, 5.0)
|<- multiply_add_prim = Traced<ConcreteArray(14.0, dtype=float32)>
|<- square_add_prim = Traced<ConcreteArray(14.0, dtype=float32)>
```
TO EXPLAIN:
* Why is JAX using ConcreteArray in square_add_prim? There is no abstract evaluation going on here.
* Not sure how to explain that multiply_add_prim is invoked with ConcreteValue, yet we do not call the multiply_add_abstract_eval.
* I think it would be useful to show the jaxpr here (see the sketch below).
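Addressing the last note above, here is one way (our sketch) to display the jaxpr of the forward differentiation:
```
import jax

# The jaxpr records the primal and tangent uses of the multiply_add primitive.
print(jax.make_jaxpr(
    lambda x, y: api.jvp(square_add_prim, (x, y), (1., 1.)))(2., 10.))
```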
###### JIT of forward differentiation[#](#jit-of-forward-differentiation)
We can apply JIT to the forward differentiation function:
```
assert api.jit(lambda arg_values, arg_tangents:
api.jvp(square_add_prim, arg_values, arg_tangents))(
(2., 10.), (1., 1.)) == (14., 5.)
```
```
call square_add_prim(Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>)
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>)
call multiply_add_value_and_jvp((Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>), (Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>))
Primal evaluation:
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
Tangent evaluation:
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[]))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
|<- multiply_add_value_and_jvp = (Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>)
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
|<- square_add_prim = Traced<ShapedArray(float32[])>
call multiply_add_lowering(LoweringRuleContext(module_context=ModuleContext(context=<jaxlib.mlir._mlir_libs._site_initialize.<locals>.Context object at 0x7f25a8874270>, module=<jaxlib.mlir._mlir_libs._mlir.ir.Module object at 0x7f25a88738f0>, ip=<jaxlib.mlir._mlir_libs._mlir.ir.InsertionPoint object at 0x7f25a88736f0>, symbol_table=<jaxlib.mlir._mlir_libs._mlir.ir.SymbolTable object at 0x7f25a88739b0>, backend_or_name=<jaxlib.xla_extension.Client object at 0x7f25d0071470>, platform='cpu', axis_context=ShardingContext(device_assignment=(CpuDevice(id=0),)), name_stack=NameStack(stack=(Scope(name='jit(<lambda>)'), Scope(name='jit(main)'), Transform(name='jvp'))), keepalives=[], channel_iterator=count(1), host_callbacks=[], shape_poly_state=<jax._src.interpreters.mlir.ShapePolyLoweringState object at 0x7f25a888d9d0>, cached_primitive_lowerings={}, cached_call_jaxpr_lowerings={}, lowering_parameters=LoweringParameters(override_lowering_rules=None, platforms=None)), primitive=multiply_add, avals_in=[ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True)], avals_out=[ShapedArray(float32[])], tokens_in=<jax._src.interpreters.mlir.TokenSet object at 0x7f25a88ec730>, tokens_out=None, axis_size_env=None, dim_var_values=[]), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(<block argument> of type 'tensor<f32>' at index: 1))
|<- multiply_add_lowering = [<jaxlib.mlir._mlir_libs._mlir.ir.OpResult object at 0x7f25a8899d30>]
call multiply_add_lowering(LoweringRuleContext(module_context=ModuleContext(context=<jaxlib.mlir._mlir_libs._site_initialize.<locals>.Context object at 0x7f25a8874270>, module=<jaxlib.mlir._mlir_libs._mlir.ir.Module object at 0x7f25a88738f0>, ip=<jaxlib.mlir._mlir_libs._mlir.ir.InsertionPoint object at 0x7f25a88736f0>, symbol_table=<jaxlib.mlir._mlir_libs._mlir.ir.SymbolTable object at 0x7f25a88739b0>, backend_or_name=<jaxlib.xla_extension.Client object at 0x7f25d0071470>, platform='cpu', axis_context=ShardingContext(device_assignment=(CpuDevice(id=0),)), name_stack=NameStack(stack=(Scope(name='jit(<lambda>)'), Scope(name='jit(main)'), Transform(name='jvp'))), keepalives=[], channel_iterator=count(1), host_callbacks=[], shape_poly_state=<jax._src.interpreters.mlir.ShapePolyLoweringState object at 0x7f25a888d9d0>, cached_primitive_lowerings={}, cached_call_jaxpr_lowerings={}, lowering_parameters=LoweringParameters(override_lowering_rules=None, platforms=None)), primitive=multiply_add, avals_in=[ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True)], avals_out=[ShapedArray(float32[])], tokens_in=<jax._src.interpreters.mlir.TokenSet object at 0x7f25a88ec610>, tokens_out=None, axis_size_env=None, dim_var_values=[]), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(<block argument> of type 'tensor<f32>' at index: 2), Value(<block argument> of type 'tensor<f32>' at index: 3))
|<- multiply_add_lowering = [<jaxlib.mlir._mlir_libs._mlir.ir.OpResult object at 0x7f25a8899a30>]
call multiply_add_lowering(LoweringRuleContext(module_context=ModuleContext(context=<jaxlib.mlir._mlir_libs._site_initialize.<locals>.Context object at 0x7f25a8874270>, module=<jaxlib.mlir._mlir_libs._mlir.ir.Module object at 0x7f25a88738f0>, ip=<jaxlib.mlir._mlir_libs._mlir.ir.InsertionPoint object at 0x7f25a88736f0>, symbol_table=<jaxlib.mlir._mlir_libs._mlir.ir.SymbolTable object at 0x7f25a88739b0>, backend_or_name=<jaxlib.xla_extension.Client object at 0x7f25d0071470>, platform='cpu', axis_context=ShardingContext(device_assignment=(CpuDevice(id=0),)), name_stack=NameStack(stack=(Scope(name='jit(<lambda>)'), Scope(name='jit(main)'), Transform(name='jvp'))), keepalives=[], channel_iterator=count(1), host_callbacks=[], shape_poly_state=<jax._src.interpreters.mlir.ShapePolyLoweringState object at 0x7f25a888d9d0>, cached_primitive_lowerings={}, cached_call_jaxpr_lowerings={}, lowering_parameters=LoweringParameters(override_lowering_rules=None, platforms=None)), primitive=multiply_add, avals_in=[ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[])], avals_out=[ShapedArray(float32[])], tokens_in=<jax._src.interpreters.mlir.TokenSet object at 0x7f25a88ec6d0>, tokens_out=None, axis_size_env=None, dim_var_values=[]), Value(<block argument> of type 'tensor<f32>' at index: 2), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(%3 = "stablehlo.add"(%2, %arg3) : (tensor<f32>, tensor<f32>) -> tensor<f32>))
|<- multiply_add_lowering = [<jaxlib.mlir._mlir_libs._mlir.ir.OpResult object at 0x7f25a8875fb0>]
```
Notice that first we evaluate `multiply_add_value_and_jvp` abstractly, which in turn evaluates abstractly both the primal and the tangent evaluation (a total of 3 invocations of the `ma` primitive). Then we compile the 3 occurrences of the primitive.
##### Reverse differentiation[#](#reverse-differentiation)
If we attempt now to use reverse differentiation we see that JAX starts by using the `multiply_add_value_and_jvp` to compute the forward differentiation for abstract values, but then runs into a `NotImplementedError`.
When computing the reverse differentiation JAX first does abstract evaluation of the forward differentiation code `multiply_add_value_and_jvp` to obtain a trace of primitives that compute the output tangent.
Observe that JAX performs this abstract evaluation with concrete values for the differentiation point, and abstract values for the tangents.
Observe also that JAX uses the special abstract tangent value `Zero` for the tangent corresponding to the 3rd argument of `ma`. This reflects the fact that we do not differentiate w.r.t. the 2nd argument to `square_add_prim`,
which flows to the 3rd argument to `multiply_add_prim`.
Observe also that during the abstract evaluation of the tangent we pass the value 0.0 as the tangent for the 3rd argument. This is due to the use of the `make_zero` function in the definition of `multiply_add_value_and_jvp`.
```
# This is reverse differentiation w.r.t. the first argument of square_add_prim
with expectNotImplementedError():
api.grad(square_add_prim)(2., 10.)
```
```
call square_add_prim(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, 10.0)
call multiply_add_prim(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, 10.0)
call multiply_add_value_and_jvp((2.0, 2.0, 10.0), (Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>, Zero(ShapedArray(float32[], weak_type=True))))
Primal evaluation:
call multiply_add_prim(2.0, 2.0, 10.0)
call multiply_add_impl(2.0, 2.0, 10.0)
|<- multiply_add_impl = 14.0
|<- multiply_add_prim = 14.0
Tangent evaluation:
call multiply_add_prim(2.0, Traced<ShapedArray(float32[], weak_type=True)>, 0.0)
call multiply_add_abstract_eval(ConcreteArray(2.0, dtype=float32, weak_type=True), ShapedArray(float32[], weak_type=True), ConcreteArray(0.0, dtype=float32, weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>, 2.0, Traced<ShapedArray(float32[])>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ConcreteArray(2.0, dtype=float32, weak_type=True), ShapedArray(float32[]))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
|<- multiply_add_value_and_jvp = (14.0, Traced<ShapedArray(float32[])>)
|<- multiply_add_prim = Traced<ConcreteArray(14.0, dtype=float32)>
|<- square_add_prim = Traced<ConcreteArray(14.0, dtype=float32)>
Found expected exception:
```
```
Traceback (most recent call last):
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/interpreters/ad.py", line 285, in get_primitive_transpose
return primitive_transposes[p]
KeyError: multiply_add
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/docs/.asdf/installs/python/3.9.18/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/docs/.asdf/installs/python/3.9.18/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/ipykernel_launcher.py", line 17, in <module>
app.launch_new_instance()
jax._src.source_info_util.JaxStackTraceBeforeTransformation: NotImplementedError: Transpose rule (for reverse-mode differentiation) for 'multiply_add' not implemented
The preceding stack trace is the source of the JAX operation that, once transformed by JAX, triggered the following exception.
---
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/ipykernel_1354/339076514.py", line 3, in <module>
api.grad(square_add_prim)(2., 10.)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/traceback_util.py", line 177, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/api.py", line 656, in grad_f
_, g = value_and_grad_f(*args, **kwargs)
NotImplementedError: Transpose rule (for reverse-mode differentiation) for 'multiply_add' not implemented
```
The above error is because there is a missing piece for JAX to be able to use the forward differentiation code to compute reverse differentiation.
###### Transposition[#](#transposition)
As explained above, when computing reverse differentiation JAX obtains a trace of primitives that compute the tangent using forward differentiation.
Then, **JAX interprets this trace abstractly backwards** and for each primitive it applies a **transposition** rule.
To understand what is going on, consider for now a simpler example of the function “f(x, y) = x * y + y”. Assume we need to differentiate at the point `(2., 4.)`. JAX will produce the following JVP tangent calculation of `ft` from the tangents of the input `xt` and `yt`:
```
a = xt * 4.
b = 2. * yt
c = a + b
ft = c + yt
```
By construction, the tangent calculation is always linear in the input tangents.
The only non-linear operator that may arise in the tangent calculation is multiplication,
but then one of the operands is constant.
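We can confirm this hand-derived tangent with `jax.jvp` (our addition):
```
import jax

# f(2., 4.) = 12. and ft = 1.*4. + 2.*1. + 1. = 7.
f = lambda x, y: x * y + y
primal_out, tangent_out = jax.jvp(f, (2., 4.), (1., 1.))
print(primal_out, tangent_out)  # 12.0 7.0
```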
JAX will produce the reverse differentiation computation by processing the JVP computation backwards. For each operation in the tangent computation,
it accumulates the cotangents of the variables used by the operation, using the cotangent of the result of the operation:
```
# Initialize cotangents of inputs and intermediate vars
xct = yct = act = bct = cct = 0.
# Initialize cotangent of the output
fct = 1.
# Process "ft = c + yt"
cct += fct
yct += fct
# Process "c = a + b"
act += cct
bct += cct
# Process "b = 2. * yt"
yct += 2. * bct
# Process "a = xt * 4."
xct += act * 4.
```
One can verify that this computation produces `xct = 4.` and `yct = 3.`, which are the partial derivatives of the function `f`.
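Indeed, `jax.grad` reproduces these cotangents (our addition):
```
import jax

# df/dx = y = 4. and df/dy = x + 1. = 3. at the point (2., 4.)
f = lambda x, y: x * y + y
print(jax.grad(f, argnums=(0, 1))(2., 4.))  # (4.0, 3.0)
```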
JAX knows for each primitive that may appear in a JVP calculation how to transpose it. Conceptually, if the primitive `p(x, y, z)` is linear in the arguments `y` and `z` for a constant value of `x`, e.g., `p(x, y, z) = y*cy + z*cz`, then the transposition of the primitive is:
```
p_transpose(out_ct, x, _, _) = (None, out_ct*cy, out_ct*cz)
```
Notice that `p_transpose` takes the cotangent of the output of the primitive and a value corresponding to each argument of the primitive. For the linear arguments, the transposition gets an undefined `_` value, and for the other arguments it gets the actual constants. The transposition returns a cotangent value for each argument of the primitive, with the value `None` returned for the constant arguments.
In particular,
```
add_transpose(out_ct, _, _) = (out_ct, out_ct)
mult_transpose(out_ct, x, _) = (None, x * out_ct)
mult_transpose(out_ct, _, y) = (out_ct * y, None)
```
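These rules can be checked directly with `jax.linear_transpose` (our addition):
```
import jax

# Transposing the linear map y -> 2. * y sends an output cotangent ct
# to 2. * ct, matching mult_transpose above.
transpose_fn = jax.linear_transpose(lambda y: 2. * y, 1.0)
print(transpose_fn(1.0))  # (2.0,)
```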
```
@trace("multiply_add_transpose")
def multiply_add_transpose(ct, x, y, z):
"""Evaluates the transpose of a linear primitive.
This method is only used when computing the backward gradient following
value_and_jvp, and is only needed for primitives that are used in the JVP
calculation for some other primitive. We need transposition for multiply_add_prim,
because we have used multiply_add_prim in the computation of the output_tangent in
multiply_add_value_and_jvp.
In our case, multiply_add is not a linear primitive. However, it is used linearly
w.r.t. tangents in multiply_add_value_and_jvp:
output_tangent(xt, yt, zt) = multiply_add_prim(xt, y, multiply_add_prim(x, yt, zt))
One of the first two multiplicative arguments is always a constant.
Args:
ct: the cotangent of the output of the primitive.
x, y, z: values of the arguments. The arguments that are used linearly
get an ad.UndefinedPrimal value. The other arguments get a constant
value.
Returns:
a tuple with the cotangent of the inputs, with the value None
corresponding to the constant arguments.
"""
if not ad.is_undefined_primal(x):
# This use of multiply_add is with a constant "x"
assert ad.is_undefined_primal(y)
ct_y = ad.Zero(y.aval) if type(ct) is ad.Zero else multiply_add_prim(x, ct, lax.zeros_like_array(x))
res = None, ct_y, ct
else:
# This use of multiply_add is with a constant "y"
assert ad.is_undefined_primal(x)
ct_x = ad.Zero(x.aval) if type(ct) is ad.Zero else multiply_add_prim(ct, y, lax.zeros_like_array(y))
res = ct_x, None, ct
return res
ad.primitive_transposes[multiply_add_p] = multiply_add_transpose
```
Now we can complete the run of the `grad`:
```
assert api.grad(square_add_prim)(2., 10.) == 4.
```
```
call square_add_prim(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, 10.0)
call multiply_add_prim(Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, Traced<ConcreteArray(2.0, dtype=float32, weak_type=True)>, 10.0)
call multiply_add_value_and_jvp((2.0, 2.0, 10.0), (Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>, Zero(ShapedArray(float32[], weak_type=True))))
Primal evaluation:
call multiply_add_prim(2.0, 2.0, 10.0)
call multiply_add_impl(2.0, 2.0, 10.0)
|<- multiply_add_impl = 14.0
|<- multiply_add_prim = 14.0
Tangent evaluation:
call multiply_add_prim(2.0, Traced<ShapedArray(float32[], weak_type=True)>, 0.0)
call multiply_add_abstract_eval(ConcreteArray(2.0, dtype=float32, weak_type=True), ShapedArray(float32[], weak_type=True), ConcreteArray(0.0, dtype=float32, weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>, 2.0, Traced<ShapedArray(float32[])>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ConcreteArray(2.0, dtype=float32, weak_type=True), ShapedArray(float32[]))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
|<- multiply_add_value_and_jvp = (14.0, Traced<ShapedArray(float32[])>)
|<- multiply_add_prim = Traced<ConcreteArray(14.0, dtype=float32)>
|<- square_add_prim = Traced<ConcreteArray(14.0, dtype=float32)>
call multiply_add_transpose(1.0, UndefinedPrimal(ShapedArray(float32[], weak_type=True)), 2.0, UndefinedPrimal(ShapedArray(float32[])))
call multiply_add_prim(1.0, 2.0, 0.0)
call multiply_add_impl(1.0, 2.0, 0.0)
|<- multiply_add_impl = 2.0
|<- multiply_add_prim = 2.0
|<- multiply_add_transpose = (2.0, None, 1.0)
call multiply_add_transpose(1.0, 2.0, UndefinedPrimal(ShapedArray(float32[], weak_type=True)), 0.0)
call multiply_add_prim(2.0, 1.0, 0.0)
call multiply_add_impl(2.0, 1.0, 0.0)
|<- multiply_add_impl = 2.0
|<- multiply_add_prim = 2.0
|<- multiply_add_transpose = (None, 2.0, 1.0)
```
Notice the two calls to `multiply_add_transpose`. They correspond to the two uses of `multiply_add_prim` in the computation of the `output_tangent` in `multiply_add_value_and_jvp`. The first call to transpose corresponds to the last use of `multiply_add_prim`: `multiply_add_prim(xt, y, ...)` where `y` is the constant 2.0.
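As an extra check (our addition), the gradient w.r.t. the second argument, which flows to the additive slot `z`, comes out as 1:
```
# d/db (a*a + b) = 1., exercising the cotangent that the transpose
# returns for the z argument.
assert api.grad(square_add_prim, argnums=1)(2., 10.) == 1.
```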
###### JIT of reverse differentiation[#](#jit-of-reverse-differentiation)
Notice that the abstract evaluation of `multiply_add_value_and_jvp` uses only abstract values, while in the absence of JIT we used `ConcreteArray`.
```
assert api.jit(api.grad(square_add_prim))(2., 10.) == 4.
```
```
call square_add_prim(Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_value_and_jvp((Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>), (Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>, Zero(ShapedArray(float32[], weak_type=True))))
Primal evaluation:
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
Tangent evaluation:
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[])>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True), ShapedArray(float32[]))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
|<- multiply_add_value_and_jvp = (Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[])>)
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
|<- square_add_prim = Traced<ShapedArray(float32[])>
call multiply_add_transpose(Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>, UndefinedPrimal(ShapedArray(float32[], weak_type=True)), Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, UndefinedPrimal(ShapedArray(float32[])))
call multiply_add_prim(Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[]), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
|<- multiply_add_transpose = (Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>, None, Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_transpose(Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, UndefinedPrimal(ShapedArray(float32[], weak_type=True)), Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_prim(Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[], weak_type=True)>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[], weak_type=True), ShapedArray(float32[]), ShapedArray(float32[], weak_type=True))
|<- multiply_add_abstract_eval = ShapedArray(float32[])
|<- multiply_add_prim = Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>
|<- multiply_add_transpose = (None, Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[])>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_lowering(LoweringRuleContext(module_context=ModuleContext(context=<jaxlib.mlir._mlir_libs._site_initialize.<locals>.Context object at 0x7f25a886bd10>, module=<jaxlib.mlir._mlir_libs._mlir.ir.Module object at 0x7f25a880a530>, ip=<jaxlib.mlir._mlir_libs._mlir.ir.InsertionPoint object at 0x7f25a8873df0>, symbol_table=<jaxlib.mlir._mlir_libs._mlir.ir.SymbolTable object at 0x7f25a880aef0>, backend_or_name=<jaxlib.xla_extension.Client object at 0x7f25d0071470>, platform='cpu', axis_context=ShardingContext(device_assignment=(CpuDevice(id=0),)), name_stack=NameStack(stack=(Scope(name='jit(square_add_prim)'), Scope(name='jit(main)'), Transform(name='transpose'), Transform(name='jvp'))), keepalives=[], channel_iterator=count(1), host_callbacks=[], shape_poly_state=<jax._src.interpreters.mlir.ShapePolyLoweringState object at 0x7f25a8896550>, cached_primitive_lowerings={}, cached_call_jaxpr_lowerings={}, lowering_parameters=LoweringParameters(override_lowering_rules=None, platforms=None)), primitive=multiply_add, avals_in=[ShapedArray(float32[]), ShapedArray(float32[], weak_type=True), ShapedArray(float32[], weak_type=True)], avals_out=[ShapedArray(float32[])], tokens_in=<jax._src.interpreters.mlir.TokenSet object at 0x7f25a880e6d0>, tokens_out=None, axis_size_env=None, dim_var_values=[]), Value(%0 = "stablehlo.constant"() {value = dense<1.000000e+00> : tensor<f32>} : () -> tensor<f32>), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(%1 = "stablehlo.constant"() {value = dense<0.000000e+00> : tensor<f32>} : () -> tensor<f32>))
|<- multiply_add_lowering = [<jaxlib.mlir._mlir_libs._mlir.ir.OpResult object at 0x7f25a8875d70>]
call multiply_add_lowering(LoweringRuleContext(module_context=ModuleContext(context=<jaxlib.mlir._mlir_libs._site_initialize.<locals>.Context object at 0x7f25a886bd10>, module=<jaxlib.mlir._mlir_libs._mlir.ir.Module object at 0x7f25a880a530>, ip=<jaxlib.mlir._mlir_libs._mlir.ir.InsertionPoint object at 0x7f25a8873df0>, symbol_table=<jaxlib.mlir._mlir_libs._mlir.ir.SymbolTable object at 0x7f25a880aef0>, backend_or_name=<jaxlib.xla_extension.Client object at 0x7f25d0071470>, platform='cpu', axis_context=ShardingContext(device_assignment=(CpuDevice(id=0),)), name_stack=NameStack(stack=(Scope(name='jit(square_add_prim)'), Scope(name='jit(main)'), Transform(name='transpose'), Transform(name='jvp'))), keepalives=[], channel_iterator=count(1), host_callbacks=[], shape_poly_state=<jax._src.interpreters.mlir.ShapePolyLoweringState object at 0x7f25a8896550>, cached_primitive_lowerings={}, cached_call_jaxpr_lowerings={}, lowering_parameters=LoweringParameters(override_lowering_rules=None, platforms=None)), primitive=multiply_add, avals_in=[ShapedArray(float32[], weak_type=True), ShapedArray(float32[]), ShapedArray(float32[], weak_type=True)], avals_out=[ShapedArray(float32[])], tokens_in=<jax._src.interpreters.mlir.TokenSet object at 0x7f25a880e910>, tokens_out=None, axis_size_env=None, dim_var_values=[]), Value(<block argument> of type 'tensor<f32>' at index: 0), Value(%4 = "stablehlo.constant"() {value = dense<1.000000e+00> : tensor<f32>} : () -> tensor<f32>), Value(%5 = "stablehlo.constant"() {value = dense<0.000000e+00> : tensor<f32>} : () -> tensor<f32>))
|<- multiply_add_lowering = [<jaxlib.mlir._mlir_libs._mlir.ir.OpResult object at 0x7f25a880cef0>]
```
##### Batching[#](#batching)
The batching transformation takes a point-wise computation and turns it into a computation on vectors. If we try it right now, we get a `NotImplementedError`:
```
# The arguments are two vectors instead of two scalars
with expectNotImplementedError():
api.vmap(square_add_prim, in_axes=0, out_axes=0)(np.array([2., 3.]),
np.array([10., 20.]))
```
```
call square_add_prim(Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>)
call multiply_add_prim(Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>)
Found expected exception:
```
```
Traceback (most recent call last):
File "/tmp/ipykernel_1354/2641678767.py", line 3, in <module>
api.vmap(square_add_prim, in_axes=0, out_axes=0)(np.array([2., 3.]),
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/traceback_util.py", line 177, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/docs/checkouts/readthedocs.org/user_builds/jax/envs/latest/lib/python3.9/site-packages/jax/_src/api.py", line 1249, in vmap_f
out_flat = batching.batch(
NotImplementedError: Batching rule for 'multiply_add' not implemented
```
We need to tell JAX how to evaluate the batched version of the primitive. In this particular case, `multiply_add_prim` already operates pointwise on inputs of any dimension, so the batched version can use the same `multiply_add_prim` implementation.
```
from jax.interpreters import batching
@trace("multiply_add_batch")
def multiply_add_batch(vector_arg_values, batch_axes):
"""Computes the batched version of the primitive.
This must be a JAX-traceable function.
Since the multiply_add primitive already operates pointwise on arbitrary
dimension tensors, to batch it we can use the primitive itself. This works as
long as both the inputs have the same dimensions and are batched along the
same axes. The result is batched along the axis that the inputs are batched.
Args:
vector_arg_values: a tuple of two arguments, each being a tensor of matching
shape.
batch_axes: the axes that are being batched. See vmap documentation.
Returns:
a tuple of the result, and the result axis that was batched.
"""
assert batch_axes[0] == batch_axes[1]
assert batch_axes[0] == batch_axes[2]
_trace("Using multiply_add to compute the batch:")
res = multiply_add_prim(*vector_arg_values)
return res, batch_axes[0]
batching.primitive_batchers[multiply_add_p] = multiply_add_batch
```
```
assert np.allclose(api.vmap(square_add_prim, in_axes=0, out_axes=0)(
np.array([2., 3.]),
np.array([10., 20.])),
[14., 29.])
```
```
call square_add_prim(Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>)
call multiply_add_prim(Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>)
call multiply_add_batch(([2. 3.], [2. 3.], [10. 20.]), (0, 0, 0))
Using multiply_add to compute the batch:
call multiply_add_prim([2. 3.], [2. 3.], [10. 20.])
call multiply_add_impl([2. 3.], [2. 3.], [10. 20.])
|<- multiply_add_impl = [14. 29.]
|<- multiply_add_prim = [14. 29.]
|<- multiply_add_batch = ([14. 29.], 0)
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
|<- square_add_prim = Traced<ShapedArray(float32[])>
```
###### JIT of batching[#](#jit-of-batching)
```
assert np.allclose(api.jit(api.vmap(square_add_prim, in_axes=0, out_axes=0))
(np.array([2., 3.]),
np.array([10., 20.])),
[14., 29.])
```
```
call square_add_prim(Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>)
call multiply_add_prim(Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>, Traced<ShapedArray(float32[])>)
call multiply_add_batch((Traced<ShapedArray(float32[2])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[2])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[2])>with<DynamicJaxprTrace(level=1/0)>), (0, 0, 0))
Using multiply_add to compute the batch:
call multiply_add_prim(Traced<ShapedArray(float32[2])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[2])>with<DynamicJaxprTrace(level=1/0)>, Traced<ShapedArray(float32[2])>with<DynamicJaxprTrace(level=1/0)>)
call multiply_add_abstract_eval(ShapedArray(float32[2]), ShapedArray(float32[2]), ShapedArray(float32[2]))
|<- multiply_add_abstract_eval = ShapedArray(float32[2])
|<- multiply_add_prim = Traced<ShapedArray(float32[2])>with<DynamicJaxprTrace(level=1/0)>
|<- multiply_add_batch = (Traced<ShapedArray(float32[2])>with<DynamicJaxprTrace(level=1/0)>, 0)
|<- multiply_add_prim = Traced<ShapedArray(float32[])>
|<- square_add_prim = Traced<ShapedArray(float32[])>
call multiply_add_lowering(LoweringRuleContext(module_context=ModuleContext(context=<jaxlib.mlir._mlir_libs._site_initialize.<locals>.Context object at 0x7f25a8874540>, module=<jaxlib.mlir._mlir_libs._mlir.ir.Module object at 0x7f25a886fcb0>, ip=<jaxlib.mlir._mlir_libs._mlir.ir.InsertionPoint object at 0x7f25a88f3470>, symbol_table=<jaxlib.mlir._mlir_libs._mlir.ir.SymbolTable object at 0x7f25a88f33b0>, backend_or_name=<jaxlib.xla_extension.Client object at 0x7f25d0071470>, platform='cpu', axis_context=ShardingContext(device_assignment=(CpuDevice(id=0),)), name_stack=NameStack(stack=(Scope(name='jit(square_add_prim)'), Scope(name='jit(main)'), Transform(name='vmap'))), keepalives=[], channel_iterator=count(1), host_callbacks=[], shape_poly_state=<jax._src.interpreters.mlir.ShapePolyLoweringState object at 0x7f25aaaa7ee0>, cached_primitive_lowerings={}, cached_call_jaxpr_lowerings={}, lowering_parameters=LoweringParameters(override_lowering_rules=None, platforms=None)), primitive=multiply_add, avals_in=[ShapedArray(float32[2]), ShapedArray(float32[2]), ShapedArray(float32[2])], avals_out=[ShapedArray(float32[2])], tokens_in=<jax._src.interpreters.mlir.TokenSet object at 0x7f25a8808820>, tokens_out=None, axis_size_env=None, dim_var_values=[]), Value(<block argument> of type 'tensor<2xf32>' at index: 0), Value(<block argument> of type 'tensor<2xf32>' at index: 0), Value(<block argument> of type 'tensor<2xf32>' at index: 1))
|<- multiply_add_lowering = [<jaxlib.mlir._mlir_libs._mlir.ir.OpResult object at 0x7f25a887af30>]
```
### Writing custom Jaxpr interpreters in JAX[#](#writing-custom-jaxpr-interpreters-in-jax)
JAX offers several composable function transformations (`jit`, `grad`, `vmap`,
etc.) that enable writing concise, accelerated code.
Here we show how to add your own function transformation to the system by writing a custom Jaxpr interpreter, and we get composability with all the other transformations for free.
**This example uses internal JAX APIs, which may break at any time. Anything not in [the API Documentation](https://jax.readthedocs.io/en/latest/jax.html) should be assumed internal.**
```
import numpy as np
import jax
import jax.numpy as jnp
from jax import jit, grad, vmap
from jax import random
```
#### What is JAX doing?[#](#what-is-jax-doing)
JAX provides a NumPy-like API for numerical computing which can be used as is, but JAX’s true power comes from composable function transformations. Take the `jit` function transformation, which takes in a function and returns a semantically identical function that is lazily compiled by XLA for accelerators.
```
x = random.normal(random.PRNGKey(0), (5000, 5000))
def f(w, b, x):
return jnp.tanh(jnp.dot(x, w) + b)
fast_f = jit(f)
```
When we call `fast_f`, what happens? JAX traces the function and constructs an XLA computation graph. The graph is then JIT-compiled and executed. Other transformations work similarly in that they first trace the function and handle the output trace in some way. To learn more about Jax’s tracing machinery, you can refer to the [“How it works”](https://github.com/google/jax#how-it-works) section in the README.
#### Jaxpr tracer[#](#jaxpr-tracer)
A tracer of special importance in Jax is the Jaxpr tracer, which records ops into a Jaxpr (Jax expression). A Jaxpr is a data structure that can be evaluated like a mini functional programming language and thus Jaxprs are a useful intermediate representation for function transformation.
To get a first look at Jaxprs, consider the `make_jaxpr` transformation. `make_jaxpr` is essentially a “pretty-printing” transformation:
it transforms a function into one that, given example arguments, produces a Jaxpr representation of its computation.
`make_jaxpr` is useful for debugging and introspection.
Let’s use it to look at how some example Jaxprs are structured.
```
def examine_jaxpr(closed_jaxpr):
jaxpr = closed_jaxpr.jaxpr
print("invars:", jaxpr.invars)
print("outvars:", jaxpr.outvars)
print("constvars:", jaxpr.constvars)
for eqn in jaxpr.eqns:
print("equation:", eqn.invars, eqn.primitive, eqn.outvars, eqn.params)
print()
print("jaxpr:", jaxpr)
def foo(x):
    return x + 1

print("foo")
print("===")
examine_jaxpr(jax.make_jaxpr(foo)(5))
print()
def bar(w, b, x):
    return jnp.dot(w, x) + b + jnp.ones(5), x

print("bar")
print("===")
examine_jaxpr(jax.make_jaxpr(bar)(jnp.ones((5, 10)), jnp.ones(5), jnp.ones(10)))
```
```
foo
===
invars: [a]
outvars: [b]
constvars: []
equation: [a, 1] add [b] {}
jaxpr: { lambda ; a:i32[]. let b:i32[] = add a 1 in (b,) }
bar
===
invars: [a, b, c]
outvars: [g, c]
constvars: []
equation: [a, c] dot_general [d] {'dimension_numbers': (((1,), (0,)), ((), ())), 'precision': None, 'preferred_element_type': dtype('float32')}
equation: [d, b] add [e] {}
equation: [1.0] broadcast_in_dim [f] {'shape': (5,), 'broadcast_dimensions': ()}
equation: [e, f] add [g] {}
jaxpr: { lambda ; a:f32[5,10] b:f32[5] c:f32[10]. let
d:f32[5] = dot_general[
dimension_numbers=(([1], [0]), ([], []))
preferred_element_type=float32
] a c
e:f32[5] = add d b
f:f32[5] = broadcast_in_dim[broadcast_dimensions=() shape=(5,)] 1.0
g:f32[5] = add e f
in (g, c) }
```
* `jaxpr.invars` - the `invars` of a Jaxpr are a list of the input variables to Jaxpr, analogous to arguments in Python functions.
* `jaxpr.outvars` - the `outvars` of a Jaxpr are the variables that are returned by the Jaxpr. Every Jaxpr returns its outputs as a list, even when there is only one.
* `jaxpr.constvars` - the `constvars` are a list of variables that are also inputs to the Jaxpr, but correspond to constants from the trace (we’ll go over these in more detail later).
* `jaxpr.eqns` - a list of equations, which are essentially let-bindings. Each equation consists of a list of input variables, a list of output variables, and a *primitive*, which is used to evaluate inputs to produce outputs. Each equation also has `params`, a dictionary of parameters.
Altogether, a Jaxpr encapsulates a simple program that can be evaluated with inputs to produce an output. We’ll go over how exactly to do this later. The important thing to note now is that a Jaxpr is a data structure that can be manipulated and evaluated in whatever way we want.
##### Why are Jaxprs useful?[#](#why-are-jaxprs-useful)
Jaxprs are simple program representations that are easy to transform. And because Jax lets us stage out Jaxprs from Python functions, it gives us a way to transform numerical programs written in Python.
#### Your first interpreter: `invert`[#](#your-first-interpreter-invert)
Let’s try to implement a simple function “inverter”, which takes in the output of the original function and returns the inputs that produced those outputs. For now, let’s focus on simple, unary functions which are composed of other invertible unary functions.
Goal:
```
def f(x):
return jnp.exp(jnp.tanh(x))
f_inv = inverse(f)
assert jnp.allclose(f_inv(f(1.0)), 1.0)
```
The way we’ll implement this is by (1) tracing `f` into a Jaxpr, then (2) interpreting the Jaxpr *backwards*. While interpreting the Jaxpr backwards, for each equation we’ll look up the primitive’s inverse in a table and apply it.
##### 1. Tracing a function[#](#tracing-a-function)
Let’s use `make_jaxpr` to trace a function into a Jaxpr.
```
# Importing Jax functions useful for tracing/interpreting.
import numpy as np
from functools import wraps

from jax import core
from jax import lax
from jax._src.util import safe_map
```
`jax.make_jaxpr` returns a *closed* Jaxpr, which is a Jaxpr that has been bundled with the constants (`literals`) from the trace.
```
def f(x):
return jnp.exp(jnp.tanh(x))
closed_jaxpr = jax.make_jaxpr(f)(jnp.ones(5))
print(closed_jaxpr.jaxpr)
print(closed_jaxpr.literals)
```
```
{ lambda ; a:f32[5]. let b:f32[5] = tanh a; c:f32[5] = exp b in (c,) }
[]
```
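Since `f` here closes over no constants, `literals` is empty. As a quick aside of our own (not part of the original example), closing over an array constant produces a constvar together with a matching literal:
```
# Closing over `const` bakes it into the closed Jaxpr as a constvar + literal.
const = jnp.arange(3.0)

def g(x):
    return x + const

closed = jax.make_jaxpr(g)(jnp.ones(3))
print(closed.jaxpr.constvars)  # one constvar corresponding to `const`
print(closed.literals)         # [Array([0., 1., 2.], dtype=float32)]
```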
##### 2. Evaluating a Jaxpr[#](#evaluating-a-jaxpr)
Before we write a custom Jaxpr interpreter, let’s first implement the “default” interpreter, `eval_jaxpr`, which evaluates the Jaxpr as-is, computing the same values that the original, un-transformed Python function would.
To do this, we first create an environment to store the values for each of the variables, and update the environment with each equation we evaluate in the Jaxpr.
```
def eval_jaxpr(jaxpr, consts, *args):
# Mapping from variable -> value
env = {}
def read(var):
# Literals are values baked into the Jaxpr
if type(var) is core.Literal:
return var.val
return env[var]
def write(var, val):
env[var] = val
# Bind args and consts to environment
safe_map(write, jaxpr.invars, args)
safe_map(write, jaxpr.constvars, consts)
# Loop through equations and evaluate primitives using `bind`
for eqn in jaxpr.eqns:
# Read inputs to equation from environment
invals = safe_map(read, eqn.invars)
# `bind` is how a primitive is called
outvals = eqn.primitive.bind(*invals, **eqn.params)
# Primitives may return multiple outputs or not
if not eqn.primitive.multiple_results:
outvals = [outvals]
# Write the results of the primitive into the environment
safe_map(write, eqn.outvars, outvals)
# Read the final result of the Jaxpr from the environment
return safe_map(read, jaxpr.outvars)
```
```
closed_jaxpr = jax.make_jaxpr(f)(jnp.ones(5))
eval_jaxpr(closed_jaxpr.jaxpr, closed_jaxpr.literals, jnp.ones(5))
```
```
[Array([2.1416876, 2.1416876, 2.1416876, 2.1416876, 2.1416876], dtype=float32)]
```
Notice that `eval_jaxpr` will always return a flat list even if the original function does not.
Furthermore, this interpreter does not handle higher-order primitives (like `jit` and `pmap`), which we will not cover in this guide. You can refer to `core.eval_jaxpr` ([link](https://github.com/google/jax/blob/main/jax/core.py)) to see the edge cases that this interpreter does not cover.
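To see the flattening concretely (a small check of our own), a function that returns a tuple still comes back from `eval_jaxpr` as a flat list:
```
def g(x):
    return jnp.sin(x), jnp.cos(x)

closed_g = jax.make_jaxpr(g)(1.0)
# A flat list of two arrays, not a tuple:
print(eval_jaxpr(closed_g.jaxpr, closed_g.literals, 1.0))
```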
##### Custom `inverse` Jaxpr interpreter[#](#custom-inverse-jaxpr-interpreter)
An `inverse` interpreter doesn’t look too different from `eval_jaxpr`. We’ll first set up the registry which will map primitives to their inverses. We’ll then write a custom interpreter that looks up primitives in the registry.
It turns out that this interpreter will also look similar to the “transpose” interpreter used in reverse-mode autodifferentiation [found here](https://github.com/google/jax/blob/main/jax/interpreters/ad.py#L164-L234).
```
inverse_registry = {}
```
We’ll now register inverses for some of the primitives. By convention, primitives in Jax end in `_p` and a lot of the popular ones live in `lax`.
```
inverse_registry[lax.exp_p] = jnp.log
inverse_registry[lax.tanh_p] = jnp.arctanh
```
`inverse` will first trace the function, then custom-interpret the Jaxpr. Let’s set up a simple skeleton.
```
def inverse(fun):
@wraps(fun)
def wrapped(*args, **kwargs):
# Since we assume unary functions, we won't worry about flattening and
# unflattening arguments.
closed_jaxpr = jax.make_jaxpr(fun)(*args, **kwargs)
out = inverse_jaxpr(closed_jaxpr.jaxpr, closed_jaxpr.literals, *args)
return out[0]
return wrapped
```
Now we just need to define `inverse_jaxpr`, which will walk through the Jaxpr backward and invert primitives when it can.
```
def inverse_jaxpr(jaxpr, consts, *args):
env = {}
def read(var):
if type(var) is core.Literal:
return var.val
return env[var]
def write(var, val):
env[var] = val
# Args now correspond to Jaxpr outvars
safe_map(write, jaxpr.outvars, args)
safe_map(write, jaxpr.constvars, consts)
# Looping backward
for eqn in jaxpr.eqns[::-1]:
# outvars are now invars
invals = safe_map(read, eqn.outvars)
if eqn.primitive not in inverse_registry:
raise NotImplementedError(
f"{eqn.primitive} does not have registered inverse.")
# Assuming a unary function
outval = inverse_registry[eqn.primitive](*invals)
safe_map(write, eqn.invars, [outval])
return safe_map(read, jaxpr.invars)
```
That’s it!
```
def f(x):
return jnp.exp(jnp.tanh(x))
f_inv = inverse(f)
assert jnp.allclose(f_inv(f(1.0)), 1.0)
```
Importantly, you can trace through a Jaxpr interpreter.
```
jax.make_jaxpr(inverse(f))(f(1.))
```
```
{ lambda ; a:f32[]. let b:f32[] = log a; c:f32[] = atanh b in (c,) }
```
That’s all it takes to add a new transformation to a system, and you get composition with all the others for free! For example, we can use `jit`, `vmap`, and `grad` with `inverse`!
```
jit(vmap(grad(inverse(f))))((jnp.arange(5) + 1.) / 5.)
```
```
Array([-3.1440797, 15.584931 , 2.2551253, 1.3155028, 1. ], dtype=float32, weak_type=True)
```
#### Exercises for the reader[#](#exercises-for-the-reader)
* Handle primitives with multiple arguments where inputs are partially known, for example `lax.add_p`, `lax.mul_p` (a hint sketch follows this list).
* Handle `xla_call` and `xla_pmap` primitives, which will not work with both `eval_jaxpr` and `inverse_jaxpr` as written.
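As a hint for the first exercise, here is a sketch (our own convention, not a complete solution) of inverting `lax.add_p` when exactly one operand is a Jaxpr literal. Note that `inverse_registry` as written only stores unary callables, so the interpreter would need to be generalized to pass the equation itself:
```
from jax import core

# Hypothetical helper: invert `out = lhs + rhs` when one side is a literal.
def add_inverse(eqn, outval):
    lhs, rhs = eqn.invars
    if type(rhs) is core.Literal:
        return lhs, outval - rhs.val  # solve for the unknown lhs
    if type(lhs) is core.Literal:
        return rhs, outval - lhs.val  # solve for the unknown rhs
    raise NotImplementedError("add with two unknown inputs is not invertible")
```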
### Custom operations for GPUs with C++ and CUDA[#](#custom-operations-for-gpus-with-c-and-cuda)
JAX ships with a large number of built-in operations, but users occasionally run into a situation where they need a new operation that is not supported by JAX.
To accommodate such scenarios, JAX allows users to define custom operations, and this tutorial explains how to define one for GPUs and use it in single-GPU and multi-GPU environments.
This tutorial contains information from [Extending JAX with custom C++ and CUDA code](https://github.com/dfm/extending-jax) and assumes that you are familiar with [JAX primitives](https://jax.readthedocs.io/en/latest/notebooks/How_JAX_primitives_work.html).
#### RMS normalization[#](#rms-normalization)
For this tutorial, we are going to add the RMS normalization as a custom operation in JAX.
Note that the RMS normalization can be expressed with [`jax.numpy`](https://jax.readthedocs.io/en/latest/jax.numpy.html) directly. However, we are using it as an example to show the process of creating a custom operation for GPUs.
The CUDA code in `gpu_ops/rms_norm_kernels.cu` for this operation has been borrowed from [Apex](https://github.com/NVIDIA/apex/blob/master/csrc/layer_norm_cuda_kernel.cu) and adapted to eliminate any dependency on PyTorch.
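For orientation, here is a plain `jax.numpy` sketch of the computation (our reading of the Apex kernel: normalize by the root mean square over the axes spanned by `weight`, accumulating in float32 for half-precision inputs; treat the exact dtype handling as an assumption):
```
import jax
import jax.numpy as jnp

def rms_norm_ref(x, weight, eps=1e-05):
    # Normalize over the trailing axes that `weight` spans.
    norm_axes = tuple(range(x.ndim - weight.ndim, x.ndim))
    ms = jnp.mean(jnp.square(x.astype(jnp.float32)), axis=norm_axes, keepdims=True)
    invvar = jax.lax.rsqrt(ms + eps)
    return (x * invvar).astype(x.dtype) * weight
```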
#### High-level steps[#](#high-level-steps)
This tutorial shows how to write both a custom operation and its gradient.
In C:
You need to follow these steps in C for each new JAX primitive:
* Have CUDA kernel(s).
* Create a C function that dispatches the CUDA kernel that will be called by XLA.
* Create a descriptor to convey information needed for the computation.
+ The types, the shapes and other attributes.
* Bind C functions to Python
+ To create the descriptor and to call the primitive during execution.
In Python:
You need to follow these steps in Python:
* Define a new JAX primitive (instruction/operation)
* Write Python functions to build the graph nodes with the primitive.
* Define its abstract evaluation.
* Define its lowering to MLIR.
* [Optional] Define the gradient.
* [Optional] Use [xmap](https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html) (or one of the experimental [custom_partitioning](https://jax.readthedocs.io/en/latest/jax.experimental.custom_partitioning.html) or [shard_map](https://jax.readthedocs.io/en/latest/jep/14273-shard-map.html) functions) for fast multi-GPU.
#### C code[#](#c-code)
See [`gpu_ops` code listing](#gpu-ops-code-listing) for a complete code listing of C++ and CUDA files.
`gpu_ops/rms_norm_kernels.cu` defines the following functions, which are declared with the XLA custom function signature.
These functions are responsible for launching RMS normalization kernels with the given `buffers` on the specified `stream`.
```
namespace gpu_ops {
void rms_forward_affine_mixed_dtypes(cudaStream_t stream, void **buffers,
const char *opaque,
std::size_t opaque_len);
void rms_backward_affine(cudaStream_t stream, void **buffers,
const char *opaque, std::size_t opaque_len);
} // namespace gpu_ops
```
* `stream` is the CUDA stream to be used to execute any kernel on the GPU.
* `buffers` has all pointers to input buffers followed by all pointers to output buffers.
* `opaque` is a buffer for any extra information that is being passed to the custom functions and `opaque_len` is the length of `opaque`.
For this tutorial, an `RMSNormDescriptor` object will be passed to these functions as `opaque`.
```
namespace gpu_ops {
enum ElementType { BF16, F16, F32, F64 };
struct RMSNormDescriptor {
int n1;
int n2;
double eps;
ElementType x_type;
ElementType w_type;
int part_grad_size;
};
} // namespace gpu_ops
```
Now, we need to expose these functions as well as `ElementType` and `RMSNormDescriptor` as a Python module, `gpu_ops`, through `pybind11`.
```
pybind11::dict RMSNormRegistrations() {
pybind11::dict dict;
dict["rms_forward_affine_mixed_dtype"] =
gpu_ops::EncapsulateFunction(gpu_ops::rms_forward_affine_mixed_dtypes);
dict["rms_backward_affine"] =
gpu_ops::EncapsulateFunction(gpu_ops::rms_backward_affine);
return dict;
}
PYBIND11_MODULE(gpu_ops, m) {
m.def("get_rms_norm_registrations", &RMSNormRegistrations);
m.def("create_rms_norm_descriptor",
[](int n1, int n2, double eps, gpu_ops::ElementType x_type,
gpu_ops::ElementType w_type, int part_grad_size) {
return gpu_ops::PackDescriptor(gpu_ops::RMSNormDescriptor{
n1, n2, eps, x_type, w_type, part_grad_size});
});
pybind11::enum_<gpu_ops::ElementType>(m, "ElementType")
.value("BF16", gpu_ops::ElementType::BF16)
.value("F16", gpu_ops::ElementType::F16)
.value("F32", gpu_ops::ElementType::F32)
.value("F64", gpu_ops::ElementType::F64);
}
```
#### Build `gpu_ops` extension module[#](#build-gpu-ops-extension-module)
We build the `gpu_ops` Python extension module with the aforementioned code.
(See [`gpu_ops` code listing](#gpu-ops-code-listing) for a complete code listing of C++ and CUDA files.)
```
python -m pip install pybind11==2.10.1
mkdir -p build
pybind_include_path=$(python -c "import pybind11; print(pybind11.get_include())")
python_executable=$(python -c 'import sys; print(sys.executable)')
nvcc --threads 4 -Xcompiler -Wall -ldl --expt-relaxed-constexpr -O3 -DNDEBUG -Xcompiler -O3 --generate-code=arch=compute_70,code=[compute_70,sm_70] --generate-code=arch=compute_75,code=[compute_75,sm_75] --generate-code=arch=compute_80,code=[compute_80,sm_80] --generate-code=arch=compute_86,code=[compute_86,sm_86] -Xcompiler=-fPIC -Xcompiler=-fvisibility=hidden -x cu -c gpu_ops/rms_norm_kernels.cu -o build/rms_norm_kernels.cu.o
c++ -I/usr/local/cuda/include -I$pybind_include_path $(${python_executable}-config --cflags) -O3 -DNDEBUG -O3 -fPIC -fvisibility=hidden -flto -fno-fat-lto-objects -o build/gpu_ops.cpp.o -c gpu_ops/gpu_ops.cpp
c++ -fPIC -O3 -DNDEBUG -O3 -flto -shared -o build/gpu_ops$(${python_executable}-config --extension-suffix) build/gpu_ops.cpp.o build/rms_norm_kernels.cu.o -L/usr/local/cuda/lib64 -lcudadevrt -lcudart_static -lrt -lpthread -ldl
strip build/gpu_ops$(${python_executable}-config --extension-suffix)
```
#### Add RMS normalization to JAX as custom call[#](#add-rms-normalization-to-jax-as-custom-call)
`gpu_ops` is just a Python extension module and we need more work to plug it into JAX.
##### Create primitives[#](#create-primitives)
We first create primitives, `_rms_norm_fwd_p` and `_rms_norm_bwd_p`, which the custom functions can be mapped to.
We set the `multiple_results` attribute to `True` for these operations, which means that the operation produces multiple outputs as a tuple.
When it is set to `False`, the operation produces a single output without a tuple.
For more details, see [How JAX primitives work](https://jax.readthedocs.io/en/latest/notebooks/How_JAX_primitives_work.html).
```
from functools import partial
import jax
import jax.numpy as jnp
import jax._src.test_util as jtu
from build import gpu_ops
from jax import core, dtypes
from jax.interpreters import xla
from jax.lib import xla_client
# Create _rms_norm_fwd_p for forward operation.
_rms_norm_fwd_p = core.Primitive("rms_norm_fwd")
_rms_norm_fwd_p.multiple_results = True
_rms_norm_fwd_p.def_impl(partial(xla.apply_primitive, _rms_norm_fwd_p))
def rms_norm_fwd(x, weight, eps=1e-05):
output, invvar = _rms_norm_fwd_p.bind(x, weight, eps=eps)
return output
# Create _rms_norm_bwd_p for backward operation.
_rms_norm_bwd_p = core.Primitive("rms_norm_bwd")
_rms_norm_bwd_p.multiple_results = True
_rms_norm_bwd_p.def_impl(partial(xla.apply_primitive, _rms_norm_bwd_p))
def rms_norm_bwd(g, invvar, x, weight, eps):
grad_input, grad_weight, part_grad = _rms_norm_bwd_p.bind(
g, invvar, x, weight, eps=eps
)
return grad_input, grad_weight
```
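As a small illustration of the `multiple_results` flag (a throwaway sketch of our own, unrelated to the RMS norm primitives above), a primitive left at the default `False` hands back a single value from `bind`:
```
from jax import core
import jax.numpy as jnp

# Hypothetical single-output primitive; multiple_results defaults to False.
_scale_p = core.Primitive("scale")
_scale_p.def_impl(lambda x, *, k: x * k)
_scale_p.def_abstract_eval(lambda x, *, k: x)  # output aval mirrors the input aval

print(_scale_p.bind(jnp.ones(3), k=2.0))  # a single array, not a tuple
```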
##### Lowering to MLIR custom call[#](#lowering-to-mlir-custom-call)
To map the custom functions to the new primitives, `_rms_norm_fwd_p` and `_rms_norm_bwd_p`, we need to:
* Register custom functions as custom call targets with `xla_client.register_custom_call_target`, and
* Register lowering functions that lower the primitives to MLIR custom calls with the registered custom call targets.
The functions `_rms_norm_fwd_cuda_lowering` and `_rms_norm_bwd_cuda_lowering` below lower the primitives to MLIR custom call operations with the custom targets from `gpu_ops`. These functions are registered with `jax.interpreters.mlir.register_lowering`.
Note that an `RMSNormDescriptor` object is created in the lowering function, and passed to the custom call as `opaque`.
```
from functools import reduce
from jax.interpreters import mlir
from jax.interpreters.mlir import ir
from jaxlib.hlo_helpers import custom_call

# Register functions defined in gpu_ops as custom call target for GPUs
for _name, _value in gpu_ops.get_rms_norm_registrations().items():
    xla_client.register_custom_call_target(_name, _value, platform="gpu")
def element_type_to_descriptor_type_mapping(element_type):
_element_type_to_descriptor_type_mapping = {
ir.BF16Type.get(): gpu_ops.ElementType.BF16,
ir.F16Type.get(): gpu_ops.ElementType.F16,
ir.F32Type.get(): gpu_ops.ElementType.F32,
ir.F64Type.get(): gpu_ops.ElementType.F64,
}
return _element_type_to_descriptor_type_mapping.get(element_type)
def default_layouts(*shapes):
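    # XLA layouts list dimensions from minor to major; [n-1, ..., 0] is row-major.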
return [range(len(shape) - 1, -1, -1) for shape in shapes]
def _rms_norm_fwd_cuda_lowering(ctx, x, weight, eps):
x_type = ir.RankedTensorType(x.type)
x_shape = x_type.shape
w_type = ir.RankedTensorType(weight.type)
w_shape = w_type.shape
iv_element_type = (
ir.F32Type.get()
if x_type.element_type in [ir.F16Type.get(), ir.BF16Type.get()]
else x_type.element_type
)
n2 = reduce(lambda x, y: x * y, w_shape)
n1 = reduce(lambda x, y: x * y, x_shape) // n2
opaque = gpu_ops.create_rms_norm_descriptor(
n1,
n2,
eps,
element_type_to_descriptor_type_mapping(x_type.element_type),
element_type_to_descriptor_type_mapping(w_type.element_type),
0, # unused
)
out = custom_call(
b"rms_forward_affine_mixed_dtype",
out_types=[
ir.RankedTensorType.get(x_shape, w_type.element_type),
ir.RankedTensorType.get((n1,), iv_element_type),
],
operands=[x, weight],
backend_config=opaque,
operand_layouts=default_layouts(x_shape, w_shape),
result_layouts=default_layouts(x_shape, (n1,)),
)
return out
mlir.register_lowering(
_rms_norm_fwd_p,
_rms_norm_fwd_cuda_lowering,
platform="gpu",
)
def _rms_norm_bwd_cuda_lowering(ctx, grad_output, invvar, x, weight, eps):
x_type = ir.RankedTensorType(x.type)
x_shape = x_type.shape
w_type = ir.RankedTensorType(weight.type)
w_shape = w_type.shape
iv_type = ir.RankedTensorType(invvar.type)
n2 = reduce(lambda x, y: x * y, w_shape)
n1 = reduce(lambda x, y: x * y, x_shape) // n2
part_grad_shape = ctx.avals_out[-1].shape
opaque = gpu_ops.create_rms_norm_descriptor(
n1,
n2,
eps,
element_type_to_descriptor_type_mapping(x_type.element_type),
element_type_to_descriptor_type_mapping(w_type.element_type),
part_grad_shape[0],
)
out = custom_call(
b"rms_backward_affine",
out_types=[
ir.RankedTensorType.get(x_shape, x_type.element_type),
ir.RankedTensorType.get(w_shape, w_type.element_type),
ir.RankedTensorType.get(part_grad_shape, iv_type.element_type),
],
operands=[grad_output, invvar, x, weight],
backend_config=opaque,
operand_layouts=default_layouts(x_shape, (n1,), x_shape, w_shape),
result_layouts=default_layouts(x_shape, w_shape, part_grad_shape),
)
return out
mlir.register_lowering(
_rms_norm_bwd_p,
_rms_norm_bwd_cuda_lowering,
platform="gpu",
)
```
#### Let’s test it[#](#let-s-test-it)
```
per_core_batch_size = 4
seq_len = 512
emb_dim = 512
x = jax.random.normal(
jax.random.PRNGKey(0),
shape=(jax.local_device_count() * per_core_batch_size, seq_len, emb_dim),
dtype=jnp.bfloat16,
)
norm_shape = x.shape[-2:]
weight = jnp.ones(norm_shape, dtype=jnp.bfloat16)
```
##### Test forward function[#](#test-forward-function)
```
out = rms_norm_fwd(x, weight)
```
```
---
NotImplementedError Traceback (most recent call last)
Cell In [5], line 1
---> 1 out = rms_norm_fwd(x, weight)
...
NotImplementedError: Abstract evaluation for 'rms_norm_fwd' not implemented
```
#### Abstract evaluation[#](#abstract-evaluation)
The test above failed with `NotImplementedError: Abstract evaluation for 'rms_norm_fwd' not implemented`. Why did the test fail? What does it mean?
As part of the execution, JAX performs abstract evaluation. As JAX has no knowledge about the new primitives, it doesn’t know how to compute the output shapes and output data types, thus can’t evaluate these operations abstractly.
We need to provide a function for abstract evaluation of each primitive.
These abstract evaluation functions compute the shape and the data type of the outputs, but don’t compute actual values for the operations.
These functions are passed to the `.def_abstract_eval` method to be registered with the corresponding primitives.
See [How JAX primitives work](https://jax.readthedocs.io/en/latest/notebooks/How_JAX_primitives_work.html#abstract-evaluation-rules) for more information on abstract evaluation.
```
from functools import reduce
from operator import mul
from jax.abstract_arrays import ShapedArray
def _rms_norm_fwd_abstract(x, weight, eps):
w_dtype = dtypes.canonicalize_dtype(weight.dtype)
iv_dtype = dtypes.canonicalize_dtype(x.dtype)
if iv_dtype in [jnp.float16, jnp.bfloat16]:
iv_dtype = jnp.float32
n2 = reduce(mul, weight.shape)
n1 = reduce(mul, x.shape) // n2
return (
ShapedArray(x.shape, w_dtype, named_shape=x.named_shape), # output
ShapedArray((n1,), iv_dtype, named_shape=x.named_shape), # invvar
)
_rms_norm_fwd_p.def_abstract_eval(_rms_norm_fwd_abstract)
def _rms_norm_bwd_abstract(grad_output, invvar, x, weight, eps):
iv_dtype = dtypes.canonicalize_dtype(invvar.dtype)
w_dtype = dtypes.canonicalize_dtype(weight.dtype)
x_dtype = dtypes.canonicalize_dtype(x.dtype)
n2 = reduce(lambda x, y: x * y, weight.shape)
n1 = reduce(lambda x, y: x * y, x.shape) // n2
part_grad_shape = (16, n2)
assert dtypes.canonicalize_dtype(grad_output.dtype) == w_dtype
assert grad_output.shape == x.shape
assert invvar.shape == (n1,)
    assert iv_dtype == (
        jnp.float32 if x_dtype in [jnp.float16, jnp.bfloat16] else x_dtype
    )
assert grad_output.named_shape == x.named_shape
    weight_named_shape = (
        weight.named_shape if weight.named_shape else x.named_shape
    )
return (
ShapedArray(
x.shape, x_dtype, named_shape=x.named_shape
), # grad input
ShapedArray(
weight.shape, w_dtype, named_shape=weight_named_shape
), # grad weight
ShapedArray(
part_grad_shape, iv_dtype, named_shape=weight_named_shape
), # part grad
)
_rms_norm_bwd_p.def_abstract_eval(_rms_norm_bwd_abstract)
```
#### Let’s test it again[#](#let-s-test-it-again)
##### Test the forward function[#](#test-the-forward-function)
```
out = rms_norm_fwd(x, weight)
```
##### Test the backward function[#](#test-the-backward-function)
Now let’s test the backward operation using `jax.grad` and `jtu.check_grads`.
```
def loss(x, weight):
predictions = rms_norm_fwd(x, weight)
return -jnp.mean(predictions**2)
loss_grad = jax.grad(loss)
out = loss_grad(x, weight)
jtu.check_grads(loss, (x, weight), modes=["rev"], order=1)
```
```
---
NotImplementedError Traceback (most recent call last)
Cell In [8], line 7
3 return -jnp.mean(predictions**2)
6 loss_grad = jax.grad(loss)
---> 7 out = loss_grad(x, weight)
...
NotImplementedError: Differentiation rule for 'rms_norm_fwd' not implemented
```
#### Differentiation rule[#](#differentiation-rule)
The backward operation failed with the error `NotImplementedError: Differentiation rule for 'rms_norm_fwd' not implemented`. It means that, although we have defined `rms_norm_fwd` and `rms_norm_bwd`, JAX doesn’t know the relationship between them.
We can teach JAX that `rms_norm_bwd` is the backward operation for `rms_norm_fwd`, using `jax.custom_vjp` and its convention. As the first step, we need to refine the definition of `rms_norm_fwd` and `rms_norm_bwd`.
```
# rms_norm_fwd was previously defined as
#
# def rms_norm_fwd(x, weight, eps=1e-05):
# output, invvar = _rms_norm_fwd_p.bind(x, weight, eps=eps)
# return output
#
def rms_norm_fwd(x, weight, eps=1e-05):
output, invvar = _rms_norm_fwd_p.bind(x, weight, eps=eps)
return output, (invvar, x, weight)
# rms_norm_bwd was previously defined as
#
# def rms_norm_bwd(g, invvar, x, weight, eps):
# grad_input, grad_weight, part_grad = _rms_norm_bwd_p.bind(
# g, invvar, x, weight, eps=eps
# )
# return grad_input, grad_weight
#
def rms_norm_bwd(eps, res, g):
invvar, x, weight = res
grad_input, grad_weight, part_grad = _rms_norm_bwd_p.bind(
g, invvar, x, weight, eps=eps
)
return grad_input, grad_weight
```
`rms_norm_fwd` now returns an extra output `(invvar, x, weight)` for the residual data and `rms_norm_bwd` takes `eps`, `res`, and `g` as the parameters.
Once the relationship between `rms_norm_fwd` and `rms_norm_bwd` is established through `jax.custom_vjp`, JAX will ensure that the residual data from `rms_norm_fwd` is passed to `rms_norm_bwd` as `res` for backward operation.
For non-differentiable parameters such as `eps`, JAX ensures that they are passed to the backward operation before the residual data. That’s why `eps` precedes `res` in the parameter list of `rms_norm_bwd`.
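The convention is easiest to see on a toy function (a minimal example of our own, separate from the RMS norm code): with `nondiff_argnums=(1,)`, the second argument is passed to the backward function first, ahead of the residuals and the cotangent.
```
from functools import partial
import jax

@partial(jax.custom_vjp, nondiff_argnums=(1,))
def scale(x, k):
    return k * x

def scale_fwd(x, k):
    return scale(x, k), None  # no residuals needed here

def scale_bwd(k, res, g):  # nondiff `k` precedes `res` and the cotangent `g`
    return (k * g,)  # one cotangent per differentiable argument

scale.defvjp(scale_fwd, scale_bwd)
print(jax.grad(scale)(3.0, 2.0))  # 2.0
```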
Now that `rms_norm_fwd` returns the residual data, which is not needed for simple RMS normalization operation, we define a wrapper around it, `rms_norm`. It simply calls `rms_norm_fwd` and returns only `output`. Note that `rms_norm` is annotated with `@partial(jax.custom_vjp, nondiff_argnums=(2,))` and we are passing `rms_norm_fwd` and `rms_norm_bwd` to `rms_norm.defvjp`. It teaches JAX that, when `rms_norm` is differentiated, `rms_norm_fwd` is to be used for forward operation, and `rms_norm_bwd` is to be used for backward operation.
See [Custom derivative rules for JAX-transformable Python functions](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html#use-jax-custom-vjp-to-define-custom-reverse-mode-only-rules) for more information on `jax.custom_vjp`.
```
@partial(jax.custom_vjp, nondiff_argnums=(2,))
def rms_norm(x, weight, eps=1e-05):
output, _ = rms_norm_fwd(x, weight, eps=eps)
return output
rms_norm.defvjp(rms_norm_fwd, rms_norm_bwd)
```
With the refinement we have made, the backward operation test works with a modification: `loss` now calls `rms_norm` instead of `rms_norm_fwd`.
```
def loss(x, weight):
predictions = rms_norm(x, weight)
return -jnp.mean(predictions**2)
loss_grad = jax.grad(loss)
out = loss_grad(x, weight)
jtu.check_grads(loss, (x, weight), modes=["rev"], order=1)
```
#### Let’s test it on multiple devices[#](#let-s-test-it-on-multiple-devices)
We are using `jax.experimental.pjit.pjit` for parallel execution on multiple devices, and we produce reference values with sequential execution on a single device.
##### Test the forward function[#](#id1)
Let’s first test the forward operation on multiple devices. We are creating a simple 1D mesh and sharding `x` on all devices.
```
from jax.sharding import Mesh, PartitionSpec
from jax.experimental.pjit import pjit
mesh = Mesh(jax.local_devices(), ("x",))
ref = rms_norm(x, weight)
pjitted = pjit(
rms_norm,
# Shard x by batch dimension and replicate weight on all devices.
in_shardings=(PartitionSpec("x", None, None), PartitionSpec(None, None)),
# Shard the output by batch dimension.
out_shardings=PartitionSpec("x", None, None),
)
with mesh:
print(pjitted.lower(x, weight).compile().runtime_executable().hlo_modules()[0].to_string())
out = pjitted(x, weight)
jnp.allclose(ref, out, atol=1e-2, rtol=1e-2)
```
```
HloModule pjit_rms_norm, entry_computation_layout={(bf16[4,512,512]{2,1,0},bf16[512,512]{1,0})->bf16[4,512,512]{2,1,0}}
%fused_computation (param_1: bf16[32,512,512], param_1.3: u32[]) -> bf16[4,512,512] {
%param_1 = bf16[32,512,512]{2,1,0} parameter(0)
%param_1.3 = u32[] parameter(1)
%convert.2 = s32[] convert(u32[] %param_1.3), metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
%constant_9 = s32[] constant(4), metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
%multiply.3 = s32[] multiply(s32[] %convert.2, s32[] %constant_9), metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
%constant_8 = s32[] constant(0), metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
ROOT %dynamic-slice.2 = bf16[4,512,512]{2,1,0} dynamic-slice(bf16[32,512,512]{2,1,0} %param_1, s32[] %multiply.3, s32[] %constant_8, s32[] %constant_8), dynamic_slice_sizes={4,512,512}, metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
}
ENTRY %main.7_spmd (param: bf16[4,512,512], param.1: bf16[512,512]) -> bf16[4,512,512] {
%param = bf16[4,512,512]{2,1,0} parameter(0), sharding={devices=[8,1,1]0,1,2,3,4,5,6,7}
%all-gather = bf16[32,512,512]{2,1,0} all-gather(bf16[4,512,512]{2,1,0} %param), channel_id=1, replica_groups={{0,1,2,3,4,5,6,7}}, dimensions={0}, use_global_device_ids=true, metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
%param.1 = bf16[512,512]{1,0} parameter(1), sharding={replicated}
%custom-call.0 = (bf16[32,512,512]{2,1,0}, f32[32]{0}) custom-call(bf16[32,512,512]{2,1,0} %all-gather, bf16[512,512]{1,0} %param.1), custom_call_target="rms_forward_affine_mixed_dtype", operand_layout_constraints={bf16[32,512,512]{2,1,0}, bf16[512,512]{1,0}}, api_version=API_VERSION_STATUS_RETURNING, metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}, backend_config=" \000\000\000\000\000\004\000\361h\343\210\265\370\344>\000\000\000\000\000\000\000\000\000\000\000\000\255\177\000\000"
%get-tuple-element = bf16[32,512,512]{2,1,0} get-tuple-element((bf16[32,512,512]{2,1,0}, f32[32]{0}) %custom-call.0), index=0, metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
%partition-id = u32[] partition-id(), metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
ROOT %fusion = bf16[4,512,512]{2,1,0} fusion(bf16[32,512,512]{2,1,0} %get-tuple-element, u32[] %partition-id), kind=kLoop, calls=%fused_computation, metadata={op_name="pjit(rms_norm)/jit(main)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
}
```
```
True
```
The values have been computed correctly for the forward operation; however, the generated HLO module shows an `all-gather` operation to replicate `x` on all devices, incurring a large communication overhead.
As XLA does not have enough knowledge about the custom functions to shard input tensors, it decides to replicate them to produce correct values before making the custom call.
To avoid this overhead, we need to use the xmap manual sharding with the following configuration updates:
```
jax.config.update("experimental_xmap_spmd_lowering", True)
jax.config.update("experimental_xmap_spmd_lowering_manual", True)
```
We need to modify the test code to use the xmap manual sharding with the custom operation.
We first define a function that wraps `rms_norm` with `xmap`. Because the size of the data axis being sharded must match the size of the corresponding mesh axis in the xmap manual sharding mode, we reshape `x` to `(device_count, x.shape[0] // device_count, *x.shape[1:])`, where `device_count` is the size of the corresponding mesh axis.
After running `rms_norm` through `xmap`, we reshape the output back to the shape of `x`, as clients expect.
```
from jax.experimental.maps import xmap
def xmap_rms_norm(x, weight, *, device_count):
reshaped = x.reshape(device_count, x.shape[0] // device_count, *x.shape[1:])
xmapped = xmap(
rms_norm,
in_axes=(("x", None, None, None), (None, None)),
out_axes=("x", None, None, None),
axis_resources={"x": "x"},
)
reshaped_out = xmapped(reshaped, weight)
return reshaped_out.reshape(x.shape)
```
Now we need to run `xmap_rms_norm`, not `rms_norm` through `pjit`.
```
with mesh:
pjitted = pjit(
partial(xmap_rms_norm, device_count=jax.local_device_count()),
# Shard x by batch dimension and replicate weight on all devices.
in_shardings=(
PartitionSpec("x", None, None),
PartitionSpec(None, None),
),
# Shard the output by batch dimension.
out_shardings=PartitionSpec("x", None, None),
)
print(pjitted.lower(x, weight).compile().runtime_executable().hlo_modules()[0].to_string())
out = pjitted(x, weight)
jnp.allclose(ref, out, atol=1e-2, rtol=1e-2)
```
```
HloModule pjit__unnamed_wrapped_function_, entry_computation_layout={(bf16[4,512,512]{2,1,0},bf16[512,512]{1,0})->bf16[4,512,512]{2,1,0}}
ENTRY %main.17_spmd (param: bf16[4,512,512], param.1: bf16[512,512]) -> bf16[4,512,512] {
%param = bf16[4,512,512]{2,1,0} parameter(0), sharding={devices=[8,1,1]0,1,2,3,4,5,6,7}, metadata={op_name="pjit(<unnamed wrapped function>)/jit(main)/xmap(rms_norm)/squeeze[dimensions=(0,)]" source_file="/tmp/ipykernel_25235/3123505662.py" source_line=13}
%param.1 = bf16[512,512]{1,0} parameter(1), sharding={replicated}, metadata={op_name="pjit(<unnamed wrapped function>)/jit(main)/xmap(rms_norm)/full_to_shard[axes=OrderedDict() mesh=Mesh(device_ids=array([0, 1, 2, 3, 4, 5, 6, 7]), axis_names=(\'x\',)) manual_axes=(\'x\',)]" source_file="/tmp/ipykernel_25235/3123505662.py" source_line=13}
%custom-call.0 = (bf16[4,512,512]{2,1,0}, f32[4]{0}) custom-call(bf16[4,512,512]{2,1,0} %param, bf16[512,512]{1,0} %param.1), custom_call_target="rms_forward_affine_mixed_dtype", operand_layout_constraints={bf16[4,512,512]{2,1,0}, bf16[512,512]{1,0}}, api_version=API_VERSION_STATUS_RETURNING, metadata={op_name="pjit(<unnamed wrapped function>)/jit(main)/xmap(rms_norm)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}, backend_config="\004\000\000\000\000\000\004\000\361h\343\210\265\370\344>\000\000\000\000\000\000\000\000\000\000\000\000\027\177\000\000"
ROOT %get-tuple-element = bf16[4,512,512]{2,1,0} get-tuple-element((bf16[4,512,512]{2,1,0}, f32[4]{0}) %custom-call.0), index=0, metadata={op_name="pjit(<unnamed wrapped function>)/jit(main)/xmap(rms_norm)/rms_norm_fwd[eps=1e-05]" source_file="/tmp/ipykernel_25235/3343076723.py" source_line=8}
}
```
```
True
```
With this modification, the `all-gather` operation is eliminated and the custom call is made on each shard of `x`.
##### Test the backward function[#](#id2)
We now move on to the backward operation, using `jax.grad` on multiple devices.
As in the forward operation test, we create a simple 1D mesh and shard `x` on all devices.
We also define the `loss` function with `xmap_rms_norm` instead of `rms_norm`.
```
def loss_ref(x, weight):
predictions = rms_norm(x, weight)
return -jnp.mean(predictions**2)
ref = jax.grad(loss_ref, argnums=(0, 1))(x, weight)
# Re-define loss to use xmap_rms_norm instead of rms_norm
def loss(x, weight, *, device_count):
predictions = xmap_rms_norm(x, weight, device_count=device_count)
return -jnp.mean(predictions**2)
with mesh:
pjitted = pjit(
jax.grad(partial(loss, device_count=jax.local_device_count()), argnums=(0, 1)),
# Shard x by batch dimension and replicate weight on all devices.
in_shardings=(
PartitionSpec("x", None, None),
PartitionSpec(None, None),
),
# Shard the output by batch dimension and replicate weight grad on all devices.
out_shardings=(
PartitionSpec("x", None, None),
PartitionSpec(None, None),
),
)
out = pjitted(x, weight)
for r, o in zip(ref, out):
print(jnp.allclose(r, o, atol=1e-2, rtol=1e-2))
```
```
True
True
```
We can inspect the generated jaxpr, which is the JAX internal representation, to make sure `jax.grad` inserts a `psum` for the gradient accumulation across the devices when needed.
```
with mesh:
print(jax.make_jaxpr(pjitted)(x, weight))
```
```
{ lambda ; a:bf16[32,512,512] b:bf16[512,512]. let
c:bf16[32,512,512] d:bf16[512,512] = pjit[
donated_invars=(False, False)
in_positional_semantics=(<_PositionalSemantics.GLOBAL: 1>, <_PositionalSemantics.GLOBAL: 1>)
in_shardings=(GSPMDSharding({devices=[8,1,1]0,1,2,3,4,5,6,7}), GSPMDSharding({replicated}))
jaxpr={ lambda ; e:bf16[32,512,512] f:bf16[512,512]. let
g:bf16[8,4,512,512] = reshape[
dimensions=None
new_sizes=(8, 4, 512, 512)
] e
h:bf16[8,4,512,512] i:f32[8,4] j:bf16[8,4,512,512] k:bf16[512,512] = xmap[
axis_resources=FrozenDict({'x': ('x',)})
backend=None
call_jaxpr={ lambda ; l:bf16[4,512,512;x:8] m:bf16[512,512]. let
n:bf16[4,512,512;x:8] o:f32[4;x:8] = rms_norm_fwd[eps=1e-05] l m
in (n, o, l, m) }
donated_invars=(False, False)
global_axis_sizes=FrozenDict({'x': 8})
in_axes=(FrozenDict({'x': 0}), FrozenDict({}))
in_positional_semantics=(<_PositionalSemantics.GLOBAL: 1>, <_PositionalSemantics.GLOBAL: 1>)
name=rms_norm
out_axes=(FrozenDict({'x': 0}), FrozenDict({'x': 0}), FrozenDict({'x': 0}), FrozenDict({}))
out_positional_semantics=_PositionalSemantics.GLOBAL
resource_env=ResourceEnv(Mesh(device_ids=array([0, 1, 2, 3, 4, 5, 6, 7]), axis_names=('x',)), ())
spmd_in_axes=None
spmd_out_axes=None
] g f
p:bf16[32,512,512] = reshape[dimensions=None new_sizes=(32, 512, 512)] h
q:bf16[32,512,512] = integer_pow[y=2] p
r:bf16[32,512,512] = integer_pow[y=1] p
s:bf16[32,512,512] = mul 2 r
t:f32[32,512,512] = convert_element_type[
new_dtype=float32
weak_type=False
] q
u:f32[] = reduce_sum[axes=(0, 1, 2)] t
v:bf16[] = convert_element_type[new_dtype=bfloat16 weak_type=False] u
w:bf16[] = div v 8.38861e+06
_:bf16[] = neg w
x:bf16[] = neg 1
y:bf16[] = div x 8.38861e+06
z:f32[] = convert_element_type[new_dtype=float32 weak_type=False] y
ba:f32[32,512,512] = broadcast_in_dim[
broadcast_dimensions=()
shape=(32, 512, 512)
] z
bb:bf16[32,512,512] = convert_element_type[
new_dtype=bfloat16
weak_type=False
] ba
bc:bf16[32,512,512] = mul bb s
bd:bf16[8,4,512,512] = reshape[
dimensions=None
new_sizes=(8, 4, 512, 512)
] bc
be:bf16[8,4,512,512] bf:bf16[512,512] = xmap[
axis_resources=FrozenDict({'x': ('x',)})
backend=None
call_jaxpr={ lambda ; bg:f32[4;x:8] bh:bf16[4,512,512;x:8] bi:bf16[512,512]
bj:bf16[4,512,512;x:8]. let
bk:bf16[4,512,512;x:8] bl:bf16[512,512;x:8] _:f32[16,262144;x:8] = rms_norm_bwd[
eps=1e-05
] bj bg bh bi
bm:bf16[512,512] = psum[axes=('x',) axis_index_groups=None] bl
in (bk, bm) }
donated_invars=(False, False, False, False)
global_axis_sizes=FrozenDict({'x': 8})
in_axes=(FrozenDict({'x': 0}), FrozenDict({'x': 0}), FrozenDict({}), FrozenDict({'x': 0}))
in_positional_semantics=(<_PositionalSemantics.GLOBAL: 1>, <_PositionalSemantics.GLOBAL: 1>)
name=transpose(rms_norm)
out_axes=(FrozenDict({'x': 0}), FrozenDict({}))
out_positional_semantics=_PositionalSemantics.GLOBAL
resource_env=ResourceEnv(Mesh(device_ids=array([0, 1, 2, 3, 4, 5, 6, 7]), axis_names=('x',)), ())
spmd_in_axes=None
spmd_out_axes=None
] i j k bd
bn:bf16[32,512,512] = reshape[
dimensions=None
new_sizes=(32, 512, 512)
] be
in (bn, bf) }
name=<unnamed function>
out_positional_semantics=_PositionalSemantics.GLOBAL
out_shardings=(GSPMDSharding({devices=[8,1,1]0,1,2,3,4,5,6,7}), GSPMDSharding({replicated}))
resource_env=ResourceEnv(Mesh(device_ids=array([0, 1, 2, 3, 4, 5, 6, 7]), axis_names=('x',)), ())
] a b
in (c, d) }
```
We see that `bm:bf16[512,512] = psum[axes=('x',) axis_index_groups=None] bl` has been added after the call to `rms_norm_bwd` to reduce `grad_weight` across the devices on the axis `"x"`, but there is no `psum` for `grad_input`.
This is controlled by `named_shape` passed to the `ShapedArray` construction in abstract evaluation and the axes given to `xmap`.
The following code snippet from `_rms_norm_bwd_abstract` shows that `grad_input` has the exact same shape, type, and named shape as `x` does, which means `grad_input` is sharded the same way as `x`, hence no need for a `psum` for `grad_input`.
In contrast, `grad_weight` has the same shape and type as `weight` does, but, when `weight.named_shape` is empty, `x.named_shape` is used for `grad_weight`. In `in_axes` of our `xmap` call, `weight` has no named axis and `weight.named_shape` is empty, but `grad_weight` now has a named axis `"x"` in `grad_weight.named_shape`.
This makes `jax.grad` insert `psum` on the axis `"x"` for `grad_weight`.
```
weight_named_shape = (
    weight.named_shape if weight.named_shape else x.named_shape
)
...
return (
ShapedArray(
x.shape, x_dtype, named_shape=x.named_shape
), # grad input
ShapedArray(
weight.shape, w_dtype, named_shape=weight_named_shape
), # grad weight
....
)
```
#### Let’s put it together[#](#let-s-put-it-together)
Here is the complete code.
```
from functools import partial, reduce
from operator import mul

import jax
import jax.numpy as jnp
from build import gpu_ops
from jax import core, dtypes
from jax.abstract_arrays import ShapedArray
from jax.experimental.maps import xmap
from jax.experimental.pjit import pjit
from jax.interpreters import mlir, xla
from jax.interpreters.mlir import ir
from jax.lib import xla_client
from jax.sharding import Mesh, PartitionSpec
from jaxlib.hlo_helpers import custom_call
# Create _rms_norm_fwd_p for forward operation.
_rms_norm_fwd_p = core.Primitive("rms_norm_fwd")
_rms_norm_fwd_p.multiple_results = True
_rms_norm_fwd_p.def_impl(partial(xla.apply_primitive, _rms_norm_fwd_p))
def rms_norm_fwd(x, weight, eps=1e-05):
output, invvar = _rms_norm_fwd_p.bind(x, weight, eps=eps)
return output, (invvar, x, weight)
# Create _rms_norm_bwd_p for backward operation.
_rms_norm_bwd_p = core.Primitive("rms_norm_bwd")
_rms_norm_bwd_p.multiple_results = True
_rms_norm_bwd_p.def_impl(partial(xla.apply_primitive, _rms_norm_bwd_p))
def rms_norm_bwd(eps, res, g):
invvar, x, weight = res
grad_input, grad_weight, part_grad = _rms_norm_bwd_p.bind(
g, invvar, x, weight, eps=eps
)
return grad_input, grad_weight
####################
# Lowering to MLIR #
####################
# Register functions defined in gpu_ops as custom call target for GPUs
for _name, _value in gpu_ops.get_rms_norm_registrations().items():
    xla_client.register_custom_call_target(_name, _value, platform="gpu")
def element_type_to_descriptor_type_mapping(element_type):
_element_type_to_descriptor_type_mapping = {
ir.BF16Type.get(): gpu_ops.ElementType.BF16,
ir.F16Type.get(): gpu_ops.ElementType.F16,
ir.F32Type.get(): gpu_ops.ElementType.F32,
ir.F64Type.get(): gpu_ops.ElementType.F64,
}
return _element_type_to_descriptor_type_mapping.get(element_type)
def default_layouts(*shapes):
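    # XLA layouts list dimensions from minor to major; [n-1, ..., 0] is row-major.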
return [range(len(shape) - 1, -1, -1) for shape in shapes]
def _rms_norm_fwd_cuda_lowering(ctx, x, weight, eps):
x_type = ir.RankedTensorType(x.type)
x_shape = x_type.shape
w_type = ir.RankedTensorType(weight.type)
w_shape = w_type.shape
iv_element_type = (
ir.F32Type.get()
if x_type.element_type in [ir.F16Type.get(), ir.BF16Type.get()]
else x_type.element_type
)
n2 = reduce(lambda x, y: x * y, w_shape)
n1 = reduce(lambda x, y: x * y, x_shape) // n2
opaque = gpu_ops.create_rms_norm_descriptor(
n1,
n2,
eps,
element_type_to_descriptor_type_mapping(x_type.element_type),
element_type_to_descriptor_type_mapping(w_type.element_type),
0, # unused
)
out = custom_call(
b"rms_forward_affine_mixed_dtype",
out_types=[
ir.RankedTensorType.get(x_shape, w_type.element_type),
ir.RankedTensorType.get((n1,), iv_element_type),
],
operands=[x, weight],
backend_config=opaque,
operand_layouts=default_layouts(x_shape, w_shape),
result_layouts=default_layouts(x_shape, (n1,)),
)
return out
mlir.register_lowering(
_rms_norm_fwd_p,
_rms_norm_fwd_cuda_lowering,
platform="gpu",
)
def _rms_norm_bwd_cuda_lowering(ctx, grad_output, invvar, x, weight, eps):
x_type = ir.RankedTensorType(x.type)
x_shape = x_type.shape
w_type = ir.RankedTensorType(weight.type)
w_shape = w_type.shape
iv_type = ir.RankedTensorType(invvar.type)
n2 = reduce(lambda x, y: x * y, w_shape)
n1 = reduce(lambda x, y: x * y, x_shape) // n2
part_grad_shape = ctx.avals_out[-1].shape
opaque = gpu_ops.create_rms_norm_descriptor(
n1,
n2,
eps,
element_type_to_descriptor_type_mapping(x_type.element_type),
element_type_to_descriptor_type_mapping(w_type.element_type),
part_grad_shape[0],
)
out = custom_call(
b"rms_backward_affine",
out_types=[
ir.RankedTensorType.get(x_shape, x_type.element_type),
ir.RankedTensorType.get(w_shape, w_type.element_type),
ir.RankedTensorType.get(part_grad_shape, iv_type.element_type),
],
operands=[grad_output, invvar, x, weight],
backend_config=opaque,
operand_layouts=default_layouts(x_shape, (n1,), x_shape, w_shape),
result_layouts=default_layouts(x_shape, w_shape, part_grad_shape),
)
return out
mlir.register_lowering(
_rms_norm_bwd_p,
_rms_norm_bwd_cuda_lowering,
platform="gpu",
)
#######################
# Abstract evaluation #
#######################
def _rms_norm_fwd_abstract(x, weight, eps):
w_dtype = dtypes.canonicalize_dtype(weight.dtype)
iv_dtype = dtypes.canonicalize_dtype(x.dtype)
if iv_dtype in [jnp.float16, jnp.bfloat16]:
iv_dtype = jnp.float32
n2 = reduce(mul, weight.shape)
n1 = reduce(mul, x.shape) // n2
return (
ShapedArray(x.shape, w_dtype, named_shape=x.named_shape), # output
ShapedArray((n1,), iv_dtype, named_shape=x.named_shape), # invvar
)
_rms_norm_fwd_p.def_abstract_eval(_rms_norm_fwd_abstract)
def _rms_norm_bwd_abstract(grad_output, invvar, x, weight, eps):
iv_dtype = dtypes.canonicalize_dtype(invvar.dtype)
w_dtype = dtypes.canonicalize_dtype(weight.dtype)
x_dtype = dtypes.canonicalize_dtype(x.dtype)
n2 = reduce(lambda x, y: x * y, weight.shape)
n1 = reduce(lambda x, y: x * y, x.shape) // n2
part_grad_shape = (16, n2)
assert dtypes.canonicalize_dtype(grad_output.dtype) == w_dtype
assert grad_output.shape == x.shape
assert invvar.shape == (n1,)
    assert iv_dtype == (
        jnp.float32 if x_dtype in [jnp.float16, jnp.bfloat16] else x_dtype
    )
assert grad_output.named_shape == x.named_shape
    weight_named_shape = (
        weight.named_shape if weight.named_shape else grad_output.named_shape
    )
return (
ShapedArray(
x.shape, x_dtype, named_shape=x.named_shape
), # grad input
ShapedArray(
weight.shape, w_dtype, named_shape=weight_named_shape
), # grad weight
ShapedArray(
part_grad_shape, iv_dtype, named_shape=weight_named_shape
), # part grad
)
_rms_norm_bwd_p.def_abstract_eval(_rms_norm_bwd_abstract)
#######################################
# Top-level interface with custom vjp #
#######################################
@partial(jax.custom_vjp, nondiff_argnums=(2,))
def rms_norm(x, weight, eps=1e-05):
output, _ = rms_norm_fwd(x, weight, eps=eps)
return output
rms_norm.defvjp(rms_norm_fwd, rms_norm_bwd)
######################
# RMS norm with xmap #
######################
jax.config.update("experimental_xmap_spmd_lowering", True)
jax.config.update("experimental_xmap_spmd_lowering_manual", True)
def xmap_rms_norm(x, weight, *, device_count):
reshaped = x.reshape(device_count, x.shape[0] // device_count, *x.shape[1:])
xmapped = xmap(
rms_norm,
in_axes=(("x", None, None, None), (None, None)),
out_axes=("x", None, None, None),
axis_resources={"x": "x"},
)
reshaped_out = xmapped(reshaped, weight)
return reshaped_out.reshape(x.shape)
########
# Test #
########
import jax
per_core_batch_size = 4
seq_len = 512
emb_dim = 512
x = jax.random.normal(
jax.random.PRNGKey(0),
shape=(jax.local_device_count() * per_core_batch_size, seq_len, emb_dim),
dtype=jnp.bfloat16,
)
norm_shape = x.shape[-2:]
weight = jnp.ones(norm_shape, dtype=jnp.bfloat16)
def loss_ref(x, weight):
predictions = rms_norm(x, weight)
return -jnp.mean(predictions**2)
ref = jax.grad(loss_ref, argnums=(0, 1))(x, weight)
def loss(x, weight, *, device_count):
predictions = xmap_rms_norm(x, weight, device_count=device_count)
return -jnp.mean(predictions**2)
with Mesh(jax.local_devices(), ("x",)):
pjitted = pjit(
jax.grad(partial(loss, device_count=jax.local_device_count()), argnums=(0, 1)),
# Shard x by batch dimension and replicate weight on all devices.
in_shardings=(
PartitionSpec("x", None, None),
PartitionSpec(None, None),
),
# Shard the output by batch dimension and replicate weight grad on all devices.
out_shardings=(
PartitionSpec("x", None, None),
PartitionSpec(None, None),
),
)
out = pjitted(x, weight)
for r, o in zip(ref, out):
print(jnp.allclose(r, o, atol=1e-2, rtol=1e-2))
```
```
True
True
```
#### Appendix[#](#appendix)
##### `gpu_ops` code listing[#](#gpu-ops-code-listing)
###### `gpu_ops/kernel_helpers.h`[#](#gpu-ops-kernel-helpers-h)
```
// This header is not specific to our application and you'll probably want
// something like this for any extension you're building. This includes the
// infrastructure needed to serialize descriptors that are used with the
// "opaque" parameter of the GPU custom call. In our example we'll use this
// parameter to pass the size of our problem.
#ifndef _GPU_OPS_KERNEL_HELPERS_H_
#define _GPU_OPS_KERNEL_HELPERS_H_
#include <cstdint>
#include <cstring>  // for memcpy, used by bit_cast below
#include <stdexcept>
#include <string>
#include <type_traits>

#define JAX_APEX_WARP_SIZE 32
namespace gpu_ops {
// https://en.cppreference.com/w/cpp/numeric/bit_cast
template <class To, class From>
typename std::enable_if<sizeof(To) == sizeof(From) &&
std::is_trivially_copyable<From>::value &&
std::is_trivially_copyable<To>::value,
To>::type bit_cast(const From &src) noexcept {
static_assert(std::is_trivially_constructible<To>::value,
"This implementation additionally requires destination type to "
"be trivially constructible");
To dst;
memcpy(&dst, &src, sizeof(To));
return dst;
}
template <typename T> std::string PackDescriptorAsString(const T &descriptor) {
return std::string(bit_cast<const char *>(&descriptor), sizeof(T));
}
template <typename T>
const T *UnpackDescriptor(const char *opaque, std::size_t opaque_len) {
if (opaque_len != sizeof(T)) {
throw std::runtime_error("Invalid opaque object size");
}
return bit_cast<const T *>(opaque);
}
} // namespace gpu_ops
#endif
```
###### `gpu_ops/kernels.h`[#](#gpu-ops-kernels-h)
```
#ifndef _GPU_OPS_KERNELS_H_
#define _GPU_OPS_KERNELS_H_
#include <cuda_runtime_api.h>
#include <cstddef>
#include <cstdint>
namespace gpu_ops {
enum ElementType { BF16, F16, F32, F64 };
struct RMSNormDescriptor {
int n1;
int n2;
double eps;
ElementType x_type;
ElementType w_type;
int part_grad_size;
};
void rms_forward_affine_mixed_dtypes(cudaStream_t stream, void **buffers,
const char *opaque,
std::size_t opaque_len);
void rms_backward_affine(cudaStream_t stream, void **buffers,
const char *opaque, std::size_t opaque_len);
} // namespace gpu_ops
#endif
```
###### `gpu_ops/pybind11_kernel_helpers.h`[#](#gpu-ops-pybind11-kernel-helpers-h)
```
// This header extends kernel_helpers.h with the pybind11 specific interface to
// serializing descriptors. It also adds a pybind11 function for wrapping our
// custom calls in a Python capsule. This is separate from kernel_helpers so
// that the CUDA code itself doesn't include pybind11. I don't think that this
// is strictly necessary, but they do it in jaxlib, so let's do it here too.
#ifndef _GPU_OPS_PYBIND11_KERNEL_HELPERS_H_
#define _GPU_OPS_PYBIND11_KERNEL_HELPERS_H_
#include <pybind11/pybind11.h>
#include "kernel_helpers.h"
namespace gpu_ops {
template <typename T> pybind11::bytes PackDescriptor(const T &descriptor) {
return pybind11::bytes(PackDescriptorAsString(descriptor));
}
template <typename T> pybind11::capsule EncapsulateFunction(T *fn) {
return pybind11::capsule(bit_cast<void *>(fn), "xla._CUSTOM_CALL_TARGET");
}
} // namespace gpu_ops
#endif
```
###### `gpu_ops/gpu_ops.cpp`[#](#gpu-ops-gpu-ops-cpp)
```
#include "kernels.h"
#include "pybind11_kernel_helpers.h"
namespace {
pybind11::dict RMSNormRegistrations() {
pybind11::dict dict;
dict["rms_forward_affine_mixed_dtype"] =
gpu_ops::EncapsulateFunction(gpu_ops::rms_forward_affine_mixed_dtypes);
dict["rms_backward_affine"] =
gpu_ops::EncapsulateFunction(gpu_ops::rms_backward_affine);
return dict;
}
PYBIND11_MODULE(gpu_ops, m) {
m.def("get_rms_norm_registrations", &RMSNormRegistrations);
m.def("create_rms_norm_descriptor",
[](int n1, int n2, double eps, gpu_ops::ElementType x_type,
gpu_ops::ElementType w_type, int part_grad_size) {
return gpu_ops::PackDescriptor(gpu_ops::RMSNormDescriptor{
n1, n2, eps, x_type, w_type, part_grad_size});
});
pybind11::enum_<gpu_ops::ElementType>(m, "ElementType")
.value("BF16", gpu_ops::ElementType::BF16)
.value("F16", gpu_ops::ElementType::F16)
.value("F32", gpu_ops::ElementType::F32)
.value("F64", gpu_ops::ElementType::F64);
}
} // namespace
```
###### `gpu_ops/rms_norm_kernels.cu`[#](#gpu-ops-rms-norm-kernels-cu)
```
#include "kernel_helpers.h"
#include "kernels.h"
#include "stdio.h"
#include <cuda_bf16.h>
#include <cuda_fp16.h>
#include <iostream>
namespace {
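// Dispatch helper: maps the runtime (input, output) ElementType pair onto
// concrete template types scalar_t_in / scalar_t_out (with accscalar_t as
// the accumulation type -- float for half and bfloat16 inputs) and expands
// __VA_ARGS__ with those aliases in scope.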
#define DISPATCH_DOUBLE_FLOAT_HALF_AND_BFLOAT_INOUT_TYPES(TYPEIN, TYPEOUT, \
NAME, ...) \
switch (TYPEIN) { \
case gpu_ops::ElementType::F64: { \
using scalar_t_in = double; \
using accscalar_t = double; \
switch (TYPEOUT) { \
case gpu_ops::ElementType::F64: { \
using scalar_t_out = double; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::F32: { \
using scalar_t_out = float; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::F16: { \
using scalar_t_out = __half; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::BF16: { \
using scalar_t_out = __nv_bfloat16; \
__VA_ARGS__; \
break; \
} \
default: \
break; \
} \
break; \
} \
case gpu_ops::ElementType::F32: { \
using scalar_t_in = float; \
using accscalar_t = float; \
switch (TYPEOUT) { \
case gpu_ops::ElementType::F64: { \
using scalar_t_out = double; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::F32: { \
using scalar_t_out = float; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::F16: { \
using scalar_t_out = __half; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::BF16: { \
using scalar_t_out = __nv_bfloat16; \
__VA_ARGS__; \
break; \
} \
default: \
break; \
} \
break; \
} \
case gpu_ops::ElementType::F16: { \
using scalar_t_in = __half; \
using accscalar_t = float; \
switch (TYPEOUT) { \
case gpu_ops::ElementType::F64: { \
using scalar_t_out = double; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::F32: { \
using scalar_t_out = float; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::F16: { \
using scalar_t_out = __half; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::BF16: { \
using scalar_t_out = __nv_bfloat16; \
__VA_ARGS__; \
break; \
} \
default: \
break; \
} \
break; \
} \
case gpu_ops::ElementType::BF16: { \
using scalar_t_in = __nv_bfloat16; \
using accscalar_t = float; \
switch (TYPEOUT) { \
case gpu_ops::ElementType::F64: { \
using scalar_t_out = double; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::F32: { \
using scalar_t_out = float; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::F16: { \
using scalar_t_out = __half; \
__VA_ARGS__; \
break; \
} \
case gpu_ops::ElementType::BF16: { \
using scalar_t_out = __nv_bfloat16; \
__VA_ARGS__; \
break; \
} \
default: \
break; \
} \
break; \
} \
default: \
break; \
}
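// Welford's numerically stable single-pass update of (mean, variance, count)
// for one new sample; cuChanOnlineSum below merges two partial states using
// Chan et al.'s parallel combination formula.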
template <typename U>
__device__ void cuWelfordOnlineSum(const U curr, U &mu, U &sigma2, U &count) {
count = count + U(1);
U delta = curr - mu;
U lmean = mu + delta / count;
mu = lmean;
U delta2 = curr - lmean;
sigma2 = sigma2 + delta * delta2;
}
template <typename U>
__device__ void cuChanOnlineSum(const U muB, const U sigma2B, const U countB,
U &mu, U &sigma2, U &count) {
U delta = muB - mu;
U nA = count;
U nB = countB;
count = count + countB;
U nX = count;
if (nX > U(0)) {
nA = nA / nX;
nB = nB / nX;
mu = nA * mu + nB * muB;
sigma2 = sigma2 + sigma2B + delta * delta * nA * nB * nX;
} else {
mu = U(0);
sigma2 = U(0);
}
}
template <typename U> __device__ void cuRMSOnlineSum(const U curr, U &sigma2) {
sigma2 = sigma2 + curr * curr;
}
template <typename U>
__device__ void cuChanRMSOnlineSum(const U sigma2B, U &sigma2) {
sigma2 = sigma2 + sigma2B;
}
template <typename T, typename U>
__device__ void cuWelfordMuSigma2(const T *__restrict__ vals, const int n1,
const int n2, const int i1, U &mu, U &sigma2,
U *buf, bool rms_only) {
// Assumptions:
// 1) blockDim.x == warpSize
// 2) Tensor is contiguous
// 3) 2*blockDim.y*sizeof(U)+blockDim.y*sizeof(int) shared memory available.
//
// compute variance and mean over n2
U count = U(0);
mu = U(0);
sigma2 = U(0);
if (i1 < n1) {
// one warp normalizes one n1 index,
// synchronization is implicit
// initialize with standard Welford algorithm
const int numx = blockDim.x * blockDim.y;
const int thrx = threadIdx.x + threadIdx.y * blockDim.x;
const T *lvals = vals + i1 * n2;
int l = 4 * thrx;
for (; l + 3 < n2; l += 4 * numx) {
for (int k = 0; k < 4; ++k) {
U curr = static_cast<U>(lvals[l + k]);
if (!rms_only) {
cuWelfordOnlineSum<U>(curr, mu, sigma2, count);
} else {
cuRMSOnlineSum<U>(curr, sigma2);
}
}
}
for (; l < n2; ++l) {
U curr = static_cast<U>(lvals[l]);
if (!rms_only) {
cuWelfordOnlineSum<U>(curr, mu, sigma2, count);
} else {
cuRMSOnlineSum<U>(curr, sigma2);
}
}
// intra-warp reductions
for (int l = 0; l <= 4; ++l) {
int srcLaneB = (threadIdx.x + (1 << l)) & 31;
U sigma2B = __shfl_sync(0xffffffff, sigma2, srcLaneB, warpSize);
if (!rms_only) {
U muB = __shfl_sync(0xffffffff, mu, srcLaneB, warpSize);
U countB = __shfl_sync(0xffffffff, count, srcLaneB, warpSize);
cuChanOnlineSum<U>(muB, sigma2B, countB, mu, sigma2, count);
} else {
cuChanRMSOnlineSum<U>(sigma2B, sigma2);
}
}
// threadIdx.x == 0 has correct values for each warp
// inter-warp reductions
if (blockDim.y > 1) {
U *ubuf = (U *)buf;
U *ibuf = (U *)(ubuf + blockDim.y);
for (int offset = blockDim.y / 2; offset > 0; offset /= 2) {
// upper half of warps write to shared
if (threadIdx.x == 0 && threadIdx.y >= offset &&
threadIdx.y < 2 * offset) {
const int wrt_y = threadIdx.y - offset;
if (!rms_only) {
ubuf[2 * wrt_y] = mu;
ibuf[wrt_y] = count;
}
ubuf[2 * wrt_y + 1] = sigma2;
}
__syncthreads();
// lower half merges
if (threadIdx.x == 0 && threadIdx.y < offset) {
U sigma2B = ubuf[2 * threadIdx.y + 1];
if (!rms_only) {
U muB = ubuf[2 * threadIdx.y];
U countB = ibuf[threadIdx.y];
cuChanOnlineSum<U>(muB, sigma2B, countB, mu, sigma2, count);
} else {
cuChanRMSOnlineSum<U>(sigma2B, sigma2);
}
}
__syncthreads();
}
// threadIdx.x = 0 && threadIdx.y == 0 only thread that has correct values
if (threadIdx.x == 0 && threadIdx.y == 0) {
if (!rms_only) {
ubuf[0] = mu;
}
ubuf[1] = sigma2;
}
__syncthreads();
if (!rms_only) {
mu = ubuf[0];
}
sigma2 = ubuf[1] / U(n2);
// don't care about final value of count, we know count == n2
} else {
if (!rms_only) {
mu = __shfl_sync(0xffffffff, mu, 0, warpSize);
}
sigma2 = __shfl_sync(0xffffffff, sigma2 / U(n2), 0, warpSize);
}
}
}
template <>
__device__ void cuWelfordMuSigma2(const __half *__restrict__ vals, const int n1,
const int n2, const int i1, float &mu,
float &sigma2, float *buf, bool rms_only) {
// Assumptions:
// 1) blockDim.x == warpSize
// 2) Tensor is contiguous
// 3) 2*blockDim.y*sizeof(U)+blockDim.y*sizeof(int) shared memory available.
//
// compute variance and mean over n2
float count = 0.0f;
mu = float(0);
sigma2 = float(0);
if (i1 < n1) {
// one warp normalizes one n1 index,
// synchronization is implicit
// initialize with standard Welford algorithm
const int numx = blockDim.x * blockDim.y;
const int thrx = threadIdx.x + threadIdx.y * blockDim.x;
const __half *lvals = vals + i1 * n2;
int l = 8 * thrx;
if ((((size_t)lvals) & 3) != 0) {
// 16 bit alignment
// first thread consumes first point
if (thrx == 0) {
float curr = static_cast<float>(lvals[0]);
if (!rms_only) {
cuWelfordOnlineSum(curr, mu, sigma2, count);
} else {
cuRMSOnlineSum(curr, sigma2);
}
}
++l;
}
// at this point, lvals[l] are 32 bit aligned for all threads.
for (; l + 7 < n2; l += 8 * numx) {
for (int k = 0; k < 8; k += 2) {
float2 curr = __half22float2(*((__half2 *)(lvals + l + k)));
if (!rms_only) {
cuWelfordOnlineSum(curr.x, mu, sigma2, count);
cuWelfordOnlineSum(curr.y, mu, sigma2, count);
} else {
cuRMSOnlineSum(curr.x, sigma2);
cuRMSOnlineSum(curr.y, sigma2);
}
}
}
for (; l < n2; ++l) {
float curr = static_cast<float>(lvals[l]);
if (!rms_only) {
cuWelfordOnlineSum(curr, mu, sigma2, count);
} else {
cuRMSOnlineSum(curr, sigma2);
}
}
// intra-warp reductions
for (int l = 0; l <= 4; ++l) {
int srcLaneB = (threadIdx.x + (1 << l)) & 31;
float sigma2B = __shfl_sync(0xffffffff, sigma2, srcLaneB, warpSize);
if (!rms_only) {
float muB = __shfl_sync(0xffffffff, mu, srcLaneB, warpSize);
float countB = __shfl_sync(0xffffffff, count, srcLaneB, warpSize);
cuChanOnlineSum(muB, sigma2B, countB, mu, sigma2, count);
} else {
cuChanRMSOnlineSum(sigma2B, sigma2);
}
}
// threadIdx.x == 0 has correct values for each warp
// inter-warp reductions
if (blockDim.y > 1) {
float *ubuf = (float *)buf;
float *ibuf = (float *)(ubuf + blockDim.y);
for (int offset = blockDim.y / 2; offset > 0; offset /= 2) {
// upper half of warps write to shared
if (threadIdx.x == 0 && threadIdx.y >= offset &&
threadIdx.y < 2 * offset) {
const int wrt_y = threadIdx.y - offset;
ubuf[2 * wrt_y + 1] = sigma2;
if (!rms_only) {
ubuf[2 * wrt_y] = mu;
ibuf[wrt_y] = count;
}
}
__syncthreads();
// lower half merges
if (threadIdx.x == 0 && threadIdx.y < offset) {
float sigma2B = ubuf[2 * threadIdx.y + 1];
if (!rms_only) {
float muB = ubuf[2 * threadIdx.y];
float countB = ibuf[threadIdx.y];
cuChanOnlineSum(muB, sigma2B, countB, mu, sigma2, count);
} else {
cuChanRMSOnlineSum(sigma2B, sigma2);
}
}
__syncthreads();
}
// threadIdx.x = 0 && threadIdx.y == 0 only thread that has correct values
if (threadIdx.x == 0 && threadIdx.y == 0) {
if (!rms_only) {
ubuf[0] = mu;
}
ubuf[1] = sigma2;
}
__syncthreads();
if (!rms_only) {
mu = ubuf[0];
}
sigma2 = ubuf[1] / float(n2);
// don't care about final value of count, we know count == n2
} else {
if (!rms_only) {
mu = __shfl_sync(0xffffffff, mu, 0, warpSize);
}
sigma2 = __shfl_sync(0xffffffff, sigma2 / float(n2), 0, warpSize);
}
}
}
// This is the un-specialized struct. Note that we prevent instantiation of
// this struct by putting an undefined symbol in the function body so it won't
// compile.
// template <typename T>
// struct SharedMemory
// {
// // Ensure that we won't compile any un-specialized types
// __device__ T *getPointer()
// {
// extern __device__ void error(void);
// error();
// return NULL;
// }
// };
// https://github.com/NVIDIA/apex/issues/246
template <typename T> struct SharedMemory;
template <> struct SharedMemory<float> {
__device__ float *getPointer() {
extern __shared__ float s_float[];
return s_float;
}
};
template <> struct SharedMemory<double> {
__device__ double *getPointer() {
extern __shared__ double s_double[];
return s_double;
}
};
template <typename T, typename U, typename V>
__device__ void cuApplyLayerNorm_(V *__restrict__ output_vals,
U *__restrict__ mean, U *__restrict__ invvar,
const T *__restrict__ vals, const int n1,
const int n2, const U epsilon,
const V *__restrict__ gamma,
const V *__restrict__ beta, bool rms_only) {
// Assumptions:
// 1) blockDim.x == warpSize
// 2) Tensors are contiguous
//
for (auto i1 = blockIdx.y; i1 < n1; i1 += gridDim.y) {
SharedMemory<U> shared;
U *buf = shared.getPointer();
U mu, sigma2;
cuWelfordMuSigma2(vals, n1, n2, i1, mu, sigma2, buf, rms_only);
const T *lvals = vals + i1 * n2;
V *ovals = output_vals + i1 * n2;
U c_invvar = rsqrt(sigma2 + epsilon);
const int numx = blockDim.x * blockDim.y;
const int thrx = threadIdx.x + threadIdx.y * blockDim.x;
if (gamma != NULL && (beta != NULL || rms_only)) {
for (int i = thrx; i < n2; i += numx) {
U curr = static_cast<U>(lvals[i]);
if (!rms_only) {
ovals[i] =
gamma[i] * static_cast<V>(c_invvar * (curr - mu)) + beta[i];
} else {
ovals[i] = gamma[i] * static_cast<V>(c_invvar * curr);
}
}
} else {
for (int i = thrx; i < n2; i += numx) {
U curr = static_cast<U>(lvals[i]);
if (!rms_only) {
ovals[i] = static_cast<V>(c_invvar * (curr - mu));
} else {
ovals[i] = static_cast<V>(c_invvar * curr);
}
}
}
if (threadIdx.x == 0 && threadIdx.y == 0) {
if (!rms_only) {
mean[i1] = mu;
}
invvar[i1] = c_invvar;
}
__syncthreads();
}
}
template <typename T, typename U, typename V = T>
__global__ void cuApplyRMSNorm(V *__restrict__ output_vals, U *__restrict__ invvar,
const T *__restrict__ vals, const int n1, const int n2,
const U epsilon, const V *__restrict__ gamma) {
cuApplyLayerNorm_<T, U, V>(output_vals, NULL, invvar, vals, n1, n2, epsilon,
gamma, NULL, true);
}
template <typename T, typename U, typename V = T>
void HostApplyRMSNorm(cudaStream_t stream, V *output, U *invvar, const T *input,
int n1, int n2, double epsilon, const V *gamma) {
auto getMaxGridY = []() {
int device;
int val;
cudaGetDevice(&device);
cudaDeviceGetAttribute(&val, cudaDevAttrMaxGridDimY, device);
return val;
};
const dim3 threads(32, 4, 1);
const uint64_t maxGridY = getMaxGridY();
const dim3 blocks(1, std::min((uint64_t)n1, maxGridY), 1);
int nshared =
threads.y > 1 ? threads.y * sizeof(U) + (threads.y / 2) * sizeof(U) : 0;
cuApplyRMSNorm<<<blocks, threads, nshared, stream>>>(
output, invvar, input, n1, n2, U(epsilon), gamma);
}
template <typename T, typename U, typename V>
__device__ void cuLoadWriteStridedInputs(
const int i1_block, const int thr_load_row_off, const int thr_load_col_off,
const int i2_off, const int row_stride, U *warp_buf1, U *warp_buf2,
const T *input, const V *dout, const int i1_end, const int n2,
const U *__restrict__ mean, const U *__restrict__ invvar, bool rms_only) {
int i1 = i1_block + thr_load_row_off;
if (i1 < i1_end) {
U curr_mean;
if (!rms_only) {
curr_mean = mean[i1];
}
U curr_invvar = invvar[i1];
for (int k = 0; k < blockDim.y; ++k) {
int i2 = i2_off + k;
int load_idx = i1 * n2 + i2;
int write_idx = thr_load_row_off * row_stride + thr_load_col_off + k;
if (i2 < n2) {
U curr_input = static_cast<U>(input[load_idx]);
U curr_dout = static_cast<U>(dout[load_idx]);
if (!rms_only) {
warp_buf1[write_idx] = curr_dout;
warp_buf2[write_idx] =
curr_dout * (curr_input - curr_mean) * curr_invvar;
} else {
warp_buf2[write_idx] = curr_dout * (curr_input)*curr_invvar;
}
} else {
if (!rms_only) {
warp_buf1[write_idx] = U(0);
}
warp_buf2[write_idx] = U(0);
}
}
} else {
for (int k = 0; k < blockDim.y; ++k) {
int write_idx = thr_load_row_off * row_stride + thr_load_col_off + k;
if (!rms_only) {
warp_buf1[write_idx] = U(0);
}
warp_buf2[write_idx] = U(0);
}
}
}
template <typename T, typename U, typename V>
__device__ void cuLoadAddStridedInputs(
const int i1_block, const int thr_load_row_off, const int thr_load_col_off,
const int i2_off, const int row_stride, U *warp_buf1, U *warp_buf2,
const T *input, const V *dout, const int i1_end, const int n2,
const U *__restrict__ mean, const U *__restrict__ invvar, bool rms_only) {
int i1 = i1_block + thr_load_row_off;
if (i1 < i1_end) {
U curr_mean;
if (!rms_only) {
curr_mean = mean[i1];
}
U curr_invvar = invvar[i1];
for (int k = 0; k < blockDim.y; ++k) {
int i2 = i2_off + k;
int load_idx = i1 * n2 + i2;
int write_idx = thr_load_row_off * row_stride + thr_load_col_off + k;
if (i2 < n2) {
U curr_input = static_cast<U>(input[load_idx]);
U curr_dout = static_cast<U>(dout[load_idx]);
if (!rms_only) {
warp_buf1[write_idx] += curr_dout;
warp_buf2[write_idx] +=
curr_dout * (curr_input - curr_mean) * curr_invvar;
} else {
warp_buf2[write_idx] += curr_dout * (curr_input)*curr_invvar;
}
}
}
}
}
template <typename T, typename U, typename V>
__global__ void cuComputePartGradGammaBeta(
const V *__restrict__ dout, const T *__restrict__ input, const int n1,
const int n2, const U *__restrict__ mean, const U *__restrict__ invvar,
U epsilon, U *part_grad_gamma, U *part_grad_beta, bool rms_only) {
const int numsegs_n1 =
(n1 + blockDim.y * blockDim.y - 1) / (blockDim.y * blockDim.y);
const int segs_per_block = (numsegs_n1 + gridDim.y - 1) / gridDim.y;
const int i1_beg = blockIdx.y * segs_per_block * blockDim.y * blockDim.y;
const int i1_beg_plus_one =
(blockIdx.y + 1) * segs_per_block * blockDim.y * blockDim.y;
const int i1_end = i1_beg_plus_one < n1 ? i1_beg_plus_one : n1;
const int row_stride = blockDim.x + 1;
const int thr_load_col_off = (threadIdx.x * blockDim.y) & (blockDim.x - 1);
const int thr_load_row_off =
(threadIdx.x * blockDim.y) / blockDim.x + threadIdx.y * blockDim.y;
const int i2_off = blockIdx.x * blockDim.x + thr_load_col_off;
SharedMemory<U> shared;
U *buf = shared.getPointer(); // buf has at least blockDim.x * blockDim.y *
// blockDim.y + (blockDim.y -
// 1)*(blockDim.x/blockDim.y) elements
U *warp_buf1 = (U *)buf;
U *warp_buf2 = warp_buf1 + blockDim.y * blockDim.y * row_stride;
// compute partial sums from strided inputs
// do this to increase number of loads in flight
cuLoadWriteStridedInputs(i1_beg, thr_load_row_off, thr_load_col_off, i2_off,
row_stride, warp_buf1, warp_buf2, input, dout,
i1_end, n2, mean, invvar, rms_only);
for (int i1_block = i1_beg + blockDim.y * blockDim.y; i1_block < i1_end;
i1_block += blockDim.y * blockDim.y) {
cuLoadAddStridedInputs(i1_block, thr_load_row_off, thr_load_col_off, i2_off,
row_stride, warp_buf1, warp_buf2, input, dout,
i1_end, n2, mean, invvar, rms_only);
}
__syncthreads();
// inter-warp reductions
// sum within each warp
U acc1 = U(0);
U acc2 = U(0);
for (int k = 0; k < blockDim.y; ++k) {
int row1 = threadIdx.y + k * blockDim.y;
int idx1 = row1 * row_stride + threadIdx.x;
if (!rms_only) {
acc1 += warp_buf1[idx1];
}
acc2 += warp_buf2[idx1];
}
if (!rms_only) {
warp_buf1[threadIdx.y * row_stride + threadIdx.x] = acc1;
}
warp_buf2[threadIdx.y * row_stride + threadIdx.x] = acc2;
__syncthreads();
// sum all warps
for (int offset = blockDim.y / 2; offset > 1; offset /= 2) {
if (threadIdx.y < offset) {
int row1 = threadIdx.y;
int row2 = threadIdx.y + offset;
int idx1 = row1 * row_stride + threadIdx.x;
int idx2 = row2 * row_stride + threadIdx.x;
if (!rms_only) {
warp_buf1[idx1] += warp_buf1[idx2];
}
warp_buf2[idx1] += warp_buf2[idx2];
}
__syncthreads();
}
int i2 = blockIdx.x * blockDim.x + threadIdx.x;
if (threadIdx.y == 0 && i2 < n2) {
int row1 = threadIdx.y;
int row2 = threadIdx.y + 1;
int idx1 = row1 * row_stride + threadIdx.x;
int idx2 = row2 * row_stride + threadIdx.x;
if (!rms_only) {
part_grad_beta[blockIdx.y * n2 + i2] = warp_buf1[idx1] + warp_buf1[idx2];
}
part_grad_gamma[blockIdx.y * n2 + i2] = warp_buf2[idx1] + warp_buf2[idx2];
}
}
template <typename U, typename V>
__global__ void cuComputeGradGammaBeta(const U *part_grad_gamma, const U *part_grad_beta,
const int part_size, const int n1, const int n2,
V *grad_gamma, V *grad_beta, bool rms_only) {
// sum partial gradients for gamma and beta
SharedMemory<U> shared;
U *buf = shared.getPointer();
int i2 = blockIdx.x * blockDim.x + threadIdx.x;
if (i2 < n2) {
// each warp does sequential reductions until reduced part_size is num_warps
int num_warp_reductions = part_size / blockDim.y;
U sum_gamma = U(0);
U sum_beta = U(0);
const U *part_grad_gamma_ptr =
part_grad_gamma + threadIdx.y * num_warp_reductions * n2 + i2;
const U *part_grad_beta_ptr =
part_grad_beta + threadIdx.y * num_warp_reductions * n2 + i2;
for (int warp_offset = 0; warp_offset < num_warp_reductions;
++warp_offset) {
sum_gamma += part_grad_gamma_ptr[warp_offset * n2];
if (!rms_only) {
sum_beta += part_grad_beta_ptr[warp_offset * n2];
}
}
// inter-warp reductions
const int nbsize3 = blockDim.x * blockDim.y / 2;
for (int offset = blockDim.y / 2; offset >= 1; offset /= 2) {
// top half write to shared memory
if (threadIdx.y >= offset && threadIdx.y < 2 * offset) {
const int write_idx = (threadIdx.y - offset) * blockDim.x + threadIdx.x;
buf[write_idx] = sum_gamma;
if (!rms_only) {
buf[write_idx + nbsize3] = sum_beta;
}
}
__syncthreads();
// bottom half sums
if (threadIdx.y < offset) {
const int read_idx = threadIdx.y * blockDim.x + threadIdx.x;
sum_gamma += buf[read_idx];
if (!rms_only) {
sum_beta += buf[read_idx + nbsize3];
}
}
__syncthreads();
}
// write out fully summed gradients
if (threadIdx.y == 0) {
grad_gamma[i2] = sum_gamma;
if (!rms_only) {
grad_beta[i2] = sum_beta;
}
}
}
}
template <typename T, typename U, typename V>
__global__ void cuComputeGradInput(const V *__restrict__ dout, const T *__restrict__ input,
const int n1, const int n2, const U *__restrict__ mean,
const U *__restrict__ invvar, U epsilon, const V *gamma,
T *grad_input, bool rms_only) {
for (auto i1 = blockIdx.y; i1 < n1; i1 += gridDim.y) {
U sum_loss1 = U(0);
U sum_loss2 = U(0);
U c_mean;
if (!rms_only) {
c_mean = mean[i1];
}
const U c_invvar = invvar[i1];
const T *k_input = input + i1 * n2;
const V *k_dout = dout + i1 * n2;
const int numx = blockDim.x * blockDim.y;
const int thrx = threadIdx.x + threadIdx.y * blockDim.x;
if (gamma != NULL) {
int l = 4 * thrx;
for (; l + 3 < n2; l += 4 * numx) {
for (int k = 0; k < 4; ++k) {
const U c_h = static_cast<U>(k_input[l + k]);
const U c_loss = static_cast<U>(k_dout[l + k]);
if (!rms_only) {
sum_loss1 += c_loss * static_cast<U>(gamma[l + k]);
sum_loss2 += c_loss * static_cast<U>(gamma[l + k]) *
(c_h - c_mean) * c_invvar;
} else {
sum_loss2 += c_loss * static_cast<U>(gamma[l + k]) * (c_h)*c_invvar;
}
}
}
for (; l < n2; ++l) {
const U c_h = static_cast<U>(k_input[l]);
const U c_loss = static_cast<U>(k_dout[l]);
if (!rms_only) {
sum_loss1 += c_loss * static_cast<U>(gamma[l]);
sum_loss2 +=
c_loss * static_cast<U>(gamma[l]) * (c_h - c_mean) * c_invvar;
} else {
sum_loss2 += c_loss * static_cast<U>(gamma[l]) * (c_h)*c_invvar;
}
}
} else {
int l = 4 * thrx;
for (; l + 3 < n2; l += 4 * numx) {
for (int k = 0; k < 4; ++k) {
const U c_h = static_cast<U>(k_input[l + k]);
const U c_loss = static_cast<U>(k_dout[l + k]);
if (!rms_only) {
sum_loss1 += c_loss;
sum_loss2 += c_loss * (c_h - c_mean) * c_invvar;
} else {
sum_loss2 += c_loss * (c_h)*c_invvar;
}
}
}
for (; l < n2; ++l) {
const U c_h = static_cast<U>(k_input[l]);
const U c_loss = static_cast<U>(k_dout[l]);
if (!rms_only) {
sum_loss1 += c_loss;
sum_loss2 += c_loss * (c_h - c_mean) * c_invvar;
} else {
sum_loss2 += c_loss * (c_h)*c_invvar;
}
}
}
// intra-warp reductions
for (int mask = blockDim.x / 2; mask > 0; mask /= 2) {
if (!rms_only) {
sum_loss1 += __shfl_xor_sync(0xffffffff, sum_loss1, mask, warpSize);
}
sum_loss2 += __shfl_xor_sync(0xffffffff, sum_loss2, mask, warpSize);
}
// inter-warp reductions
if (blockDim.y > 1) {
SharedMemory<U> shared;
U *buf = shared.getPointer();
for (int offset = blockDim.y / 2; offset > 0; offset /= 2) {
// upper half of warps write to shared
if (threadIdx.y >= offset && threadIdx.y < 2 * offset) {
const int wrt_i = (threadIdx.y - offset) * blockDim.x + threadIdx.x;
if (!rms_only) {
buf[2 * wrt_i] = sum_loss1;
}
buf[2 * wrt_i + 1] = sum_loss2;
}
__syncthreads();
// lower half merges
if (threadIdx.y < offset) {
const int read_i = threadIdx.y * blockDim.x + threadIdx.x;
if (!rms_only) {
sum_loss1 += buf[2 * read_i];
}
sum_loss2 += buf[2 * read_i + 1];
}
__syncthreads();
}
if (threadIdx.y == 0) {
if (!rms_only) {
buf[2 * threadIdx.x] = sum_loss1;
}
buf[2 * threadIdx.x + 1] = sum_loss2;
}
__syncthreads();
if (threadIdx.y != 0) {
if (!rms_only) {
sum_loss1 = buf[2 * threadIdx.x];
}
sum_loss2 = buf[2 * threadIdx.x + 1];
}
}
// all threads now have the two sums over l
U fH = (U)n2;
U term1 = (U(1) / fH) * c_invvar;
T *k_grad_input = grad_input + i1 * n2;
if (gamma != NULL) {
for (int l = thrx; l < n2; l += numx) {
const U c_h = static_cast<U>(k_input[l]);
const U c_loss = static_cast<U>(k_dout[l]);
U f_grad_input = fH * c_loss * static_cast<U>(gamma[l]);
if (!rms_only) {
f_grad_input -= sum_loss1;
f_grad_input -= (c_h - c_mean) * c_invvar * sum_loss2;
} else {
f_grad_input -= (c_h)*c_invvar * sum_loss2;
}
f_grad_input *= term1;
k_grad_input[l] = static_cast<T>(f_grad_input);
}
} else {
for (int l = thrx; l < n2; l += numx) {
const U c_h = static_cast<U>(k_input[l]);
const U c_loss = static_cast<U>(k_dout[l]);
U f_grad_input = fH * c_loss;
if (!rms_only) {
f_grad_input -= sum_loss1;
f_grad_input -= (c_h - c_mean) * c_invvar * sum_loss2;
} else {
f_grad_input -= (c_h)*c_invvar * sum_loss2;
}
f_grad_input *= term1;
k_grad_input[l] = static_cast<T>(f_grad_input);
}
}
// prevent race where buf is written again before reads are done
__syncthreads();
}
}
template <typename T, typename U = float, typename V = T>
void HostRMSNormGradient(cudaStream_t stream, const V *dout, const U *invvar,
const T *input, int n1, int n2, const V *gamma,
double epsilon, T *grad_input, V *grad_gamma,
int part_size, U *part_grad_gamma) {
auto getMaxGridY = []() {
int device;
int val;
cudaGetDevice(&device);
cudaDeviceGetAttribute(&val, cudaDevAttrMaxGridDimY, device);
return val;
};
const uint64_t maxGridY = getMaxGridY();
if (gamma != NULL) {
const dim3 threads2(32, 4, 1);
const dim3 blocks2((n2 + threads2.x - 1) / threads2.x, part_size, 1);
const int nshared2_a =
2 * sizeof(U) * threads2.y * threads2.y * (threads2.x + 1);
const int nshared2_b = threads2.x * threads2.y * sizeof(U);
const int nshared2 = nshared2_a > nshared2_b ? nshared2_a : nshared2_b;
// note (mkozuki): I can hard code part_grad_gamma's dtype as float given
// that the `cuda_layer_norm_gradient` doesn't support double.
cuComputePartGradGammaBeta<<<blocks2, threads2, nshared2, stream>>>(
dout, input, n1, n2,
invvar, // unused
invvar, U(epsilon), part_grad_gamma, part_grad_gamma, /* unused */
true);
const dim3 threads3(32, 8, 1);
const dim3 blocks3((n2 + threads2.x - 1) / threads2.x, 1, 1);
const int nshared3 = threads3.x * threads3.y * sizeof(U);
cuComputeGradGammaBeta<<<blocks3, threads3, nshared3, stream>>>(
part_grad_gamma, part_grad_gamma, /* unused */
part_size, n1, n2, grad_gamma, grad_gamma, /* unused */
true);
}
// compute grad_input
const dim3 blocks1(1, std::min((uint64_t)n1, maxGridY), 1);
const dim3 threads1(32, 4, 1);
int nshared = threads1.y > 1 ? threads1.y * threads1.x * sizeof(U) : 0;
cuComputeGradInput<<<blocks1, threads1, nshared, stream>>>(
dout, input, n1, n2, invvar, /* unused */
invvar, U(epsilon), gamma, grad_input, true);
}
} // namespace
namespace gpu_ops {
void rms_forward_affine_mixed_dtypes(cudaStream_t stream, void **buffers,
const char *opaque,
std::size_t opaque_len) {
const RMSNormDescriptor &d =
*UnpackDescriptor<RMSNormDescriptor>(opaque, opaque_len);
DISPATCH_DOUBLE_FLOAT_HALF_AND_BFLOAT_INOUT_TYPES(
d.x_type, d.w_type, "rms_norm_cuda_kernel",
HostApplyRMSNorm<scalar_t_in, accscalar_t, scalar_t_out>(
stream, static_cast<scalar_t_out *>(buffers[2]),
static_cast<accscalar_t *>(buffers[3]),
static_cast<scalar_t_in *>(buffers[0]), d.n1, d.n2, d.eps,
/*gamma=*/static_cast<scalar_t_out *>(buffers[1]));)
}
void rms_backward_affine(cudaStream_t stream, void **buffers,
const char *opaque, std::size_t opaque_len) {
const RMSNormDescriptor &d =
*UnpackDescriptor<RMSNormDescriptor>(opaque, opaque_len);
DISPATCH_DOUBLE_FLOAT_HALF_AND_BFLOAT_INOUT_TYPES(
d.x_type, d.w_type, "cuComputeGradInputRMS",
HostRMSNormGradient(
stream,
/*dout=*/static_cast<scalar_t_out *>(buffers[0]),
/*invvar=*/static_cast<accscalar_t *>(buffers[1]),
/*input=*/static_cast<scalar_t_in *>(buffers[2]), d.n1, d.n2,
// TMJ pass NULL argument for gamma, beta, grad_gamma and grad_beta
// if gamma Tensor is NULL on input.
/*gamma=*/static_cast<scalar_t_out *>(buffers[3]), d.eps,
/*grad_input=*/static_cast<scalar_t_in *>(buffers[4]),
/*grad_gamma=*/static_cast<scalar_t_out *>(buffers[5]),
d.part_grad_size,
/*part_grad_gamma=*/static_cast<accscalar_t *>(buffers[6]));)
}
} // namespace gpu_ops
```
### Generalized Convolutions in JAX[#](#generalized-convolutions-in-jax)
JAX provides a number of interfaces to compute convolutions across data, including:
* [`jax.numpy.convolve()`](index.html#jax.numpy.convolve) (also [`jax.numpy.correlate()`](index.html#jax.numpy.correlate))
* [`jax.scipy.signal.convolve()`](index.html#jax.scipy.signal.convolve) (also [`correlate()`](index.html#jax.scipy.signal.correlate))
* [`jax.scipy.signal.convolve2d()`](index.html#jax.scipy.signal.convolve2d) (also [`correlate2d()`](index.html#jax.scipy.signal.correlate2d))
* [`jax.lax.conv_general_dilated()`](index.html#jax.lax.conv_general_dilated)
For basic convolution operations, the `jax.numpy` and `jax.scipy` operations are usually sufficient. If you want to do more general batched multi-dimensional convolution, the `jax.lax` function is where you should start.
#### Basic One-dimensional Convolution[#](#basic-one-dimensional-convolution)
Basic one-dimensional convolution is implemented by [`jax.numpy.convolve()`](index.html#jax.numpy.convolve), which provides a JAX interface for [`numpy.convolve()`](https://numpy.org/doc/stable/reference/generated/numpy.convolve.html#numpy.convolve). Here is a simple example of 1D smoothing implemented via a convolution:
```
import matplotlib.pyplot as plt
from jax import random
import jax.numpy as jnp
import numpy as np
key = random.PRNGKey(1701)
x = jnp.linspace(0, 10, 500)
y = jnp.sin(x) + 0.2 * random.normal(key, shape=(500,))
window = jnp.ones(10) / 10
y_smooth = jnp.convolve(y, window, mode='same')
plt.plot(x, y, 'lightgray')
plt.plot(x, y_smooth, 'black');
```
The `mode` parameter controls how boundary conditions are treated; here we use `mode='same'` to ensure that the output is the same size as the input.
For more information, see the [`jax.numpy.convolve()`](index.html#jax.numpy.convolve) documentation, or the documentation associated with the original [`numpy.convolve()`](https://numpy.org/doc/stable/reference/generated/numpy.convolve.html#numpy.convolve) function.
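As a quick check of the boundary modes (a minimal sketch with a made-up ten-point signal), the three `mode` options differ only in output length:
```
import jax.numpy as jnp

signal = jnp.arange(10.0)
window = jnp.ones(3) / 3
print(jnp.convolve(signal, window, mode='full').shape)   # (12,) = n + m - 1
print(jnp.convolve(signal, window, mode='same').shape)   # (10,) = n
print(jnp.convolve(signal, window, mode='valid').shape)  # (8,)  = n - m + 1
```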
#### Basic N-dimensional Convolution[#](#basic-n-dimensional-convolution)
For *N*-dimensional convolution, [`jax.scipy.signal.convolve()`](index.html#jax.scipy.signal.convolve) provides a similar interface to that of [`jax.numpy.convolve()`](index.html#jax.numpy.convolve), generalized to *N* dimensions.
For example, here is a simple approach to de-noising an image based on convolution with a Gaussian filter:
```
from scipy import misc
import jax.scipy as jsp
fig, ax = plt.subplots(1, 3, figsize=(12, 5))
# Load a sample image; compute mean() to convert from RGB to grayscale.
image = jnp.array(misc.face().mean(-1))
ax[0].imshow(image, cmap='binary_r')
ax[0].set_title('original')
# Create a noisy version by adding random Gaussian noise
key = random.PRNGKey(1701)
noisy_image = image + 50 * random.normal(key, image.shape)
ax[1].imshow(noisy_image, cmap='binary_r')
ax[1].set_title('noisy')
# Smooth the noisy image with a 2D Gaussian smoothing kernel.
x = jnp.linspace(-3, 3, 7)
window = jsp.stats.norm.pdf(x) * jsp.stats.norm.pdf(x[:, None])
smooth_image = jsp.signal.convolve(noisy_image, window, mode='same')
ax[2].imshow(smooth_image, cmap='binary_r')
ax[2].set_title('smoothed');
```
```
/tmp/ipykernel_1471/2619134571.py:7: DeprecationWarning: scipy.misc.face has been deprecated in SciPy v1.10.0; and will be completely removed in SciPy v1.12.0. Dataset methods have moved into the scipy.datasets module. Use scipy.datasets.face instead.
image = jnp.array(misc.face().mean(-1))
```
Like in the one-dimensional case, we use `mode='same'` to specify how we would like edges to be handled. For more information on available options in *N*-dimensional convolutions, see the [`jax.scipy.signal.convolve()`](index.html#jax.scipy.signal.convolve) documentation.
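The `mode` semantics carry over to higher dimensions. Here is a minimal sketch (with small made-up arrays) of how each choice affects the output shape in 2D:
```
import jax.numpy as jnp
import jax.scipy as jsp

a = jnp.ones((8, 8))  # input
k = jnp.ones((3, 3))  # kernel
print(jsp.signal.convolve(a, k, mode='full').shape)   # (10, 10)
print(jsp.signal.convolve(a, k, mode='same').shape)   # (8, 8)
print(jsp.signal.convolve(a, k, mode='valid').shape)  # (6, 6)
```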
#### General Convolutions[#](#general-convolutions)
For the more general types of batched convolutions often useful in the context of building deep neural networks, JAX and XLA offer the very general N-dimensional **conv_general_dilated** function, but it’s not very obvious how to use it. We’ll give some examples of the common use-cases.
A survey of the family of convolutional operators, [a guide to convolutional arithmetic](https://arxiv.org/abs/1603.07285), is highly recommended reading!
Let’s define a simple diagonal edge kernel:
```
# 2D kernel - HWIO layout
kernel = jnp.zeros((3, 3, 3, 3), dtype=jnp.float32)
kernel += jnp.array([[1, 1, 0],
[1, 0,-1],
[0,-1,-1]])[:, :, jnp.newaxis, jnp.newaxis]
print("Edge Conv kernel:")
plt.imshow(kernel[:, :, 0, 0]);
```
```
Edge Conv kernel:
```
And we’ll make a simple synthetic image:
```
# NHWC layout
img = jnp.zeros((1, 200, 198, 3), dtype=jnp.float32)
for k in range(3):
x = 30 + 60*k
y = 20 + 60*k
img = img.at[0, x:x+10, y:y+10, k].set(1.0)
print("Original Image:")
plt.imshow(img[0]);
```
```
Original Image:
```
##### lax.conv and lax.conv_with_general_padding[#](#lax-conv-and-lax-conv-with-general-padding)
These are simple convenience functions for convolutions.
⚠️ The convenience `lax.conv` and `lax.conv_with_general_padding` helper functions assume **NCHW** images and **OIHW** kernels.
```
from jax import lax

out = lax.conv(jnp.transpose(img,[0,3,1,2]),    # lhs = NCHW image tensor
               jnp.transpose(kernel,[3,2,0,1]), # rhs = OIHW conv kernel tensor
               (1, 1),  # window strides
               'SAME')  # padding mode
print("out shape: ", out.shape)
print("First output channel:")
plt.figure(figsize=(10,10))
plt.imshow(np.array(out)[0,0,:,:]);
```
```
out shape: (1, 3, 200, 198)
First output channel:
```
```
out = lax.conv_with_general_padding(
jnp.transpose(img,[0,3,1,2]), # lhs = NCHW image tensor
jnp.transpose(kernel,[2,3,0,1]), # rhs = IOHW conv kernel tensor
(1, 1), # window strides
((2,2),(2,2)), # general padding 2x2
(1,1), # lhs/image dilation
  (1,1))         # rhs/kernel dilation
print("out shape: ", out.shape)
print("First output channel:")
plt.figure(figsize=(10,10))
plt.imshow(np.array(out)[0,0,:,:]);
```
```
out shape: (1, 3, 202, 200)
First output channel:
```
##### Dimension Numbers define dimensional layout for conv_general_dilated[#](#dimension-numbers-define-dimensional-layout-for-conv-general-dilated)
The important argument is the 3-tuple of axis layout arguments:
(Input Layout, Kernel Layout, Output Layout)
* **N** - batch dimension
* **H** - spatial height
* **W** - spatial width
* **C** - channel dimension
* **I** - kernel *input* channel dimension
* **O** - kernel *output* channel dimension
⚠️ To demonstrate the flexibility of dimension numbers we choose a **NHWC** image and **HWIO** kernel convention for `lax.conv_general_dilated` below.
```
dn = lax.conv_dimension_numbers(img.shape, # only ndim matters, not shape
kernel.shape, # only ndim matters, not shape
                                ('NHWC', 'HWIO', 'NHWC'))  # the important bit
print(dn)
```
```
ConvDimensionNumbers(lhs_spec=(0, 3, 1, 2), rhs_spec=(3, 2, 0, 1), out_spec=(0, 3, 1, 2))
```
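Reading the specs: each tuple gives the axis positions of (batch, feature, spatial…) for the corresponding operand, which is why the NHWC input above maps to `lhs_spec=(0, 3, 1, 2)`. As a minimal sketch (only the number of dimensions matters here), the NCHW/OIHW convention that `lax.conv` itself expects maps to the identity permutation:
```
dn_nchw = lax.conv_dimension_numbers((1, 3, 200, 198),  # only ndim matters
                                     (3, 3, 3, 3),
                                     ('NCHW', 'OIHW', 'NCHW'))
print(dn_nchw)
# ConvDimensionNumbers(lhs_spec=(0, 1, 2, 3), rhs_spec=(0, 1, 2, 3), out_spec=(0, 1, 2, 3))
```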
###### SAME padding, no stride, no dilation[#](#same-padding-no-stride-no-dilation)
```
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1), # window strides
'SAME', # padding mode
(1,1), # lhs/image dilation
(1,1), # rhs/kernel dilation
                               dn)      # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape)
print("First output channel:")
plt.figure(figsize=(10,10))
plt.imshow(np.array(out)[0,:,:,0]);
```
```
out shape: (1, 200, 198, 3)
First output channel:
```
###### VALID padding, no stride, no dilation[#](#valid-padding-no-stride-no-dilation)
```
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1), # window strides
'VALID', # padding mode
(1,1), # lhs/image dilation
(1,1), # rhs/kernel dilation
                               dn)      # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape, "DIFFERENT from above!")
print("First output channel:")
plt.figure(figsize=(10,10))
plt.imshow(np.array(out)[0,:,:,0]);
```
```
out shape: (1, 198, 196, 3) DIFFERENT from above!
First output channel:
```
###### SAME padding, 2,2 stride, no dilation[#](#same-padding-2-2-stride-no-dilation)
```
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(2,2), # window strides
'SAME', # padding mode
(1,1), # lhs/image dilation
(1,1), # rhs/kernel dilation
                               dn)      # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape, " <-- half the size of above")
plt.figure(figsize=(10,10))
print("First output channel:")
plt.imshow(np.array(out)[0,:,:,0]);
```
```
out shape: (1, 100, 99, 3) <-- half the size of above
First output channel:
```
###### VALID padding, no stride, rhs kernel dilation ~ Atrous convolution (excessive to illustrate)[#](#valid-padding-no-stride-rhs-kernel-dilation-atrous-convolution-excessive-to-illustrate)
```
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1), # window strides
'VALID', # padding mode
(1,1), # lhs/image dilation
(12,12), # rhs/kernel dilation
                               dn)      # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape)
plt.figure(figsize=(10,10))
print("First output channel:")
plt.imshow(np.array(out)[0,:,:,0]);
```
```
out shape: (1, 176, 174, 3)
First output channel:
```
###### VALID padding, no stride, lhs=input dilation ~ Transposed Convolution[#](#valid-padding-no-stride-lhs-input-dilation-transposed-convolution)
```
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1), # window strides
((0, 0), (0, 0)), # padding mode
(2,2), # lhs/image dilation
(1,1), # rhs/kernel dilation
                               dn)      # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape, "<-- larger than original!")
plt.figure(figsize=(10,10))
print("First output channel:")
plt.imshow(np.array(out)[0,:,:,0]);
```
```
out shape: (1, 397, 393, 3) <-- larger than original!
First output channel:
```
We can use the last of these, for instance, to implement *transposed convolutions*:
```
# The following is equivalent to tensorflow:
# N,H,W,C = img.shape
# out = tf.nn.conv2d_transpose(img, kernel, (N,2*H,2*W,C), (1,2,2,1))
# transposed conv = 180deg kernel rotation plus LHS dilation
# rotate kernel 180deg:
kernel_rot = jnp.rot90(jnp.rot90(kernel, axes=(0,1)), axes=(0,1))
# need a custom output padding:
padding = ((2, 1), (2, 1))
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel_rot, # rhs = conv kernel tensor
(1,1), # window strides
padding, # padding mode
(2,2), # lhs/image dilation
(1,1), # rhs/kernel dilation
                               dn)      # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape, "<-- transposed_conv")
plt.figure(figsize=(10,10))
print("First output channel:")
plt.imshow(np.array(out)[0,:,:,0]);
```
```
out shape: (1, 400, 396, 3) <-- transposed_conv
First output channel:
```
##### 1D Convolutions[#](#d-convolutions)
You aren’t limited to 2D convolutions; a simple 1D demo is below:
```
# 1D kernel - WIO layout
kernel = jnp.array([[[1, 0, -1], [-1, 0, 1]],
[[1, 1, 1], [-1, -1, -1]]],
dtype=jnp.float32).transpose([2,1,0])
# 1D data - NWC layout
data = np.zeros((1, 200, 2), dtype=jnp.float32)
for i in range(2):
for k in range(2):
x = 35*i + 30 + 60*k
data[0, x:x+30, k] = 1.0
print("in shapes:", data.shape, kernel.shape)
plt.figure(figsize=(10,5))
plt.plot(data[0]);
dn = lax.conv_dimension_numbers(data.shape, kernel.shape,
('NWC', 'WIO', 'NWC'))
print(dn)
out = lax.conv_general_dilated(data, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,), # window strides
'SAME', # padding mode
(1,), # lhs/image dilation
(1,), # rhs/kernel dilation
                               dn)     # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape)
plt.figure(figsize=(10,5))
plt.plot(out[0]);
```
```
in shapes: (1, 200, 2) (3, 2, 2)
ConvDimensionNumbers(lhs_spec=(0, 2, 1), rhs_spec=(2, 1, 0), out_spec=(0, 2, 1))
out shape: (1, 200, 2)
```
##### 3D Convolutions[#](#id1)
```
import matplotlib as mpl
# Random 3D kernel - HWDIO layout
kernel = jnp.array([
[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
[[0, -1, 0], [-1, 0, -1], [0, -1, 0]],
[[0, 0, 0], [0, 1, 0], [0, 0, 0]]],
dtype=jnp.float32)[:, :, :, jnp.newaxis, jnp.newaxis]
# 3D data - NHWDC layout
data = jnp.zeros((1, 30, 30, 30, 1), dtype=jnp.float32)
x, y, z = np.mgrid[0:1:30j, 0:1:30j, 0:1:30j]
data += (jnp.sin(2*x*jnp.pi)*jnp.cos(2*y*jnp.pi)*jnp.cos(2*z*jnp.pi))[None,:,:,:,None]
print("in shapes:", data.shape, kernel.shape)
dn = lax.conv_dimension_numbers(data.shape, kernel.shape,
('NHWDC', 'HWDIO', 'NHWDC'))
print(dn)
out = lax.conv_general_dilated(data, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1,1), # window strides
'SAME', # padding mode
(1,1,1), # lhs/image dilation
(1,1,1), # rhs/kernel dilation
                               dn)       # dimension_numbers
print("out shape: ", out.shape)
# Make some simple 3d density plots:
from mpl_toolkits.mplot3d import Axes3D

def make_alpha(cmap):
my_cmap = cmap(jnp.arange(cmap.N))
my_cmap[:,-1] = jnp.linspace(0, 1, cmap.N)**3
return mpl.colors.ListedColormap(my_cmap)
my_cmap = make_alpha(plt.cm.viridis)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x.ravel(), y.ravel(), z.ravel(), c=data.ravel(), cmap=my_cmap)
ax.axis('off')
ax.set_title('input')
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x.ravel(), y.ravel(), z.ravel(), c=out.ravel(), cmap=my_cmap)
ax.axis('off')
ax.set_title('3D conv output');
```
```
in shapes: (1, 30, 30, 30, 1) (3, 3, 3, 1, 1)
ConvDimensionNumbers(lhs_spec=(0, 4, 1, 2, 3), rhs_spec=(4, 3, 0, 1, 2), out_spec=(0, 4, 1, 2, 3))
out shape: (1, 30, 30, 30, 1)
```
Developer Documentation[#](#developer-documentation)
---
JAX welcomes contributions from the community.
See below for various install guides to get set up as a developer, as well as developer-focused resources such as JAX Enhancement Proposals.
### Contributing to JAX[#](#contributing-to-jax)
Everyone can contribute to JAX, and we value everyone’s contributions. There are several ways to contribute, including:
* Answering questions on JAX’s [discussions page](https://github.com/google/jax/discussions)
* Improving or expanding JAX’s [documentation](http://jax.readthedocs.io/)
* Contributing to JAX’s [code-base](http://github.com/google/jax/)
* Contributing in any of the above ways to the broader ecosystem of [libraries built on JAX](https://github.com/google/jax#neural-network-libraries)
The JAX project follows [Google’s Open Source Community Guidelines](https://opensource.google/conduct/).
#### Ways to contribute[#](#ways-to-contribute)
We welcome pull requests, in particular for those issues marked with
[contributions welcome](https://github.com/google/jax/issues?q=is%3Aopen+is%3Aissue+label%3A%22contributions+welcome%22) or
[good first issue](https://github.com/google/jax/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
For other proposals, we ask that you first open a GitHub
[Issue](https://github.com/google/jax/issues/new/choose) or
[Discussion](https://github.com/google/jax/discussions)
to seek feedback on your planned contribution.
#### Contributing code using pull requests[#](#contributing-code-using-pull-requests)
We do all of our development using git, so basic knowledge is assumed.
Follow these steps to contribute code:
1. Sign the [Google Contributor License Agreement (CLA)](https://cla.developers.google.com/).
For more information, see the Pull Request Checklist below.
2. Fork the JAX repository by clicking the **Fork** button on the
[repository page](http://www.github.com/google/jax). This creates a copy of the JAX repository in your own account.
3. Install Python >= 3.9 locally in order to run tests.
4. Install your fork from source with `pip`. This allows you to modify the code and immediately test it out:
```
git clone https://github.com/YOUR_USERNAME/jax
cd jax
pip install -r build/test-requirements.txt  # Installs all testing requirements.
pip install -e ".[cpu]" # Installs JAX from the current directory in editable mode.
```
5. Add the JAX repo as an upstream remote, so you can use it to sync your changes.
```
git remote add upstream https://www.github.com/google/jax
```
6. Create a branch to develop from:
```
git checkout -b name-of-change
```
And implement your changes using your favorite editor (we recommend
[Visual Studio Code](https://code.visualstudio.com/)).
7. Make sure your code passes JAX’s lint and type checks, by running the following from the top of the repository:
```
pip install pre-commit
pre-commit run --all
```
See [Linting and Type-checking](#linting-and-type-checking) for more details.
8. Make sure the tests pass by running the following command from the top of the repository:
```
pytest -n auto tests/
```
JAX’s test suite is quite large, so if you know the specific test file that covers your changes, you can limit the tests to that; for example:
```
pytest -n auto tests/lax_scipy_test.py
```
You can narrow the tests further by using the `pytest -k` flag to match particular test names:
```
pytest -n auto tests/lax_scipy_test.py -k testLogSumExp
```
JAX also offers more fine-grained control over which particular tests are run;
see [Running the tests](index.html#running-tests) for more information.
9. Once you are satisfied with your change, create a commit as follows (
[how to write a commit message](https://chris.beams.io/posts/git-commit/)):
```
git add file1.py file2.py ...
git commit -m "Your commit message"
```
Then sync your code with the main repo:
```
git fetch upstream
git rebase upstream/main
```
Finally, push your commit on your development branch and create a remote branch in your fork that you can use to create a pull request from:
```
git push --set-upstream origin name-of-change
```
Please ensure your contribution is a single commit (see [Single-change commits and pull requests](#single-change-commits))
10. Create a pull request from the JAX repository and send it for review.
Check the [JAX pull request checklist](#pr-checklist) for considerations when preparing your PR, and consult [GitHub Help](https://help.github.com/articles/about-pull-requests/)
if you need more information on using pull requests.
#### JAX pull request checklist[#](#jax-pull-request-checklist)
As you prepare a JAX pull request, here are a few things to keep in mind:
##### Google contributor license agreement[#](#google-contributor-license-agreement)
Contributions to this project must be accompanied by a Google Contributor License Agreement (CLA). You (or your employer) retain the copyright to your contribution;
this simply gives us permission to use and redistribute your contributions as part of the project. Head over to <https://cla.developers.google.com/> to see your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you’ve already submitted one
(even if it was for a different project), you probably don’t need to do it again. If you’re not certain whether you’ve signed a CLA, you can open your PR and our friendly CI bot will check for you.
##### Single-change commits and pull requests[#](#single-change-commits-and-pull-requests)
A git commit ought to be a self-contained, single change with a descriptive message. This helps with review and with identifying or reverting changes if issues are uncovered later on.
**Pull requests typically comprise a single git commit.** (In some cases, for instance for large refactors or internal rewrites, they may contain several.)
In preparing a pull request for review, you may need to squash together multiple commits. We ask that you do this prior to sending the PR for review if possible. The `git rebase -i` command might be useful to this end.
##### Linting and Type-checking[#](#linting-and-type-checking)
JAX uses [mypy](https://mypy.readthedocs.io/) and [flake8](https://flake8.pycqa.org/)
to statically test code quality; the easiest way to run these checks locally is via the [pre-commit](https://pre-commit.com/) framework:
```
pip install pre-commit
pre-commit run --all
```
If your pull request touches documentation notebooks, this will also run some checks on those (See [Update notebooks](index.html#update-notebooks) for more details).
##### Full GitHub test suite[#](#full-github-test-suite)
Your PR will automatically be run through a full test suite on GitHub CI, which covers a range of Python versions, dependency versions, and configuration options.
It’s normal for these tests to turn up failures that you didn’t catch locally; to fix the issues you can push new commits to your branch.
##### Restricted test suite[#](#restricted-test-suite)
Once your PR has been reviewed, a JAX maintainer will mark it as `Pull Ready`. This will trigger a larger set of tests, including tests on GPU and TPU backends that are not available via standard GitHub CI. Detailed results of these tests are not publicly viewable, but the JAX maintainer assigned to your PR will communicate with you regarding any failures these might uncover; it’s not uncommon, for example, that numerical tests need different tolerances on TPU than on CPU.
### Building from source[#](#building-from-source)
First, obtain the JAX source code:
```
git clone https://github.com/google/jax
cd jax
```
Building JAX involves two steps:
1. Building or installing `jaxlib`, the C++ support library for `jax`.
2. Installing the `jax` Python package.
#### Building or installing `jaxlib`[#](#building-or-installing-jaxlib)
##### Installing `jaxlib` with pip[#](#installing-jaxlib-with-pip)
If you’re only modifying Python portions of JAX, we recommend installing
`jaxlib` from a prebuilt wheel using pip:
```
pip install jaxlib
```
See the [JAX readme](https://github.com/google/jax#installation) for full guidance on pip installation (e.g., for GPU and TPU support).
##### Building `jaxlib` from source[#](#building-jaxlib-from-source)
To build `jaxlib` from source, you must also install some prerequisites:
* a C++ compiler (g++, clang, or MSVC)
On Ubuntu or Debian you can install the necessary prerequisites with:
```
sudo apt install g++ python python3-dev
```
If you are building on a Mac, make sure XCode and the XCode command line tools are installed.
See below for Windows build instructions.
* Python packages: `numpy`, `wheel`, `build`.
You can install the necessary Python dependencies using `pip`:
```
pip install numpy wheel build
```
To build `jaxlib` without CUDA GPU or TPU support (CPU only), you can run:
```
python build/build.py
pip install dist/*.whl  # installs jaxlib (includes XLA)
```
To build `jaxlib` with CUDA support, use `python build/build.py --enable_cuda`;
to build with TPU support, use `python build/build.py --enable_tpu`.
See `python build/build.py --help` for configuration options, including ways to specify the paths to CUDA and CUDNN, which you must have installed. Here
`python` should be the name of your Python 3 interpreter; on some systems, you may need to use `python3` instead. By default, the wheel is written to the
`dist/` subdirectory of the current directory.
##### Building jaxlib from source with a modified XLA repository.[#](#building-jaxlib-from-source-with-a-modified-xla-repository)
JAX depends on XLA, whose source code is in the
[XLA GitHub repository](https://github.com/openxla/xla).
By default JAX uses a pinned copy of the XLA repository, but we often want to use a locally-modified copy of XLA when working on JAX. There are two ways to do this:
* use Bazel’s `override_repository` feature, which you can pass as a command line flag to `build.py` as follows:
```
python build/build.py --bazel_options=--override_repository=xla=/path/to/xla
```
* modify the `WORKSPACE` file in the root of the JAX source tree to point to a different XLA tree.
To contribute changes back to XLA, send PRs to the XLA repository.
The version of XLA pinned by JAX is regularly updated, but is updated in particular before each `jaxlib` release.
##### Additional Notes for Building `jaxlib` from source on Windows[#](#additional-notes-for-building-jaxlib-from-source-on-windows)
On Windows, follow [Install Visual Studio](https://docs.microsoft.com/en-us/visualstudio/install/install-visual-studio?view=vs-2019)
to set up a C++ toolchain. Visual Studio 2019 version 16.5 or newer is required.
If you need to build with CUDA enabled, follow the
[CUDA Installation Guide](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)
to set up a CUDA environment.
JAX builds use symbolic links, which require that you activate
[Developer Mode](https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development).
You can either install Python using its
[Windows installer](https://www.python.org/downloads/), or if you prefer, you can use [Anaconda](https://docs.anaconda.com/anaconda/install/windows/)
or [Miniconda](https://docs.conda.io/en/latest/miniconda.html#windows-installers)
to set up a Python environment.
Some targets of Bazel use bash utilities to do scripting, so [MSYS2](https://www.msys2.org)
is needed. See [Installing Bazel on Windows](https://bazel.build/install/windows#install-compilers)
for more details. Install the following packages:
```
pacman -S patch coreutils
```
Once coreutils is installed, the realpath command should be present in your shell’s path.
Once everything is installed, open PowerShell and make sure MSYS2 is on the path of the current session. Ensure `bazel`, `patch` and `realpath` are accessible. Activate the conda environment. The following command builds with CUDA enabled; adjust it to whatever suits your setup:
```
python .\build\build.py `
--enable_cuda `
--cuda_path='C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1' `
--cudnn_path='C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1' `
--cuda_version='10.1' `
--cudnn_version='7.6.5'
```
To build with debug information, add the flag `--bazel_options='--copt=/Z7'`.
##### Additional notes for building a ROCM `jaxlib` for AMD GPUs[#](#additional-notes-for-building-a-rocm-jaxlib-for-amd-gpus)
You need several ROCM/HIP libraries installed to build for ROCM. For example, on an Ubuntu machine with
[AMD’s `apt` repositories available](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html),
you need a number of packages installed:
```
sudo apt install miopen-hip hipfft-dev rocrand-dev hipsparse-dev hipsolver-dev \
rccl-dev rccl hip-dev rocfft-dev roctracer-dev hipblas-dev rocm-device-libs
```
To build jaxlib with ROCM support, you can run the following build command,
suitably adjusted for your paths and ROCM version.
```
python build/build.py --enable_rocm --rocm_path=/opt/rocm-5.7.0
```
AMD’s fork of the XLA repository may include fixes not present in the upstream XLA repository. If you experience problems with the upstream repository, you can try AMD’s fork by cloning their repository:
```
git clone https://github.com/ROCmSoftwarePlatform/xla.git
```
and override the XLA repository with which JAX is built:
```
python build/build.py --enable_rocm --rocm_path=/opt/rocm-5.7.0 \
--bazel_options=--override_repository=xla=/path/to/xla-rocm
```
#### Installing `jax`[#](#installing-jax)
Once `jaxlib` has been installed, you can install `jax` by running:
```
pip install -e . # installs jax
```
To upgrade to the latest version from GitHub, just run `git pull` from the JAX repository root, and rebuild by running `build.py` or upgrading `jaxlib` if necessary. You shouldn’t have to reinstall `jax` because `pip install -e`
sets up symbolic links from site-packages into the repository.
#### Running the tests[#](#running-the-tests)
First, install the dependencies by running `pip install -r build/test-requirements.txt`.
There are two supported mechanisms for running the JAX tests, either using Bazel or using pytest.
##### Using Bazel[#](#using-bazel)
First, configure the JAX build by running:
```
python build/build.py --configure_only
```
You may pass additional options to `build.py` to configure the build; see the
`jaxlib` build documentation for details.
By default the Bazel build runs the JAX tests using `jaxlib` built from source.
To run JAX tests, run:
```
bazel test //tests:cpu_tests //tests:backend_independent_tests
```
`//tests:gpu_tests` and `//tests:tpu_tests` are also available, if you have the necessary hardware.
To use a preinstalled `jaxlib` instead of building `jaxlib` from source, run:
```
bazel test --//jax:build_jaxlib=false //tests:cpu_tests //tests:backend_independent_tests
```
A number of test behaviors can be controlled using environment variables (see below). Environment variables may be passed to JAX tests using the
`--test_env=FLAG=value` flag to Bazel.
Some JAX tests require multiple accelerators (i.e. GPUs or TPUs). When JAX is already installed, you can run the GPU tests like this:
```
bazel test //tests:gpu_tests --local_test_jobs=4 --test_tag_filters=multiaccelerator --//jax:build_jaxlib=false --test_env=XLA_PYTHON_CLIENT_ALLOCATOR=platform
```
You can speed up single-accelerator tests by running them in parallel on multiple accelerators. This also triggers multiple concurrent tests per accelerator. For GPUs, you can do it like this:
```
NB_GPUS=2 JOBS_PER_ACC=4 J=$((NB_GPUS * JOBS_PER_ACC))
MULTI_GPU="--run_under $PWD/build/parallel_accelerator_execute.sh --test_env=JAX_ACCELERATOR_COUNT=${NB_GPUS} --test_env=JAX_TESTS_PER_ACCELERATOR=${JOBS_PER_ACC} --local_test_jobs=$J"
bazel test //tests:gpu_tests //tests:backend_independent_tests --test_env=XLA_PYTHON_CLIENT_PREALLOCATE=false --test_tag_filters=-multiaccelerator $MULTI_GPU
```
##### Using `pytest`[#](#using-pytest)
To run all the JAX tests using `pytest`, we recommend using `pytest-xdist`,
which can run tests in parallel. It is installed as part of the
`pip install -r build/test-requirements.txt` command.
From the repository root directory run:
```
pytest -n auto tests
```
##### Controlling test behavior[#](#controlling-test-behavior)
JAX generates test cases combinatorially, and you can control the number of cases that are generated and checked for each test (default is 10) using the
`JAX_NUM_GENERATED_CASES` environment variable. The automated tests currently use 25 by default.
For example, one might write
```
# Bazel
bazel test //tests/... --test_env=JAX_NUM_GENERATED_CASES=25
```
or
```
# pytest
JAX_NUM_GENERATED_CASES=25 pytest -n auto tests
```
The automated tests also run the test suite with 64-bit floats and ints as the default
(`JAX_ENABLE_X64`):
```
JAX_ENABLE_X64=1 JAX_NUM_GENERATED_CASES=25 pytest -n auto tests
```
You can run a more specific set of tests using
[pytest](https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests)’s built-in selection mechanisms, or alternatively you can run a specific test file directly to see more detailed information about the cases being run:
```
JAX_NUM_GENERATED_CASES=5 python tests/lax_numpy_test.py
```
You can skip a few tests known to be slow by setting the environment variable `JAX_SKIP_SLOW_TESTS=1`.
To specify a particular set of tests to run from a test file, you can pass a string or regular expression via the `--test_targets` flag. For example, you can run all the tests of `jax.numpy.pad` using:
```
python tests/lax_numpy_test.py --test_targets="testPad"
```
The Colab notebooks are tested for errors as part of the documentation build.
##### Doctests[#](#doctests)
JAX uses pytest in doctest mode to test the code examples within the documentation.
You can run this using
```
pytest docs
```
Additionally, JAX runs pytest in `doctest-modules` mode to ensure code examples in function docstrings will run correctly. You can run this locally using, for example:
```
pytest --doctest-modules jax/_src/numpy/lax_numpy.py
```
Keep in mind that there are several files that are marked to be skipped when the doctest command is run on the full package; you can see the details in
[`ci-build.yaml`](https://github.com/google/jax/blob/main/.github/workflows/ci-build.yaml).
#### Type checking[#](#type-checking)
We use `mypy` to check the type hints. To check types locally the same way as the CI checks them:
```
pip install mypy
mypy --config=pyproject.toml --show-error-codes jax
```
Alternatively, you can use the [pre-commit](https://pre-commit.com/) framework to run this on all staged files in your git repository, automatically using the same mypy version as in the GitHub CI:
```
pre-commit run mypy
```
#### Linting[#](#linting)
JAX uses the [flake8](https://flake8.pycqa.org/) linter to ensure code quality. You can check your local changes by running:
```
pip install flake8
flake8 jax
```
Alternatively, you can use the [pre-commit](https://pre-commit.com/) framework to run this on all staged files in your git repository, automatically using the same flake8 version as the GitHub tests:
```
pre-commit run flake8
```
#### Update documentation[#](#update-documentation)
To rebuild the documentation, install several packages:
```
pip install -r docs/requirements.txt
```
And then run:
```
sphinx-build -b html docs docs/build/html -j auto
```
This can take a long time because it executes many of the notebooks in the documentation source;
if you’d prefer to build the docs without executing the notebooks, you can run:
```
sphinx-build -b html -D nb_execution_mode=off docs docs/build/html -j auto
```
You can then see the generated documentation in `docs/build/html/index.html`.
The `-j auto` option controls the parallelism of the build. You can use a number in place of `auto` to control how many CPU cores to use.
##### Update notebooks[#](#update-notebooks)
We use [jupytext](https://jupytext.readthedocs.io/) to maintain two synced copies of the notebooks in `docs/notebooks`: one in `ipynb` format, and one in `md` format. The advantage of the former is that it can be opened and executed directly in Colab; the advantage of the latter is that it makes it much easier to track diffs within version control.
###### Editing `ipynb`[#](#editing-ipynb)
For making large changes that substantially modify code and outputs, it is easiest to edit the notebooks in Jupyter or in Colab. To edit notebooks in the Colab interface,
open <http://colab.research.google.com> and `Upload` from your local repo.
Update it as needed, `Run all cells` then `Download ipynb`.
You may want to test that it executes properly, using `sphinx-build` as explained above.
###### Editing `md`[#](#editing-md)
For making smaller changes to the text content of the notebooks, it is easiest to edit the
`.md` versions using a text editor.
###### Syncing notebooks[#](#syncing-notebooks)
After editing either the ipynb or md versions of the notebooks, you can sync the two versions using [jupytext](https://jupytext.readthedocs.io/) by running `jupytext --sync` on the updated notebooks; for example:
```
pip install jupytext==1.14.7
jupytext --sync docs/notebooks/quickstart.ipynb
```
The jupytext version should match that specified in
[.pre-commit-config.yaml](https://github.com/google/jax/blob/main/.pre-commit-config.yaml).
To check that the markdown and ipynb files are properly synced, you may use the
[pre-commit](https://pre-commit.com/) framework to perform the same check used by the GitHub CI:
```
git add docs -u # pre-commit runs on files in git staging.
pre-commit run jupytext
```
###### Creating new notebooks[#](#creating-new-notebooks)
If you are adding a new notebook to the documentation and would like to use the `jupytext --sync`
command discussed here, you can set up your notebook for jupytext by using the following command:
```
jupytext --set-formats ipynb,md:myst path/to/the/notebook.ipynb
```
This works by adding a `"jupytext"` metadata field to the notebook file which specifies the desired formats, and which the `jupytext --sync` command recognizes when invoked.
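To see what that metadata looks like, here is a small sketch (assuming [nbformat](https://nbformat.readthedocs.io/) is installed; the notebook path is illustrative) that reads the field back:
```
import nbformat

# Read a synced notebook and print the "jupytext" metadata field that
# `jupytext --set-formats` wrote into it.
nb = nbformat.read("docs/notebooks/quickstart.ipynb", as_version=4)
print(nb.metadata.get("jupytext"))  # e.g. {'formats': 'ipynb,md:myst', ...}
```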
###### Notebooks within the Sphinx build[#](#notebooks-within-the-sphinx-build)
Some of the notebooks are built automatically as part of the pre-submit checks and as part of the [Read the docs](https://jax.readthedocs.io/en/latest) build.
The build will fail if cells raise errors. If the errors are intentional, you can either catch them,
or tag the cell with `raises-exceptions` metadata ([example PR](https://github.com/google/jax/pull/2402/files)).
You have to add this metadata by hand in the `.ipynb` file. It will be preserved when somebody else re-saves the notebook.
We exclude some notebooks from the build, e.g., because they contain long computations.
See `exclude_patterns` in [conf.py](https://github.com/google/jax/blob/main/docs/conf.py).
##### Documentation building on `readthedocs.io`[#](#documentation-building-on-readthedocs-io)
JAX’s auto-generated documentation is at <https://jax.readthedocs.io/>.
The documentation building is controlled for the entire project by the
[readthedocs JAX settings](https://readthedocs.org/dashboard/jax). The current settings trigger a documentation build as soon as code is pushed to the GitHub `main` branch.
For each code version, the building process is driven by the
`.readthedocs.yml` and the `docs/conf.py` configuration files.
For each automated documentation build you can see the
[documentation build logs](https://readthedocs.org/projects/jax/builds/).
If you want to test the documentation generation on Readthedocs, you can push code to the `test-docs`
branch. That branch is also built automatically, and you can see the generated documentation [here](https://jax.readthedocs.io/en/test-docs/). If the documentation build fails you may want to [wipe the build environment for test-docs](https://docs.readthedocs.io/en/stable/guides/wipe-environment.html).
For a local test, I was able to do it in a fresh directory by replaying the commands I saw in the Readthedocs logs:
```
mkvirtualenv jax-docs  # A new virtualenv
mkdir jax-docs         # A new directory
cd jax-docs
git clone --no-single-branch --depth 50 https://github.com/google/jax
cd jax
git checkout --force origin/test-docs
git clean -d -f -f
workon jax-docs

python -m pip install --upgrade --no-cache-dir pip
python -m pip install --upgrade --no-cache-dir -I Pygments==2.3.1 setuptools==41.0.1 docutils==0.14 mock==1.0.1 pillow==5.4.1 "alabaster>=0.7,<0.8,!=0.7.5" commonmark==0.8.1 recommonmark==0.5.0 'sphinx<2' 'sphinx-rtd-theme<0.5' 'readthedocs-sphinx-ext<1.1'

python -m pip install --exists-action=w --no-cache-dir -r docs/requirements.txt
cd docs
python `which sphinx-build` -T -E -b html -d _build/doctrees-readthedocs -D language=en . _build/html
```
### Internal APIs[#](#internal-apis)
#### core[#](#module-jax.core)
| | |
| --- | --- |
| [`Jaxpr`](index.html#jax.core.Jaxpr)(constvars, invars, outvars, eqns[, ...]) | |
| [`ClosedJaxpr`](index.html#jax.core.ClosedJaxpr)(jaxpr, consts) | |
### Autodidax: JAX core from scratch[#](#autodidax-jax-core-from-scratch)
Ever want to learn how JAX works, but the implementation seemed impenetrable?
Well, you’re in luck! By reading this tutorial, you’ll learn every big idea in JAX’s core system. You’ll even get clued into our weird jargon!
**This is a work-in-progress draft.** There are some important ingredients missing, still to come in parts 5 and 6 (and more?). There are also some simplifications here that we haven’t yet applied to the main system, but we will.
#### Part 1: Transformations as interpreters: standard evaluation, `jvp`, and `vmap`[#](#part-1-transformations-as-interpreters-standard-evaluation-jvp-and-vmap)
We want to transform functions that look like this:
```
def f(x):
y = sin(x) * 2.
z = - y + x
return z
```
Think of functions like `sin` and the arithmetic operations underlying the infix operators (`mul`, `add`, and `neg`) as primitive operations, meaning atomic units of processing rather than compositions.
“Transform” means “interpret differently.” Instead of standard interpretation where we apply primitive operations to numerical inputs to produce numerical outputs, we want to override primitive application and let different values flow through our program. For example, we might want to replace the application of every primitive with an application of [its JVP rule](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html),
and let primal-tangent pairs flow through our program. Moreover, we want to be able to compose multiple transformations, leading to stacks of interpreters.
##### JAX core machinery[#](#jax-core-machinery)
We can implement stacks of interpreters and even have them all discharge on the fly as we execute the Python function to be transformed. To start, let’s define these primitives so that we can intercept their application:
```
from typing import NamedTuple
class Primitive(NamedTuple):
name: str
add_p = Primitive('add')
mul_p = Primitive('mul')
neg_p = Primitive("neg")
sin_p = Primitive("sin")
cos_p = Primitive("cos")
reduce_sum_p = Primitive("reduce_sum")
greater_p = Primitive("greater")
less_p = Primitive("less")
transpose_p = Primitive("transpose")
broadcast_p = Primitive("broadcast")
def add(x, y): return bind1(add_p, x, y)
def mul(x, y): return bind1(mul_p, x, y)
def neg(x): return bind1(neg_p, x)
def sin(x): return bind1(sin_p, x)
def cos(x): return bind1(cos_p, x)
def greater(x, y): return bind1(greater_p, x, y)
def less(x, y): return bind1(less_p, x, y)
def transpose(x, perm): return bind1(transpose_p, x, perm=perm)
def broadcast(x, shape, axes): return bind1(broadcast_p, x, shape=shape, axes=axes)
def reduce_sum(x, axis=None):
if axis is None:
axis = tuple(range(np.ndim(x)))
if type(axis) is int:
axis = (axis,)
return bind1(reduce_sum_p, x, axis=axis)
def bind1(prim, *args, **params):
out, = bind(prim, *args, **params)
return out
```
We’ll set up array data types and infix operator methods in a moment.
A `Primitive` is just an object with a name, to which we attach our interpretation rules (one for each transformation). The `bind` function is our interception point: it’ll figure out which transformation rule to apply, based on how the arguments are boxed in tracers and what interpreters are active.
The functions that user code calls, like `add` and `sin`, are just wrappers around calls to `bind`. These wrappers let us control how arguments are passed to `bind`, and in particular we follow a handy internal convention: when we call `bind`, we pass values representing array data as positional arguments,
and we pass metadata like the `axis` argument to `reduce_sum_p` via keyword. This calling convention simplifies some core logic (since e.g. instances of the
`Tracer` class to be defined below can only occur in positional arguments to
`bind`). The wrappers can also provide docstrings!
We represent active interpreters as a stack. The stack is just a simple
`list`, and each element is a container with an integer level (corresponding to the element’s height in the stack), an interpreter type (which we’ll call a
`trace_type`), and an optional field for any global data the interpreter needs. We call each element a `MainTrace`, though maybe “Interpreter” would be more descriptive.
```
from collections.abc import Sequence
from contextlib import contextmanager
from typing import Optional, Any
class MainTrace(NamedTuple):
level: int
trace_type: type['Trace']
global_data: Optional[Any]
trace_stack: list[MainTrace] = []
dynamic_trace: Optional[MainTrace] = None # to be employed in Part 3
@contextmanager
def new_main(trace_type: type['Trace'], global_data=None):
level = len(trace_stack)
main = MainTrace(level, trace_type, global_data)
trace_stack.append(main)
try:
yield main
finally:
trace_stack.pop()
```
When we’re about to apply a transformation, we’ll push another interpreter onto the stack using `new_main`. Then, as we apply primitives in the function,
we can think of the `bind` first being interpreted by the trace at the top of the stack (i.e. with the highest level). If that first interpreter itself binds other primitives in its interpretation rule for the primitive, like how the JVP rule of `sin_p` might bind `cos_p` and `mul_p`, then those `bind`
calls will be handled by the interpreter at the next level down.
What goes at the bottom of the interpreter stack? At the bottom, we know all the transformation interpreters are finished, and we just want to do standard evaluation. So at the bottom we’ll put an evaluation interpreter.
Let’s sketch out the interface for interpreters, which is based on the `Trace`
and `Tracer` base classes. A `Tracer` represents a boxed-up value, perhaps carrying some extra context data used by the interpreter. A `Trace` handles boxing up values into `Tracers` and also handles primitive application.
```
class Trace:
main: MainTrace
def __init__(self, main: MainTrace) -> None:
self.main = main
def pure(self, val): assert False # must override
def lift(self, val): assert False # must override
def process_primitive(self, primitive, tracers, params):
assert False # must override
```
The first two methods are about boxing up values in `Tracer`s, which are the objects that flow through the Python programs we transform. The last method is the callback we’ll use to interpret primitive application.
The `Trace` itself doesn’t contain any data, other than a reference to its corresponding `MainTrace` instance. In fact, multiple instances of a `Trace`
might be created and discarded during an application of a transformation,
whereas only a single `MainTrace` instance is created per application of a transformation.
As for `Tracer`s themselves, each one carries an abstract value (and forwards infix operators to it), and the rest is up to the transformation. (The relationship between `Tracer`s and `AbstractValue`s is that there’s one
`Tracer` per transformation, and at least one `AbstractValue` per base type,
like arrays.)
```
import numpy as np
class Tracer:
_trace: Trace
__array_priority__ = 1000
@property
def aval(self):
assert False # must override
def full_lower(self):
return self # default implementation
def __neg__(self): return self.aval._neg(self)
def __add__(self, other): return self.aval._add(self, other)
def __radd__(self, other): return self.aval._radd(self, other)
def __mul__(self, other): return self.aval._mul(self, other)
def __rmul__(self, other): return self.aval._rmul(self, other)
def __gt__(self, other): return self.aval._gt(self, other)
def __lt__(self, other): return self.aval._lt(self, other)
def __bool__(self): return self.aval._bool(self)
def __nonzero__(self): return self.aval._nonzero(self)
def __getattr__(self, name):
try:
return getattr(self.aval, name)
except AttributeError:
raise AttributeError(f"{self.__class__.__name__} has no attribute {name}")
def swap(f): return lambda x, y: f(y, x)
```
```
class ShapedArray:
array_abstraction_level = 1
shape: tuple[int, ...]
dtype: np.dtype
def __init__(self, shape, dtype):
self.shape = shape
self.dtype = dtype
@property
def ndim(self):
return len(self.shape)
_neg = staticmethod(neg)
_add = staticmethod(add)
_radd = staticmethod(swap(add))
_mul = staticmethod(mul)
_rmul = staticmethod(swap(mul))
_gt = staticmethod(greater)
_lt = staticmethod(less)
@staticmethod
def _bool(tracer):
raise Exception("ShapedArray can't be unambiguously converted to bool")
@staticmethod
def _nonzero(tracer):
raise Exception("ShapedArray can't be unambiguously converted to bool")
def str_short(self):
return f'{self.dtype.name}[{",".join(str(d) for d in self.shape)}]'
def __hash__(self):
return hash((self.shape, self.dtype))
def __eq__(self, other):
return (type(self) is type(other) and
self.shape == other.shape and self.dtype == other.dtype)
def __repr__(self):
return f"ShapedArray(shape={self.shape}, dtype={self.dtype})"
class ConcreteArray(ShapedArray):
array_abstraction_level = 2
val: np.ndarray
def __init__(self, val):
self.val = val
self.shape = val.shape
self.dtype = val.dtype
@staticmethod
def _bool(tracer):
return bool(tracer.aval.val)
@staticmethod
def _nonzero(tracer):
return bool(tracer.aval.val)
def get_aval(x):
if isinstance(x, Tracer):
return x.aval
elif type(x) in jax_types:
return ConcreteArray(np.asarray(x))
else:
raise TypeError(x)
jax_types = {bool, int, float,
np.bool_, np.int32, np.int64, np.float32, np.float64, np.ndarray}
```
Notice that we actually have two `AbstractValue`s for arrays, representing different levels of abstraction. A `ShapedArray` represents the set of all possible arrays with a given shape and dtype. A `ConcreteArray` represents a singleton set consisting of a single array value.
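To make the distinction concrete, here is a small sanity check (a sketch, assuming only the definitions above):
```
aval = get_aval(np.ones((2, 3), dtype=np.float32))
print(type(aval).__name__)  # ConcreteArray: also knows the actual value
print(aval.str_short())     # float32[2,3]
print(get_aval(3.0).val)    # 3.0, the single array value in the singleton set
```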
Now that we’ve set up the interpreter stack, the Trace/Tracer API for interpreters, and abstract values, we can come back to implement `bind`:
```
def bind(prim, *args, **params):
top_trace = find_top_trace(args)
tracers = [full_raise(top_trace, arg) for arg in args]
outs = top_trace.process_primitive(prim, tracers, params)
return [full_lower(out) for out in outs]
```
The main action is that we call `find_top_trace` to figure out which interpreter should handle this primitive application. We then call that top trace’s `process_primitive` so that the trace can apply its interpretation rule. The calls to `full_raise` just ensure that the inputs are boxed in the top trace’s `Tracer` instances, and the call to `full_lower` is an optional optimization so that we unbox values out of `Tracer`s as much as possible.
```
import operator as op
def find_top_trace(xs) -> Trace:
top_main = max((x._trace.main for x in xs if isinstance(x, Tracer)),
default=trace_stack[0], key=op.attrgetter('level'))
if dynamic_trace and dynamic_trace.level > top_main.level:
top_main = dynamic_trace
return top_main.trace_type(top_main)
```
In words, ignoring the `dynamic_trace` step until Part 3, `find_top_trace`
returns the highest-level interpreter associated with the `Tracer`s on its inputs, and otherwise returns the interpreter at the bottom of the stack
(which is always an evaluation trace, at least for now). This is a deviation from the description above, where we always start by running the interpreter at the top of the stack and then work our way down, applying every interpreter in the stack. Instead, we’re only applying an interpreter when the input arguments to a primitive bind are boxed in a `Tracer` corresponding to that interpreter. This optimization lets us skip irrelevant transformations, but bakes in an assumption that transformations mostly follow data dependence
(except for the special bottom-of-the-stack interpreter, which interprets everything).
An alternative would be to have every interpreter in the stack interpret every operation. That’s worth exploring! JAX is designed around data dependence in large part because that’s so natural for automatic differentiation, and JAX’s roots are in autodiff. But it may be over-fit.
```
def full_lower(val: Any):
if isinstance(val, Tracer):
return val.full_lower()
else:
return val
def full_raise(trace: Trace, val: Any) -> Tracer:
if not isinstance(val, Tracer):
assert type(val) in jax_types
return trace.pure(val)
level = trace.main.level
if val._trace.main is trace.main:
return val
elif val._trace.main.level < level:
return trace.lift(val)
elif val._trace.main.level > level:
raise Exception(f"Can't lift level {val._trace.main.level} to {level}.")
else: # val._trace.level == level
raise Exception(f"Different traces at same level: {val._trace}, {trace}.")
```
The logic in `full_raise` serves to box values into `Tracer`s for a particular
`Trace`, calling different methods on the `Trace` based on context:
`Trace.pure` is called on non-`Tracer` constants, and `Trace.lift` is called for values that are already `Tracer`s from a lower-level interpreter. These two methods could share the same implementation, but by distinguishing them in the core logic we can provide more information to the `Trace` subclass.
That’s it for the JAX core! Now we can start adding interpreters.
##### Evaluation interpreter[#](#evaluation-interpreter)
We’ll start with the simplest interpreter: the evaluation interpreter that will sit at the bottom of the interpreter stack.
```
class EvalTrace(Trace):
pure = lift = lambda self, x: x # no boxing in Tracers needed
def process_primitive(self, primitive, tracers, params):
return impl_rules[primitive](*tracers, **params)
trace_stack.append(MainTrace(0, EvalTrace, None)) # special bottom of the stack
# NB: in JAX, instead of a dict we attach impl rules to the Primitive instance
impl_rules = {}
impl_rules[add_p] = lambda x, y: [np.add(x, y)]
impl_rules[mul_p] = lambda x, y: [np.multiply(x, y)]
impl_rules[neg_p] = lambda x: [np.negative(x)]
impl_rules[sin_p] = lambda x: [np.sin(x)]
impl_rules[cos_p] = lambda x: [np.cos(x)]
impl_rules[reduce_sum_p] = lambda x, *, axis: [np.sum(x, axis)]
impl_rules[greater_p] = lambda x, y: [np.greater(x, y)]
impl_rules[less_p] = lambda x, y: [np.less(x, y)]
impl_rules[transpose_p] = lambda x, *, perm: [np.transpose(x, perm)]
def broadcast_impl(x, *, shape, axes):
for axis in sorted(axes):
x = np.expand_dims(x, axis)
return [np.broadcast_to(x, shape)]
impl_rules[broadcast_p] = broadcast_impl
```
With this interpreter, we can evaluate user functions:
```
def f(x):
y = sin(x) * 2.
z = - y + x
return z
print(f(3.0))
```
```
2.7177599838802657
```
Woo! Like going around in a big circle. But the point of this indirection is that now we can add some real transformations.
##### Forward-mode autodiff with `jvp`[#](#forward-mode-autodiff-with-jvp)
First, a few helper functions:
```
import builtins
def zeros_like(val):
aval = get_aval(val)
return np.zeros(aval.shape, aval.dtype)
def unzip2(pairs):
lst1, lst2 = [], []
for x1, x2 in pairs:
lst1.append(x1)
lst2.append(x2)
return lst1, lst2
def map(f, *xs):
return list(builtins.map(f, *xs))
def zip(*args):
fst, *rest = args = map(list, args)
n = len(fst)
for arg in rest:
assert len(arg) == n
return list(builtins.zip(*args))
```
The `Tracer` for forward-mode autodiff carries a primal-tangent pair. The
`Trace` applies JVP rules.
```
class JVPTracer(Tracer):
def __init__(self, trace, primal, tangent):
self._trace = trace
self.primal = primal
self.tangent = tangent
@property
def aval(self):
return get_aval(self.primal)
class JVPTrace(Trace):
pure = lift = lambda self, val: JVPTracer(self, val, zeros_like(val))
def process_primitive(self, primitive, tracers, params):
primals_in, tangents_in = unzip2((t.primal, t.tangent) for t in tracers)
jvp_rule = jvp_rules[primitive]
primal_outs, tangent_outs = jvp_rule(primals_in, tangents_in, **params)
return [JVPTracer(self, x, t) for x, t in zip(primal_outs, tangent_outs)]
jvp_rules = {}
```
Notice both `pure` and `lift` package a value into a `JVPTracer` with the minimal amount of context, which is a zero tangent value.
Let’s add some JVP rules for primitives:
```
def add_jvp(primals, tangents):
(x, y), (x_dot, y_dot) = primals, tangents
return [x + y], [x_dot + y_dot]
jvp_rules[add_p] = add_jvp
def mul_jvp(primals, tangents):
(x, y), (x_dot, y_dot) = primals, tangents
return [x * y], [x_dot * y + x * y_dot]
jvp_rules[mul_p] = mul_jvp
def sin_jvp(primals, tangents):
(x,), (x_dot,) = primals, tangents
return [sin(x)], [cos(x) * x_dot]
jvp_rules[sin_p] = sin_jvp
def cos_jvp(primals, tangents):
(x,), (x_dot,) = primals, tangents
return [cos(x)], [-sin(x) * x_dot]
jvp_rules[cos_p] = cos_jvp
def neg_jvp(primals, tangents):
(x,), (x_dot,) = primals, tangents
return [neg(x)], [neg(x_dot)]
jvp_rules[neg_p] = neg_jvp
def reduce_sum_jvp(primals, tangents, *, axis):
(x,), (x_dot,) = primals, tangents
return [reduce_sum(x, axis)], [reduce_sum(x_dot, axis)]
jvp_rules[reduce_sum_p] = reduce_sum_jvp
def greater_jvp(primals, tangents):
(x, y), _ = primals, tangents
out_primal = greater(x, y)
return [out_primal], [zeros_like(out_primal)]
jvp_rules[greater_p] = greater_jvp
def less_jvp(primals, tangents):
(x, y), _ = primals, tangents
out_primal = less(x, y)
return [out_primal], [zeros_like(out_primal)]
jvp_rules[less_p] = less_jvp
```
Finally, we add a transformation API to kick off the trace:
```
def jvp_v1(f, primals, tangents):
with new_main(JVPTrace) as main:
trace = JVPTrace(main)
tracers_in = [JVPTracer(trace, x, t) for x, t in zip(primals, tangents)]
out = f(*tracers_in)
tracer_out = full_raise(trace, out)
primal_out, tangent_out = tracer_out.primal, tracer_out.tangent
return primal_out, tangent_out
```
And with that, we can differentiate!
```
x = 3.0
y, sin_deriv_at_3 = jvp_v1(sin, (x,), (1.0,))
print(sin_deriv_at_3)
print(cos(3.0))
```
```
-0.9899924966004454
-0.9899924966004454
```
```
def f(x):
y = sin(x) * 2.
z = - y + x
return z
x, xdot = 3., 1.
y, ydot = jvp_v1(f, (x,), (xdot,))
print(y)
print(ydot)
```
```
2.7177599838802657
2.979984993200891
```
```
def deriv(f):
return lambda x: jvp_v1(f, (x,), (1.,))[1]
print(deriv(sin)(3.))
print(deriv(deriv(sin))(3.))
print(deriv(deriv(deriv(sin)))(3.))
print(deriv(deriv(deriv(deriv(sin))))(3.))
```
```
-0.9899924966004454
-0.1411200080598672
0.9899924966004454
0.1411200080598672
```
```
def f(x):
if x > 0.: # Python control flow
return 2. * x
else:
return x
print(deriv(f)(3.))
print(deriv(f)(-3.))
```
```
2.0
1.0
```
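Before moving on, one more check of a detail from the `JVPTrace` above: `pure` and `lift` give values entering the trace (like the constant `2.` here) zero tangents, so they contribute nothing to the derivative. (A small sketch using only definitions from above.)
```
print(jvp_v1(lambda x: x * 2., (3.,), (1.,)))  # (6.0, 2.0), since d/dx 2x = 2
```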
#### Pytrees and flattening user functions’ inputs and outputs[#](#pytrees-and-flattening-user-functions-inputs-and-outputs)
A limitation with `jvp_v1` is that it assumes the user function accepts arrays as positional arguments and produces a single array as output. What if it produced a list as output? Or accepted nested containers as inputs? It would be a pain to deal with all the possible containers in inputs and outputs at every layer of the stack. Instead, we can wrap the user function so that the wrapped version accepts arrays as inputs and returns a flat list of arrays as output. The wrapper just needs to unflatten its input, call the user function,
and flatten the output.
Here’s how we’d like to write `jvp`, assuming the user always gives us functions that take arrays as inputs and produces a flat list of arrays as outputs:
```
def jvp_flat(f, primals, tangents):
with new_main(JVPTrace) as main:
trace = JVPTrace(main)
tracers_in = [JVPTracer(trace, x, t) for x, t in zip(primals, tangents)]
outs = f(*tracers_in)
tracers_out = [full_raise(trace, out) for out in outs]
primals_out, tangents_out = unzip2((t.primal, t.tangent) for t in tracers_out)
return primals_out, tangents_out
```
To support user functions that have arbitrary containers in the inputs and outputs, here’s how we’d write the user-facing `jvp` wrapper:
```
def jvp(f, primals, tangents):
primals_flat, in_tree = tree_flatten(primals)
tangents_flat, in_tree2 = tree_flatten(tangents)
if in_tree != in_tree2: raise TypeError
f, out_tree = flatten_fun(f, in_tree)
primals_out_flat, tangents_out_flat = jvp_flat(f, primals_flat, tangents_flat)
primals_out = tree_unflatten(out_tree(), primals_out_flat)
tangents_out = tree_unflatten(out_tree(), tangents_out_flat)
return primals_out, tangents_out
```
Notice that we had to plumb the tree structure of the user function output back to the caller of `flatten_fun`. That information isn’t available until we actually run the user function, so `flatten_fun` just returns a reference to a mutable cell, represented as a thunk. These side-effects are safe because we always run the user function exactly once. (This safe regime is the reason for the “linear” name in `linear_util.py`, in the sense of [linear types](https://en.wikipedia.org/wiki/Substructural_type_system).)
All that remains is to write `tree_flatten`, `tree_unflatten`, and
`flatten_fun`.
```
def flatten_fun(f, in_tree):
store = Store()
def flat_fun(*args_flat):
pytree_args = tree_unflatten(in_tree, args_flat)
out = f(*pytree_args)
out_flat, out_tree = tree_flatten(out)
store.set_value(out_tree)
return out_flat
return flat_fun, store
class Empty: pass
empty = Empty()
class Store:
val = empty
def set_value(self, val):
assert self.val is empty
self.val = val
def __call__(self):
return self.val
```
```
from collections.abc import Hashable, Iterable, Iterator
import itertools as it
from typing import Callable
class NodeType(NamedTuple):
name: str
to_iterable: Callable
from_iterable: Callable
def register_pytree_node(ty: type, to_iter: Callable, from_iter: Callable
) -> None:
node_types[ty] = NodeType(str(ty), to_iter, from_iter)
node_types: dict[type, NodeType] = {}
register_pytree_node(tuple, lambda t: (None, t), lambda _, xs: tuple(xs))
register_pytree_node(list, lambda l: (None, l), lambda _, xs: list(xs))
register_pytree_node(dict,
lambda d: map(tuple, unzip2(sorted(d.items()))),
lambda keys, vals: dict(zip(keys, vals)))
class PyTreeDef(NamedTuple):
node_type: NodeType
node_metadata: Hashable
child_treedefs: tuple['PyTreeDef', ...]
class Leaf: pass
leaf = Leaf()
def tree_flatten(x: Any) -> tuple[list[Any], PyTreeDef]:
children_iter, treedef = _tree_flatten(x)
return list(children_iter), treedef
def _tree_flatten(x: Any) -> tuple[Iterable, PyTreeDef]:
node_type = node_types.get(type(x))
if node_type:
node_metadata, children = node_type.to_iterable(x)
children_flat, child_trees = unzip2(map(_tree_flatten, children))
flattened = it.chain.from_iterable(children_flat)
return flattened, PyTreeDef(node_type, node_metadata, tuple(child_trees))
else:
return [x], leaf
def tree_unflatten(treedef: PyTreeDef, xs: list[Any]) -> Any:
return _tree_unflatten(treedef, iter(xs))
def _tree_unflatten(treedef: PyTreeDef, xs: Iterator) -> Any:
if treedef is leaf:
return next(xs)
else:
children = (_tree_unflatten(t, xs) for t in treedef.child_treedefs)
return treedef.node_type.from_iterable(treedef.node_metadata, children)
```
With this pytree-handling `jvp` implementation, we can now handle arbitrary input and output containers. That’ll come in handy with future transformations too!
```
def f(x):
y = sin(x) * 2.
z = - y + x
return {'hi': z, 'there': [x, y]}
x, xdot = 3., 1.
y, ydot = jvp(f, (x,), (xdot,))
print(y)
print(ydot)
```
```
{'hi': 2.7177599838802657, 'there': [3.0, 0.2822400161197344]}
{'hi': 2.979984993200891, 'there': [1.0, -1.9799849932008908]}
```
##### Vectorized batching with `vmap`[#](#vectorized-batching-with-vmap)
First, a couple helper functions, one for producing mapped abstract values from unmapped ones (by removing an axis), and one for moving batch dimensions around:
```
def mapped_aval(batch_dim, aval):
shape = list(aval.shape)
del shape[batch_dim]
return ShapedArray(tuple(shape), aval.dtype)
def move_batch_axis(axis_size, src, dst, x):
if src is not_mapped:
target_shape = list(np.shape(x))
target_shape.insert(dst, axis_size)
return broadcast(x, target_shape, [dst])
elif src == dst:
return x
else:
return moveaxis(x, src, dst)
def moveaxis(x, src: int, dst: int):
perm = [i for i in range(np.ndim(x)) if i != src]
perm.insert(dst, src)
return transpose(x, perm)
```
The `Tracer` for vectorized batching carries a batched value and an optional integer indicating which axis (if any) is the batch axis.
```
from typing import Union
class NotMapped: pass
not_mapped = NotMapped()
BatchAxis = Union[NotMapped, int]
class BatchTracer(Tracer):
def __init__(self, trace, val, batch_dim: BatchAxis):
self._trace = trace
self.val = val
self.batch_dim = batch_dim
@property
def aval(self):
if self.batch_dim is not_mapped:
return get_aval(self.val)
else:
return mapped_aval(self.batch_dim, get_aval(self.val))
def full_lower(self):
if self.batch_dim is not_mapped:
return full_lower(self.val)
else:
return self
class BatchTrace(Trace):
pure = lift = lambda self, val: BatchTracer(self, val, not_mapped)
def process_primitive(self, primitive, tracers, params):
vals_in, bdims_in = unzip2((t.val, t.batch_dim) for t in tracers)
vmap_rule = vmap_rules[primitive]
val_outs, bdim_outs = vmap_rule(self.axis_size, vals_in, bdims_in, **params)
return [BatchTracer(self, x, bd) for x, bd in zip(val_outs, bdim_outs)]
@property
def axis_size(self):
return self.main.global_data
vmap_rules = {}
```
Here we’ve implemented the optional `Tracer.full_lower` method, which lets us peel off a batching tracer if it’s not needed because it doesn’t represent a batched value.
For `BatchTrace`, analogous to `JVPTrace`, the methods `pure` and `lift` just box a value in a `BatchTracer` with the minimal amount of context, which in this case is a `batch_dim` taking the sentinel value `not_mapped`. Notice we use the `MainTrace`’s interpreter-global data field to store the batch axis size.
Next we can define batching interpreter rules for each primitive:
```
from functools import partial
def binop_batching_rule(op, axis_size, vals_in, dims_in):
(x, y), (x_bdim, y_bdim) = vals_in, dims_in
if x_bdim != y_bdim:
if x_bdim is not_mapped:
x = move_batch_axis(axis_size, x_bdim, y_bdim, x)
x_bdim = y_bdim
else:
y = move_batch_axis(axis_size, y_bdim, x_bdim, y)
return [op(x, y)], [x_bdim]
vmap_rules[add_p] = partial(binop_batching_rule, add)
vmap_rules[mul_p] = partial(binop_batching_rule, mul)
def vectorized_unop_batching_rule(op, axis_size, vals_in, dims_in):
(x,), (x_bdim,) = vals_in, dims_in
return [op(x)], [x_bdim]
vmap_rules[sin_p] = partial(vectorized_unop_batching_rule, sin)
vmap_rules[cos_p] = partial(vectorized_unop_batching_rule, cos)
vmap_rules[neg_p] = partial(vectorized_unop_batching_rule, neg)
def reduce_sum_batching_rule(axis_size, vals_in, dims_in, *, axis):
(x,), (x_bdim,) = vals_in, dims_in
new_axis = tuple(ax + (x_bdim <= ax) for ax in axis)
out_bdim = x_bdim - sum(ax < x_bdim for ax in axis)
return [reduce_sum(x, new_axis)], [out_bdim]
vmap_rules[reduce_sum_p] = reduce_sum_batching_rule
```
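The index arithmetic in `reduce_sum_batching_rule` is easy to get wrong, so here is a quick sanity check (a sketch using only the definitions above): with a batch of four `(2, 3)` arrays stacked along axis 0, summing what was axis 0 of each unbatched array means summing axis 1 of the batched array, and the batch dimension stays at position 0.
```
outs, bdims = reduce_sum_batching_rule(4, (np.ones((4, 2, 3)),), (0,), axis=(0,))
print(outs[0].shape, bdims)  # (4, 3) [0]
```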
Finally, we add a transformation API to kick off the trace:
```
def vmap_flat(f, in_axes, *args):
axis_size, = {x.shape[ax] for x, ax in zip(args, in_axes)
if ax is not not_mapped}
with new_main(BatchTrace, axis_size) as main:
trace = BatchTrace(main)
tracers_in = [BatchTracer(trace, x, ax) if ax is not None else x
for x, ax in zip(args, in_axes)]
outs = f(*tracers_in)
tracers_out = [full_raise(trace, out) for out in outs]
vals_out, bdims_out = unzip2((t.val, t.batch_dim) for t in tracers_out)
outs_transposed = [move_batch_axis(axis_size, bdim, 0, val_out)
for val_out, bdim in zip(vals_out, bdims_out)]
return outs_transposed
def vmap(f, in_axes):
def batched_f(*args):
args_flat, in_tree = tree_flatten(args)
in_axes_flat, in_tree2 = tree_flatten(in_axes)
if in_tree != in_tree2: raise TypeError
f_flat, out_tree = flatten_fun(f, in_tree)
outs_flat = vmap_flat(f_flat, in_axes_flat, *args_flat)
return tree_unflatten(out_tree(), outs_flat)
return batched_f
```
```
def add_one_to_a_scalar(scalar):
assert np.ndim(scalar) == 0
return 1 + scalar
vector_in = np.arange(3.)
vector_out = vmap(add_one_to_a_scalar, (0,))(vector_in)
print(vector_in)
print(vector_out)
```
```
[0. 1. 2.]
[1. 2. 3.]
```
```
def jacfwd(f, x):
pushfwd = lambda v: jvp(f, (x,), (v,))[1]
vecs_in = np.eye(np.size(x)).reshape(np.shape(x) * 2)
return vmap(pushfwd, (0,))(vecs_in)
def f(x):
return sin(x)
jacfwd(f, np.arange(3.))
```
```
array([[ 1. , 0. , -0. ],
[ 0. , 0.54030231, -0. ],
[ 0. , 0. , -0.41614684]])
```
That’s it for `jvp` and `vmap`!
#### Part 2: Jaxprs[#](#part-2-jaxprs)
The next transformations on the horizon are `jit` for just-in-time compilation and `vjp` for reverse-mode autodiff. (`grad` is just a small wrapper around `vjp`.) Whereas `jvp` and `vmap` only needed each `Tracer` to carry a little bit of extra context, for both `jit` and `vjp` we need much richer context: we need to represent *programs*. That is, we need jaxprs!
Jaxprs are JAX’s internal intermediate representation of programs. They are explicitly typed, functional, first-order, and in A-normal form (ANF). We need a program representation for `jit` because the purpose of `jit` is to stage computation out of Python. For any computation we want to stage out, we need to be able to represent it as data, and build it up as we trace a Python function. Similarly, `vjp` needs a way to represent the computation for the backward pass of reverse-mode autodiff. We use the same jaxpr program representation for both needs.
(Building a program representation is the most
[free](https://en.wikipedia.org/wiki/Free_object) kind of trace-transformation, and so except for issues around handling native Python control flow, any transformation could be implemented by first tracing to a jaxpr and then interpreting the jaxpr.)
##### Jaxpr data structures[#](#jaxpr-data-structures)
The jaxpr term syntax is roughly:
```
jaxpr ::=
{ lambda <binder> , ... .
let <eqn>
...
in ( <atom> , ... ) }
binder ::= <var>:<array_type>
var ::= a | b | c | ...
atom ::= <var> | <literal>
literal ::= <int32> | <int64> | <float32> | <float64>

eqn ::= <binder> , ... = <primitive> [ <params> ] <atom> , ...
```
The syntax of types is:
```
jaxpr_type ::= [ <array_type> , ... ] -> [ <array_type> , ... ]
array_type ::= <dtype>[<shape>]
dtype ::= f32 | f64 | i32 | i64
shape ::= <int> , ...
```
How do we represent these as Python data structures? We reuse ShapedArrays to represent types, and we can represent the term syntax with a few Python structs:
```
class Var:
aval: ShapedArray
def __init__(self, aval): self.aval = aval
class Lit:
val: Any
aval: ShapedArray
def __init__(self, val):
self.aval = aval = raise_to_shaped(get_aval(val))
self.val = np.array(val, aval.dtype)
Atom = Union[Var, Lit]
class JaxprEqn(NamedTuple):
primitive: Primitive
inputs: list[Atom]
params: dict[str, Any]
out_binders: list[Var]
class Jaxpr(NamedTuple):
in_binders: list[Var]
eqns: list[JaxprEqn]
outs: list[Atom]
def __hash__(self): return id(self)
__eq__ = op.is_
def raise_to_shaped(aval):
return ShapedArray(aval.shape, aval.dtype)
```
Type-checking a jaxpr involves checking that there are no unbound variables,
that variables are only bound once, and that for each equation the type of the primitive application matches the type of the output binders.
```
class JaxprType(NamedTuple):
in_types: list[ShapedArray]
out_types: list[ShapedArray]
def __repr__(self):
in_types = ', '.join(aval.str_short() for aval in self.in_types)
out_types = ', '.join(aval.str_short() for aval in self.out_types)
return f'({in_types}) -> ({out_types})'
def typecheck_jaxpr(jaxpr: Jaxpr) -> JaxprType:
env: set[Var] = set()
for v in jaxpr.in_binders:
if v in env: raise TypeError
env.add(v)
for eqn in jaxpr.eqns:
in_types = [typecheck_atom(env, x) for x in eqn.inputs]
out_types = abstract_eval_rules[eqn.primitive](*in_types, **eqn.params)
for out_binder, out_type in zip(eqn.out_binders, out_types):
if not out_type == out_binder.aval: raise TypeError
for out_binder in eqn.out_binders:
if out_binder in env: raise TypeError
env.add(out_binder)
in_types = [v.aval for v in jaxpr.in_binders]
out_types = [typecheck_atom(env, x) for x in jaxpr.outs]
return JaxprType(in_types, out_types)
def typecheck_atom(env: set[Var], x: Atom) -> ShapedArray:
if isinstance(x, Var):
if x not in env: raise TypeError("unbound variable")
return x.aval
elif isinstance(x, Lit):
return raise_to_shaped(get_aval(x.val))
else:
assert False
```
We can apply the function represented by a jaxpr to arguments with a simple interpreter.
```
def eval_jaxpr(jaxpr: Jaxpr, args: list[Any]) -> list[Any]:
env: dict[Var, Any] = {}
def read(x: Atom) -> Any:
return env[x] if type(x) is Var else x.val
def write(v: Var, val: Any) -> None:
assert v not in env # single-assignment
env[v] = val
map(write, jaxpr.in_binders, args)
for eqn in jaxpr.eqns:
in_vals = map(read, eqn.inputs)
outs = bind(eqn.primitive, *in_vals, **eqn.params)
map(write, eqn.out_binders, outs)
return map(read, jaxpr.outs)
def jaxpr_as_fun(jaxpr: Jaxpr):
return lambda *args: eval_jaxpr(jaxpr, args)
```
By using `bind` in the interpreter, this interpreter itself is traceable.
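As a quick check of both points (a sketch using only definitions from above), we can build a one-equation jaxpr by hand, evaluate it, and then differentiate through the interpreter with the `jvp` from Part 1:
```
# { lambda a:float64[] . let b:float64[] = sin a in ( b ) }
a = Var(ShapedArray((), np.dtype('float64')))
b = Var(ShapedArray((), np.dtype('float64')))
tiny = Jaxpr([a], [JaxprEqn(sin_p, [a], {}, [b])], [b])
print(eval_jaxpr(tiny, [3.0]))                  # [0.1411...]
print(jvp(jaxpr_as_fun(tiny), (3.0,), (1.0,)))  # ([0.1411...], [-0.9899...])
```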
##### Building jaxprs with tracing[#](#building-jaxprs-with-tracing)
Now that we have jaxprs as a data structure, we need ways to produce these from tracing Python code. In general there are two variants of how we trace to a jaxpr; `jit` uses one and `vjp` uses the other. We’ll start with the one used by `jit`, which is also used by control flow primitives like `lax.cond`,
`lax.while_loop`, and `lax.scan`.
```
def split_list(lst: list[Any], n: int) -> tuple[list[Any], list[Any]]:
assert 0 <= n <= len(lst)
return lst[:n], lst[n:]
def partition_list(bs: list[bool], l: list[Any]) -> tuple[list[Any], list[Any]]:
assert len(bs) == len(l)
lists = lst1, lst2 = [], []
for b, x in zip(bs, l):
lists[b].append(x)
return lst1, lst2
```
```
# NB: the analogous class in JAX is called 'DynamicJaxprTracer'
class JaxprTracer(Tracer):
__slots__ = ['aval']
aval: ShapedArray
def __init__(self, trace, aval):
self._trace = trace
self.aval = aval
# NB: the analogous class in JAX is called 'DynamicJaxprTrace'
class JaxprTrace(Trace):
def new_arg(self, aval: ShapedArray) -> JaxprTracer:
aval = raise_to_shaped(aval)
tracer = self.builder.new_tracer(self, aval)
self.builder.tracer_to_var[id(tracer)] = Var(aval)
return tracer
def get_or_make_const_tracer(self, val: Any) -> JaxprTracer:
tracer = self.builder.const_tracers.get(id(val))
if tracer is None:
tracer = self.builder.new_tracer(self, raise_to_shaped(get_aval(val)))
self.builder.add_const(tracer, val)
return tracer
pure = lift = get_or_make_const_tracer
def process_primitive(self, primitive, tracers, params):
avals_in = [t.aval for t in tracers]
avals_out = abstract_eval_rules[primitive](*avals_in, **params)
out_tracers = [self.builder.new_tracer(self, a) for a in avals_out]
inputs = [self.builder.getvar(t) for t in tracers]
outvars = [self.builder.add_var(t) for t in out_tracers]
self.builder.add_eqn(JaxprEqn(primitive, inputs, params, outvars))
return out_tracers
@property
def builder(self):
return self.main.global_data
# NB: in JAX, we instead attach abstract eval rules to Primitive instances
abstract_eval_rules = {}
```
Notice that we keep as interpreter-global data a builder object, which keeps track of variables, constants, and eqns as we build up the jaxpr.
```
class JaxprBuilder:
eqns: list[JaxprEqn]
tracer_to_var: dict[int, Var]
const_tracers: dict[int, JaxprTracer]
constvals: dict[Var, Any]
tracers: list[JaxprTracer]
def __init__(self):
self.eqns = []
self.tracer_to_var = {}
self.const_tracers = {}
self.constvals = {}
self.tracers = []
def new_tracer(self, trace: JaxprTrace, aval: ShapedArray) -> JaxprTracer:
tracer = JaxprTracer(trace, aval)
self.tracers.append(tracer)
return tracer
def add_eqn(self, eqn: JaxprEqn) -> None:
self.eqns.append(eqn)
def add_var(self, tracer: JaxprTracer) -> Var:
assert id(tracer) not in self.tracer_to_var
var = self.tracer_to_var[id(tracer)] = Var(tracer.aval)
return var
def getvar(self, tracer: JaxprTracer) -> Var:
var = self.tracer_to_var.get(id(tracer))
assert var is not None
return var
def add_const(self, tracer: JaxprTracer, val: Any) -> Var:
var = self.add_var(tracer)
self.const_tracers[id(val)] = tracer
self.constvals[var] = val
return var
def build(self, in_tracers: list[JaxprTracer], out_tracers: list[JaxprTracer]
) -> tuple[Jaxpr, list[Any]]:
constvars, constvals = unzip2(self.constvals.items())
t2v = lambda t: self.tracer_to_var[id(t)]
in_binders = constvars + [t2v(t) for t in in_tracers]
out_vars = [t2v(t) for t in out_tracers]
jaxpr = Jaxpr(in_binders, self.eqns, out_vars)
typecheck_jaxpr(jaxpr)
jaxpr, constvals = _inline_literals(jaxpr, constvals)
return jaxpr, constvals
```
```
def _inline_literals(jaxpr: Jaxpr, consts: list[Any]) -> tuple[Jaxpr, list[Any]]:
const_binders, other_binders = split_list(jaxpr.in_binders, len(consts))
scalars = [type(x) in jax_types and not get_aval(x).shape for x in consts]
new_const_binders, lit_binders = partition_list(scalars, const_binders)
new_consts, lit_vals = partition_list(scalars, consts)
literals = dict(zip(lit_binders, map(Lit, lit_vals)))
new_eqns = [JaxprEqn(eqn.primitive, [literals.get(x, x) for x in eqn.inputs],
eqn.params, eqn.out_binders) for eqn in jaxpr.eqns]
new_outs = [literals.get(x, x) for x in jaxpr.outs]
new_jaxpr = Jaxpr(new_const_binders + other_binders, new_eqns, new_outs)
typecheck_jaxpr(new_jaxpr)
return new_jaxpr, new_consts
```
The rules we need for `JaxprTrace.process_primitive` are essentially typing rules for primitive applications: given the primitive, its parameters, and types for the inputs, the rule must produce a type for the output, which is then packaged with the output `JaxprTracer`. We can use abstract evaluation rules for this same purpose, even though they can be more general (since abstract evaluation rules must accept ConcreteArray inputs, and since they need only return an upper bound on the set of possible outputs, they can produce ConcreteArray outputs as well). We’ll reuse these abstract evaluation rules for the other jaxpr-producing trace machinery, where the potential extra generality is useful.
```
def binop_abstract_eval(x: ShapedArray, y: ShapedArray) -> list[ShapedArray]:
if not isinstance(x, ShapedArray) or not isinstance(y, ShapedArray):
raise TypeError
if raise_to_shaped(x) != raise_to_shaped(y): raise TypeError
return [ShapedArray(x.shape, x.dtype)]
abstract_eval_rules[add_p] = binop_abstract_eval
abstract_eval_rules[mul_p] = binop_abstract_eval
def compare_abstract_eval(x: ShapedArray, y: ShapedArray) -> list[ShapedArray]:
if not isinstance(x, ShapedArray) or not isinstance(y, ShapedArray):
raise TypeError
if x.shape != y.shape: raise TypeError
return [ShapedArray(x.shape, np.dtype('bool'))]
abstract_eval_rules[greater_p] = compare_abstract_eval
abstract_eval_rules[less_p] = compare_abstract_eval
def vectorized_unop_abstract_eval(x: ShapedArray) -> list[ShapedArray]:
return [ShapedArray(x.shape, x.dtype)]
abstract_eval_rules[sin_p] = vectorized_unop_abstract_eval
abstract_eval_rules[cos_p] = vectorized_unop_abstract_eval
abstract_eval_rules[neg_p] = vectorized_unop_abstract_eval
def reduce_sum_abstract_eval(x: ShapedArray, *, axis: tuple[int, ...]
) -> list[ShapedArray]:
axis_ = set(axis)
new_shape = [d for i, d in enumerate(x.shape) if i not in axis_]
return [ShapedArray(tuple(new_shape), x.dtype)]
abstract_eval_rules[reduce_sum_p] = reduce_sum_abstract_eval
def broadcast_abstract_eval(x: ShapedArray, *, shape: Sequence[int],
axes: Sequence[int]) -> list[ShapedArray]:
return [ShapedArray(tuple(shape), x.dtype)]
abstract_eval_rules[broadcast_p] = broadcast_abstract_eval
```
To check our implementation of jaxprs, we can add a `make_jaxpr`
transformation and a pretty-printer:
```
from functools import lru_cache
@lru_cache()  # ShapedArrays are hashable
def make_jaxpr_v1(f, *avals_in):
avals_in, in_tree = tree_flatten(avals_in)
f, out_tree = flatten_fun(f, in_tree)
builder = JaxprBuilder()
with new_main(JaxprTrace, builder) as main:
trace = JaxprTrace(main)
tracers_in = [trace.new_arg(aval) for aval in avals_in]
outs = f(*tracers_in)
tracers_out = [full_raise(trace, out) for out in outs]
jaxpr, consts = builder.build(tracers_in, tracers_out)
return jaxpr, consts, out_tree()
```
```
from collections import defaultdict
import string
class PPrint:
lines: list[tuple[int, str]]
def __init__(self, lines):
self.lines = lines
def indent(self, indent: int) -> 'PPrint':
return PPrint([(indent + orig_indent, s) for orig_indent, s in self.lines])
def __add__(self, rhs: 'PPrint') -> 'PPrint':
return PPrint(self.lines + rhs.lines)
def __rshift__(self, rhs: 'PPrint') -> 'PPrint':
if not rhs.lines: return self
if not self.lines: return rhs
indent, s = self.lines[-1]
indented_block = rhs.indent(indent + len(s))
common_line = s + ' ' * rhs.lines[0][0] + rhs.lines[0][1]
return PPrint(self.lines[:-1]
+ [(indent, common_line)]
+ indented_block.lines[1:])
def __str__(self) -> str:
return '\n'.join(' ' * indent + s for indent, s in self.lines)
def pp(s: Any) -> PPrint:
return PPrint([(0, line) for line in str(s).splitlines()])
def vcat(ps: list[PPrint]) -> PPrint:
return sum(ps, pp(''))
def pp_jaxpr(jaxpr: Jaxpr) -> PPrint:
namegen = (''.join(s) for r in it.count(1)
for s in it.permutations(string.ascii_lowercase, r))
names = defaultdict(lambda: next(namegen))
in_binders = ', '.join(var_str(names, x) for x in jaxpr.in_binders)
eqns = vcat([pp_eqn(names, e) for e in jaxpr.eqns])
outs = ', '.join(names[v] if isinstance(v, Var) else str(v.val)
for v in jaxpr.outs)
return (pp(f'{{ lambda {in_binders} .') +
((pp('let ') >> eqns) + pp(f'in ( {outs} ) }}')).indent(2))
def var_str(names: defaultdict[Var, str], v: Var) -> str:
return f'{names[v]}:{v.aval.str_short()}'
def pp_eqn(names: defaultdict[Var, str], eqn: JaxprEqn) -> PPrint:
rule = pp_rules.get(eqn.primitive)
if rule:
return rule(names, eqn)
else:
lhs = pp(' '.join(var_str(names, v) for v in eqn.out_binders))
rhs = (pp(eqn.primitive.name) >> pp_params(eqn.params) >>
pp(' '.join(names[x] if isinstance(x, Var) else str(x.val)
for x in eqn.inputs)))
return lhs >> pp(' = ') >> rhs
def pp_params(params: dict[str, Any]) -> PPrint:
items = sorted(params.items())
if items:
return pp(' [ ') >> vcat([pp(f'{k}={v}') for k, v in items]) >> pp(' ] ')
else:
return pp(' ')
Jaxpr.__repr__ = lambda self: str(pp_jaxpr(self))
pp_rules: dict[Primitive, Callable[..., PPrint]] = {}
```
```
jaxpr, consts, _ = make_jaxpr_v1(lambda x: 2. * x, raise_to_shaped(get_aval(3.)))
print(jaxpr)
print(typecheck_jaxpr(jaxpr))
```
```
{ lambda a:float64[] .
let b:float64[] = mul 2.0 a
in ( b ) }
(float64[]) -> (float64[])
```
But there’s a limitation here: because of how `find_top_trace` operates by data dependence, `make_jaxpr_v1` can’t stage out all the primitive operations performed by the Python callable it’s given. For example:
```
jaxpr, consts, _ = make_jaxpr_v1(lambda: mul(2., 2.))
print(jaxpr)
```
```
{ lambda .
let
in ( 4.0 ) }
```
This is precisely the issue that
[omnistaging](https://github.com/google/jax/pull/3370) fixed.
We want to ensure that the `JaxprTrace` started by `make_jaxpr` is always applied, regardless of whether any inputs to `bind` are boxed in corresponding
`JaxprTracer` instances. We can achieve this by employing the `dynamic_trace`
global defined in Part 1:
```
@contextmanager
def new_dynamic(main: MainTrace):
global dynamic_trace
prev_dynamic_trace, dynamic_trace = dynamic_trace, main
try:
yield
finally:
dynamic_trace = prev_dynamic_trace
@lru_cache()
def make_jaxpr(f: Callable, *avals_in: ShapedArray,
) -> tuple[Jaxpr, list[Any], PyTreeDef]:
avals_in, in_tree = tree_flatten(avals_in)
f, out_tree = flatten_fun(f, in_tree)
builder = JaxprBuilder()
with new_main(JaxprTrace, builder) as main:
with new_dynamic(main):
trace = JaxprTrace(main)
tracers_in = [trace.new_arg(aval) for aval in avals_in]
outs = f(*tracers_in)
tracers_out = [full_raise(trace, out) for out in outs]
jaxpr, consts = builder.build(tracers_in, tracers_out)
return jaxpr, consts, out_tree()
jaxpr, consts, _ = make_jaxpr(lambda: mul(2., 2.))
print(jaxpr)
```
```
{ lambda .
let a:float64[] = mul 2.0 2.0
in ( a ) }
```
Using `dynamic_trace` this way is conceptually the same as stashing the current interpreter stack and starting a new one with the `JaxprTrace` at the bottom. That is, no interpreters lower in the stack than the `dynamic_trace`
are applied (since `JaxprTrace.process_primitive` doesn’t call `bind`), though if the Python callable being traced to a jaxpr itself uses transformations then those can be pushed onto the interpreter stack above the `JaxprTrace`.
But temporarily stashing the interpreter stack would break up the system state. The `dynamic_trace` tag achieves the same goals while keeping the system state simpler.
That’s it for jaxprs! With jaxprs in hand, we can implement the remaining major JAX features.
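As one more end-to-end check (a sketch using only the machinery above), we can trace a multi-output function to a jaxpr, type check it, and evaluate it:
```
jaxpr, consts, _ = make_jaxpr(lambda x: [sin(x) * 2., -x],
                              raise_to_shaped(get_aval(3.)))
print(typecheck_jaxpr(jaxpr))            # (float64[]) -> (float64[], float64[])
print(eval_jaxpr(jaxpr, [*consts, 3.]))  # [0.2822..., -3.0]
```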
#### Part 3: `jit`, simplified[#](#part-3-jit-simplified)
While `jit` has a transformation-like API in that it accepts a Python callable as an argument, under the hood it’s really a higher-order primitive rather than a transformation. A primitive is *higher-order* when it’s parameterized by a function.
##### On-the-fly (“final style”) and staged (“initial style”) processing[#](#on-the-fly-final-style-and-staged-initial-style-processing)
There are two options for how to handle higher-order primitives. Each requires a different approach to tracing and engenders different tradeoffs:
1. **On-the-fly processing, where `bind` takes a Python callable as an argument.** We defer forming a jaxpr until as late as possible, namely until we’re running the final interpreter at the bottom of the interpreter stack. That way we can swap a `JaxprTrace` in at the bottom of the interpreter stack and thus stage out rather than execute all primitive operations. With this approach, transformations in the stack get applied as we execute the Python callable as usual. This approach can be very tricky to implement, but it’s as general as possible because it allows higher-order primitives not to raise the abstraction level of their arguments and thus allows data-dependent Python control flow. We refer to this approach as using a “final-style higher-order primitive” employing the discharge-at-tracing-time “final-style transformations” we’ve used so far.
2. **Staged processing, where `bind` takes a jaxpr as an argument.** Before we call `bind`, in the primitive wrapper we can just use `make_jaxpr` to form a jaxpr up-front and be done with the Python callable entirely. In this case, `make_jaxpr` puts its `JaxprTrace` at the top of the interpreter stack, and no transformations lower in the stack, which might enter via closed-over Tracers, are applied to the Python callable as we trace it.
(Transformations applied within the Python callable are applied as usual,
being added to the stack above the JaxprTrace.) Instead, the transformations lower in the stack are later applied to the call primitive,
and the call primitive’s rules must then transform the jaxpr itself.
Because we trace to a jaxpr up-front, this approach can’t support data-dependent Python control flow, but it is more straightforward to implement. We refer to this kind of higher-order primitive as an
“initial-style higher-order primitive”, and say that its jaxpr-processing transformation rules are “initial-style transformation rules.”
The latter approach fits for `jit` because we don’t need to support data-dependent Python control flow in the user-provided Python callable, as the whole purpose of `jit` is to stage computation out of Python to be executed by XLA. (In contrast, `custom_jvp` is a higher-order primitive in which we want to support data-dependent Python control flow.)
Historically, we started using the “initial-style” and “final-style”
terminology after reading the [typed tagless final interpreters](http://okmij.org/ftp/tagless-final/index.html) paper, and jokingly referring to JAX as an implementation of “untyped tagful final interpreters.” We don’t claim to carry over (or understand) any deep meaning behind these terms; we loosely use “initial style” to mean “build an AST and then transform it”, and we use “final style” to mean “transform as we trace.”
But it’s just imprecise yet sticky jargon.
With the initial-style approach, here’s the user-facing `jit` wrapper:
```
def jit(f):
def f_jitted(*args):
avals_in = [raise_to_shaped(get_aval(x)) for x in args]
jaxpr, consts, out_tree = make_jaxpr(f, *avals_in)
outs = bind(xla_call_p, *consts, *args, jaxpr=jaxpr, num_consts=len(consts))
return tree_unflatten(out_tree, outs)
return f_jitted
xla_call_p = Primitive('xla_call')
```
With any new primitive, we need to give it transformation rules, starting with its evaluation rule. When we evaluate an application of the `xla_call`
primitive, we want to stage out the computation to XLA. That involves translating the jaxpr to an XLA HLO program, transferring the argument values to the XLA device, executing the XLA program, and transferring back the results. We’ll cache the XLA HLO compilation so that for each `jit`ted function it only needs to be performed once per argument shape and dtype signature.
First, some utilities.
```
class IDHashable:
val: Any
def __init__(self, val):
self.val = val
def __hash__(self) -> int:
return id(self.val)
def __eq__(self, other):
return type(other) is IDHashable and id(self.val) == id(other.val)
```
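The identity-based hashing is what lets us use otherwise unhashable values like NumPy arrays as `lru_cache` keys. A minimal sketch, assuming the `IDHashable` class above (`compile_stub` is just an illustrative stand-in for a cached compilation step):
```
from functools import lru_cache
import numpy as np

@lru_cache()
def compile_stub(hashable_const):
  print('compiling!')
  return hashable_const.val * 2

const = np.arange(3.)
print(compile_stub(IDHashable(const)))  # prints 'compiling!', then [0. 2. 4.]
print(compile_stub(IDHashable(const)))  # same id(const): cache hit, no print
```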
Next, we’ll define the evaluation rule for `xla_call`:
```
from jax._src import xla_bridge as xb
from jax._src.lib import xla_client as xc

xe = xc._xla
xops = xc._xla.ops
def xla_call_impl(*args, jaxpr: Jaxpr, num_consts: int):
consts, args = args[:num_consts], args[num_consts:]
hashable_consts = tuple(map(IDHashable, consts))
execute = xla_callable(IDHashable(jaxpr), hashable_consts)
return execute(*args)
impl_rules[xla_call_p] = xla_call_impl
@lru_cache()
def xla_callable(hashable_jaxpr: IDHashable,
hashable_consts: tuple[IDHashable, ...]):
jaxpr: Jaxpr = hashable_jaxpr.val
typecheck_jaxpr(jaxpr)
consts = [x.val for x in hashable_consts]
in_avals = [v.aval for v in jaxpr.in_binders[len(consts):]]
c = xc.XlaBuilder('xla_call')
xla_consts = _xla_consts(c, consts)
xla_params = _xla_params(c, in_avals)
outs = jaxpr_subcomp(c, jaxpr, xla_consts + xla_params)
out = xops.Tuple(c, outs)
compiled = xb.get_backend(None).compile(
xc._xla.mlir.xla_computation_to_mlir_module(c.build(out)))
return partial(execute_compiled, compiled, [v.aval for v in jaxpr.outs])
def _xla_consts(c: xe.XlaBuilder, consts: list[Any]) -> list[xe.XlaOp]:
unique_consts = {id(cnst): cnst for cnst in consts}
xla_consts = {
id_: xops.ConstantLiteral(c, cnst) for id_, cnst in unique_consts.items()}
return [xla_consts[id(cnst)] for cnst in consts]
def _xla_params(c: xe.XlaBuilder, avals_in: list[ShapedArray]) -> list[xe.XlaOp]:
return [xops.Parameter(c, i, _xla_shape(a)) for i, a in enumerate(avals_in)]
def _xla_shape(aval: ShapedArray) -> xe.Shape:
return xc.Shape.array_shape(xc.dtype_to_etype(aval.dtype), aval.shape)
```
The main action is in `xla_callable`, which compiles a jaxpr into an XLA HLO program using `jaxpr_subcomp`, then returns a callable which executes the compiled program:
```
def jaxpr_subcomp(c: xe.XlaBuilder, jaxpr: Jaxpr, args: list[xe.XlaOp]
) -> list[xe.XlaOp]:
env: dict[Var, xe.XlaOp] = {}
def read(x: Atom) -> xe.XlaOp:
return env[x] if type(x) is Var else xops.Constant(c, np.asarray(x.val))
def write(v: Var, val: xe.XlaOp) -> None:
env[v] = val
map(write, jaxpr.in_binders, args)
for eqn in jaxpr.eqns:
in_avals = [x.aval for x in eqn.inputs]
in_vals = map(read, eqn.inputs)
rule = xla_translations[eqn.primitive]
out_vals = rule(c, in_avals, in_vals, **eqn.params)
map(write, eqn.out_binders, out_vals)
return map(read, jaxpr.outs)
def execute_compiled(compiled, out_avals, *args):
input_bufs = [input_handlers[type(x)](x) for x in args]
out_bufs = compiled.execute(input_bufs)
return [handle_result(aval, buf) for aval, buf in zip(out_avals, out_bufs)]
default_input_handler = xb.get_backend(None).buffer_from_pyval
input_handlers = {ty: default_input_handler for ty in
                  [bool, int, float, np.ndarray, np.float64, np.float32]}
def handle_result(aval: ShapedArray, buf):
del aval # Unused for now
return np.asarray(buf)
xla_translations = {}
```
Notice that `jaxpr_subcomp` has the structure of a simple interpreter. That’s a common pattern: the way we process jaxprs is usually with an interpreter.
And as with any interpreter, we need an interpretation rule for each primitive:
```
def direct_translation(op, c, in_avals, in_vals):
del c, in_avals
return [op(*in_vals)]
xla_translations[add_p] = partial(direct_translation, xops.Add)
xla_translations[mul_p] = partial(direct_translation, xops.Mul)
xla_translations[neg_p] = partial(direct_translation, xops.Neg)
xla_translations[sin_p] = partial(direct_translation, xops.Sin)
xla_translations[cos_p] = partial(direct_translation, xops.Cos)
xla_translations[greater_p] = partial(direct_translation, xops.Gt)
xla_translations[less_p] = partial(direct_translation, xops.Lt)
def reduce_sum_translation(c, in_avals, in_vals, *, axis):
(x_aval,), (x,) = in_avals, in_vals
zero = xops.ConstantLiteral(c, np.array(0, x_aval.dtype))
subc = xc.XlaBuilder('add')
shape = _xla_shape(ShapedArray((), x_aval.dtype))
xops.Add(xops.Parameter(subc, 0, shape), xops.Parameter(subc, 1, shape))
return [xops.Reduce(c, [x], [zero], subc.build(), axis)]
xla_translations[reduce_sum_p] = reduce_sum_translation
def broadcast_translation(c, in_avals, in_vals, *, shape, axes):
x, = in_vals
dims_complement = [i for i in range(len(shape)) if i not in axes]
return [xops.BroadcastInDim(x, shape, dims_complement)]
xla_translations[broadcast_p] = broadcast_translation
```
With that, we can now use `jit` to stage out, compile, and execute programs with XLA!
```
@jit
def f(x, y):
print('tracing!')
return sin(x) * cos(y)
```
```
z = f(3., 4.)  # 'tracing!' prints the first time
print(z)
```
```
tracing!
-0.09224219304455371
```
```
z = f(4., 5.) # 'tracing!' doesn't print, compilation cache hit!
print(z)
```
```
-0.21467624978306993
```
```
@jit
def f(x):
return reduce_sum(x, axis=0)
print(f(np.array([1., 2., 3.])))
```
```
6.0
```
```
def f(x):
y = sin(x) * 2.
z = - y + x
return z
def deriv(f):
return lambda x: jvp(f, (x,), (1.,))[1]
print( deriv(deriv(f))(3.))
print(jit(deriv(deriv(f)))(3.))
```
```
0.2822400161197344
0.2822400161197344
```
Instead of implementing `jit` to first trace to a jaxpr and then to lower the jaxpr to XLA HLO, it might appear that we could have skipped the jaxpr step and just lowered to HLO while tracing. That is, perhaps we could have instead implemented `jit` with a `Trace` and `Tracer` that appended to the XLA HLO graph incrementally on each primitive bind. That’s correct for now, but won’t be possible when we introduce compiled SPMD computations because there we must know the number of replicas needed before compiling the program.
We haven’t yet defined any transformation rules for `xla_call_p` other than its evaluation rule. That is, we can’t yet do `vmap`-of-`jit` or
`jvp`-of-`jit` or even `jit`-of-`jit`. Instead `jit` has to be at the “top level.” Let’s fix that!
```
def xla_call_jvp_rule(primals, tangents, *, jaxpr, num_consts):
del num_consts # Unused
new_jaxpr, new_consts = jvp_jaxpr(jaxpr)
outs = bind(xla_call_p, *new_consts, *primals, *tangents, jaxpr=new_jaxpr,
num_consts=len(new_consts))
n = len(outs) // 2
primals_out, tangents_out = outs[:n], outs[n:]
  return primals_out, tangents_out

jvp_rules[xla_call_p] = xla_call_jvp_rule
@lru_cache()
def jvp_jaxpr(jaxpr: Jaxpr) -> tuple[Jaxpr, list[Any]]:
def jvp_traceable(*primals_and_tangents):
n = len(primals_and_tangents) // 2
primals, tangents = primals_and_tangents[:n], primals_and_tangents[n:]
return jvp(jaxpr_as_fun(jaxpr), primals, tangents)
in_avals = [v.aval for v in jaxpr.in_binders]
new_jaxpr, new_consts, _ = make_jaxpr(jvp_traceable, *in_avals, *in_avals)
return new_jaxpr, new_consts
```
```
def xla_call_vmap_rule(axis_size, vals_in, dims_in, *, jaxpr, num_consts):
del num_consts # Unused
new_jaxpr, new_consts = vmap_jaxpr(jaxpr, axis_size, tuple(dims_in))
outs = bind(xla_call_p, *new_consts, *vals_in, jaxpr=new_jaxpr,
num_consts=len(new_consts))
return outs, [0] * len(outs)
vmap_rules[xla_call_p] = xla_call_vmap_rule
@lru_cache()
def vmap_jaxpr(jaxpr: Jaxpr, axis_size: int, bdims_in: tuple[BatchAxis, ...]
) -> tuple[Jaxpr, list[Any]]:
vmap_traceable = vmap(jaxpr_as_fun(jaxpr), tuple(bdims_in))
in_avals = [unmapped_aval(axis_size, d, v.aval)
for v, d in zip(jaxpr.in_binders, bdims_in)]
new_jaxpr, new_consts, _ = make_jaxpr(vmap_traceable, *in_avals)
return new_jaxpr, new_consts
def unmapped_aval(axis_size: int, batch_dim: BatchAxis, aval: ShapedArray
) -> ShapedArray:
if batch_dim is not_mapped:
return aval
else:
shape = list(aval.shape)
shape.insert(batch_dim, axis_size)
return ShapedArray(tuple(shape), aval.dtype)
```
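As a quick sanity check on `unmapped_aval` (using only the definitions above), it inserts the batch axis back into an abstract value’s shape, and leaves unmapped values alone:
```
aval = ShapedArray((3, 4), np.dtype('float64'))
print(unmapped_aval(10, 0, aval).shape)           # (10, 3, 4)
print(unmapped_aval(10, 1, aval).shape)           # (3, 10, 4)
print(unmapped_aval(10, not_mapped, aval).shape)  # (3, 4)
```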
```
def xla_call_abstract_eval_rule(*in_types, jaxpr, num_consts):
del num_consts # Unused
jaxpr_type = typecheck_jaxpr(jaxpr)
if not all(t1 == t2 for t1, t2 in zip(jaxpr_type.in_types, in_types)):
raise TypeError
  return jaxpr_type.out_types

abstract_eval_rules[xla_call_p] = xla_call_abstract_eval_rule
def xla_call_translation(c, in_avals, in_vals, *, jaxpr, num_consts):
del num_consts # Only used at top-level.
# Calling jaxpr_subcomp directly would inline. We generate a Call HLO instead.
subc = xc.XlaBuilder('inner xla_call')
xla_params = _xla_params(subc, in_avals)
outs = jaxpr_subcomp(subc, jaxpr, xla_params)
subc = subc.build(xops.Tuple(subc, outs))
return destructure_tuple(c, xops.Call(c, subc, in_vals))
xla_translations[xla_call_p] = xla_call_translation
def destructure_tuple(c, tup):
num_elements = len(c.get_shape(tup).tuple_shapes())
return [xops.GetTupleElement(tup, i) for i in range(num_elements)]
```
```
@jit
def f(x):
print('tracing!')
y = sin(x) * 2.
z = - y + x
return z
x, xdot = 3., 1.
y, ydot = jvp(f, (x,), (xdot,))
print(y)
print(ydot)
```
```
tracing!
2.7177599838802657
2.979984993200891
```
```
y, ydot = jvp(f, (x,), (xdot,)) # 'tracing!' not printed
```
```
ys = vmap(f, (0,))(np.arange(3.))
print(ys)
```
```
[ 0. -0.68294197 0.18140515]
```
One piece missing is device memory persistence for arrays. That is, we’ve defined `handle_result` to transfer results back to CPU memory as NumPy arrays, but it’s often preferable to avoid transferring results just to transfer them back for the next operation. We can do that by introducing an
`Array` class, which can wrap XLA buffers and otherwise duck-type
`numpy.ndarray`s:
```
def handle_result(aval: ShapedArray, buf): # noqa: F811
return Array(aval, buf)
class Array:
buf: Any
aval: ShapedArray
def __init__(self, aval, buf):
self.aval = aval
self.buf = buf
dtype = property(lambda self: self.aval.dtype)
shape = property(lambda self: self.aval.shape)
ndim = property(lambda self: self.aval.ndim)
def __array__(self): return np.asarray(self.buf)
def __repr__(self): return repr(np.asarray(self.buf))
def __str__(self): return str(np.asarray(self.buf))
_neg = staticmethod(neg)
_add = staticmethod(add)
_radd = staticmethod(add)
_mul = staticmethod(mul)
_rmul = staticmethod(mul)
_gt = staticmethod(greater)
_lt = staticmethod(less)
input_handlers[Array] = lambda x: x.buf
jax_types.add(Array)
```
```
@jit
def f(x):
y = sin(x) * 2.
z = - y + x
return z
x, xdot = 3., 1.
y, ydot = jvp(f, (x,), (xdot,))
print(y)
print(ydot)
```
```
2.7177599838802657
2.979984993200891
```
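With this change, results of `jit`ted computations come back as `Array` instances that stay wrapped around device buffers yet behave like `numpy.ndarray`s. A small illustrative check, assuming the definitions above:
```
z = jit(lambda x: x * 2.)(3.)
print(type(z).__name__, z.shape, z.dtype)  # Array () float64
print(np.asarray(z) + 1.)                  # 7.0, converted via __array__
```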
```
def pprint_xla_call(names: defaultdict[Var, str], eqn: JaxprEqn) -> PPrint:
lhs = pp(' '.join(var_str(names, v) for v in eqn.out_binders))
params_without_jaxpr = {k:v for k, v in eqn.params.items() if k != 'jaxpr'}
rhs = (pp(eqn.primitive.name) >> pp_params(params_without_jaxpr) >>
pp(' '.join(names[x] if isinstance(x, Var) else str(x.val)
for x in eqn.inputs)))
return vcat([lhs >> pp(' = ') >> rhs,
pp_jaxpr(eqn.params['jaxpr']).indent(2)])
pp_rules[xla_call_p] = pprint_xla_call
```
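We can see the new pretty-printing rule in action by staging out a function that contains a nested `jit` (an illustrative example, assuming the definitions above): the `xla_call` equation prints with its inner jaxpr indented beneath it.
```
jaxpr, consts, _ = make_jaxpr(lambda x: jit(lambda y: y * 2.)(x),
                              raise_to_shaped(get_aval(3.)))
print(jaxpr)  # shows an xla_call equation with the inner jaxpr indented
```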
#### Part 4: `linearize` and `vjp` (and `grad`!)[#](#part-4-linearize-and-vjp-and-grad)
The `linearize` and `vjp` autodiff functions are built on `jvp`, but involve jaxprs as well. That’s because both involve staging out, or delaying,
computation.
##### `linearize`[#](#linearize)
In the case of `linearize`, we want to stage out the linear part of a `jvp`
computation. That is, in terms of
[Haskell-like type signatures](https://wiki.haskell.org/Type_signature),
if we have `jvp : (a -> b) -> (a, T a) -> (b, T b)`,
then we write `linearize : (a -> b) -> a -> (b, T a -o T b)`, using `T a` to mean “the tangent type of `a`” and using the “lollipop” `-o` rather than the arrow `->` to indicate a *linear* function. We define the semantics of
`linearize` in terms of `jvp` too:
```
y, f_lin = linearize(f, x)
y_dot = f_lin(x_dot)
```
gives the same result for `(y, y_dot)` as
```
y, y_dot = jvp(f, (x,), (x_dot,))
```
where the application of `f_lin` does not redo any of the linearization work.
We’ll represent the delayed linear part `f_lin : T a -o T b` as a jaxpr.
Tangentially, now that we have linear arrows `-o`, we can provide a slightly more informative type for `jvp`:
```
jvp : (a -> b) -> (UnrestrictedUse a, T a) -o (UnrestrictedUse b, T b)
```
Here we’re writing `UnrestrictedUse` just to indicate that we have a special pair where the first element can be used in an unrestricted (nonlinear) way.
In conjunction with the linear arrow, this notation is just meant to express that the function `jvp f` uses its first input in a nonlinear way but its second input in a linear way, producing a corresponding nonlinear output
(which can be used in a nonlinear way) paired with a linear output. This more refined type signature encodes the data dependencies in `jvp f`, which are useful for partial evaluation.
To build the `f_lin` jaxpr from a JVP, we need to perform partial evaluation:
we evaluate all the primal values as we trace, but stage the tangent computations into a jaxpr. This is our second way to build jaxprs. But where
`make_jaxpr` and its underlying `JaxprTrace`/`JaxprTracer` interpreters aim to stage out every primitive bind, this second approach stages out only those primitive binds with a data dependence on tangent inputs.
First, some utilities:
```
def split_half(lst: list[Any]) -> tuple[list[Any], list[Any]]:
assert not len(lst) % 2
return split_list(lst, len(lst) // 2)
def merge_lists(which: list[bool], l1: list[Any], l2: list[Any]) -> list[Any]:
l1, l2 = iter(l1), iter(l2)
out = [next(l2) if b else next(l1) for b in which]
assert next(l1, None) is next(l2, None) is None
return out
```
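A couple of quick checks of these utilities (with `split_list` from Part 1): `split_half` cuts a list of primals-and-tangents in two, and `merge_lists` interleaves known and unknown values back into their original positions:
```
primals, tangents = split_half([1., 2., 10., 20.])
print(primals, tangents)  # [1.0, 2.0] [10.0, 20.0]

which_unknown = [False, True, False, True]
print(merge_lists(which_unknown, ['k1', 'k2'], ['u1', 'u2']))
# ['k1', 'u1', 'k2', 'u2']
```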
Next, we’ll write `linearize` by combining `jvp` together with a general partial evaluation transformation, to be added next:
```
def linearize_flat(f, *primals_in):
pvals_in = ([PartialVal.known(x) for x in primals_in] +
[PartialVal.unknown(vspace(get_aval(x))) for x in primals_in])
def f_jvp(*primals_tangents_in):
primals_out, tangents_out = jvp(f, *split_half(primals_tangents_in))
return [*primals_out, *tangents_out]
jaxpr, pvals_out, consts = partial_eval_flat(f_jvp, pvals_in)
primal_pvals, _ = split_half(pvals_out)
assert all(pval.is_known for pval in primal_pvals)
primals_out = [pval.const for pval in primal_pvals]
f_lin = lambda *tangents: eval_jaxpr(jaxpr, [*consts, *tangents])
return primals_out, f_lin
def linearize(f, *primals_in):
primals_in_flat, in_tree = tree_flatten(primals_in)
f, out_tree = flatten_fun(f, in_tree)
primals_out_flat, f_lin_flat = linearize_flat(f, *primals_in_flat)
primals_out = tree_unflatten(out_tree(), primals_out_flat)
def f_lin(*tangents_in):
tangents_in_flat, in_tree2 = tree_flatten(tangents_in)
if in_tree != in_tree2: raise TypeError
tangents_out_flat = f_lin_flat(*tangents_in_flat)
return tree_unflatten(out_tree(), tangents_out_flat)
return primals_out, f_lin
def vspace(aval: ShapedArray) -> ShapedArray:
return raise_to_shaped(aval) # TODO handle integers?
```
Now we turn to the general partial evaluation transformation. The goal is to accept a Python callable and a list of inputs, some known and some unknown,
and to produce (1) all the outputs which can be computed from the known inputs, together with (2) a jaxpr representing the part of the Python callable’s computation which can only be performed after the remaining inputs are known.
This transformation is tricky to summarize in a type signature. If we assume the input function’s type signature is `(a1, a2) -> (b1, b2)`, where
`a1` and `a2` represent the known and unknown inputs, respectively, and where
`b1` only has a data dependency on `a1` while `b2` has some data dependency on
`a2`, then we might write
```
partial_eval : ((a1, a2) -> (b1, b2)) -> a1 -> exists r. (b1, r, (r, a2) -> b2)
```
In words, given values for the inputs of type `a1`, `partial_eval` produces the outputs of type `b1` along with “residual” values of existentially-quantified type `r` representing the intermediates required to complete the computation in the second stage. It also produces a function of type `(r, a2) -> b2` which accepts the residual values as well as the remaining inputs and produces the remaining outputs.
We like to think of partial evaluation as “unzipping” one computation into two. For example, consider this jaxpr:
```
{ lambda a:float64[] .
let b:float64[] = sin a
c:float64[] = neg b
in ( c ) }
```
A jaxpr for the JVP would look like:
```
{ lambda a:float64[] b:float64[] .
let c:float64[] = sin a
d:float64[] = cos a
e:float64[] = mul d b
f:float64[] = neg c
g:float64[] = neg e
in ( f, g ) }
```
If we imagine applying partial evaluation to this jaxpr with the first input known and the second unknown, we end up ‘unzipping’ the JVP jaxpr into primal and tangent jaxprs:
```
{ lambda a:float64[] .
let c:float64[] = sin a
d:float64[] = cos a
f:float64[] = neg c
in ( f, d ) }
```
```
{ lambda d:float64[] b:float64[] .
let e:float64[] = mul d b
g:float64[] = neg e
in ( g ) }
```
This second jaxpr represents the linear computation that we want from
`linearize`.
However, unlike in this jaxpr example, we want the computation on known values to occur while evaluating the input Python callable. That is, rather than forming a jaxpr for the entire function `(a1, a2) -> (b1, b2)`, staging all operations out of Python first before sorting out what can be evaluated now and what must be delayed, we want only to form a jaxpr for those operations that *must* be delayed due to a dependence on unknown inputs. In the context of automatic differentiation, this is the feature that ultimately enables us to handle functions like `grad(lambda x: x**2 if x > 0 else 0.)`. Python control flow works because partial evaluation keeps the primal computation in Python. As a consequence, our `Trace` and `Tracer` subclasses must on the fly sort out what can be evaluated and what must be staged out into a jaxpr.
First, we start with a `PartialVal` class, which represents a value that can be either known or unknown:
```
class PartialVal(NamedTuple):
aval: ShapedArray
const: Optional[Any]
@classmethod
def known(cls, val: Any):
return PartialVal(get_aval(val), val)
@classmethod
def unknown(cls, aval: ShapedArray):
return PartialVal(aval, None)
is_known = property(lambda self: self.const is not None)
is_unknown = property(lambda self: self.const is None)
```
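For example (assuming the definitions above), a known `PartialVal` carries its constant, while an unknown one carries only an abstract value:
```
known = PartialVal.known(3.)
unknown = PartialVal.unknown(ShapedArray((), np.dtype('float64')))
print(known.is_known, known.const)        # True 3.0
print(unknown.is_unknown, unknown.const)  # True None
```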
Partial evaluation will take a list of `PartialVal`s representing inputs, and return a list of `PartialVal` outputs along with a jaxpr representing the delayed computation:
```
def partial_eval_flat(f: Callable, pvals_in: list[PartialVal]
) -> tuple[Jaxpr, list[PartialVal], list[Any]]:
with new_main(PartialEvalTrace) as main:
trace = PartialEvalTrace(main)
tracers_in = [trace.new_arg(pval) for pval in pvals_in]
outs = f(*tracers_in)
tracers_out = [full_raise(trace, out) for out in outs]
pvals_out = [t.pval for t in tracers_out]
unk_tracers_in = [t for t in tracers_in if t.pval.is_unknown]
unk_tracers_out = [t for t in tracers_out if t.pval.is_unknown]
jaxpr, consts = tracers_to_jaxpr(unk_tracers_in, unk_tracers_out)
return jaxpr, pvals_out, consts
```
Next we need to implement `PartialEvalTrace` and its `PartialEvalTracer`. This interpreter will build a jaxpr on the fly while tracking data dependencies. To do so, it builds a bipartite directed acyclic graph (DAG) between
`PartialEvalTracer` nodes, representing staged-out values, and `JaxprRecipe`
nodes, representing formulas for how to compute some values from others. One kind of recipe is a `JaxprEqnRecipe`, corresponding to a `JaxprEqn`’s primitive application, but we also have recipe types for constants and lambda binders:
```
from weakref import ref, ReferenceType
class LambdaBindingRecipe(NamedTuple):
pass
class ConstRecipe(NamedTuple):
val: Any
class JaxprEqnRecipe(NamedTuple):
prim: Primitive
tracers_in: list['PartialEvalTracer']
params: dict[str, Any]
avals_out: list[ShapedArray]
tracer_refs_out: list['ReferenceType[PartialEvalTracer]']
JaxprRecipe = Union[LambdaBindingRecipe, ConstRecipe, JaxprEqnRecipe]
```
```
class PartialEvalTracer(Tracer):
pval: PartialVal
recipe: Optional[JaxprRecipe]
def __init__(self, trace, pval, recipe):
self._trace = trace
self.pval = pval
self.recipe = recipe
aval = property(lambda self: self.pval.aval)
def full_lower(self):
if self.pval.is_known:
return full_lower(self.pval.const)
return self
```
The `PartialEvalTrace` contains the logic for constructing the graph of
`JaxprRecipe`s and `PartialEvalTracer`s. Each argument corresponds to a
`LambdaBindingRecipe` leaf node, and each constant is a `ConstRecipe` leaf node holding a reference to the constant. All other tracers and recipes come from `process_primitive`, which forms tracers with `JaxprEqnRecipe`s.
For most primitives, the `process_primitive` logic is straightforward: if all inputs are known then we can bind the primitive on the known values
(evaluating it in Python) and avoid forming tracers corresponding to the output. If instead any input is unknown then we instead stage out into a
`JaxprEqnRecipe` representing the primitive application. To build the tracers representing unknown outputs, we need avals, which we get from the abstract eval rules. (Notice that tracers reference `JaxprEqnRecipe`s, and
`JaxprEqnRecipe`s reference tracers; we avoid circular garbage by using weakrefs.)
That `process_primitive` logic applies to most primitives, but `xla_call_p`
requires recursive treatment. So we special-case its rule in a
`partial_eval_rules` dict.
```
class PartialEvalTrace(Trace):
def new_arg(self, pval: PartialVal) -> Any:
return PartialEvalTracer(self, pval, LambdaBindingRecipe())
def lift(self, val: Any) -> PartialEvalTracer:
return PartialEvalTracer(self, PartialVal.known(val), None)
pure = lift
def instantiate_const(self, tracer: PartialEvalTracer) -> PartialEvalTracer:
if tracer.pval.is_unknown:
return tracer
else:
pval = PartialVal.unknown(raise_to_shaped(tracer.aval))
return PartialEvalTracer(self, pval, ConstRecipe(tracer.pval.const))
def process_primitive(self, primitive, tracers, params):
if all(t.pval.is_known for t in tracers):
return bind(primitive, *map(full_lower, tracers), **params)
rule = partial_eval_rules.get(primitive)
if rule: return rule(self, tracers, **params)
tracers_in = [self.instantiate_const(t) for t in tracers]
avals_in = [t.aval for t in tracers_in]
avals_out = abstract_eval_rules[primitive](*avals_in, **params)
tracers_out = [PartialEvalTracer(self, PartialVal.unknown(aval), None)
for aval in avals_out]
eqn = JaxprEqnRecipe(primitive, tracers_in, params, avals_out,
map(ref, tracers_out))
for t in tracers_out: t.recipe = eqn
return tracers_out
partial_eval_rules = {}
```
Now that we can build graph representations of jaxprs with `PartialEvalTrace`,
we need a mechanism to convert the graph representation to a standard jaxpr.
The jaxpr corresponds to a topological sort of the graph.
```
def tracers_to_jaxpr(tracers_in: list[PartialEvalTracer],
tracers_out: list[PartialEvalTracer]):
tracer_to_var: dict[int, Var] = {id(t): Var(raise_to_shaped(t.aval))
for t in tracers_in}
constvar_to_val: dict[int, Any] = {}
constid_to_var: dict[int, Var] = {}
processed_eqns: set[int] = set()
eqns: list[JaxprEqn] = []
for t in toposort(tracers_out, tracer_parents):
if isinstance(t.recipe, LambdaBindingRecipe):
assert id(t) in set(map(id, tracers_in))
elif isinstance(t.recipe, ConstRecipe):
val = t.recipe.val
var = constid_to_var.get(id(val))
if var is None:
aval = raise_to_shaped(get_aval(val))
var = constid_to_var[id(val)] = Var(aval)
constvar_to_val[var] = val
tracer_to_var[id(t)] = var
elif isinstance(t.recipe, JaxprEqnRecipe):
if id(t.recipe) not in processed_eqns:
eqns.append(recipe_to_eqn(tracer_to_var, t.recipe))
processed_eqns.add(id(t.recipe))
else:
raise TypeError(t.recipe)
constvars, constvals = unzip2(constvar_to_val.items())
in_binders = constvars + [tracer_to_var[id(t)] for t in tracers_in]
out_vars = [tracer_to_var[id(t)] for t in tracers_out]
jaxpr = Jaxpr(in_binders, eqns, out_vars)
typecheck_jaxpr(jaxpr)
return jaxpr, constvals
def recipe_to_eqn(tracer_to_var: dict[int, Var], recipe: JaxprEqnRecipe
) -> JaxprEqn:
inputs = [tracer_to_var[id(t)] for t in recipe.tracers_in]
out_binders = [Var(aval) for aval in recipe.avals_out]
for t_ref, var in zip(recipe.tracer_refs_out, out_binders):
if t_ref() is not None: tracer_to_var[id(t_ref())] = var
return JaxprEqn(recipe.prim, inputs, recipe.params, out_binders)
def tracer_parents(t: PartialEvalTracer) -> list[PartialEvalTracer]:
return t.recipe.tracers_in if isinstance(t.recipe, JaxprEqnRecipe) else []
```
```
def toposort(out_nodes: list[Any], parents: Callable[[Any], list[Any]]):
if not out_nodes: return []
out_nodes = remove_duplicates(out_nodes)
child_counts = {}
stack = list(out_nodes)
while stack:
node = stack.pop()
if id(node) in child_counts:
child_counts[id(node)] += 1
else:
child_counts[id(node)] = 1
stack.extend(parents(node))
for node in out_nodes:
child_counts[id(node)] -= 1
sorted_nodes = []
childless_nodes = [node for node in out_nodes if not child_counts[id(node)]]
while childless_nodes:
node = childless_nodes.pop()
sorted_nodes.append(node)
for parent in parents(node):
if child_counts[id(parent)] == 1:
childless_nodes.append(parent)
else:
child_counts[id(parent)] -= 1
sorted_nodes = sorted_nodes[::-1]
check_toposort(sorted_nodes, parents)
return sorted_nodes
def remove_duplicates(lst):
seen = set()
return [x for x in lst if id(x) not in seen and not seen.add(id(x))]
def check_toposort(nodes: list[Any], parents: Callable[[Any], list[Any]]):
seen = set()
for node in nodes:
assert all(id(parent) in seen for parent in parents(node))
seen.add(id(node))
```
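To make the sorting behavior concrete, here’s `toposort` run on a tiny hand-built diamond DAG (the `Node` class is hypothetical, just for illustration): every parent appears before its children in the result.
```
class Node:
  def __init__(self, name, parents=()):
    self.name, self.parents = name, list(parents)

a = Node('a'); b = Node('b', [a]); c = Node('c', [a]); d = Node('d', [b, c])
print([n.name for n in toposort([d], lambda n: n.parents)])
# ['a', 'b', 'c', 'd']  (any order with 'b' and 'c' swapped is also valid)
```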
Now we can linearize!
```
y, sin_lin = linearize(sin, 3.)
print(y, sin(3.))
print(sin_lin(1.), cos(3.))
```
```
0.1411200080598672 0.1411200080598672
-0.9899924966004454 -0.9899924966004454
```
To handle `linearize`-of-`jit`, we still need to write a partial evaluation rule for `xla_call_p`. Other than tracer bookkeeping, the main task is to perform partial evaluation of a jaxpr, ‘unzipping’ it into two jaxprs.
There are actually two rules to write: one for trace-time partial evaluation,
which we’ll call `xla_call_partial_eval`, and one for partial evaluation of jaxprs, which we’ll call `xla_call_peval_eqn`.
```
def xla_call_partial_eval(trace, tracers, *, jaxpr, num_consts):
del num_consts # Unused
in_unknowns = [not t.pval.is_known for t in tracers]
jaxpr1, jaxpr2, out_unknowns, num_res = partial_eval_jaxpr(jaxpr, in_unknowns)
known_tracers, unknown_tracers = partition_list(in_unknowns, tracers)
known_vals = [t.pval.const for t in known_tracers]
outs1_res = bind(xla_call_p, *known_vals, jaxpr=jaxpr1, num_consts=0)
outs1, res = split_list(outs1_res, len(jaxpr1.outs) - num_res)
res_tracers = [trace.instantiate_const(full_raise(trace, x)) for x in res]
outs2 = [PartialEvalTracer(trace, PartialVal.unknown(v.aval), None)
for v in jaxpr2.outs]
eqn = JaxprEqnRecipe(xla_call_p, res_tracers + unknown_tracers,
dict(jaxpr=jaxpr2, num_consts=0),
[v.aval for v in jaxpr2.outs], map(ref, outs2))
for t in outs2: t.recipe = eqn
return merge_lists(out_unknowns, outs1, outs2)
partial_eval_rules[xla_call_p] = xla_call_partial_eval
def partial_eval_jaxpr(jaxpr: Jaxpr, in_unknowns: list[bool],
instantiate: Optional[list[bool]] = None,
) -> tuple[Jaxpr, Jaxpr, list[bool], int]:
env: dict[Var, bool] = {}
residuals: set[Var] = set()
def read(x: Atom) -> bool:
return type(x) is Var and env[x]
def write(unk: bool, v: Var) -> None:
env[v] = unk
def new_res(x: Atom) -> Atom:
if type(x) is Var: residuals.add(x)
return x
eqns1, eqns2 = [], []
map(write, in_unknowns, jaxpr.in_binders)
for eqn in jaxpr.eqns:
unks_in = map(read, eqn.inputs)
rule = partial_eval_jaxpr_rules.get(eqn.primitive)
if rule:
eqn1, eqn2, unks_out, res = rule(unks_in, eqn)
eqns1.append(eqn1); eqns2.append(eqn2); residuals.update(res)
map(write, unks_out, eqn.out_binders)
elif any(unks_in):
inputs = [v if unk else new_res(v) for unk, v in zip(unks_in, eqn.inputs)]
eqns2.append(JaxprEqn(eqn.primitive, inputs, eqn.params, eqn.out_binders))
map(partial(write, True), eqn.out_binders)
else:
eqns1.append(eqn)
map(partial(write, False), eqn.out_binders)
out_unknowns = map(read, jaxpr.outs)
if instantiate is not None:
for v, uk, inst in zip(jaxpr.outs, out_unknowns, instantiate):
if inst and not uk: new_res(v)
out_unknowns = map(op.or_, out_unknowns, instantiate)
residuals, num_res = list(residuals), len(residuals)
assert all(type(v) is Var for v in residuals), residuals
ins1, ins2 = partition_list(in_unknowns, jaxpr.in_binders)
outs1, outs2 = partition_list(out_unknowns, jaxpr.outs)
jaxpr1 = Jaxpr(ins1, eqns1, outs1 + residuals)
jaxpr2 = Jaxpr(residuals + ins2, eqns2, outs2)
typecheck_partial_eval_jaxpr(jaxpr, in_unknowns, out_unknowns, jaxpr1, jaxpr2)
return jaxpr1, jaxpr2, out_unknowns, num_res
def typecheck_partial_eval_jaxpr(jaxpr, unks_in, unks_out, jaxpr1, jaxpr2):
jaxprty = typecheck_jaxpr(jaxpr) # (a1, a2) -> (b1, b2 )
jaxpr1ty = typecheck_jaxpr(jaxpr1) # a1 -> (b1, res)
jaxpr2ty = typecheck_jaxpr(jaxpr2) # (res, a2) -> b2
a1, a2 = partition_list(unks_in, jaxprty.in_types)
b1, b2 = partition_list(unks_out, jaxprty.out_types)
b1_, res = split_list(jaxpr1ty.out_types, len(b1))
res_, a2_ = split_list(jaxpr2ty.in_types, len(res))
b2_ = jaxpr2ty.out_types
if jaxpr1ty.in_types != a1: raise TypeError
if jaxpr2ty.out_types != b2: raise TypeError
if b1 != b1_: raise TypeError
if res != res_: raise TypeError
if a2 != a2_: raise TypeError
if b2 != b2_: raise TypeError
partial_eval_jaxpr_rules = {}
def xla_call_peval_eqn(unks_in: list[bool], eqn: JaxprEqn,
) -> tuple[JaxprEqn, JaxprEqn, list[bool], list[Var]]:
jaxpr = eqn.params['jaxpr']
jaxpr1, jaxpr2, unks_out, num_res = partial_eval_jaxpr(jaxpr, unks_in)
ins1, ins2 = partition_list(unks_in, eqn.inputs)
out_binders1, out_binders2 = partition_list(unks_out, eqn.out_binders)
residuals = [Var(v.aval) for v in jaxpr2.in_binders[:num_res]]
eqn1 = JaxprEqn(xla_call_p, ins1, dict(jaxpr=jaxpr1, num_consts=0),
out_binders1 + residuals)
eqn2 = JaxprEqn(xla_call_p, residuals + ins2,
dict(jaxpr=jaxpr2, num_consts=0), out_binders2)
  return eqn1, eqn2, unks_out, residuals

partial_eval_jaxpr_rules[xla_call_p] = xla_call_peval_eqn
```
With that, we can compose `linearize` and `jit` however we like:
```
@jit
def f(x):
y = sin(x) * 2.
z = - y + x
return z
y, f_lin = linearize(f, 3.)
y_dot = f_lin(1.)
print(y, y_dot)
```
```
2.7177599838802657 2.979984993200891
```
```
@jit
def f(x):
y = sin(x) * 2.
z = g(x, y)
return z
@jit
def g(x, y):
return cos(x) + y
y, f_lin = linearize(f, 3.)
y_dot = f_lin(1.)
print(y, y_dot)
```
```
-0.7077524804807109 -2.121105001260758
```
##### `vjp` and `grad`[#](#vjp-and-grad)
The `vjp` transformation works a lot like linearize. Its type signature is analogous:
```
linearize : (a -> b) -> a -> (b, T a -o T b)
vjp : (a -> b) -> a -> (b, T b -o T a)
```
The only difference is that we transpose the linear part of the computation before returning it, so that it goes from type `T a -o T b` to type `T b -o T a`. That is, we’ll implement `vjp` as, essentially,
```
def vjp(f, x):
y, f_lin = linearize(f, x)
f_vjp = lambda y_bar: transpose(f_lin)(y_bar)
return y, f_vjp
```
Since we have the linear computation as a jaxpr, not just a Python callable,
we can implement the transpose transformation as a jaxpr interpreter.
```
def vjp_flat(f, *primals_in):
pvals_in = ([PartialVal.known(x) for x in primals_in] +
[PartialVal.unknown(vspace(get_aval(x))) for x in primals_in])
primal_pvals_in, tangent_pvals_in = split_half(pvals_in)
def f_jvp(*primals_tangents_in):
primals_out, tangents_out = jvp(f, *split_half(primals_tangents_in))
return [*primals_out, *tangents_out]
jaxpr, pvals_out, consts = partial_eval_flat(f_jvp, pvals_in) # linearize
primal_pvals, _ = split_half(pvals_out)
assert all(pval.is_known for pval in primal_pvals)
primals_out = [pval.const for pval in primal_pvals]
transpose_inputs = consts + [UndefPrimal(p.aval) for p in tangent_pvals_in]
f_vjp = lambda *cts: eval_jaxpr_transposed(jaxpr, transpose_inputs, cts)
return primals_out, f_vjp
def vjp(f, *primals_in):
primals_in_flat, in_tree = tree_flatten(primals_in)
f, out_tree = flatten_fun(f, in_tree)
primals_out_flat, f_vjp_flat = vjp_flat(f, *primals_in_flat)
primals_out = tree_unflatten(out_tree(), primals_out_flat)
def f_vjp(*cotangents_out):
cotangents_out_flat, _ = tree_flatten(cotangents_out)
cotangents_in_flat = f_vjp_flat(*cotangents_out_flat)
return tree_unflatten(in_tree, cotangents_in_flat)
return primals_out, f_vjp
class UndefPrimal(NamedTuple):
aval: ShapedArray
register_pytree_node(UndefPrimal,
lambda u: (u.aval, ()),
lambda aval, _: UndefPrimal(aval))
```
We use `UndefPrimal` instances to indicate the arguments with respect to which we want to transpose. These arise because in general, being explicit about closed-over values, we want to transpose functions of type
`a -> b -o c` to functions of type `a -> c -o b`. Even more generally, the inputs with respect to which the function is linear could be scattered through the argument list. So we indicate the linear positions using `UndefPrimal`.
We register `UndefPrimal` as a pytree node because the pytree mechanism gives a handy way to prune these placeholders out of argument lists.
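For instance (assuming the definitions above), flattening a structure that contains an `UndefPrimal` placeholder yields no leaf for it:
```
vals, _ = tree_flatten((UndefPrimal(ShapedArray((), np.dtype('float64'))), 2.))
print(vals)  # [2.0] -- the placeholder contributes no leaves
```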
Next, we can write `eval_jaxpr_transposed`, along with transpose rules for all primitives which can be linear in at least one argument:
```
# NB: the analogous function in JAX is called 'backward_pass'
def eval_jaxpr_transposed(jaxpr: Jaxpr, args: list[Any], cotangents: list[Any]
) -> list[Any]:
primal_env: dict[Var, Any] = {}
ct_env: dict[Var, Any] = {}
def read_primal(x: Atom) -> Any:
return primal_env.get(x, UndefPrimal(x.aval)) if type(x) is Var else x.val
def write_primal(v: Var, val: Any) -> None:
if type(val) is not UndefPrimal:
primal_env[v] = val
def read_cotangent(v: Var) -> Any:
return ct_env.pop(v, np.zeros(v.aval.shape, v.aval.dtype))
def write_cotangent(x: Atom, val: Any):
if type(x) is Var and val is not None:
ct_env[x] = add(ct_env[x], val) if x in ct_env else val
map(write_primal, jaxpr.in_binders, args)
map(write_cotangent, jaxpr.outs, cotangents)
for eqn in jaxpr.eqns[::-1]:
primals_in = map(read_primal, eqn.inputs)
cts_in = map(read_cotangent, eqn.out_binders)
rule = transpose_rules[eqn.primitive]
cts_out = rule(cts_in, *primals_in, **eqn.params)
map(write_cotangent, eqn.inputs, cts_out)
return [read_cotangent(v) for v, x in zip(jaxpr.in_binders, args)
if type(x) is UndefPrimal]
transpose_rules = {}
```
```
def mul_transpose_rule(cts, x, y):
z_bar, = cts
assert (type(x) is UndefPrimal) ^ (type(y) is UndefPrimal)
return [mul(z_bar, y), None] if type(x) is UndefPrimal else [None, mul(x, z_bar)]
transpose_rules[mul_p] = mul_transpose_rule
def neg_transpose_rule(cts, x):
ybar, = cts
assert type(x) is UndefPrimal
return [neg(ybar)]
transpose_rules[neg_p] = neg_transpose_rule
def add_transpose_rule(cts, x, y):
z_bar, = cts
return [z_bar, z_bar]
transpose_rules[add_p] = add_transpose_rule
def reduce_sum_transpose_rule(cts, x, *, axis):
y_bar, = cts
return [broadcast(y_bar, x.aval.shape, axis)]
transpose_rules[reduce_sum_p] = reduce_sum_transpose_rule
def xla_call_transpose_rule(cts, *invals, jaxpr, num_consts):
del num_consts # Unused
undef_primals = [type(x) is UndefPrimal for x in invals]
transposed_jaxpr, new_consts = transpose_jaxpr(jaxpr, tuple(undef_primals))
residuals, _ = partition_list(undef_primals, invals)
outs = bind(xla_call_p, *new_consts, *residuals, *cts,
jaxpr=transposed_jaxpr, num_consts=len(new_consts))
outs = iter(outs)
return [next(outs) if undef else None for undef in undef_primals]
transpose_rules[xla_call_p] = xla_call_transpose_rule
@lru_cache()
def transpose_jaxpr(jaxpr: Jaxpr, undef_primals: tuple[bool, ...]
) -> tuple[Jaxpr, list[Any]]:
avals_in, avals_out = typecheck_jaxpr(jaxpr)
traceable = partial(eval_jaxpr_transposed, jaxpr)
args = [UndefPrimal(a) if u else a for a, u in zip(avals_in, undef_primals)]
trans_jaxpr, consts, _ = make_jaxpr(traceable, tuple(args), tuple(avals_out))
typecheck_jaxpr(trans_jaxpr)
return trans_jaxpr, consts
```
Now that we can linearize and transpose, we can finally write `grad`:
```
def grad(f):
def gradfun(x, *xs):
y, f_vjp = vjp(f, x, *xs)
if np.shape(y) != (): raise TypeError
x_bar, *_ = f_vjp(np.ones(np.shape(y), np.result_type(y)))
return x_bar
return gradfun
```
```
y, f_vjp = vjp(sin, 3.)
print(f_vjp(1.), cos(3.))
```
```
(-0.9899924966004454,) -0.9899924966004454
```
```
def f(x):
y = sin(x) * 2.
z = - y + x
return z
print(grad(f)(3.))
```
```
2.979984993200891
```
```
@jit
def f(x):
y = x * 2.
z = g(y)
return z
@jit
def g(x):
return cos(x) * 2.
print(grad(f)(3.))
```
```
1.1176619927957034
```
Here’s something of a compositionality stress test:
```
# from core_test.py fun_with_nested_calls_2
def foo(x):
@jit
def bar(y):
def baz(w):
q = jit(lambda x: y)(x)
q = q + jit(lambda: y)()
q = q + jit(lambda y: w + y)(y)
q = jit(lambda w: jit(sin)(x) * y)(1.0) + q
return q
p, t = jvp(baz, (x + 1.0,), (y,))
return t + (x * p)
  return bar(x)

f = foo  # the stress test below exercises this nested-calls function
def assert_allclose(*vals):
for v1, v2 in zip(vals[:-1], vals[1:]):
np.testing.assert_allclose(v1, v2)
ans1 = f(3.)
ans2 = jit(f)(3.)
ans3, _ = jvp(f, (3.,), (5.,))
ans4, _ = jvp(jit(f), (3.,), (5.,))
assert_allclose(ans1, ans2, ans3, ans4)
deriv1 = grad(f)(3.)
deriv2 = grad(jit(f))(3.)
deriv3 = jit(grad(jit(f)))(3.)
_, deriv4 = jvp(f, (3.,), (1.,))
_, deriv5 = jvp(jit(f), (3.,), (1.,))
assert_allclose(deriv1, deriv2, deriv3, deriv4, deriv5)
hess1 = grad(grad(f))(3.)
hess2 = grad(grad(jit(f)))(3.)
hess3 = grad(jit(grad(f)))(3.)
hess4 = jit(grad(grad(f)))(3.)
_, hess5 = jvp(grad(f), (3.,), (1.,))
_, hess6 = jvp(jit(grad(f)), (3.,), (1.,))
_, hess7 = jvp(jit(grad(f)), (3.,), (1.,))
assert_allclose(hess1, hess2, hess3, hess4, hess5, hess6, hess7)
```
#### Part 5: the control flow primitives `cond`[#](#part-5-the-control-flow-primitives-cond)
Next we’ll add higher-order primitives for staged-out control flow. These resemble `jit` from Part 3, another higher-order primitive, but differ in that they are parameterized by multiple callables rather than just one.
##### Adding `cond`[#](#adding-cond)
We introduce a `cond` primitive to represent conditional application of one function or another inside a jaxpr. We write the type of `cond` as
`Bool -> (a -> b) -> (a -> b) -> a -> b`. In words, `cond` takes a boolean representing the predicate and two functions of equal types. Depending on the value of the predicate, it applies one function or the other to its final argument.
In Python, we represent it as a function which itself takes two functions as arguments. As with `jit`, the first step is to call `make_jaxpr` on its callable arguments to turn them into jaxprs:
```
def cond(pred, true_fn, false_fn, *operands):
avals_in = [raise_to_shaped(get_aval(x)) for x in operands]
true_jaxpr, true_consts, out_tree = make_jaxpr(true_fn, *avals_in)
false_jaxpr, false_consts, out_tree_ = make_jaxpr(false_fn, *avals_in)
if out_tree != out_tree_: raise TypeError
true_jaxpr, false_jaxpr = _join_jaxpr_consts(
true_jaxpr, false_jaxpr, len(true_consts), len(false_consts))
if typecheck_jaxpr(true_jaxpr) != typecheck_jaxpr(false_jaxpr):
raise TypeError
outs = bind_cond(pred, *true_consts, *false_consts, *operands,
true_jaxpr=true_jaxpr, false_jaxpr=false_jaxpr)
return tree_unflatten(out_tree, outs)
cond_p = Primitive('cond')
def _join_jaxpr_consts(jaxpr1: Jaxpr, jaxpr2: Jaxpr, n1: int, n2: int
) -> tuple[Jaxpr, Jaxpr]:
jaxpr1_type, jaxpr2_type = typecheck_jaxpr(jaxpr1), typecheck_jaxpr(jaxpr2)
assert jaxpr1_type.in_types[n1:] == jaxpr2_type.in_types[n2:]
consts1, rest1 = split_list(jaxpr1.in_binders, n1)
consts2, rest2 = split_list(jaxpr2.in_binders, n2)
new_jaxpr1 = Jaxpr(consts1 + consts2 + rest1, jaxpr1.eqns, jaxpr1.outs)
new_jaxpr2 = Jaxpr(consts1 + consts2 + rest2, jaxpr2.eqns, jaxpr2.outs)
return new_jaxpr1, new_jaxpr2
def bind_cond(pred, *args, true_jaxpr, false_jaxpr):
assert len(args) == len(true_jaxpr.in_binders) == len(false_jaxpr.in_binders)
return bind(cond_p, pred, *args, true_jaxpr=true_jaxpr, false_jaxpr=false_jaxpr)
```
We require `true_jaxpr` and `false_jaxpr` to have the same type, but because they might close over different constants (and because jaxprs can only represent closed terms, i.e. can’t have free variables and are instead closure-converted) we need to use the helper `_join_jaxpr_consts` to make consistent the input binder lists of the two jaxprs. (To be more economical we could try to identify pairs of constants with the same shapes, but instead we just concatenate the lists of constants.)
Next we can turn to adding interpreter rules for `cond`. Its evaluation rule is simple:
```
def cond_impl(pred, *operands, true_jaxpr, false_jaxpr):
if pred:
return eval_jaxpr(true_jaxpr, operands)
else:
return eval_jaxpr(false_jaxpr, operands)
impl_rules[cond_p] = cond_impl
```
```
out = cond(True, lambda: 3, lambda: 4)
print(out)
```
```
3
```
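`cond` also forwards operands to whichever branch the predicate selects (assuming the definitions above):
```
print(cond(True,  lambda x: x + 1., lambda x: x * 2., 5.))  # 6.0
print(cond(False, lambda x: x + 1., lambda x: x * 2., 5.))  # 10.0
```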
For its JVP and vmap rules, we only need to call the same `jvp_jaxpr` and
`vmap_jaxpr` utilities we created for `jit`, followed by another pass of
`_join_jaxpr_consts`:
```
def cond_jvp_rule(primals, tangents, *, true_jaxpr, false_jaxpr):
pred, *primals = primals
_ , *tangents = tangents
true_jaxpr , true_consts = jvp_jaxpr(true_jaxpr)
false_jaxpr, false_consts = jvp_jaxpr(false_jaxpr)
true_jaxpr, false_jaxpr = _join_jaxpr_consts(
true_jaxpr, false_jaxpr, len(true_consts), len(false_consts))
assert typecheck_jaxpr(true_jaxpr) == typecheck_jaxpr(false_jaxpr)
outs = bind_cond(pred, *true_consts, *false_consts, *primals, *tangents,
true_jaxpr=true_jaxpr, false_jaxpr=false_jaxpr)
primals_out, tangents_out = split_half(outs)
  return primals_out, tangents_out

jvp_rules[cond_p] = cond_jvp_rule
```
```
out, out_tan = jvp(lambda x: cond(True, lambda: x * x, lambda: 0.), (1.,), (1.,))
print(out_tan)
```
```
2.0
```
```
def cond_vmap_rule(axis_size, vals_in, dims_in, *, true_jaxpr, false_jaxpr):
pred , *vals_in = vals_in
pred_dim, *dims_in = dims_in
if pred_dim is not not_mapped: raise NotImplementedError # TODO
true_jaxpr, true_consts = vmap_jaxpr(true_jaxpr, axis_size, tuple(dims_in))
false_jaxpr, false_consts = vmap_jaxpr(false_jaxpr, axis_size, tuple(dims_in))
true_jaxpr, false_jaxpr = _join_jaxpr_consts(
true_jaxpr, false_jaxpr, len(true_consts), len(false_consts))
assert typecheck_jaxpr(true_jaxpr) == typecheck_jaxpr(false_jaxpr)
outs = bind_cond(pred, *true_consts, *false_consts, *vals_in,
true_jaxpr=true_jaxpr, false_jaxpr=false_jaxpr)
return outs, [0] * len(outs)
vmap_rules[cond_p] = cond_vmap_rule
```
```
xs = np.array([1., 2., 3])
out = vmap(lambda x: cond(True, lambda: x + 1., lambda: 0.), (0,))(xs)
print(out)
```
```
[2. 3. 4.]
```
Notice that we’re not currently supporting the case where the predicate value itself is batched. In mainline JAX, we handle this case by transforming the conditional to a [select primitive](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.select.html).
That transformation is semantically correct so long as `true_fun` and
`false_fun` do not involve any side-effecting primitives.
Another thing not represented here, but present in mainline JAX, is that applying transformations to two jaxprs of equal type might result in jaxprs of different types. For example, applying the mainline JAX version of
`vmap_jaxpr` to the identity-function jaxpr
```
{ lambda a:float32[] .
let
in ( a ) }
```
would result in a jaxpr with a batched output, of type
`[float32[10]] -> [float32[10]]` if the batch size were 10, while applying it to the zero-function jaxpr
```
{ lambda a:float32[] .
let
in ( 0. ) }
```
would result in a jaxpr with an unbatched output, of type
`[float32[10]] -> [float32[]]`. This is an optimization, aimed at not batching values unnecessarily. But it means that in `cond` we’d need an extra step of joining the two transformed jaxprs to have consistent output types. We don’t need this step here because we chose `vmap_jaxpr` always to batch all outputs over the leading axis.
Next we can turn to abstract evaluation and XLA lowering rules:
```
def cond_abstract_eval(pred_type, *in_types, true_jaxpr, false_jaxpr):
if pred_type != ShapedArray((), np.dtype('bool')): raise TypeError
jaxpr_type = typecheck_jaxpr(true_jaxpr)
if jaxpr_type != typecheck_jaxpr(false_jaxpr):
raise TypeError
if not all(t1 == t2 for t1, t2 in zip(jaxpr_type.in_types, in_types)):
raise TypeError
  return jaxpr_type.out_types

abstract_eval_rules[cond_p] = cond_abstract_eval
def cond_translation(c, in_avals, in_vals, *, true_jaxpr, false_jaxpr):
del in_avals # Unused
pred, *in_vals = in_vals
flat_vals, in_tree = tree_flatten(in_vals)
operand = xops.Tuple(c, flat_vals)
operand_shape = c.get_shape(operand)
def make_comp(name: str, jaxpr: Jaxpr) -> xe.XlaComputation:
c = xc.XlaBuilder(name)
operand = xops.Parameter(c, 0, operand_shape)
operands = tree_unflatten(in_tree, destructure_tuple(c, operand))
outs = jaxpr_subcomp(c, jaxpr, operands)
return c.build(xops.Tuple(c, outs))
true_comp = make_comp('true_fn', true_jaxpr)
false_comp = make_comp('false_fn', false_jaxpr)
int_etype = xc.dtype_to_etype(np.dtype('int32'))
out = xops.Conditional(xops.ConvertElementType(pred, int_etype),
[false_comp, true_comp], [operand] * 2)
return destructure_tuple(c, out)
xla_translations[cond_p] = cond_translation
```
```
out = jit(lambda: cond(False, lambda: 1, lambda: 2))()
print(out)
```
```
2
```
Finally, to support reverse-mode automatic differentiation, we need partial evaluation and transposition rules. For partial evaluation, we need to introduce another jaxpr-munging utility, `_join_jaxpr_res`, to handle the fact that applying partial evaluation to `true_fun` and `false_fun` will in general result in distinct residuals. We use `_join_jaxpr_res` to make the output types of the transformed jaxprs consistent (while `_join_jaxpr_consts` dealt with input types).
```
def cond_partial_eval(trace, tracers, *, true_jaxpr, false_jaxpr):
pred_tracer, *tracers = tracers
assert pred_tracer.pval.is_known
pred = pred_tracer.pval.const
in_uks = [not t.pval.is_known for t in tracers]
*jaxprs, out_uks, num_res = _cond_partial_eval(true_jaxpr, false_jaxpr, in_uks)
t_jaxpr1, f_jaxpr1, t_jaxpr2, f_jaxpr2 = jaxprs
known_tracers, unknown_tracers = partition_list(in_uks, tracers)
known_vals = [t.pval.const for t in known_tracers]
outs1_res = bind_cond(pred, *known_vals,
true_jaxpr=t_jaxpr1, false_jaxpr=f_jaxpr1)
outs1, res = split_list(outs1_res, len(outs1_res) - num_res)
pred_tracer_ = trace.instantiate_const(full_raise(trace, pred_tracer))
res_tracers = [trace.instantiate_const(full_raise(trace, x)) for x in res]
outs2 = [PartialEvalTracer(trace, PartialVal.unknown(v.aval), None)
for v in t_jaxpr2.outs]
eqn = JaxprEqnRecipe(cond_p, [pred_tracer_, *res_tracers, *unknown_tracers],
dict(true_jaxpr=t_jaxpr2, false_jaxpr=f_jaxpr2),
[v.aval for v in t_jaxpr2.outs], map(ref, outs2))
for t in outs2: t.recipe = eqn
return merge_lists(out_uks, outs1, outs2)
partial_eval_rules[cond_p] = cond_partial_eval
def _cond_partial_eval(true_jaxpr: Jaxpr, false_jaxpr: Jaxpr, in_uks: list[bool]
) -> tuple[Jaxpr, Jaxpr, Jaxpr, Jaxpr, list[bool], int]:
_, _, t_out_uks, _ = partial_eval_jaxpr(true_jaxpr , in_uks)
_, _, f_out_uks, _ = partial_eval_jaxpr(false_jaxpr, in_uks)
out_uks = map(op.or_, t_out_uks, f_out_uks)
t_jaxpr1, t_jaxpr2, _, t_nres = partial_eval_jaxpr(true_jaxpr , in_uks, out_uks)
f_jaxpr1, f_jaxpr2, _, f_nres = partial_eval_jaxpr(false_jaxpr, in_uks, out_uks)
t_jaxpr1, f_jaxpr1 = _join_jaxpr_res(t_jaxpr1, f_jaxpr1, t_nres, f_nres)
t_jaxpr2, f_jaxpr2 = _join_jaxpr_consts(t_jaxpr2, f_jaxpr2, t_nres, f_nres)
assert typecheck_jaxpr(t_jaxpr1) == typecheck_jaxpr(f_jaxpr1)
assert typecheck_jaxpr(t_jaxpr2) == typecheck_jaxpr(f_jaxpr2)
num_res = t_nres + f_nres
return t_jaxpr1, f_jaxpr1, t_jaxpr2, f_jaxpr2, out_uks, num_res
def _join_jaxpr_res(jaxpr1: Jaxpr, jaxpr2: Jaxpr, n1: int, n2: int
) -> tuple[Jaxpr, Jaxpr]:
jaxpr1_type, jaxpr2_type = typecheck_jaxpr(jaxpr1), typecheck_jaxpr(jaxpr2)
out_types1, _ = split_list(jaxpr1_type.out_types, len(jaxpr1.outs) - n1)
out_types2, _ = split_list(jaxpr2_type.out_types, len(jaxpr2.outs) - n2)
assert out_types1 == out_types2
outs1, res1 = split_list(jaxpr1.outs, len(jaxpr1.outs) - n1)
outs2, res2 = split_list(jaxpr2.outs, len(jaxpr2.outs) - n2)
zeros_like1 = [Lit(np.zeros(v.aval.shape, v.aval.dtype)) for v in res1]
zeros_like2 = [Lit(np.zeros(v.aval.shape, v.aval.dtype)) for v in res2]
new_jaxpr1 = Jaxpr(jaxpr1.in_binders, jaxpr1.eqns, outs1 + res1 + zeros_like2)
new_jaxpr2 = Jaxpr(jaxpr2.in_binders, jaxpr2.eqns, outs2 + zeros_like1 + res2)
return new_jaxpr1, new_jaxpr2
```
```
_, f_lin = linearize(lambda x: cond(True, lambda: x, lambda: 0.), 1.)
out = f_lin(3.14)
print(out)
```
```
3.14
```
```
def cond_peval_eqn(unks_in: list[bool], eqn: JaxprEqn,
) -> tuple[JaxprEqn, JaxprEqn, list[bool], list[Atom]]:
pred_unk, *unks_in = unks_in
assert not pred_unk
true_jaxpr, false_jaxpr = eqn.params['true_jaxpr'], eqn.params['false_jaxpr']
*jaxprs, unks_out, num_res = _cond_partial_eval(true_jaxpr, false_jaxpr, unks_in)
t_jaxpr1, f_jaxpr1, t_jaxpr2, f_jaxpr2 = jaxprs
ins1, ins2 = partition_list(unks_in, eqn.inputs[1:])
outs1, outs2 = partition_list(unks_out, eqn.out_binders)
residuals, _ = split_list(t_jaxpr2.in_binders, num_res)
eqn1 = JaxprEqn(cond_p, [eqn.inputs[0], *ins1],
dict(true_jaxpr=t_jaxpr1, false_jaxpr=f_jaxpr1),
outs1 + residuals)
eqn2 = JaxprEqn(cond_p, [eqn.inputs[0], *residuals, *ins2],
dict(true_jaxpr=t_jaxpr2, false_jaxpr=f_jaxpr2),
outs2)
res = [eqn.inputs[0], *residuals] if type(eqn.inputs[0]) is Var else residuals
  return eqn1, eqn2, unks_out, res

partial_eval_jaxpr_rules[cond_p] = cond_peval_eqn
```
```
_, f_lin = linearize(jit(lambda x: cond(True, lambda: x, lambda: 0.)), 1.)
out = f_lin(3.14)
print(out)
```
```
3.14
```
Transposition is a fairly straightforward application of `transpose_jaxpr`:
```
def cond_transpose_rule(cts, pred, *invals, true_jaxpr, false_jaxpr):
undef_primals = tuple(type(x) is UndefPrimal for x in invals)
true_jaxpr, true_consts = transpose_jaxpr(true_jaxpr, undef_primals)
false_jaxpr, false_consts = transpose_jaxpr(false_jaxpr, undef_primals)
true_jaxpr, false_jaxpr = _join_jaxpr_consts(
true_jaxpr, false_jaxpr, len(true_consts), len(false_consts))
res = [x for x in invals if type(x) is not UndefPrimal]
outs = bind_cond(pred, *true_consts, *false_consts, *res, *cts,
true_jaxpr=true_jaxpr, false_jaxpr=false_jaxpr)
outs = iter(outs)
return [None] + [next(outs) if type(x) is UndefPrimal else None for x in invals]
transpose_rules[cond_p] = cond_transpose_rule
```
```
out = grad(lambda x: cond(True, lambda: x * x, lambda: 0.))(1.)
print(out)
```
```
2.0
```
```
def pprint_cond(names: defaultdict[Var, str], eqn: JaxprEqn) -> PPrint:
true_jaxpr, false_jaxpr = eqn.params['true_jaxpr'], eqn.params['false_jaxpr']
new_params = {k:v for k, v in eqn.params.items() if not k.endswith('jaxpr')}
lhs = pp(' '.join(var_str(names, v) for v in eqn.out_binders))
rhs = (pp(eqn.primitive.name) >> pp_params(new_params) >>
pp(' '.join(names[x] if isinstance(x, Var) else str(x.val)
for x in eqn.inputs)))
return vcat([lhs >> pp(' = ') >> rhs,
pp_jaxpr(true_jaxpr).indent(2),
pp_jaxpr(false_jaxpr).indent(2)])
pp_rules[cond_p] = pprint_cond
```
### JAX Enhancement Proposals (JEPs)[#](#jax-enhancement-proposals-jeps)
Most changes can be discussed with simple issues/discussions and pull requests.
Some changes though are a bit larger in scope or require more discussion, and these should be implemented as a JEP. This allows for writing longer design documents that can themselves be discussed in a pull request.
The structure of JEPs is kept as lightweight as possible to start and might be extended later on.
#### When you should use a JEP[#](#when-you-should-use-a-jep)
* When your change requires a design doc. We prefer collecting the designs as JEPs for better discoverability and further reference.
* When your change requires extensive discussion. It’s fine to have relatively short discussions on issues or pull requests, but when the discussion gets longer this becomes impractical for later digestion. JEPs allow updating the main document with a summary of the discussion, and these updates can themselves be discussed in the pull request adding the JEP.
#### How to start a JEP[#](#how-to-start-a-jep)
First, create an issue with the [JEP label](https://github.com/google/jax/issues?q=label%3AJEP). All pull requests that relate to the JEP (i.e. adding the JEP itself as well as any implementing pull requests)
should be linked to this issue.
Then create a pull request that adds a file named `%d-{short-title}.md`, with the number being the issue number.
##### JAX PRNG Design[#](#jax-prng-design)
We want a PRNG design that
1. is **expressive** in that it is convenient to use and it doesn’t constrain the user’s ability to write numerical programs with exactly the behavior that they want,
2. enables **reproducible** program execution in a backend-independent way,
3. has semantics that are **invariant to `@jit` compilation boundaries and device backends**,
4. enables **vectorization for generating array values** using SIMD hardware,
5. is **parallelizable** in that it doesn’t add sequencing constraints between random function calls that otherwise would have no data dependence,
6. scales to **multi-replica, multi-core, and distributed computation**,
7. **fits with JAX and XLA semantics** and design philosophies (which are ultimately motivated by other practical concerns).
As a corollary of these we believe the design should be functional. Another corollary is that, at least given current hardware constraints, we’re going to do the PRNG in software.
> TLDR
> **JAX PRNG = [Threefry counter PRNG](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf) + a functional array-oriented [splitting model](https://dl.acm.org/citation.cfm?id=2503784)**
###### Contents[#](#contents)
* [Three programming models and toy example programs](#three-programming-models-and-toy-example-programs)
* [Design](#design)
* [More realistic example user programs](#more-realistic-example-user-programs)
* [Tradeoffs and alternatives](#tradeoffs-and-alternatives)
###### Three programming models and toy example programs[#](#three-programming-models-and-toy-example-programs)
Here’s a toy example of a **stateful global** PRNG like the one often used in Numpy programs:
```
def foo(): return bar() + baz()

def bar(): return rand(RNG, (3, 4))

def baz(): return rand(RNG, (3, 4))

def main():
  global RNG
  RNG = RandomState(0)
  return foo()
```
To achieve reproducibility here we would need to control the order of evaluation for bar() and baz() even though there is no explicit data dependence from one to the other. This kind of sequencing requirement stemming from reproducibility (#2) violates parallelizability (#5) and doesn’t fit with JAX or XLA’s functional semantics (#6) in which subexpressions can be evaluated in any order. Even if we didn’t require reproducibility and thus allowed any evaluation order, parallelization across calls (#5) would still be made difficult by the need to update shared state. Moreover, because the same PRNG state would need to be accessed and maintained in both Python and any compiled code, this model would likely lead to engineering challenges to achieve compilation invariance (#3) and scaling to multiple replicas (#6). Finally, the expressiveness is limited (#1) because there is no way for foo() to call bar() or baz() without affecting its own (implicit) PRNG state.
Whether the model supports vectorization (#4) depends on some additional details. In Numpy, PRNG vectorization is limited by a *sequential-equivalent guarantee*:
```
In [1]: rng = np.random.RandomState(0)
In [2]: rng.randn(2)
Out[2]: array([1.76405235, 0.40015721])
In [3]: rng = np.random.RandomState(0)
In [4]: np.stack([rng.randn() for _ in range(2)])
Out[4]: array([1.76405235, 0.40015721])
```
To allow for vectorization (#4) within primitive PRNG function calls that generate arrays (e.g. to rand() with a shape argument), we drop this sequential-equivalent guarantee. This vectorization can be supported by any of the three programming models discussed in this section, though it motivates the implementation in terms of a counter-based PRNG as described in the next section.
The stateful PRNG user programming model is not promising. Here’s an example of a functional model but lacking a key ingredient that we call splitting:
```
def foo(rng_1):
  y, rng_2 = baz(rng_1)
  z, rng_3 = bar(rng_2)
  return y + z, rng_3

def bar(rng):
  val, new_rng = rand(rng, (3, 4))
  return val, new_rng

def baz(rng):
  val, new_rng = rand(rng, (3, 4))
  return val, new_rng

def main():
  foo(RandomState(0))
```
This model explicitly threads the PRNG state through all functions (primitive or non-primitive) that generate random values: that is, every random function must both accept and return the state. Now there is an explicit data dependence between the call to baz() and the call to bar() in foo(), so the data flow (and hence sequencing) is made explicit and fits with JAX’s existing semantics (#7), unlike in the previous model. This explicit threading can also make the semantics invariant to compilation boundaries (#3).
Explicit threading is inconvenient for the programmer. But worse, it hasn’t actually improved the expressiveness (#1): there is still no way for foo() to call into bar() or baz() while maintaining its own PRNG state. Without knowledge of their callers or the subroutines they call, functions must defensively pass in and return the rng state everywhere. Moreover, it also doesn’t improve the prospects for parallelization (#5) or scaling to multiple replicas (#6) because everything is still sequential, even if the sequencing is made explicit in the functional programming sense.
In short, making the code functional by explicitly threading state isn’t enough to achieve our expressiveness (#1) and performance (#5, #6) goals.
The key problem in both the previous models is that there’s too much sequencing. To reduce the amount of sequential dependence we use **functional [splittable](https://dl.acm.org/citation.cfm?id=2503784) PRNGs**. Splitting is a mechanism to ‘fork’ a new PRNG state into two PRNG states while maintaining the usual desirable PRNG properties (the two new streams are computationally parallelizable and produce independent random values, i.e. they behave like [multistreams](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf)).
```
def foo(rng_1):
  rng_2, rng_3 = split(rng_1, 2)
  return bar(rng_2) + baz(rng_3)

def bar(rng):
  return rand(rng, (3, 4))

def baz(rng):
  return rand(rng, (3, 4))

def main():
  foo(RandomState(0))
```
Some points to notice:
1. there is no sequential dependence between the calls to bar() and baz() and they can be evaluated in either order without affecting the value of the result, which solves the remaining performance goals (#5, #6),
2. functions do not need to return updated versions of PRNGs and it is straightforward to call a random subroutine without affecting existing PRNG states, improving the expressiveness (#1) from the other functional model.
The example doesn’t show it, but as a consequence of the choice (2) the only way to advance the PRNG state is to call split(). That is, we have two ways to achieve (1), and they differ in whether they burden the user program with explicit calls to split(), as in the above example, or instead burden the user program with explicit threading. We prefer the former, i.e. the version with explicit splitting, because we can easily implement the explicit-threading version in terms of it.
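To make that last point concrete, here is a minimal sketch, written in terms of the toy `split` and `rand` functions above (with `rand_threaded` a hypothetical name), of recovering an explicit-threading API from the splitting one:
```
def rand_threaded(rng, shape):
  rng_out, subkey = split(rng, 2)  # advance the state by splitting
  return rand(subkey, shape), rng_out  # value plus the advanced state
```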
###### Design[#](#design)
We can use the *counter-based PRNG* design, and in particular the Threefry hash function, as described in [Parallel random numbers: as easy as 1, 2, 3](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf). We use the counter to achieve efficient vectorization: for a given key we can generate an array of values in a vectorized fashion by mapping the hash function over a range of integers [k + 1, …, k + sample_size]. We use the key together with the hash function to implement [splittable PRNGs](https://dl.acm.org/citation.cfm?id=2503784): that is, splitting is a way to generate two new keys from an existing one.
```
type Sample = Int256
type Key = Sample -- important identification for splitting
type Count = Int32
hash :: Key -> Count -> Int256 -- output type equal to Key and Sample
split :: Key -> (Key, Key)
split key = (hash key 0, hash key 1)
draw_samples :: Key -> Int -> [Sample]
draw_samples key n = map (hash key) [1..n]
```
Surprisingly, drawing a sample is very similar to splitting! The key is the difference in the type of the output (even though the types are identified): in one case the value is to be used in forming random samples of interest (e.g. turning random bits into a Float representing a random normal) while in the other case the value is to be used as a key for further hashing.
The asymmetry in the hash function arguments, of type Key and Count, is that the latter is trivial and computationally cheap to advance by an arbitrary amount, since we just need to increase the integer value, while the former is only advanced by hashing. That’s why we use the count argument for vectorization.
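As a rough illustration of the count-for-vectorization idea, here is a toy Python sketch mirroring the pseudocode above. The mixing function is a stand-in, not the actual Threefry-2x32 hash:
```
import numpy as np

MASK64 = (1 << 64) - 1

def toy_hash(key, count):
  # Toy stand-in for the Threefry hash: mixes (key, count) into new bits.
  x = (key * 0x9E3779B97F4A7C15 + count + 0x632BE59BD9B4E019) & MASK64
  x ^= x >> 31
  x = (x * 0xBF58476D1CE4E5B9) & MASK64
  return x ^ (x >> 27)

def split(key):
  return toy_hash(key, 0), toy_hash(key, 1)

def draw_samples(key, n):
  # Vectorizable: sample i depends only on (key, i), never on sample i-1.
  return np.array([toy_hash(key, i) for i in range(1, n + 1)], dtype=np.uint64)
```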
###### More realistic example user programs[#](#more-realistic-example-user-programs)
Here’s what a training loop on the host might look like when the step requires a PRNG (maybe for dropout or for VAE training):
```
rng = lax.rng.new_rng()
for i in xrange(num_steps):
  rng, rng_input = lax.rng.split(rng)
  params = compiled_update(rng_input, params, next(batches))
```
Notice that we’re burdening the user with explicit splitting of the rng, but the rng does not need to be returned from the code at all.
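For reference, the same loop written against today's `jax.random` API (the `lax.rng` names above were proposal-era pseudocode) might look like the sketch below; `compiled_update`, `params`, `num_steps`, and `batches` are hypothetical stand-ins:
```
import jax

@jax.jit
def compiled_update(rng, params, batch):  # stand-in for the real update step
  return params + jax.random.normal(rng, ()) * batch

params, num_steps, batches = 0.0, 3, iter([0.1, 0.2, 0.3])
key = jax.random.PRNGKey(0)
for _ in range(num_steps):
  key, subkey = jax.random.split(key)
  params = compiled_update(subkey, params, next(batches))
```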
Here’s how we can use this PRNG model with the stax neural net builder library to implement dropout:
```
def Dropout(rate, mode='train'):
  def init_fun(input_shape):
    return input_shape, ()
  def apply_fun(rng, params, inputs):
    if mode == 'train':
      keep = lax.random.bernoulli(rng, rate, inputs.shape)
      return np.where(keep, inputs / rate, 0)
    else:
      return inputs
  return init_fun, apply_fun
```
The rng value here is just the key used for the hash, not a special object. The rng argument is passed to every apply_fun, and so it needs to be handled in the serial and parallel combinators with splitting:
```
def serial(*layers):
  init_funs, apply_funs = zip(*layers)
  def init_fun(input_shape):
    ...
  def apply_fun(rng, params, inputs):
    rngs = split(rng, len(layers))
    for rng, param, apply_fun in zip(rngs, params, apply_funs):
      inputs = apply_fun(rng, param, inputs)
    return inputs
  return init_fun, apply_fun

def parallel(*layers):
  init_funs, apply_funs = zip(*layers)
  def init_fun(input_shape):
    ...
  def apply_fun(rng, params, inputs):
    rngs = split(rng, len(layers))
    return [f(r, p, x) for f, r, p, x in zip(apply_funs, rngs, params, inputs)]
  return init_fun, apply_fun
```
Here we’re using a simple extended version of split that can produce multiple copies.
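Today's `jax.random.split` provides exactly this n-way split via its second argument:
```
import jax

rngs = jax.random.split(jax.random.PRNGKey(0), 3)  # one new key per layer
```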
###### Tradeoffs and alternatives[#](#tradeoffs-and-alternatives)
1. We’re not exploiting any device hardware PRNG
* We don’t currently have enough control over the hardware PRNG’s state for all backends.
* Even if we did, it would be backend-dependent and we might have to introduce sequential dependencies between random calls to ensure deterministic ordering and hence reproducibility.
* We don’t know of any workloads for which the software PRNG should become a bottleneck.
* We could consider providing an additional API that allows access to a hardware PRNG for users who want to give up other desiderata (like strict reproducibility).
2. We give up the sequential equivalent guarantee, in which creating a random array in one call produces the same values as creating the flattened array one random element at a time.
* This property is likely incompatible with vectorization (a high priority).
* We don’t know of any users or examples for which this property is important.
* Users could write a layer on top of this API to provide this guarantee.
3. We can’t follow the `numpy.random` API exactly.
##### Custom JVP/VJP rules for JAX-transformable functions[#](#custom-jvp-vjp-rules-for-jax-transformable-functions)
This is a design document, explaining some of the thinking behind the design and implementation of `jax.custom_jvp` and `jax.custom_vjp`. For user-oriented documentation, see [the tutorial notebook](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html).
There are two ways to define differentiation rules in JAX:
1. using `jax.custom_jvp` and `jax.custom_vjp` to define custom differentiation rules for Python functions that are already JAX-transformable; and
2. defining new `core.Primitive` instances along with all their transformation rules, for example to call into functions from other systems like solvers, simulators, or general numerical computing systems.
This document is about #1 only.
###### Contents[#](#contents)
* [Goals](#goals)
* [Non-goals](#non-goals)
* [Main problem descriptions](#main-problem-descriptions)
+ [The vmap-removes-custom-jvp semantics problem](#the-vmap-removes-custom-jvp-semantics-problem)
+ [The Python flexibility problem](#the-python-flexibility-problem)
* [Solution idea](#solution-idea)
* [Implementation notes](#implementation-notes)
###### Goals[#](#goals)
We want **users** to customize the forward- and/or reverse-mode differentiation behavior of their code. This customization
1. should have a *clear and consistent semantics* in how it works and how it composes with other JAX transformations; and
2. should be *flexible* in supporting use cases and workflows like in [Autograd](https://github.com/hips/autograd) and [PyTorch](https://pytorch.org), including cases involving differentiation of Python control flow and workflows for NaN debugging.
As **JAX developers** we want to write library functions, like
[`logit`](https://github.com/google/jax/blob/01039299304b148b405ef9b9fa5e82bbb527471d/jax/scipy/special.py#L83)
and
[`expit`](https://github.com/google/jax/blob/01039299304b148b405ef9b9fa5e82bbb527471d/jax/scipy/special.py#L91),
that are defined in terms of other primitives, but for the purposes of differentiation have primitive-like behavior in the sense that we want to define custom differentiation rules for them, which may be more numerically stable or performant. In particular, we don’t want to have to specify `vmap` or `jit`
rules for functions like `logit` and `expit`.
As a stretch goal, we’d like to make JAX a great environment for power users looking to add custom differentiation rules for higher-order functions like
`fixed_point`, `odeint`, etc.; this design doc won’t solve that problem, but we want to be confident we’re not going to preclude good solutions to that problem.
That is, our primary goals are
1. solve the vmap-removes-custom-jvp semantics problem ([#1249](https://github.com/google/jax/issues/1249)), and
2. allow Python in custom VJPs, e.g. to debug NaNs ([#1275](https://github.com/google/jax/issues/1275)).
Secondary goals are
3. clean up and simplify user experience (symbolic zeros, kwargs, etc.), and
4. make progress towards a world where users can easily add `fixed_point`, `odeint`, `root`, etc.
Overall, we want to close
[#116](https://github.com/google/jax/issues/116),
[#1097](https://github.com/google/jax/issues/1097),
[#1249](https://github.com/google/jax/issues/1249),
[#1275](https://github.com/google/jax/issues/1275),
[#1366](https://github.com/google/jax/issues/1366),
[#1723](https://github.com/google/jax/issues/1723),
[#1670](https://github.com/google/jax/issues/1670),
[#1875](https://github.com/google/jax/issues/1875),
[#1938](https://github.com/google/jax/issues/1938),
and replace the custom_transforms machinery (from
[#636](https://github.com/google/jax/issues/636),
[#818](https://github.com/google/jax/issues/818),
and others).
###### Non-goals[#](#non-goals)
Here are objectives we’re **not** aiming to achieve:
1. The `custom_transforms` machinery aimed to provide a transformation-generic mechanism for customizing behavior, in principle (though never really used in practice) allowing users to customize rules for any transformation while somehow inheriting the “transparent” behavior for others. **We are instead only going to solve the customization problem for differentiation (JVP and VJP, separately).** Differentiation is the only case actually requested, and by specializing to differentiation we can reduce complexity and improve flexibility. To control all rules one can just write a primitive.
2. **We’re not going to prioritize mathematical aesthetics** over flexibility and clarity on the user side, and simplicity on the implementation side. In particular, while the custom VJP signature `a -> (b, CT b --o CT a)` is mathematically pleasing, if it’s hard to implement in a Python mechanism because of the closure in the return type, we’re fine doing something that handles residuals more explicitly.
3. **Serialization support**, of the form where the staged-out serialized program representation can be loaded and further JAX-transformed as opposed to just evaluated, is currently out of scope for these custom JVP/VJP transformation rules. Serialization may be useful not only for researchers who want to save some representation of their computation (and transform it after loading it), but also for future considerations like having jaxpr transformations implemented outside Python, or having jaxprs as an MLIR dialect. By defining this as a non-goal for the purpose of this design, we have fewer constraints on where we can stash Python callables.
###### Main problem descriptions[#](#main-problem-descriptions)
###### The vmap-removes-custom-jvp semantics problem[#](#the-vmap-removes-custom-jvp-semantics-problem)
The vmap-removes-custom-jvp semantics problem is that vmap does not compose properly with differentiation of functions with `custom_transforms` rules:
```
# old custom_transforms api to be replaced
@jax.custom_transforms
def f(x):
  return 2. * x

# f_vjp :: a -> (b, CT b --o CT a)
def f_vjp(x):
  return f(x), lambda g: 3. * x  # 3 instead of 2

jax.defvjp_all(f, f_vjp)

grad(f)(1.)  # 3.
vmap(grad(f))(np.ones(4))  # [3., 3., 3., 3.]
grad(lambda x: vmap(f)(x).sum())(np.ones(4))  # [2., 2., 2., 2.]
```
The last grad-of-vmap line has an unexpected result! In general, applying
`vmap`, or really any non-differentiation transformation, has the effect of removing the custom differentiation rule. (Applying `jvp` causes a failure when a custom VJP rule is defined.)
The problem exists because transformations are like rewrites, and the `vmap`
transformation effectively rewrites the function to no longer call the newly-introduced primitive for which there is a custom rule (and hence `grad`
then doesn’t produce the custom rule’s result). In more detail, the
`custom_transforms` machinery sets things up so that evaluating `f(x)` applies the function
```
{ lambda ; ; a.
let b = f_primitive a
in [b] }
```
where `f_primitive` is a new primitive (introduced for every `custom_transforms`
function and in fact for every call of the function) to which the custom VJP rule is associated. When we evaluate `grad(f)(x)`, the differentiation machinery encounters `f_primitive` and processes it with the custom rule.
However, because `f_primitive` is *transparent* to `vmap`, in the sense that
`vmap` operates on (effectively by inlining) the definition of `f_primitive`,
the function `vmap(f)` is effectively
```
{ lambda ; ; a.
let b = mul 2. a
in [b] }
```
In words, `vmap` rewrites the function in terms of its underlying primitives and their transformation rules, removing `f_primitive` entirely.
More generally, **because `vmap(f)` has semantics defined in terms of calls to f, it is semantically inconsistent to remove the custom derivative rule**. That is, since we define
```
vmap(f)(xs) == np.stack([f(x) for x in xs])
```
we must have
```
jvp(vmap(f))(xs) == jvp(lambda xs: np.stack([f(x) for x in xs]))
```
yet this property is not observed when `f` has a custom derivative rule defined,
as the custom derivative rule is used in the right-hand version but not the left-hand one.
This issue isn’t specific to `vmap`; it applies to all transformations for which the semantics of transforming a function `f` are defined in terms of calls to the function `f`, rather than rewriting it into another function. The `mask`
transformation also falls into this class. Differentiation transforms and the hypothetical all-unary-functions-become-cosine transform are not in this class.
(The interaction between additional custom rules, like custom `vmap` rules, is likely to get even more complex, suggesting the problem framing of
`custom_transforms` is too broad.)
###### The Python flexibility problem[#](#the-python-flexibility-problem)
In JAX, as in [Autograd](https://github.com/hips/autograd) and
[PyTorch](https://pytorch.org) but not TF1, differentiation of a Python function is performed while the function is being executed and traced. This behavior delights users for a few reasons.
**First and most importantly, it enables pdb-based workflows, e.g. for inspecting numerics or catching NaNs.** That is, users can employ the standard Python debugger and other Python-native tools to debug their code, even being able to inspect runtime values to understand numerical behavior on examples and to catch fundamentally runtime errors like NaNs. In fact, just while working on the PR corresponding to this design, especially on the `odeint` primitive, I used runtime value inspection to debug issues many times, increasing my confidence that this is a key user workflow in Python. One especially handy trick, which I’ve used in both JAX and Autograd many times, is the ability to insert a debugger breakpoint in a custom VJP rule to enter a debugger at a specific point in the backward pass.
**Second, it allows differentiation of Python native control flow.** We’re not sure how often this is used in practice in finalized software artifacts, but when users first poke around JAX or Autograd they’re often impressed by this freedom. There’s a reason we include it at the top of our JAX and Autograd READMEs, slide decks, and demos. Ceding this capability would be a step backward from Autograd. We want JAX to have the best automatic differentiation.
However, the `custom_transforms` machinery does not provide this Python-support flexibility. That is, because it’s implemented in terms of up-front jaxpr formation from the Python code for both the user function and custom differentiation rules, code like this leads to an abstract value tracing error:
```
# old custom_transforms api to be replaced
@jax.custom_transforms
def f(x):
  if x > 0:
    return x
  else:
    return 0.

def f_vjp(x):
  return ...

jax.defvjp_all(f, f_vjp)

grad(f)(1.)  # Error!
```
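By contrast, here is a minimal sketch of the same kind of function written against the new `jax.custom_vjp` API described below. Because tracing happens on concrete values, Python control flow works under `grad` (outside of `jit`):
```
import jax

@jax.custom_vjp
def f(x):
  if x > 0:  # ordinary Python control flow on a concrete value
    return x
  else:
    return 0.

def f_fwd(x):
  return f(x), x  # save x as the residual

def f_bwd(x, g):
  return (g if x > 0 else 0.,)

f.defvjp(f_fwd, f_bwd)

print(jax.grad(f)(1.))  # 1.0, no tracing error
```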
###### Solution idea[#](#solution-idea)
The main idea is that **[dougalm@](https://github.com/dougalm) already solved these problems with `core.call`**. That is, we can frame the task of specifying a custom JVP rule for a user function in terms of a new Python-level call primitive (not to be added to the jaxpr language; see below). This new call primitive has a user Python function associated with it just like `core.call`,
but additionally has a second Python callable representing the JVP rule. Let’s refer to this new call primitive as `custom_jvp_call`.
Transformations like `vmap` interact with `custom_jvp_call` as with `core.call`:
they effectively pass right through it and are applied to the underlying Python callables. Schematically, writing in terms of curried versions of the primitives for convenience, analogously to how `vmap` interacts with `core.call` by applying to the function to be called:
```
vmap(call(f)) == call(vmap(f))
```
for the new primitive `custom_jvp_call` we simply apply `vmap` to the two functions it entails:
```
vmap(custom_jvp_call(f, f_jvp)) == custom_jvp_call(vmap(f), vmap(f_jvp))
```
This behavior means we’ve solved the [vmap-removes-custom-jvp semantics problem](#the-vmap-removes-custom-jvp-semantics-problem).
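Jumping ahead to the `jax.custom_jvp` API this design led to (defined in the API section below), the failing example from earlier now behaves consistently. A sketch, with the rule deliberately returning 3x to make the custom rule's effect visible:
```
import jax
import jax.numpy as jnp

@jax.custom_jvp
def f(x):
  return 2. * x

@f.defjvp
def f_jvp(primals, tangents):
  (x,), (t,) = primals, tangents
  return f(x), 3. * t  # 3 instead of 2, as in the example above

print(jax.grad(f)(1.))                                        # 3.0
print(jax.vmap(jax.grad(f))(jnp.ones(4)))                     # [3. 3. 3. 3.]
print(jax.grad(lambda x: jax.vmap(f)(x).sum())(jnp.ones(4)))  # [3. 3. 3. 3.]
```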
The `jvp` transformation interacts as one might expect: it just calls `f_jvp`,
```
jvp(call(f)) == call(jvp(f))
jvp(custom_jvp_call(f, f_jvp)) == f_jvp
```
Because `custom_jvp_call` acts like `core.call` (and not like `xla.xla_call`) in that it doesn’t raise the abstraction level of its inputs (because it’s not delaying anything or staging anything out), it means we’ve solved [the Python flexibility problem](#the-python-flexibility-problem): there are no constraints on the user Python function (above the usual functional programming constraints required by `jvp` or `vjp`).
What about evaluation and compilation? These are two ways to “exit” the JAX system, in the sense that no additional transformations can be applied after these steps. As a result, their rules are trivial:
```
eval(call(f)) == eval(f)
jit(call(f)) == hlo_call(jit(f))
eval(custom_jvp_call(f, f_jvp)) == eval(f)
jit(custom_jvp_call(f, f_jvp)) == hlo_call(jit(f))
```
In words, if a JVP rule hasn’t already rewritten `custom_jvp_call(f, f_jvp)`
into `f_jvp`, when we get to the point of evaluation with `eval` or staging out to XLA with `jit`, differentiation is never going to be applied, so we just ignore `f_jvp` and behave just like `core.call`. However, due to the wrinkle discussed next, the partial eval rule for `custom_jvp_call` must be a bit more complex, since partial evaluation isn’t just used to stage out to XLA with
`jit`.
The only remaining wrinkle has to do with “initial-style” jaxpr-forming primitives, like `lax.scan`, and their transformation rules. These represent a different kind of “staging out to a jaxpr” than that for compilation because we can perform additional transformations on the staged-out jaxpr. That is, when
`lax.scan` forms a jaxpr, it does not exit the transformation system, since when we apply a jvp or vmap to a `lax.scan` we need to apply it to the function represented by the jaxpr.
Another way to state the wrinkle is that initial-style primitives like `lax.scan`
rely on the ability to round-trip to a jaxpr and back to a Python callable while preserving semantics. That must mean preserving custom differentiation rule semantics too.
The solution is to use a bit of dynamic scoping: when we’re staging out to a jaxpr for an initial-style primitive, like those in lax_control_flow.py, we set a bit on the global trace state. When that bit is set, instead of using the final-style `custom_jvp_call` primitive, we use an initial-style
`custom_jvp_call_jaxpr` primitive, and trace the functions `f` and `f_jvp` to jaxprs up-front to make initial-style processing easier. The
`custom_jvp_call_jaxpr` primitive is otherwise similar to the final-style version.
(Footnote: while morally we form jaxprs for both `f` and `f_jvp` before binding
`custom_jvp_call_jaxpr`, we need to delay the formation of the jaxpr of `f_jvp`
because it may call the custom-JVP function and thus eager processing would lead to an infinite recursion. We delay that jaxpr formation in a thunk.)
If we gave up on [the Python flexibility problem](#the-python-flexibility-problem), we could get away with only having
`custom_jvp_call_jaxpr` and not having the separate Python-level primitive
`custom_jvp_call`.
###### API[#](#api)
The custom JVP for an `a -> b` function is specified with an `(a, T a) -> (b, T b)` function:
```
# f :: a -> b
@jax.custom_jvp
def f(x):
  return np.sin(x)

# f_jvp :: (a, T a) -> (b, T b)
def f_jvp(primals, tangents):
  x, = primals
  t, = tangents
  return f(x), np.cos(x) * t

f.defjvp(f_jvp)
```
(Interesting autodiff aside: for the rule to apply to higher-order differentiation, one must call `f` in the body of `f_jvp`; that precludes some kinds of work sharing between the internals of `f` and the tangent calculation.)
The custom VJP for an `a -> b` function is specified with an `a -> (b, c)` forward pass function paired with a `(c, CT b) -> CT a` backward pass function:
```
# f :: a -> b
@jax.custom_vjp
def f(x):
  return np.sin(x)

# f_fwd :: a -> (b, c)
def f_fwd(x):
  return f(x), np.cos(x)

# f_bwd :: (c, CT b) -> CT a
def f_bwd(cos_x, g):
  return (cos_x * g,)

f.defvjp(f_fwd, f_bwd)
```
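Checking the rule (assuming `np` here is `jax.numpy`, as in the surrounding examples):
```
import jax

print(jax.grad(f)(0.))  # 1.0, i.e. cos(0.), computed via f_bwd
```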
The signature `a -> (b, CT b --o CT a)` is more aesthetically pleasing, but supporting it would make the implementation more complex and might require compromising expressibility desiderata. The basic reason is that Python callables are opaque (unless we trace them to a jaxpr eagerly, which places expressiveness constraints), and in this case we may be returning a callable with `vmap` tracers inside its closure that we need to know about during the forward pass.
We could add convenience wrappers, for example to define the JVP rule for a single argument at a time (like we do internally for primitives). But because this proposal is complicated enough as it is, I decided against convenience layers; let’s keep things minimal for now.
There are some other bells and whistles to the API:
* Input and output types `a`, `b`, and `c` can be arbitrary pytrees of jaxtypes.
* Passing arguments by name (keyword arguments) is supported when they can be resolved to positions using the `inspect` module. This is a bit of an experiment with Python 3’s improved ability to programmatically inspect argument signatures. I believe it is sound but not complete, which is a fine place to be.
(See also [#2069](https://github.com/google/jax/issues/2069).)
* Arguments can be marked non-differentiable using `nondiff_argnums`, and as with
`jit`’s `static_argnums` these arguments don’t have to be JAX types. We need to set a convention for how these arguments are passed to the rules. For a primal function with type signature `(d, a) -> b` where `d` represents the non-differentiable type, the JVP rule’s signature is `(a, T a, d) -> T b` and the VJP rule’s reverse component signature is `(d, c, CT b) -> CT a`. That is,
the non-differentiable arguments are passed in order after `primals` and
`tangents` for a custom JVP rule, and passed in order preceding the residuals in a custom VJP rule’s reverse function.
###### Implementation notes[#](#implementation-notes)
* Updated `jax.experimental.odeint`
+ Since `odeint` is a pretty complex user of a custom VJP rule, in addition to
just updating it to work at all, I wanted to revise it to be a canonical
user of the new custom VJP API as a way to test that the API was a good one.
+ Along the way I made other improvements to the `odeint` implementation:
- remove raveling/unraveling boilerplate
- make use of `lax.scan` to remove the index-update logic
- speed up by 20+% on the simple pendulum benchmark
* Added a custom bind method on each transform for the custom derivative call primitives, `custom_jvp_call` and `custom_vjp_call`. It’s like
`core.call_bind`, except we don’t process env traces: those are just errors.
* Added `custom_lin` primitive, which gets staged out into linear jaxprs to be transposed when using a custom VJP rule.
+ Because our reverse-mode autodiff is decomposed into linearization, partial
evaluation, and transposition, our custom VJP rules are processed in two
separate steps: one during linearization and one during transposition.
+ The linearization step, i.e. the JVP rule for `custom_vjp_call`, applies
`custom_lin` to the tangent values; `custom_lin` carries with it the user’s
custom backward-pass function, and as a primitive it only has a transpose
rule.
+ This mechanism is described more in [#636](https://github.com/google/jax/issues/636).
* To prevent
##### `custom_vjp` and `nondiff_argnums` update guide[#](#custom-vjp-and-nondiff-argnums-update-guide)
*mattjj@*
*Oct 14 2020*
This doc assumes familiarity with `jax.custom_vjp`, as described in the [Custom derivative rules for JAX-transformable Python functions](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html)
notebook.
###### What to update[#](#what-to-update)
After JAX [PR #4008](https://github.com/google/jax/pull/4008), the arguments passed into a `custom_vjp` function’s `nondiff_argnums` can’t be `Tracer`s (or containers of `Tracer`s), which basically means to allow for arbitrarily-transformable code `nondiff_argnums` shouldn’t be used for array-valued arguments. Instead, `nondiff_argnums` should be used only for non-array values, like Python callables or shape tuples or strings.
Wherever we used to use `nondiff_argnums` for array values, we should just pass those as regular arguments. In the `bwd` rule, we need to produce values for them,
but we can just produce `None` values to indicate there’s no corresponding gradient value.
For example, here’s the **old** way to write `clip_gradient`, which won’t work when `hi` and/or `lo` are `Tracer`s from some JAX transformation.
```
from functools import partial
import jax
import jax.numpy as jnp

@partial(jax.custom_vjp, nondiff_argnums=(0, 1))
def clip_gradient(lo, hi, x):
  return x  # identity function

def clip_gradient_fwd(lo, hi, x):
  return x, None  # no residual values to save

def clip_gradient_bwd(lo, hi, _, g):
  return (jnp.clip(g, lo, hi),)

clip_gradient.defvjp(clip_gradient_fwd, clip_gradient_bwd)
```
Here’s the **new**, awesome way, which supports arbitrary transformations:
```
import jax
import jax.numpy as jnp

@jax.custom_vjp  # no nondiff_argnums!
def clip_gradient(lo, hi, x):
  return x  # identity function

def clip_gradient_fwd(lo, hi, x):
  return x, (lo, hi)  # save lo and hi values as residuals

def clip_gradient_bwd(res, g):
  lo, hi = res
  return (None, None, jnp.clip(g, lo, hi))  # return None for lo and hi

clip_gradient.defvjp(clip_gradient_fwd, clip_gradient_bwd)
```
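A quick sketch of how the new version composes with transformations:
```
import jax
import jax.numpy as jnp

# grad of the identity is 1.0, clipped into [lo, hi]:
print(jax.grad(clip_gradient, argnums=2)(-0.5, 0.5, 2.0))  # 0.5
# lo may now be a Tracer, e.g. under vmap:
los = jnp.array([-1.0, -0.5])
print(jax.vmap(lambda lo: jax.grad(clip_gradient, argnums=2)(lo, 0.5, 2.0))(los))
```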
If you use the old way instead of the new way, you’ll get a loud error in any case where something might go wrong (namely when there’s a `Tracer` passed into a `nondiff_argnums` argument).
Here’s a case where we actually need `nondiff_argnums` with `custom_vjp`:
```
from functools import partial
import jax

@partial(jax.custom_vjp, nondiff_argnums=(0,))
def skip_app(f, x):
  return f(x)

def skip_app_fwd(f, x):
  return skip_app(f, x), None

def skip_app_bwd(f, _, g):
  return (g,)

skip_app.defvjp(skip_app_fwd, skip_app_bwd)
```
###### Explanation[#](#explanation)
Passing `Tracer`s into `nondiff_argnums` arguments was always buggy. While there were some cases that worked correctly, others would lead to complex and confusing error messages.
The essence of the bug was that `nondiff_argnums` was implemented in a way that acted very much like lexical closure. But lexical closure over `Tracer`s wasn’t at the time intended to work with `custom_jvp`/`custom_vjp`. Implementing
`nondiff_argnums` that way was a mistake!
**[PR #4008](https://github.com/google/jax/pull/4008) fixes all lexical closure issues with `custom_jvp` and `custom_vjp`.** Woohoo! That is, now `custom_jvp`
and `custom_vjp` functions and rules can close over `Tracer`s to our hearts’
content. For all non-autodiff transformations, things will Just Work. For autodiff transformations, we’ll get a clear error message about why we can’t differentiate with respect to values over which a `custom_jvp` or `custom_vjp`
closes:
> Detected differentiation of a custom_jvp function with respect to a closed-over
> value. That isn’t supported because the custom JVP rule only specifies how to
> differentiate the custom_jvp function with respect to explicit input parameters.
> Try passing the closed-over value into the custom_jvp function as an argument,
> and adapting the custom_jvp rule.
In tightening up and robustifying `custom_jvp` and `custom_vjp` in this way, we found that allowing `custom_vjp` to accept `Tracer`s in its `nondiff_argnums`
would take a significant amount of bookkeeping: we’d need to rewrite the user’s
`fwd` function to return the values as residuals, and rewrite the user’s `bwd`
function to accept them as normal residuals (rather than accepting them as special leading arguments, as happens with `nondiff_argnums`). This seems maybe manageable, until you think through how we have to handle arbitrary pytrees!
Moreover, that complexity isn’t necessary: if user code treats array-like non-differentiable arguments just like regular arguments and residuals,
everything already works. (Before
[#4039](https://github.com/google/jax/pull/4039) JAX might’ve complained about involving integer-valued inputs and outputs in autodiff, but after
[#4039](https://github.com/google/jax/pull/4039) those will just work!)
Unlike `custom_vjp`, it was easy to make `custom_jvp` work with
`nondiff_argnums` arguments that were `Tracer`s. So these updates only need to happen with `custom_vjp`.
##### Omnistaging[#](#omnistaging)
*mattjj@*
*Sept 25 2020*
This is more of an upgrade guide than a design doc.
###### Contents[#](#contents)
* [tl;dr](#tl-dr)
* [What is “omnistaging” and why is it useful?](#what-is-omnistaging-and-why-is-it-useful)
* [What issues can arise when omnistaging is switched on?](#what-issues-can-arise-when-omnistaging-is-switched-on)
+ [Using `jax.numpy` for shape computations](#using-jax-numpy-for-shape-computations)
+ [Side-effects](#side-effects)
+ [Small numerical differences based on XLA optimizations](#small-numerical-differences-based-on-xla-optimizations)
+ [Dependence on JAX internal APIs that changed](#dependence-on-jax-internal-apis-that-changed)
+ [Triggering XLA compile time bugs](#triggering-xla-compile-time-bugs)
###### tl;dr[#](#tl-dr)
###### What’s going on?[#](#what-s-going-on)
A change to JAX’s tracing infrastructure called “omnistaging”
([google/jax#3370](https://github.com/google/jax/pull/3370)) was switched on in jax==0.2.0. This change improves memory performance, trace execution time, and simplifies jax internals, but may cause some existing code to break. Breakage is usually a result of buggy code, so long-term it’s best to fix the bugs, but omnistaging can also be disabled as a temporary workaround. And we’re happy to help you with fixes!
###### How do I know if omnistaging broke my code?[#](#how-do-i-know-if-omnistaging-broke-my-code)
The easiest way to tell if omnistaging is responsible is to disable omnistaging and see if the issues go away. See the [What issues can arise when omnistaging is switched on?](#what-issues-can-arise-when-omnistaging-is-switched-on) section below.
###### How can I disable omnistaging for now?[#](#how-can-i-disable-omnistaging-for-now)
*Note: this applies to JAX versions 0.2.0 through 0.2.11; omnistaging cannot be disabled in JAX versions 0.2.12 and higher*
It is temporarily possible to disable omnistaging by
1. setting the shell environment variable `JAX_OMNISTAGING` to something falsey;
2. setting the boolean flag `jax_omnistaging` to something falsey if your code parses flags with absl;
3. using this statement near the top of your main file:
```
jax.config.disable_omnistaging()
```
###### How do I fix bugs exposed by omnistaging?[#](#how-do-i-fix-bugs-exposed-by-omnistaging)
By far the most common issue with omnistaging is using `jax.numpy` to compute shape values or other trace-time constants. See the code block below for a quick example, and for full details along with other issues see the section [What issues can arise when omnistaging is switched on?](#what-issues-can-arise-when-omnistaging-is-switched-on).
Instead of this:
```
@jit
def f(x):
  input_size = jnp.prod(x.shape)
  if input_size > 100:
    ...
```
do this:
```
import numpy as np
@jit
def f(x):
  input_size = np.prod(x.shape)
  if input_size > 100:
    ...
```
Instead of thinking of `jax.numpy` as a drop-in replacement for `numpy`, it’s now better to think of using `jax.numpy` operations only when you want to perform a computation on an accelerator (like your GPU).
###### What is “omnistaging” and why is it useful?[#](#what-is-omnistaging-and-why-is-it-useful)
Omnistaging is the name for a JAX core upgrade aimed at staging out more computation from op-by-op Python to XLA, and avoiding any “trace-time constant folding” in `jit`, `pmap`, and control flow primitives. As a result, omnistaging improves JAX’s memory performance (sometimes dramatically) both by reducing fragmentation during tracing and by producing fewer large compile-time constants for XLA. It can also improve tracing performance by eliminating op-by-op execution at tracing time. Further, omnistaging simplifies JAX core internals,
fixing many outstanding bugs and setting the stage for important upcoming features.
The name “omnistaging” means staging out everything possible.
###### Toy example[#](#toy-example)
JAX transformations like `jit` and `pmap` stage out computations to XLA. That is, we apply them to functions comprising multiple primitive operations so that, rather than being executed one at a time from Python, the operations are all part of one end-to-end optimized XLA computation.
But exactly which operations get staged out? Until omnistaging, JAX staged out computation based on data dependence only. Here’s an example function, followed by the XLA HLO program it stages out *before* the omnistaging change:
```
from jax import jit
import jax.numpy as jnp

@jit
def f(x):
  y = jnp.add(1, 1)
  return x * y

f(3)
```
```
ENTRY jit_f.6 {
constant.2 = pred[] constant(false)
parameter.1 = s32[] parameter(0)
constant.3 = s32[] constant(2)
multiply.4 = s32[] multiply(parameter.1, constant.3)
ROOT tuple.5 = (s32[]) tuple(multiply.4)
}
```
Notice that the `add` operation is not staged out. Instead, we only see a multiply.
Here’s the HLO generated from this function *after* the omnistaging change:
```
ENTRY jit_f.8 {
constant.2 = pred[] constant(false)
parameter.1 = s32[] parameter(0)
constant.3 = s32[] constant(1)
constant.4 = s32[] constant(1)
add.5 = s32[] add(constant.3, constant.4)
multiply.6 = s32[] multiply(parameter.1, add.5)
ROOT tuple.7 = (s32[]) tuple(multiply.6)
}
```
###### Slightly less toy example[#](#slightly-less-toy-example)
Here’s a less toy example which can arise in practice when we want to create boolean masks:
```
import numpy as np
import jax.numpy as jnp
from jax import jit, lax

@jit
def select_tril(x):
  mask = jnp.arange(x.shape[0])[:, None] > jnp.arange(x.shape[1])
  return lax.select(mask, x, jnp.zeros_like(x))  # lax.select is like jnp.where

x = np.arange(12).reshape((3, 4))
select_tril(x)
```
*Before* omnistaging:
```
ENTRY jit_select_tril.8 {
constant.3 = pred[] constant(false)
constant.1 = pred[3,4]{1,0} constant({...})
parameter.2 = s32[3,4]{1,0} parameter(0)
constant.4 = s32[] constant(0)
broadcast.5 = s32[3,4]{1,0} broadcast(constant.4), dimensions={}
select.6 = s32[3,4]{1,0} select(constant.1, parameter.2, broadcast.5)
ROOT tuple.7 = (s32[3,4]{1,0}) tuple(select.6)
}
```
The `select` operation is staged out, but the operations for constructing the constant `mask` are not. Rather than being staged out, the operations that construct `mask` are executed op-by-op at Python tracing time, and XLA only sees a compile time constant `constant.1` representing the value of `mask`. That’s unfortunate, because if we had staged out the operations for constructing
`mask`, XLA could have fused them into the `select` and avoided materializing the result at all. As a result we end up wasting memory with a potentially-large constant, wasting time dispatching multiple un-fused op-by-op XLA computations,
and potentially even fragmenting memory.
(The `broadcast` that corresponds to the construction of the zeros array for
`jnp.zeros_like(x)` is staged out because JAX is lazy about very simple expressions from [google/jax#1668](https://github.com/google/jax/pull/1668). After omnistaging, we can remove that lazy sublanguage and simplify JAX internals.)
The reason the creation of `mask` is not staged out is that, before omnistaging,
`jit` operates based on data dependence. That is, `jit` stages out only those operations in a function that have a data dependence on an argument. Control flow primitives and `pmap` behave similarly. In the case of `select_tril`, the operations to construct the constant `mask` do not have a data dependence on the argument x, so they are not staged out; only the `lax.select` call has a data dependence.
With omnistaging all `jax.numpy` calls in the dynamic context of a
`jit`-transformed function are staged out to XLA. That is, after omnistaging the computation XLA sees for `select_tril` is
```
ENTRY jit_select_tril.16 {
constant.4 = pred[] constant(false)
iota.1 = s32[3]{0} iota(), iota_dimension=0
broadcast.5 = s32[3,1]{1,0} broadcast(iota.1), dimensions={0}
reshape.7 = s32[3]{0} reshape(broadcast.5)
broadcast.8 = s32[3,4]{1,0} broadcast(reshape.7), dimensions={0}
iota.2 = s32[4]{0} iota(), iota_dimension=0
broadcast.6 = s32[1,4]{1,0} broadcast(iota.2), dimensions={1}
reshape.9 = s32[4]{0} reshape(broadcast.6)
broadcast.10 = s32[3,4]{1,0} broadcast(reshape.9), dimensions={1}
compare.11 = pred[3,4]{1,0} compare(broadcast.8, broadcast.10), direction=GT
parameter.3 = s32[3,4]{1,0} parameter(0)
constant.12 = s32[] constant(0)
broadcast.13 = s32[3,4]{1,0} broadcast(constant.12), dimensions={}
select.14 = s32[3,4]{1,0} select(compare.11, parameter.3, broadcast.13)
ROOT tuple.15 = (s32[3,4]{1,0}) tuple(select.14)
}
```
###### What issues can arise when omnistaging is switched on?[#](#what-issues-can-arise-when-omnistaging-is-switched-on)
As a consequence of staging out all `jax.numpy` operations from Python to XLA when in the dynamic context of a `jit` or `pmap`, some code that worked previously can start raising loud errors. As explained below, these behaviors were already buggy before omnistaging, but omnistaging makes them into hard errors.
###### Using `jax.numpy` for shape computations[#](#using-jax-numpy-for-shape-computations)
###### Example[#](#example)
```
from jax import jit
import jax.numpy as jnp

@jit
def ex1(x):
  size = jnp.prod(jnp.array(x.shape))
  return x.reshape((size,))

ex1(jnp.ones((3, 4)))
```
###### Error message[#](#error-message)
```
[... full traceback ...]
File "/home/mattjj/packages/jax/jax/core.py", line 862, in raise_concretization_error
raise ConcretizationTypeError(msg)
jax.core.ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected.
The error arose in jax.numpy.reshape.
While tracing the function ex1 at ex1.py:4, this value became a tracer due to JAX operations on these lines:
operation c:int32[] = reduce_prod[ axes=(0,) ] b:int32[2]
from line ex1.py:6 (ex1)
You can use transformation parameters such as `static_argnums` for `jit` to avoid tracing particular arguments of transformed functions.
See https://jax.readthedocs.io/en/latest/faq.html#abstract-tracer-value-encountered-where-concrete-value-is-expected-error for more information.
Encountered tracer value: Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=0/1)>
```
###### Explanation[#](#explanation)
With omnistaging, we can’t use `jax.numpy` for shape computations as in the use of `jnp.prod` above because in the dynamic context of a jit function those operations will be staged out of Python as values to be computed at execution time, yet we need them to be compile-time (and hence trace-time) constants.
Before omnistaging, this code wouldn’t have raised an error, but it was a common performance bug: the `jnp.prod` computation would have been executed on the device at tracing time, meaning extra compilation, transfers, synchronization,
allocations, and potentially memory fragmentation.
###### Solution[#](#solution)
The solution is simply to use the original `numpy` for shape calculations like these. Not only do we avoid the error, but also we keep the computations on the host (and with lower overheads).
This issue was common enough in code that we tried to make the error message especially good. In addition to the stack trace showing where an abstract tracer value caused a problem (the `jnp.reshape` line in the full stack trace, on omni.py:10), we also explain why this value became a tracer in the first place by pointing to the upstream primitive operation that caused it to become an abstract tracer (the `reduce_prod` from `jnp.prod` on omni.py:9) and to which `jit`-decorated function the tracer belongs (`ex1` on omni.py:6).
###### Side-effects[#](#side-effects)
###### Example[#](#id1)
```
from jax import jit
from jax import random

key = random.PRNGKey(0)

def init():
  global key
  key, subkey = random.split(key)
  return random.normal(subkey, ())

print(init())  # -1.2515389
print(init())  # -0.58665067

init = jit(init)

print(init())  # 0.48648298
print(init())  # 0.48648298 !!
```
That last call has repeated randomness but no hard error, because we aren’t re-executing the Python. But if we look at `key`, we see an escaped tracer *when omnistaging is on*:
```
print(key) # Traced<ShapedArray(uint32[2])>with<DynamicJaxprTrace(level=0/1)>
```
Before omnistaging, the `random.split` call would not be staged out and so we wouldn’t get an escaped tracer. The code would still be buggy in that the jitted function wouldn’t be reproducing the semantics of the original function (because of the repeated use of the same PRNG key), ultimately due to the side effect.
With omnistaging on, if we touch `key` again, we’ll get an escaped tracer error:
```
random.normal(key, ())
```
###### Error message[#](#id2)
```
[... full stack trace ...]
File "/home/mattjj/packages/jax/jax/interpreters/partial_eval.py", line 836, in _assert_live
raise core.escaped_tracer_error(msg)
jax.core.UnexpectedTracerError: Encountered an unexpected tracer. Perhaps this tracer escaped through global state from a previously traced function.
The functions being transformed should not save traced values to global state. Detail: tracer created on line example.py:8 (init).
```
###### Explanation[#](#id3)
The second largest category of omnistaging issues we found had to do with side-effecting code. This code already voided the JAX warranty by transforming effectful functions, but due to pre-omnistaging “trace-time constant folding”
behavior, some side effecting functions could nevertheless behave correctly.
Omnistaging catches more of these errors.
###### Solution[#](#id4)
The solution is to identify JAX-transformed functions that rely on side effects,
and to rewrite them not to be effectful.
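For the example above, that means passing the key in explicitly instead of mutating a global (a minimal sketch):
```
import jax
from jax import random

def init(key):
  key, subkey = random.split(key)
  return random.normal(subkey, ()), key  # thread the key instead of mutating a global

key = random.PRNGKey(0)
val1, key = jax.jit(init)(key)
val2, key = jax.jit(init)(key)  # fresh value each call; no escaped tracers
```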
###### Small numerical differences based on XLA optimizations[#](#small-numerical-differences-based-on-xla-optimizations)
Because with omnistaging more computations are being staged out to XLA, rather than some being executed at trace time, that can have the effect of reordering floating point operations. As a result, we’ve seen numerical behaviors change in a way that causes tests with overly tight tolerances to fail when omnistaging is switched on.
###### Dependence on JAX internal APIs that changed[#](#dependence-on-jax-internal-apis-that-changed)
Omnistaging involved some big revisions to JAX’s core code, including removing or changing internal functions. Any code that relies on such internal JAX APIs can break when omnistaging is switched on, either with build errors
(from pytype) or runtime errors.
###### Triggering XLA compile time bugs[#](#triggering-xla-compile-time-bugs)
Because omnistaging involves staging out more code to XLA, we’ve seen it trigger pre-existing XLA compile-time bugs on some backends. The best thing to do with these is to report them so we can work with the XLA teams on fixes.
##### JEP 9263: Typed keys & pluggable RNGs[#](#jep-9263-typed-keys-pluggable-rngs)
*<NAME>, <NAME>*
*August 2023*
###### Overview[#](#overview)
Going forward, RNG keys in JAX will be more type-safe and customizable.
Rather than representing a single PRNG key by a length-2 `uint32` array,
it will be represented as a scalar array with a special RNG dtype that satisfies `jnp.issubdtype(key.dtype, jax.dtypes.prng_key)`.
For now, old-style RNG keys can still be created with
[`jax.random.PRNGKey()`](index.html#jax.random.PRNGKey):
```
>>> key = jax.random.PRNGKey(0)
>>> key
Array([0, 0], dtype=uint32)
>>> key.shape
(2,)
>>> key.dtype
dtype('uint32')
```
Starting now, new-style RNG keys can be created with
[`jax.random.key()`](index.html#jax.random.key):
```
>>> key = jax.random.key(0)
>>> key
Array((), dtype=key<fry>) overlaying:
[0 0]
>>> key.shape
()
>>> key.dtype
key<fry>
```
This (scalar-shaped) array behaves the same as any other JAX array, except that its element type is a key (and associated metadata). We can make non-scalar key arrays as well, for example by applying [`jax.vmap()`](index.html#jax.vmap) to
[`jax.random.key()`](index.html#jax.random.key):
```
>>> key_arr = jax.vmap(jax.random.key)(jnp.arange(4))
>>> key_arr
Array((4,), dtype=key<fry>) overlaying:
[[0 0]
 [0 1]
 [0 2]
 [0 3]]
>>> key_arr.shape
(4,)
```
Aside from switching to a new constructor, most PRNG-related code should continue to work as expected. You can continue to use keys in
[`jax.random`](index.html#module-jax.random) APIs as before; for example:
```
# split
new_key, subkey = jax.random.split(key)

# random number generation
data = jax.random.uniform(key, shape=(5,))
```
However, not all numerical operations work on key arrays. They now intentionally raise errors:
```
>>> key = key + 1
ValueError: dtype=key<fry> is not a valid dtype for JAX type promotion.
```
If for some reason you need to recover the underlying buffer
(the old-style key), you can do so with [`jax.random.key_data()`](index.html#jax.random.key_data):
```
>>> jax.random.key_data(key)
Array([0, 0], dtype=uint32)
```
For old-style keys, [`key_data()`](index.html#jax.random.key_data) is an identity operation.
###### What does this mean for users?[#](#what-does-this-mean-for-users)
For JAX users, this change does not require any code changes now, but we hope that you will find the upgrade worthwhile and switch to using typed keys. To try this out, replace uses of `jax.random.PRNGKey()` with `jax.random.key()`. This may introduce breakages in your code that fall into one of a few categories:
* If your code performs unsafe/unsupported operations on keys (such as indexing,
arithmetic, transposition, etc; see Type Safety section below), this change will catch it. You can update your code to avoid such unsupported operations,
or use [`jax.random.key_data()`](index.html#jax.random.key_data) and [`jax.random.wrap_key_data()`](index.html#jax.random.wrap_key_data)
to manipulate raw key buffers in an unsafe way.
* If your code includes explicit logic about `key.shape`, you may need to update this logic to account for the fact that the trailing key buffer dimension is no longer an explicit part of the shape.
* If your code includes explicit logic about `key.dtype`, you will need to upgrade it to use the new public APIs for reasoning about RNG dtypes, such as
`dtypes.issubdtype(dtype, dtypes.prng_key)`.
* If you call a JAX-based library which does not yet handle typed PRNG keys, you can use `raw_key = jax.random.key_data(key)` for now to recover the raw buffer,
but please keep a TODO to remove this once the downstream library supports typed RNG keys.
At some point in the future, we plan to deprecate [`jax.random.PRNGKey()`](index.html#jax.random.PRNGKey) and require the use of [`jax.random.key()`](index.html#jax.random.key).
###### Detecting new-style typed keys[#](#detecting-new-style-typed-keys)
To check whether an object is a new-style typed PRNG key, you can use
`jax.dtypes.issubdtype` or `jax.numpy.issubdtype`:
```
>>> typed_key = jax.random.key(0)
>>> jax.dtypes.issubdtype(typed_key.dtype, jax.dtypes.prng_key)
True
>>> raw_key = jax.random.PRNGKey(0)
>>> jax.dtypes.issubdtype(raw_key.dtype, jax.dtypes.prng_key)
False
```
###### Type annotations for PRNG Keys[#](#type-annotations-for-prng-keys)
The recommended type annotation for both old and new-style PRNG keys is `jax.Array`.
A PRNG key is distinguished from other arrays based on its `dtype`, and it is not currently possible to specify dtypes of JAX arrays within a type annotation.
Previously it was possible to use `jax.random.KeyArray` or `jax.random.PRNGKeyArray`
as type annotations, but these have always been aliased to `Any` under type checking,
and so `jax.Array` has much more specificity. In a future JAX release, we will deprecate and remove `jax.random.KeyArray` and `jax.random.PRNGKeyArray` from the public API.
###### Notes for JAX library authors[#](#notes-for-jax-library-authors)
If you maintain a JAX-based library, your users are also JAX users. Know that JAX will continue to support “raw” old-style keys in [`jax.random`](index.html#module-jax.random) for now, so callers may expect them to remain accepted everywhere. If you prefer to require new-style typed keys in your library, then you may want to enforce them with a check along the following lines:
```
from jax import Array, dtypes

def ensure_typed_key_array(key: Array) -> Array:
  if dtypes.issubdtype(key.dtype, dtypes.prng_key):
    return key
  else:
    raise TypeError("New-style typed JAX PRNG keys required")
```
###### Motivation[#](#motivation)
Two major motivating factors for this change are customizability and safety.
###### Customizing PRNG implementations[#](#customizing-prng-implementations)
JAX currently operates with a single, globally configured PRNG algorithm. A PRNG key is a vector of unsigned 32-bit integers, which jax.random APIs consume to produce pseudorandom streams. Any higher-rank uint32 array is interpreted as an array of such key buffers, where the trailing dimension represents keys.
The drawbacks of this design became clearer as we introduced alternative PRNG implementations, which must be selected by setting a global or local configuration flag. Different PRNG implementations have different size key buffers, and different algorithms for generating random bits. Determining this behavior with a global flag is error-prone, especially when there is more than one key implementation in use process-wide.
Our new approach is to carry the implementation as part of the PRNG key type,
i.e. with the element type of the key array. Using the new key API, here is an example of generating pseudorandom values under the default threefry2x32 implementation (which is implemented in pure Python and compiled with JAX), and under the non-default rbg implementation (which corresponds to a single XLA random-bit generation operation):
```
>>> key = jax.random.key(0, impl='threefry2x32') # this is the default impl
>>> key
Array((), dtype=key<fry>) overlaying:
[0 0]
>>> jax.random.uniform(key, shape=(3,))
Array([0.9653214 , 0.31468165, 0.63302994], dtype=float32)
>>> key = jax.random.key(0, impl='rbg')
>>> key
Array((), dtype=key<rbg>) overlaying:
[0 0 0 0]
>>> jax.random.uniform(key, shape=(3,))
Array([0.39904642, 0.8805201 , 0.73571277], dtype=float32)
```
###### Safe PRNG key use[#](#safe-prng-key-use)
PRNG keys are really only meant to support a few operations in principle,
namely key derivation (e.g. splitting) and random number generation. The PRNG is designed to generate independent pseudorandom numbers, provided keys are properly split and that every key is consumed once.
Code that manipulates or consumes key data in other ways often indicates an accidental bug, and representing key arrays as raw uint32 buffers has allowed for easy misuse along these lines. Here are a few example misuses that we’ve encountered in the wild:
###### Key buffer indexing[#](#key-buffer-indexing)
Access to the underlying integer buffers makes it easy to try and derive keys in non-standard ways, sometimes with unexpectedly bad consequences:
```
# Incorrect
key = random.PRNGKey(999)
new_key = random.PRNGKey(key[1])  # identical to the original key!
```
```
# Correct
key = random.PRNGKey(999)
key, new_key = random.split(key)
```
If this key were a new-style typed key made with `random.key(999)`, indexing into the key buffer would error instead.
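A quick sketch of the contrast (the exact exception type may vary by JAX version, so the check below is deliberately broad):
```
import jax

typed_key = jax.random.key(999)
try:
    typed_key[1]  # a typed key is a scalar: there is no buffer to index into
except Exception as err:
    print(type(err).__name__)
```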
###### Key arithmetic[#](#key-arithmetic)
Key arithmetic is a similarly treacherous way to derive keys from other keys.
Deriving keys in a way that avoids [`jax.random.split()`](index.html#jax.random.split) or
[`jax.random.fold_in()`](index.html#jax.random.fold_in) by manipulating key data directly produces a batch of keys that—depending on the PRNG implementation—might then generate correlated random numbers within the batch:
```
# Incorrect
key = random.PRNGKey(0)
batched_keys = key + jnp.arange(10, dtype=key.dtype)[:, None]
```
```
# Correct
key = random.PRNGKey(0)
batched_keys = random.split(key, 10)
```
New-style typed keys created with `random.key(0)` address this by disallowing arithmetic operations on keys.
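When per-index keys are needed, `jax.random.fold_in()` is the sanctioned way to derive them from integer data; a minimal sketch:
```
import jax
import jax.numpy as jnp
from jax import random

key = random.key(0)
# One independent key per index, without touching the key data directly:
batched_keys = jax.vmap(lambda i: random.fold_in(key, i))(jnp.arange(10))
```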
###### Inadvertent transposing of key buffers[#](#inadvertent-transposing-of-key-buffers)
With “raw” old-style key arrays, it’s easy to accidentally swap batch (leading)
dimensions and key buffer (trailing) dimensions. Again this possibly results in keys that produce correlated pseudorandomness. A pattern that we’ve seen over time boils down to this:
```
# Incorrect
keys = random.split(random.PRNGKey(0))
data = jax.vmap(random.uniform, in_axes=1)(keys)
```
```
# Correct
keys = random.split(random.PRNGKey(0))
data = jax.vmap(random.uniform, in_axes=0)(keys)
```
The bug here is subtle. By mapping over `in_axes=1`, this code makes new keys by combining a single element from each key buffer in the batch. The resulting keys are different from one another, but are effectively “derived” in a non-standard way. Again, the PRNG is not designed or tested to produce independent random streams from such a key batch.
New-style typed keys created with `random.key(0)` address this by hiding the buffer representation of individual keys, instead treating keys as opaque elements of a key array. Key arrays have no trailing “buffer” dimension to index, transpose, or map over.
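The difference is visible in the shapes: a raw key batch carries an explicit trailing buffer dimension, while a typed key batch does not:
```
import jax

raw_keys = jax.random.split(jax.random.PRNGKey(0))  # shape (2, 2): batch x buffer
typed_keys = jax.random.split(jax.random.key(0))    # shape (2,): opaque key elements
print(raw_keys.shape, typed_keys.shape)
```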
###### Key reuse[#](#key-reuse)
Unlike state-based PRNG APIs like [`numpy.random`](https://numpy.org/doc/stable/reference/random/index.html#module-numpy.random), JAX’s functional PRNG does not implicitly update a key when it has been used.
```
# Incorrect
key = random.PRNGKey(0)
x = random.uniform(key, (100,))
y = random.uniform(key, (100,)) # Identical values!
```
```
# Correct
key = random.PRNGKey(0)
key1, key2 = random.split(key)
x = random.uniform(key1, (100,))
y = random.uniform(key2, (100,))
```
We’re actively working on tools to detect and prevent unintended key reuse.
This is still work in progress, but it relies on typed key arrays. Upgrading to typed keys now sets us up to introduce these safety features as we build them out.
###### Design of typed PRNG keys[#](#design-of-typed-prng-keys)
Typed PRNG keys are implemented as an instance of extended dtypes within JAX,
of which the new PRNG dtypes are a sub-dtype.
###### Extended dtypes[#](#extended-dtypes)
From the user perspective, an extended dtype `dt` has the following user-visible properties:
* `jax.dtypes.issubdtype(dt, jax.dtypes.extended)` returns `True`: this is the public API that should be used to detect whether a dtype is an extended dtype.
* It has a class-level attribute `dt.type`, which returns a typeclass in the hierarchy of `numpy.generic`. This is analogous to how `np.dtype('int32').type`
returns `numpy.int32`, which is not a dtype but rather a scalar type, and a subclass of `numpy.generic`.
* Unlike numpy scalar types, we do not allow instantiation of `dt.type` scalar objects: this is in accordance with JAX’s decision to represent scalar values as zero-dimensional arrays.
From a non-public implementation perspective, an extended dtype has the following properties:
* Its type is a subclass of the private base class `jax._src.dtypes.ExtendedDtype`,
the non-public base class used for extended dtypes. An instance of
`ExtendedDtype` is analogous to an instance of `np.dtype`, like
`np.dtype('int32')`.
* It has a private `_rules` attribute which allows the dtype to define how it behaves under particular operations. For example,
`jax.lax.full(shape, fill_value, dtype)` will delegate to
`dtype._rules.full(shape, fill_value, dtype)` when `dtype` is an extended dtype.
Why introduce extended dtypes in generality, beyond PRNGs? We reuse this same extended dtype mechanism elsewhere internally. For example, the
`jax._src.core.bint` object, a bounded integer type used for experimental work on dynamic shapes, is another extended dtype. In recent JAX versions it satisfies the properties above (See [jax/_src/core.py#L1789-L1802](https://github.com/google/jax/blob/jax-v0.4.14/jax/_src/core.py#L1789-L1802)).
###### PRNG dtypes[#](#prng-dtypes)
PRNG dtypes are defined as a particular case of extended dtypes. Specifically,
this change introduces a new public scalar type class `jax.dtypes.prng_key`,
which has the following property:
```
>>> jax.dtypes.issubdtype(jax.dtypes.prng_key, jax.dtypes.extended)
True
```
PRNG key arrays then have a dtype with the following properties:
```
>>> key = jax.random.key(0)
>>> jax.dtypes.issubdtype(key.dtype, jax.dtypes.extended)
True
>>> jax.dtypes.issubdtype(key.dtype, jax.dtypes.prng_key)
True
```
And in addition to `key.dtype._rules` as outlined for extended dtypes in general, PRNG dtypes define `key.dtype._impl`, which contains the metadata that defines the PRNG implementation. The PRNG implementation is currently defined by the non-public `jax._src.prng.PRNGImpl` class. For now, `PRNGImpl`
isn’t meant to be a public API, but we might revisit this soon to allow for fully custom PRNG implementations.
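For illustration, the implementation is visible in the key's dtype (outputs shown as of recent JAX versions; the exact rendering may differ):
```
>>> jax.random.key(0).dtype
key<fry>
>>> jax.random.key(0, impl='rbg').dtype
key<rbg>
```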
###### Progress[#](#progress)
Following is a non-comprehensive list of key Pull Requests implementing the above design. The main tracking issue is [#9263](https://github.com/google/jax/issues/9263).
* Implement pluggable PRNG via `PRNGImpl`: [#6899](https://github.com/google/jax/issues/6899)
* Implement `PRNGKeyArray`, without dtype: [#11952](https://github.com/google/jax/issues/11952)
* Add a “custom element” dtype property to `PRNGKeyArray` with `_rules`
attribute: [#12167](https://github.com/google/jax/issues/12167)
* Rename “custom element type” to “opaque dtype”: [#12170](https://github.com/google/jax/issues/12170)
* Refactor `bint` to use the opaque dtype infrastructure: [#12707](https://github.com/google/jax/issues/12707)
* Add `jax.random.key` to create typed keys directly: [#16086](https://github.com/google/jax/issues/16086)
* Add `impl` argument to `key` and `PRNGKey`: [#16589](https://github.com/google/jax/issues/16589)
* Rename “opaque dtype” to “extended dtype” & define `jax.dtypes.extended`:
[#16824](https://github.com/google/jax/issues/16824)
* Introduce `jax.dtypes.prng_key` and unify PRNG dtype with Extended dtype:
[#16781](https://github.com/google/jax/issues/16781)
* Add a `jax_legacy_prng_key` flag to support warning or erroring when using legacy (raw) PRNG keys: [#17225](https://github.com/google/jax/issues/17225)
##### Design of Type Promotion Semantics for JAX[#](#design-of-type-promotion-semantics-for-jax)
*<NAME>, December 2021*
One of the challenges faced in the design of any numerical computing library is the choice of how to handle operations between values of different types. This document outlines the thought process behind the promotion semantics used by JAX, summarized in [JAX Type Promotion Semantics](https://jax.readthedocs.io/en/latest/type_promotion.html).
###### Goals of JAX Type Promotion[#](#goals-of-jax-type-promotion)
JAX’s numerical computing API is modeled after that of NumPy, with a few enhancements including the ability to target accelerators like GPU and TPU.
This makes adoption of NumPy’s type promotion system disadvantageous for JAX users: NumPy’s type promotion rules heavily favor 64-bit outputs, which is problematic for computation on accelerators. Devices such as GPUs and TPUs often pay a significant performance penalty to use 64-bit floating point types, and in some cases do not support native 64-bit floating point types at all.
A simple example of this problematic type promotion semantics can be seen in binary operations between 32-bit integers and floats:
```
import numpy as np
np.dtype(np.int32(1) + np.float32(1))
```
```
dtype('float64')
```
NumPy’s tendency to produce 64-bit values is a [long-standing issue](https://github.com/numpy/numpy/issues/6860) with using NumPy’s API for accelerator computations, for which there isn’t yet a good solution.
For this reason, JAX has sought to re-think NumPy-style type promotion with accelerators in mind.
###### Stepping Back: Tables and Lattices[#](#stepping-back-tables-and-lattices)
Before we dive into the details, let’s take a moment to step back and think about *how* to think about the problem of type promotion. Consider arithmetic operations between built-in numerical types in Python, namely those of type `int`, `float`, and `complex`. With a few lines of code we can generate the type promotion table used by Python for addition between values of these types:
```
import pandas as pd

types = [int, float, complex]
name = lambda t: t.__name__
pd.DataFrame([[name(type(t1(1) + t2(1))) for t1 in types] for t2 in types],
             index=[name(t) for t in types], columns=[name(t) for t in types])
```
| | int | float | complex |
| --- | --- | --- | --- |
| int | int | float | complex |
| float | float | float | complex |
| complex | complex | complex | complex |
This table enumerates Python’s numerical type promotion behavior, but it turns out there is a complementary representation that is much more compact: a [Lattice](https://en.wikipedia.org/wiki/Lattice_(order)) representation, where the [supremum](https://en.wikipedia.org/wiki/Infimum_and_supremum) between any two nodes is the type that they promote to. The lattice representation of Python’s promotion table is much simpler:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {'int': ['float'], 'float': ['complex']}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {'int': [0, 0], 'float': [1, 0], 'complex': [2, 0]}
fig, ax = plt.subplots(figsize=(8, 2))
nx.draw(graph, with_labels=True, node_size=4000, node_color='lightgray', pos=pos, ax=ax, arrowsize=20)
```
This lattice is a compact encoding of the information in the promotion table above. You can find the result of a type promotion for two inputs by tracing the graph to the first common child of the two nodes (including the nodes themselves); mathematically, this common child is known as the *supremum*, or *least upper bound*, or *join* of the pair on the lattice; here we will refer to this operation as the **join**.
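As a minimal sketch of the join using the same `networkx` representation (the `join` helper here is illustrative, not a library API):
```
import networkx as nx

lattice = {'int': ['float'], 'float': ['complex']}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)

def join(graph, a, b):
    # Upper bounds of a node: the node itself plus everything reachable from it.
    upper = lambda n: nx.descendants(graph, n) | {n}
    common = upper(a) & upper(b)
    # On a lattice, exactly one common upper bound lies below all the others.
    return next(n for n in common if common <= upper(n))

assert join(graph, 'int', 'float') == 'float'
assert join(graph, 'int', 'complex') == 'complex'
```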
Conceptually, an arrow means that *implicit type promotion is allowed* between the source and the destination: for example, implicit promotion from integer to float is allowed, but implicit promotion from float to integer is not.
Keep in mind that in general not every directed acyclic graph (DAG) will satisfy the properties of a lattice. A lattice requires the existence of a unique least upper bound for every pair of nodes; so, for example the following two DAGs are not lattices:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(10, 2))
lattice = {'A': ['B', 'C']}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {'A': [0, 0], 'B': [1, 0.5], 'C': [1, -0.5]}
nx.draw(graph, with_labels=True, node_size=2000, node_color='lightgray', pos=pos, ax=ax[0], arrowsize=20)
ax[0].set(xlim=[-0.5, 1.5], ylim=[-1, 1])
lattice = {'A': ['C', 'D'], 'B': ['C', 'D']}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {'A': [0, 0.5], 'B': [0, -0.5], 'C': [1, 0.5], 'D': [1, -0.5]}
nx.draw(graph, with_labels=True, node_size=2000, node_color='lightgray', pos=pos, ax=ax[1], arrowsize=20)
ax[1].set(xlim=[-0.5, 1.5], ylim=[-1, 1]);
```
The left DAG is not a lattice because there exists no upper bound for nodes `B` and `C`; the right DAG fails on two counts: first, there exists no upper bound for nodes `C` and `D`, and for nodes `A` and `B` the least upper bound cannot be *uniquely* determined: both `C` and `D` are candidates, but they are unorderable.
###### Properties of a Type Promotion Lattice[#](#properties-of-a-type-promotion-lattice)
Specifying type promotions in terms of a lattice ensures a number of useful properties. Denoting the join on the lattice with the \(\vee\) operator, we have:
**Existence:** A lattice by definition requires that a unique lattice join exists for every pair of elements: \(\forall (a, b): \exists !(a \vee b)\)
**Commutativity:** The lattice join is commutative: \(\forall (a, b): a\vee b = b \vee a\).
**Associativity:** The lattice join is associative: \(\forall (a, b, c): a \vee (b \vee c) = (a \vee b) \vee c\).
On the other hand, these properties imply restrictions on the type promotion systems they can represent; in particular **not every type promotion table can be represented by a lattice**. A ready example of this is NumPy’s full type promotion table; this can be shown quickly by counterexample: here are three scalar types whose promotion behavior in NumPy is non-associative:
```
import numpy as np

a, b, c = np.int8(1), np.uint8(1), np.float16(1)
print(np.dtype((a + b) + c))
print(np.dtype(a + (b + c)))
```
```
float32
float16
```
Such a result may come as a surprise to users: we generally expect mathematical expressions to map to mathematical concepts, so, for example, `a + b + c` should be equivalent to `c + b + a`; `x * (y + z)` should be equivalent to `x * y + x * z`. If type promotion is non-associative or non-commutative, these properties no longer apply.
Further, a lattice-based type promotion system is simpler to conceptualize and understand when compared to a table-based system. For example, JAX recognizes 18 distinct types: a promotion lattice consisting of 18 nodes and sparse, well-motivated connections between them is far easier to hold in one’s mind than a table of 324 entries.
For this reason, we opt to use a lattice-based type promotion system for JAX.
###### Type Promotion within Categories[#](#type-promotion-within-categories)
Numerical computing libraries generally provide more than just `int`, `float`, and `complex`; within each of these categories there are a variety of possible precisions, denoted by the number of bits used in the numerical representation. The categories we will consider here are:
* *unsigned integers* which include `uint8`, `uint16`, `uint32` & `uint64` (we’ll use `u8`, `u16`, `u32`, `u64` for short)
* *signed integers* which include `int8`, `int16`, `int32` & `int64` (we’ll use `i8`, `i16`, `i32`, `i64` for short)
* *floating point*, which include `float16`, `float32` & `float64` (we’ll use `f16`, `f32`, `f64` for short)
* *complex floating point*, which include `complex64` & `complex128` (we’ll use `c64`, `c128` for short)
Numpy’s type promotion semantics **within** each of these four categories is relatively straightforward: the ordered hierarchy of types translates directly to four separate lattices representing in-category type promotion rules:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'u8': ['u16'], 'u16': ['u32'], 'u32': ['u64'],
'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64'],
'f16': ['f32'], 'f32': ['f64'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'u8': [0, 0], 'u16': [1, 0], 'u32': [2, 0], 'u64': [3, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [1, 2], 'f32': [2, 2], 'f64': [3, 2],
'c64': [2, 3], 'c128': [3, 3],
}
fig, ax = plt.subplots(figsize=(6, 4))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
In terms of promotion of values to 64-bit that JAX seeks to avoid, these same-kind promotion semantics within each type category are unproblematic: the only way to produce a 64-bit output is to have a 64-bit input.
###### Enter Python Scalars[#](#enter-python-scalars)
Let’s now think about where Python scalars fit into the mix.
In NumPy, promotion behavior differs depending on whether the inputs are arrays or scalars. For example, when operating on two scalars, normal promotion rules apply:
```
x = np.int8(0)  # int8 scalar
y = 1           # Python int = int64 scalar
(x + y).dtype
```
```
dtype('int64')
```
Here the Python value `1` is treated as an `int64`, and straightforward within-category rules lead to an `int64` result.
In operations between Python scalars and NumPy arrays, however, scalars defer to the dtype of the array. For example:
```
x = np.zeros(1, dtype='int8')  # int8 array
y = 1                          # Python int = int64 scalar
(x + y).dtype
```
```
dtype('int8')
```
Here the bit width of the `int64` scalar is ignored, deferring to the bit width of the array.
There is another detail here: when NumPy type promotion involves a scalar, the output dtype is value-dependent: if the Python scalar is too large for the given dtype, it is promoted to a compatible type:
```
x = np.zeros(1, dtype='int8')  # int8 array
y = 1000                       # int64 scalar
(x + y).dtype
```
```
dtype('int16')
```
For the purposes of JAX, **value-dependent promotion is a non-starter** because of the nature of JIT compilation and other transformations, which act on abstract representations of data without reference to their value.
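To see why, consider a jitted function (the name is illustrative): during tracing only the abstract dtype and shape of `x` are known, so the result dtype cannot depend on runtime values:
```
import jax
import numpy as np

@jax.jit
def add_scalar(x):
    # x is an abstract tracer here: the output dtype must follow from input
    # dtypes alone, not from the magnitude of the scalar.
    return x + 100

x = np.zeros(1, dtype='int8')
print(add_scalar(x).dtype)  # int8: no value-dependent widening, unlike NumPy
```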
Ignoring value-dependent effects, the signed integer branch of NumPy’s type promotion can be represented in the following lattice, where we’ll use `*` to mark scalar dtypes:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i8*': ['i16*'], 'i16*': ['i32*'], 'i32*': ['i64*'], 'i64*': ['i8'],
'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i8*': [0, 1], 'i16*': [2, 1], 'i32*': [4, 1], 'i64*': [6, 1],
'i8': [9, 1], 'i16': [11, 1], 'i32': [13, 1], 'i64': [15, 1],
}
fig, ax = plt.subplots(figsize=(12, 4))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
ax.text(3, 1.6, "Scalar Types", ha='center', fontsize=14)
ax.text(12, 1.6, "Array Types", ha='center', fontsize=14)
ax.set_ylim(-1, 3);
```
A similar pattern holds within the `uint`, `float`, and `complex` lattices.
For the sake of simplicity, let’s collapse each category of scalar types into a single node, denoted by `u*`, `i*`, `f*`, and `c*` respectively. Our set of in-category lattices can now be represented like this:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'u*': ['u8'], 'u8': ['u16'], 'u16': ['u32'], 'u32': ['u64'],
'i*': ['i8'], 'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64'],
'f*': ['f16'], 'f16': ['f32'], 'f32': ['f64'],
'c*': ['c64'], 'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'u*': [0, 0], 'u8': [3, 0], 'u16': [5, 0], 'u32': [7, 0], 'u64': [9, 0],
'i*': [0, 1], 'i8': [3, 1], 'i16': [5, 1], 'i32': [7, 1], 'i64': [9, 1],
'f*': [0, 2], 'f16': [5, 2], 'f32': [7, 2], 'f64': [9, 2],
'c*': [0, 3], 'c64': [7, 3], 'c128': [9, 3],
}
fig, ax = plt.subplots(figsize=(6, 4))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
In some senses, putting scalars at the left is a strange choice: the scalar types may contain values of any width, but when interacting with an array of a given type, the promotion result defers to the array type.
The benefit of this is that when you perform an operation like `x + 2` for an array `x`, the type of `x` will carry to the result no matter its width:
```
for dtype in [np.int8, np.int16, np.int32, np.int64]:
x = np.arange(10, dtype=dtype)
assert (x + 2).dtype == dtype
```
This behavior gives motivation to our `*` notation for scalar values: the `*` is reminiscent of a wildcard that can take on any desired value.
The benefit of these semantics is that you can readily express sequences of operations with clean Python code, without having to explicitly cast scalars to the appropriate type. Imagine if rather than writing this:
```
3 * (x + 1) ** 2
```
you had to write this:
```
np.int32(3) * (x + np.int32(1)) ** np.int32(2)
```
Although explicit, numerical code written this way would be tedious to read and write. With the scalar promotion semantics described above, given an array `x` of type `int32`, the types in the second statement are implicit within the first.
###### Combining Lattices[#](#combining-lattices)
Recall that we began our discussion by introducing the lattice representing type promotion within Python: `int -> float -> complex`. Let’s rewrite this as `i* -> f* -> c*`, and let’s further allow `i*` to subsume `u*` (after all, there is no unsigned integer scalar type in Python).
Putting these all together, we get the following partial lattice representing type promotion between Python scalars and numpy arrays:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['f*', 'u8', 'i8'], 'f*': ['c*', 'f16'], 'c*': ['c64'],
'u8': ['u16'], 'u16': ['u32'], 'u32': ['u64'],
'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64'],
'f16': ['f32'], 'f32': ['f64'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [-0.5, 2], 'c*': [0, 3],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [0.5, 2], 'f32': [1.5, 2], 'f64': [2.5, 2],
'c64': [2, 3], 'c128': [3, 3],
}
fig, ax = plt.subplots(figsize=(6, 5))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
Notice that this is not (yet) a true lattice: there are many pairs of nodes for which a join does not exist. However, we can think of this as a *partial* lattice, in which some pairs of nodes do not have a defined promotion behavior, and the defined portion of this partial lattice does correctly describe NumPy’s array promotion behavior (leaving aside value-dependent semantics mentioned above).
This sets up a nice framework by which we can think about filling out these undefined promotion rules, by adding connections on this graph. But which connections to add?
Broadly speaking, we want any additional connections to satisfy a few properties:
1. Promotion should satisfy the commutative and associative properties: in other words, the graph should remain a (partial) lattice.
2. Promotion should never allow for dropping entire components of data: for example, we should never promote `complex` to `float`, as it would discard any imaginary parts.
3. Promotion should never lead to an unhandled overflow. For example, the maximum possible `uint32` is twice as large as the maximum possible `int32`, so we should not implicitly promote `uint32` to `int32`.
4. Wherever possible, promotion should avoid loss of precision. For example, an `int64` value may have 64 bits of mantissa, so promoting `int64` to `float64` represents a possible loss of precision. However, the maximum representable float64 is larger than the maximum representable int64, so in this case criterion #3 is still satisfied.
5. Wherever possible, binary promotion should avoid resulting in types that are wider than the inputs. This is to ensure that JAX’s implicit promotions remain friendly to accelerator-based workflows, in which users often want to restrict types to 32-bit (or in some cases 16-bit) values.
Each new connection on the lattice introduces some level of convenience to the user (a new set of types that can interact without explicit casting), but the convenience may become too costly if any of the above criteria are violated. Developing a full promotion lattice involves striking a balance between this convenience and this cost.
###### Mixed Promotion: Float and Complex[#](#mixed-promotion-float-and-complex)
Let’s begin with what is perhaps the easiest case, that of promotion between float and complex values.
Complex numbers are made up of pairs of floating point numbers, and so we have a natural path of promotion between them: cast float to complex while maintaining the width of the real part. In terms of our partial lattice representation, it would look like this:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['f*', 'u8', 'i8'], 'f*': ['c*', 'f16'], 'c*': ['c64'],
'u8': ['u16'], 'u16': ['u32'], 'u32': ['u64'],
'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64'],
'f16': ['f32'], 'f32': ['f64', 'c64'], 'f64': ['c128'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [-0.5, 2], 'c*': [0, 3],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [0.5, 2], 'f32': [1.5, 2], 'f64': [2.5, 2],
'c64': [2, 3], 'c128': [3, 3],
}
fig, ax = plt.subplots(figsize=(6, 5))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
This turns out to represent exactly the semantics used by Numpy in mixed float/complex type promotion.
###### Mixed Promotion: Signed & Unsigned Integers[#](#mixed-promotion-signed-unsigned-integers)
For the next case, let’s consider something a bit more difficult: promotion between signed and unsigned integers. For example, when promoting `uint8` to a signed integer, how many bits do we need?
At first glance, you might think it natural to promote `uint8` to `int8`; but the largest `uint8` numbers are not representable in `int8`. For this reason, it makes more sense to promote unsigned integers to integers with twice the number of bits; this promotion behavior can be represented by adding the following connections to the promotion lattice:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['f*', 'u8', 'i8'], 'f*': ['c*', 'f16'], 'c*': ['c64'],
'u8': ['u16', 'i16'], 'u16': ['u32', 'i32'], 'u32': ['u64', 'i64'],
'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64'],
'f16': ['f32'], 'f32': ['f64', 'c64'], 'f64': ['c128'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [-0.5, 2], 'c*': [0, 3],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [0.5, 2], 'f32': [1.5, 2], 'f64': [2.5, 2],
'c64': [2, 3], 'c128': [3, 3],
}
fig, ax = plt.subplots(figsize=(6, 5))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
Again, the connections added here are precisely the promotion semantics implemented by Numpy for mixed-integer promotion.
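These joins can be verified directly against NumPy (ignoring the value-dependent effects mentioned earlier):
```
import numpy as np

assert np.promote_types(np.uint8, np.int8) == np.dtype('int16')
assert np.promote_types(np.uint16, np.int16) == np.dtype('int32')
assert np.promote_types(np.uint32, np.int32) == np.dtype('int64')
```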
###### How to handle `uint64`?[#](#how-to-handle-uint64)
The approach to mixed signed/unsigned integer promotion above leaves out one type: `uint64`. Following the pattern above, the output of a mixed-integer operation involving `uint64` should result in `int128`, but this is not a standard available dtype.
Numpy’s choice here is to promote to `float64`:
```
(np.uint64(1) + np.int64(1)).dtype
```
```
dtype('float64')
```
However, this may be a surprising convention: it’s the only case in which promotion of integer types does not result in an integer.
For now, we will leave `uint64` promotion undefined, and return to it later.
###### Mixed Promotion: Integer and Floating[#](#mixed-promotion-integer-and-floating)
When promoting integers to floating point, we might start with the same thought process as mixed promotion between signed and unsigned integers. A 16-bit signed or unsigned integer cannot be represented at full precision by a 16-bit float, which has only 10 bits of mantissa. Therefore, it might make sense to promote integers to floats represented by twice the number of bits:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['f*', 'u8', 'i8'], 'f*': ['c*', 'f16'], 'c*': ['c64'],
'u8': ['u16', 'i16', 'f16'], 'u16': ['u32', 'i32', 'f32'], 'u32': ['u64', 'i64', 'f64'],
'i8': ['i16', 'f16'], 'i16': ['i32', 'f32'], 'i32': ['i64', 'f64'],
'f16': ['f32'], 'f32': ['f64', 'c64'], 'f64': ['c128'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [-0.5, 2], 'c*': [0, 3],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [0.5, 2], 'f32': [1.5, 2], 'f64': [2.5, 2],
'c64': [2, 3], 'c128': [3, 3],
}
fig, ax = plt.subplots(figsize=(6, 5))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
This is effectively what Numpy type promotion does, but in doing so it breaks the lattice property of the graph: for example, the pair *{i8, u8}* no longer has a unique least upper bound: the possibilities are *i16* and *f16*, which are unorderable on the graph. This turns out to be the source of NumPy’s non-associative type promotion highlighted above.
Can we come up with a modification of NumPy’s promotion rules, such that it will satisfy the lattice property, while also giving sensible results for mixed type promotion? There are a few approaches we could take here.
###### Option 0: Leave integer/floating mixed precision undefined[#](#option-0-leave-integer-floating-mixed-precision-undefined)
To make behavior utterly predictable (at some cost to user convenience), a defensible choice would be to leave as undefined any mixed integer/float promotion beyond Python scalars, stopping with the partial lattice from the previous section. The downside would be the requirement for users to explicitly type-cast when operating between integer and floating-point quantities.
###### Option 1: Avoiding All Precision Loss[#](#option-1-avoiding-all-precision-loss)
If our focus is on avoiding precision loss at all costs, we can restore the lattice property by promoting unsigned integers to float via their existing signed integer paths:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['f*', 'u8', 'i8'], 'f*': ['c*', 'f16'], 'c*': ['c64'],
'u8': ['u16', 'i16'], 'u16': ['u32', 'i32'], 'u32': ['u64', 'i64'],
'i8': ['i16', 'f16'], 'i16': ['i32', 'f32'], 'i32': ['i64', 'f64'],
'f16': ['f32'], 'f32': ['f64', 'c64'], 'f64': ['c128'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [-0.5, 2], 'c*': [0, 3],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [0.5, 2], 'f32': [1.5, 2], 'f64': [2.5, 2],
'c64': [2, 3], 'c128': [3, 3],
}
fig, ax = plt.subplots(figsize=(6, 5))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
A disadvantage of this approach is that it still leaves `int64` and `uint64` promotion undefined, because there is no standard floating point type with enough bits of mantissa to represent their full range of values. We could relax the precision constraint and complete the lattice by drawing connections from `i64->f64` and `u64->f64`, but those links would run counter to the motivation for this promotion scheme.
A second disadvantage is that this lattice makes it difficult to find a sensible place to insert `bfloat16` (see below) while maintaining the lattice property.
A third disadvantage of this approach, more important for JAX’s accelerator backends, is that some operations result in types that are much wider than necessary; for example mixed operations between `uint16` and `float16` would promote all the way to `float64`, which is not ideal.
###### Option 2: Avoid most wider-than-necessary promotions[#](#option-2-avoid-most-wider-than-necessary-promotions)
To address the unnecessary promotions to wider types, we could accept the possibility of some precision loss in integer/float promotion, promoting signed integers to floats of the same width:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['f*', 'u8', 'i8'], 'f*': ['c*', 'f16'], 'c*': ['c64'],
'u8': ['u16', 'i16'], 'u16': ['u32', 'i32'], 'u32': ['u64', 'i64'],
'i8': ['i16'], 'i16': ['f16', 'i32'], 'i32': ['f32', 'i64'], 'i64': ['f64'],
'f16': ['f32'], 'f32': ['f64', 'c64'], 'f64': ['c128'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [-0.5, 2], 'c*': [0, 3],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [1.5, 2], 'f32': [2.5, 2], 'f64': [3.5, 2],
'c64': [3, 3], 'c128': [4, 3],
}
fig, ax = plt.subplots(figsize=(6, 5))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
While this does allow for precision-losing promotions between integers and floats, these promotions will not mis-represent the *magnitude* of the result: though the floating point mantissa is not wide enough to represent all values, the exponent is wide enough to approximate them.
This approach also allows a natural promotion path from `int64` to `float64`, though `uint64` remains unpromotable in this scheme. That said, a connection from `u64` to `f64` could be justified more readily here than before.
This promotion scheme still results in some wider than necessary promotion paths; for example operations between `float32` and `uint32` result in `float64`. Additionally, this lattice makes it difficult to find a sensible place to insert `bfloat16` (see below) while maintaining the lattice property.
###### Option 3: Avoid all wider-than-necessary promotions[#](#option-3-avoid-all-wider-than-necessary-promotions)
We can avoid *all* non-ideal 64-bit promotions if we’re willing to fundamentally change our thinking around integer and float promotions.
Just as scalars always defer to the widths of array types, we can make integers always defer to the width of float types:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['u8', 'i8'], 'f*': ['c*', 'f16'], 'c*': ['c64'],
'u8': ['u16', 'i16'], 'u16': ['u32', 'i32'], 'u32': ['u64', 'i64'],
'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64'], 'i64': ['f*'],
'f16': ['f32'], 'f32': ['f64', 'c64'], 'f64': ['c128'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [-0.5, 2], 'c*': [0, 3],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [1.5, 2], 'f32': [2.5, 2], 'f64': [3.5, 2],
'c64': [3, 3], 'c128': [4, 3],
}
fig, ax = plt.subplots(figsize=(6, 5))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
This involves a small sleight of hand: previously we had used `f*` to refer to a scalar type. In this lattice, `f*` might be applied to the array output of a mixed computation. Instead of thinking of `f*` as a scalar, we could think of it as a special kind of `float` value with distinct promotion rules: in JAX we refer to this as a *weak float*; see below.
The advantage of this approach is that, outside unsigned ints, it avoids *all* wider-than-necessary promotions: you can never get an f64 output without a 64-bit input, and you can never get an f32 output without a 32-bit input: this results in convenient semantics for working on accelerators while avoiding inadvertent 64-bit values.
This feature of giving primacy to floating point types resembles the type promotion behavior of PyTorch.
This lattice also happens to generate a promotion table that very closely resembles JAX’s original *ad hoc* type promotion scheme, which was not based on a lattice but had the property of giving primacy to floating point types.
This lattice additionally offers a natural location to insert `bfloat16`, without the need to impose an ordering between `bf16` and `f16`:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['u8', 'i8'], 'f*': ['c*', 'f16', 'bf16'], 'c*': ['c64'],
'u8': ['u16', 'i16'], 'u16': ['u32', 'i32'], 'u32': ['u64', 'i64'],
'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64'], 'i64': ['f*'],
'f16': ['f32'], 'bf16': ['f32'], 'f32': ['f64', 'c64'], 'f64': ['c128'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [-0.5, 2], 'c*': [0, 3],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [1.8, 1.7], 'bf16': [1.8, 2.3], 'f32': [3.0, 2], 'f64': [4.0, 2],
'c64': [3.5, 3], 'c128': [4.5, 3],
}
fig, ax = plt.subplots(figsize=(6, 5))
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
```
This is important because `f16` and `bf16` are not comparable: they utilize their bits differently, with `bf16` representing a larger range at lower precision, and `f16` representing a smaller range at higher precision.
However, these advantages come with a few tradeoffs:
* mixed float/integer promotion is very prone to precision loss: for example, `int64` (with a maximum value of \(9.2 \times 10^{18}\)) can be promoted to `float16` (with a maximum value of \(6.5 \times 10^4\)), meaning most representable values will become `inf`.
* as mentioned above, `f*` can no longer be thought of as a “scalar type”, but rather as a different flavor of float64. In JAX’s parlance, this is referred to as a [*weak type*](https://jax.readthedocs.io/en/latest/type_promotion.html#weakly-typed-values-in-jax), in that it is represented as 64-bit, but only weakly holds to this bit width in promotion with other values.
Note also that this approach still leaves the `uint64` promotion question unanswered, although it is perhaps reasonable to close the lattice by connecting `u64` to `f*`.
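A short sketch of this weak behavior in `jax.numpy` (run with default settings):
```
import jax.numpy as jnp

x = jnp.zeros(3, dtype='float16')
print((x + 1.0).dtype)  # float16: the weak Python float defers to the array

i = jnp.zeros(3, dtype='int32')
f = jnp.ones(3, dtype='float16')
print((i + f).dtype)    # float16: integers defer to floats of any width
```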
###### Type Promotion in JAX[#](#type-promotion-in-jax)
In designing the type promotion semantics of JAX, we kept in mind many of these ideas, and leaned heavily on a few things:
1. We chose to constrain JAX’s type promotion semantics to graphs that satisfy the lattice property: this is to ensure associativity and commutativity, but also to allow the semantics to be compactly described in a DAG, rather than requiring a large table.
2. We leaned toward semantics that avoid inadvertent promotion to wider types, particularly when it comes to float values, in order to benefit computation on accelerators.
3. We were fine accepting potential loss of precision (but not loss of magnitude) in mixed type promotion if it were required to maintain (1) and (2).
With this in mind, JAX has adopted Option 3. Or rather, a slightly modified version of Option 3 that draws the connection between `u64` and `f*`, in order to create a true lattice.
Rearranging the nodes for clarity, JAX’s type promotion lattice then looks like this:
```
#@title
import networkx as nx
import matplotlib.pyplot as plt

lattice = {
'i*': ['u8', 'i8'], 'f*': ['c*', 'f16', 'bf16'], 'c*': ['c64'],
'u8': ['u16', 'i16'], 'u16': ['u32', 'i32'], 'u32': ['u64', 'i64'], 'u64': ['f*'],
'i8': ['i16'], 'i16': ['i32'], 'i32': ['i64'], 'i64': ['f*'],
'f16': ['f32'], 'bf16': ['f32'], 'f32': ['f64', 'c64'], 'f64': ['c128'],
'c64': ['c128']
}
graph = nx.from_dict_of_lists(lattice, create_using=nx.DiGraph)
pos = {
'i*': [-1.25, 0.5], 'f*': [4.5, 0.5], 'c*': [5, 1.5],
'u8': [0.5, 0], 'u16': [1.5, 0], 'u32': [2.5, 0], 'u64': [3.5, 0],
'i8': [0, 1], 'i16': [1, 1], 'i32': [2, 1], 'i64': [3, 1],
'f16': [5.75, 0.8], 'bf16': [5.75, 0.2], 'f32': [7, 0.5], 'f64': [8, 0.5],
'c64': [7.5, 1.5], 'c128': [8.5, 1.5],
}
fig, ax = plt.subplots(figsize=(10, 4))
ax.set_ylim(-0.5, 2)
nx.draw(graph, with_labels=True, node_size=1500, node_color='lightgray', pos=pos, ax=ax)
# ax.patches[12].set_linestyle((0, (2, 4)))
```
The behavior resulting from this choice is summarized in [JAX Type Promotion Semantics](https://jax.readthedocs.io/en/latest/type_promotion.html). Notably, aside from the inclusion of larger unsigned types (`u16`, `u32`, `u64`) and some details about the behavior of scalar/weak types (`i*`, `f*`, `c*`), this type promotion scheme turns out to be very close to that chosen by PyTorch.
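The resulting joins can be queried directly via `jnp.promote_types()`:
```
import jax.numpy as jnp

print(jnp.promote_types('uint8', 'int8'))        # int16
print(jnp.promote_types('int32', 'float16'))     # float16: integers defer to floats
print(jnp.promote_types('bfloat16', 'float16'))  # float32: their join on the lattice
```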
For those interested, the appendix below prints the full promotion tables used by NumPy, Tensorflow, PyTorch, and JAX.
###### Appendix: Example Type Promotion Tables[#](#appendix-example-type-promotion-tables)
The following are some examples of implicit type promotion tables implemented by various Python array computing libraries.
###### NumPy Type Promotion[#](#numpy-type-promotion)
Note that NumPy does not include the `bfloat16` dtype, and that the table below ignores value-dependent effects.
```
# @title
import numpy as np
import pandas as pd
from IPython import display
np_dtypes = {
'b': np.bool_,
'u8': np.uint8, 'u16': np.uint16, 'u32': np.uint32, 'u64': np.uint64,
'i8': np.int8, 'i16': np.int16, 'i32': np.int32, 'i64': np.int64,
'bf16': 'invalid', 'f16': np.float16, 'f32': np.float32, 'f64': np.float64,
'c64': np.complex64, 'c128': np.complex128,
'i*': int, 'f*': float, 'c*': complex}
np_dtype_to_code = {val: key for key, val in np_dtypes.items()}
def make_np_zero(dtype):
if dtype in {int, float, complex}:
return dtype(0)
else:
return np.zeros(1, dtype=dtype)
def np_result_code(dtype1, dtype2):
try:
out = np.add(make_np_zero(dtype1), make_np_zero(dtype2))
except TypeError:
return '-'
else:
if type(out) in {int, float, complex}:
return np_dtype_to_code[type(out)]
else:
return np_dtype_to_code[out.dtype.type]
grid = [[np_result_code(dtype1, dtype2)
for dtype2 in np_dtypes.values()]
for dtype1 in np_dtypes.values()]
table = pd.DataFrame(grid, index=np_dtypes.keys(), columns=np_dtypes.keys())
display.HTML(table.to_html())
```
| | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| b | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | - | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
| u8 | u8 | u8 | u16 | u32 | u64 | i16 | i16 | i32 | i64 | - | f16 | f32 | f64 | c64 | c128 | u8 | f64 | c128 |
| u16 | u16 | u16 | u16 | u32 | u64 | i32 | i32 | i32 | i64 | - | f32 | f32 | f64 | c64 | c128 | u16 | f64 | c128 |
| u32 | u32 | u32 | u32 | u32 | u64 | i64 | i64 | i64 | i64 | - | f64 | f64 | f64 | c128 | c128 | u32 | f64 | c128 |
| u64 | u64 | u64 | u64 | u64 | u64 | f64 | f64 | f64 | f64 | - | f64 | f64 | f64 | c128 | c128 | u64 | f64 | c128 |
| i8 | i8 | i16 | i32 | i64 | f64 | i8 | i16 | i32 | i64 | - | f16 | f32 | f64 | c64 | c128 | i8 | f64 | c128 |
| i16 | i16 | i16 | i32 | i64 | f64 | i16 | i16 | i32 | i64 | - | f32 | f32 | f64 | c64 | c128 | i16 | f64 | c128 |
| i32 | i32 | i32 | i32 | i64 | f64 | i32 | i32 | i32 | i64 | - | f64 | f64 | f64 | c128 | c128 | i32 | f64 | c128 |
| i64 | i64 | i64 | i64 | i64 | f64 | i64 | i64 | i64 | i64 | - | f64 | f64 | f64 | c128 | c128 | i64 | f64 | c128 |
| bf16 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| f16 | f16 | f16 | f32 | f64 | f64 | f16 | f32 | f64 | f64 | - | f16 | f32 | f64 | c64 | c128 | f16 | f16 | c64 |
| f32 | f32 | f32 | f32 | f64 | f64 | f32 | f32 | f64 | f64 | - | f32 | f32 | f64 | c64 | c128 | f32 | f32 | c64 |
| f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | - | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
| c64 | c64 | c64 | c64 | c128 | c128 | c64 | c64 | c128 | c128 | - | c64 | c64 | c128 | c64 | c128 | c64 | c64 | c64 |
| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | - | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
| i* | i64 | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | - | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
| f* | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | - | f16 | f32 | f64 | c64 | c128 | f64 | f64 | c128 |
| c* | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | - | c64 | c64 | c128 | c64 | c128 | c128 | c128 | c128 |
###### Tensorflow Type Promotion[#](#tensorflow-type-promotion)
Tensorflow avoids defining implicit type promotion, except for Python scalars in limited cases. The table is asymmetric because in `tf.add(x, y)`, the type of `y` must be coercible to the type of `x`.
```
# @title
import tensorflow as tf
import pandas as pd
from IPython import display
tf_dtypes = {
'b': tf.bool,
'u8': tf.uint8, 'u16': tf.uint16, 'u32': tf.uint32, 'u64': tf.uint64,
'i8': tf.int8, 'i16': tf.int16, 'i32': tf.int32, 'i64': tf.int64,
'bf16': tf.bfloat16, 'f16': tf.float16, 'f32': tf.float32, 'f64': tf.float64,
'c64': tf.complex64, 'c128': tf.complex128,
'i*': int, 'f*': float, 'c*': complex}
tf_dtype_to_code = {val: key for key, val in tf_dtypes.items()}
def make_tf_zero(dtype):
if dtype in {int, float, complex}:
return dtype(0)
else:
return tf.zeros(1, dtype=dtype)
def result_code(dtype1, dtype2):
try:
out = tf.add(make_tf_zero(dtype1), make_tf_zero(dtype2))
except (TypeError, tf.errors.InvalidArgumentError):
return '-'
else:
if type(out) in {int, float, complex}:
return tf_dtype_to_code[type(out)]
else:
return tf_dtype_to_code[out.dtype]
grid = [[result_code(dtype1, dtype2)
for dtype2 in tf_dtypes.values()]
for dtype1 in tf_dtypes.values()]
table = pd.DataFrame(grid, index=tf_dtypes.keys(), columns=tf_dtypes.keys())
display.HTML(table.to_html())
```
| | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| b | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| u8 | - | u8 | - | - | - | - | - | - | - | - | - | - | - | - | - | u8 | - | - |
| u16 | - | - | u16 | - | - | - | - | - | - | - | - | - | - | - | - | u16 | - | - |
| u32 | - | - | - | u32 | - | - | - | - | - | - | - | - | - | - | - | u32 | - | - |
| u64 | - | - | - | - | u64 | - | - | - | - | - | - | - | - | - | - | u64 | - | - |
| i8 | - | - | - | - | - | i8 | - | - | - | - | - | - | - | - | - | i8 | - | - |
| i16 | - | - | - | - | - | - | i16 | - | - | - | - | - | - | - | - | i16 | - | - |
| i32 | - | - | - | - | - | - | - | i32 | - | - | - | - | - | - | - | i32 | - | - |
| i64 | - | - | - | - | - | - | - | - | i64 | - | - | - | - | - | - | i64 | - | - |
| bf16 | - | - | - | - | - | - | - | - | - | bf16 | - | - | - | - | - | bf16 | bf16 | - |
| f16 | - | - | - | - | - | - | - | - | - | - | f16 | - | - | - | - | f16 | f16 | - |
| f32 | - | - | - | - | - | - | - | - | - | - | - | f32 | - | - | - | f32 | f32 | - |
| f64 | - | - | - | - | - | - | - | - | - | - | - | - | f64 | - | - | f64 | f64 | - |
| c64 | - | - | - | - | - | - | - | - | - | - | - | - | - | c64 | - | c64 | c64 | c64 |
| c128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | c128 | c128 | c128 | c128 |
| i* | - | - | - | - | - | - | - | i32 | - | - | - | - | - | - | - | i32 | - | - |
| f* | - | - | - | - | - | - | - | - | - | - | - | f32 | - | - | - | f32 | f32 | - |
| c* | - | - | - | - | - | - | - | - | - | - | - | - | - | - | c128 | c128 | c128 | c128 |
###### PyTorch Type Promotion[#](#pytorch-type-promotion)
Notice that torch does not include unsigned integer types larger than `uint8`.
Aside from this and some details about promotion with scalar/weak types, the table is close to that used by `jax.numpy`.
```
# @title
import torch
import pandas as pd
from IPython import display
torch_dtypes = {
'b': torch.bool,
'u8': torch.uint8, 'u16': 'invalid', 'u32': 'invalid', 'u64': 'invalid',
'i8': torch.int8, 'i16': torch.int16, 'i32': torch.int32, 'i64': torch.int64,
'bf16': torch.bfloat16, 'f16': torch.float16, 'f32': torch.float32, 'f64': torch.float64,
'c64': torch.complex64, 'c128': torch.complex128,
'i*': int, 'f*': float, 'c*': complex}
torch_dtype_to_code = {val: key for key, val in torch_dtypes.items()}
def make_torch_zero(dtype):
if dtype in {int, float, complex}:
return dtype(0)
else:
return torch.zeros(1, dtype=dtype)
def torch_result_code(dtype1, dtype2):
try:
out = torch.add(make_torch_zero(dtype1), make_torch_zero(dtype2))
except TypeError:
return '-'
else:
if type(out) in {int, float, complex}:
return torch_dtype_to_code[type(out)]
else:
return torch_dtype_to_code[out.dtype]
grid = [[torch_result_code(dtype1, dtype2)
for dtype2 in torch_dtypes.values()]
for dtype1 in torch_dtypes.values()]
table = pd.DataFrame(grid, index=torch_dtypes.keys(), columns=torch_dtypes.keys())
display.HTML(table.to_html())
```
| | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| b | b | u8 | - | - | - | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f32 | c64 |
| u8 | u8 | u8 | - | - | - | i16 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | u8 | f32 | c64 |
| u16 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| u32 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| u64 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| i8 | i8 | i16 | - | - | - | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i8 | f32 | c64 |
| i16 | i16 | i16 | - | - | - | i16 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i16 | f32 | c64 |
| i32 | i32 | i32 | - | - | - | i32 | i32 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i32 | f32 | c64 |
| i64 | i64 | i64 | - | - | - | i64 | i64 | i64 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f32 | c64 |
| bf16 | bf16 | bf16 | - | - | - | bf16 | bf16 | bf16 | bf16 | bf16 | f32 | f32 | f64 | c64 | c128 | bf16 | bf16 | c64 |
| f16 | f16 | f16 | - | - | - | f16 | f16 | f16 | f16 | f32 | f16 | f32 | f64 | c64 | c128 | f16 | f16 | c64 |
| f32 | f32 | f32 | - | - | - | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f64 | c64 | c128 | f32 | f32 | c64 |
| f64 | f64 | f64 | - | - | - | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
| c64 | c64 | c64 | - | - | - | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c128 | c64 | c128 | c64 | c64 | c64 |
| c128 | c128 | c128 | - | - | - | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
| i* | i64 | u8 | - | - | - | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f32 | c64 |
| f* | f32 | f32 | - | - | - | f32 | f32 | f32 | f32 | bf16 | f16 | f32 | f64 | c64 | c128 | f32 | f64 | c64 |
| c* | c64 | c64 | - | - | - | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c128 | c64 | c128 | c64 | c64 | c128 |
###### JAX Type Promotion: `jax.numpy`[#](#jax-type-promotion-jax-numpy)
`jax.numpy` follows the type promotion rules laid out in [JAX Type Promotion Semantics](https://jax.readthedocs.io/en/latest/type_promotion.html). Here we use `i*`, `f*`, `c*` to indicate both Python scalars and weakly-typed arrays.
```
# @title
from jax import dtypes
import jax
import jax.numpy as jnp
import pandas as pd
from IPython import display

jax.config.update('jax_enable_x64', True)
jnp_dtypes = {
'b': jnp.bool_.dtype,
'u8': jnp.uint8.dtype, 'u16': jnp.uint16.dtype, 'u32': jnp.uint32.dtype, 'u64': jnp.uint64.dtype,
'i8': jnp.int8.dtype, 'i16': jnp.int16.dtype, 'i32': jnp.int32.dtype, 'i64': jnp.int64.dtype,
'bf16': jnp.bfloat16.dtype, 'f16': jnp.float16.dtype, 'f32': jnp.float32.dtype, 'f64': jnp.float64.dtype,
'c64': jnp.complex64.dtype, 'c128': jnp.complex128.dtype,
'i*': int, 'f*': float, 'c*': complex}
jnp_dtype_to_code = {val: key for key, val in jnp_dtypes.items()}
def make_jnp_zero(dtype):
if dtype in {int, float, complex}:
return dtype(0)
else:
return jnp.zeros((), dtype=dtype)
def jnp_result_code(dtype1, dtype2):
try:
out = jnp.add(make_jnp_zero(dtype1), make_jnp_zero(dtype2))
except TypeError:
return '-'
else:
if hasattr(out, 'aval') and out.aval.weak_type:
return out.dtype.kind + '*'
elif type(out) in {int, float, complex}:
return jnp_dtype_to_code[type(out)]
else:
return jnp_dtype_to_code[out.dtype]
grid = [[jnp_result_code(dtype1, dtype2)
for dtype2 in jnp_dtypes.values()]
for dtype1 in jnp_dtypes.values()]
table = pd.DataFrame(grid, index=jnp_dtypes.keys(), columns=jnp_dtypes.keys())
display.HTML(table.to_html())
```
| | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| b | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
| u8 | u8 | u8 | u16 | u32 | u64 | i16 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | u8 | f* | c* |
| u16 | u16 | u16 | u16 | u32 | u64 | i32 | i32 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | u16 | f* | c* |
| u32 | u32 | u32 | u32 | u32 | u64 | i64 | i64 | i64 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | u32 | f* | c* |
| u64 | u64 | u64 | u64 | u64 | u64 | f* | f* | f* | f* | bf16 | f16 | f32 | f64 | c64 | c128 | u64 | f* | c* |
| i8 | i8 | i16 | i32 | i64 | f* | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i8 | f* | c* |
| i16 | i16 | i16 | i32 | i64 | f* | i16 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i16 | f* | c* |
| i32 | i32 | i32 | i32 | i64 | f* | i32 | i32 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i32 | f* | c* |
| i64 | i64 | i64 | i64 | i64 | f* | i64 | i64 | i64 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f* | c* |
| bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | f32 | f32 | f64 | c64 | c128 | bf16 | bf16 | c64 |
| f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f32 | f16 | f32 | f64 | c64 | c128 | f16 | f16 | c64 |
| f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f64 | c64 | c128 | f32 | f32 | c64 |
| f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
| c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c128 | c64 | c128 | c64 | c64 | c64 |
| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
| i* | i* | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
| f* | f* | f* | f* | f* | f* | f* | f* | f* | f* | bf16 | f16 | f32 | f64 | c64 | c128 | f* | f* | c* |
| c* | c* | c* | c* | c* | c* | c* | c* | c* | c* | c64 | c64 | c64 | c128 | c64 | c128 | c* | c* | c* |
###### JAX Type Promotion: `jax.lax`[#](#jax-type-promotion-jax-lax)
`jax.lax` is lower-level, and does not do any implicit promotion. Here we use `i*`, `f*`, `c*` to indicate both Python scalars and weakly-typed arrays.
```
# @title
from jax import dtypes
import jax
import jax.numpy as jnp
import pandas as pd
from IPython import display

jax.config.update('jax_enable_x64', True)
jnp_dtypes = {
'b': jnp.bool_.dtype,
'u8': jnp.uint8.dtype, 'u16': jnp.uint16.dtype, 'u32': jnp.uint32.dtype, 'u64': jnp.uint64.dtype,
'i8': jnp.int8.dtype, 'i16': jnp.int16.dtype, 'i32': jnp.int32.dtype, 'i64': jnp.int64.dtype,
'bf16': jnp.bfloat16.dtype, 'f16': jnp.float16.dtype, 'f32': jnp.float32.dtype, 'f64': jnp.float64.dtype,
'c64': jnp.complex64.dtype, 'c128': jnp.complex128.dtype,
'i*': int, 'f*': float, 'c*': complex}
jnp_dtype_to_code = {val: key for key, val in jnp_dtypes.items()}
def make_jnp_zero(dtype):
  if dtype in {int, float, complex}:
    return dtype(0)
  else:
    return jnp.zeros((), dtype=dtype)

def jnp_result_code(dtype1, dtype2):
  try:
    out = jax.lax.add(make_jnp_zero(dtype1), make_jnp_zero(dtype2))
  except TypeError:
    return '-'
  else:
    if hasattr(out, 'aval') and out.aval.weak_type:
      return out.dtype.kind + '*'
    elif type(out) in {int, float, complex}:
      return jnp_dtype_to_code[type(out)]
    else:
      return jnp_dtype_to_code[out.dtype]

grid = [[jnp_result_code(dtype1, dtype2)
         for dtype2 in jnp_dtypes.values()]
        for dtype1 in jnp_dtypes.values()]
table = pd.DataFrame(grid, index=jnp_dtypes.keys(), columns=jnp_dtypes.keys())
display.HTML(table.to_html())
```
| | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| b | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| u8 | - | u8 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| u16 | - | - | u16 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| u32 | - | - | - | u32 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| u64 | - | - | - | - | u64 | - | - | - | - | - | - | - | - | - | - | - | - | - |
| i8 | - | - | - | - | - | i8 | - | - | - | - | - | - | - | - | - | - | - | - |
| i16 | - | - | - | - | - | - | i16 | - | - | - | - | - | - | - | - | - | - | - |
| i32 | - | - | - | - | - | - | - | i32 | - | - | - | - | - | - | - | - | - | - |
| i64 | - | - | - | - | - | - | - | - | i64 | - | - | - | - | - | - | i64 | - | - |
| bf16 | - | - | - | - | - | - | - | - | - | bf16 | - | - | - | - | - | - | - | - |
| f16 | - | - | - | - | - | - | - | - | - | - | f16 | - | - | - | - | - | - | - |
| f32 | - | - | - | - | - | - | - | - | - | - | - | f32 | - | - | - | - | - | - |
| f64 | - | - | - | - | - | - | - | - | - | - | - | - | f64 | - | - | - | f64 | - |
| c64 | - | - | - | - | - | - | - | - | - | - | - | - | - | c64 | - | - | - | - |
| c128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | c128 | - | - | c128 |
| i* | - | - | - | - | - | - | - | - | i64 | - | - | - | - | - | - | i* | - | - |
| f* | - | - | - | - | - | - | - | - | - | - | - | - | f64 | - | - | - | f* | - |
| c* | - | - | - | - | - | - | - | - | - | - | - | - | - | - | c128 | - | - | c* |
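Concretely, the `-` entries mean that mixing dtypes at the `jax.lax` level raises an error rather than promoting; a minimal demonstration:

```
from jax import lax
import jax.numpy as jnp

lax.add(jnp.float32(1), jnp.float32(2))    # ok: operand dtypes match
try:
  lax.add(jnp.float32(1), jnp.float16(2))  # no implicit promotion at this level
except TypeError as e:
  print(e)
```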
##### Jax and Jaxlib versioning[#](#jax-and-jaxlib-versioning)
###### Why are `jax` and `jaxlib` separate packages?[#](#why-are-jax-and-jaxlib-separate-packages)
We publish JAX as two separate Python wheels, namely `jax`, which is a pure Python wheel, and `jaxlib`, which is a mostly-C++ wheel that contains libraries such as:
* XLA,
* pieces of LLVM used by XLA,
* MLIR infrastructure, such as the MHLO Python bindings, and
* JAX-specific C++ libraries for fast JIT and PyTree manipulation.
We distribute separate `jax` and `jaxlib` packages because it makes it easy to work on the Python parts of JAX without having to build C++ code or even having a C++ toolchain installed. `jaxlib` is a large library that is not easy for many users to build, but most changes to JAX only touch Python code. By allowing the Python pieces to be updated independently of the C++ pieces, we improve the development velocity for Python changes.
In addition, `jaxlib` is not cheap to build, but we want to be able to iterate on and run the JAX tests in environments without a lot of CPU, for example in GitHub Actions or on a laptop. Many of our CI builds simply use a prebuilt `jaxlib`, rather than needing to rebuild the C++ pieces of JAX on each PR.
As we will see, distributing `jax` and `jaxlib` separately comes with a cost, in that it requires that changes to `jaxlib` maintain a backward compatible API.
However, we believe that on balance it is preferable to make Python changes easy, even if at the cost of making C++ changes slightly harder.
###### How are `jax` and `jaxlib` versioned?[#](#how-are-jax-and-jaxlib-versioned)
Summary: `jax` and `jaxlib` share the same version number in the JAX source tree, but are released as separate Python packages.
When installed, the `jax` package version must be greater than or equal to `jaxlib`’s version,
and `jaxlib`’s version must be greater than or equal to the minimum `jaxlib`
version specified by `jax`.
Both `jax` and `jaxlib` releases are numbered `x.y.z`, where `x` is the major version, `y` is the minor version, and `z` is an optional patch release.
Version numbers must follow
[PEP 440](https://www.python.org/dev/peps/pep-0440/). Version number comparisons are lexicographic comparisons on tuples of integers.
Each `jax` release has an associated minimum `jaxlib` version `mx.my.mz`. The minimum `jaxlib` version for `jax` version `x.y.z` must be no greater than
`x.y.z`.
For `jax` version `x.y.z` and `jaxlib` version `lx.ly.lz` to be compatible,
the following must hold:
* The jaxlib version (`lx.ly.lz`) must be greater than or equal to the minimum jaxlib version (`mx.my.mz`).
* The jax version (`x.y.z`) must be greater than or equal to the jaxlib version
(`lx.ly.lz`).
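These checks reduce to lexicographic comparison of integer tuples; a minimal sketch (`compatible` is an illustrative helper, not a JAX API):

```
def compatible(jax_version, jaxlib_version, minimum_jaxlib_version):
  # Versions are tuples of ints, e.g. (0, 3, 15); tuple comparison is lexicographic.
  return minimum_jaxlib_version <= jaxlib_version <= jax_version

assert compatible((0, 3, 15), (0, 3, 14), (0, 3, 14))
assert not compatible((0, 3, 15), (0, 3, 16), (0, 3, 14))  # jaxlib newer than jax
```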
These constraints imply the following rules for releases:
* `jax` may be released on its own at any time, without updating `jaxlib`.
* If a new `jaxlib` is released, a `jax` release must be made at the same time.
These
[version constraints](https://github.com/google/jax/blob/main/jax/version.py)
are currently checked by `jax` at import time, instead of being expressed as Python package version constraints. `jax` checks the `jaxlib` version at runtime rather than using a `pip` package version constraint because we
[provide separate `jaxlib` wheels](https://github.com/google/jax#installation)
for a variety of hardware and software versions (e.g, GPU, TPU, etc.). Since we do not know which is the right choice for any given user, we do not want `pip`
to install a `jaxlib` package for us automatically.
In the future, we hope to separate out the hardware-specific pieces of `jaxlib`
into separate plugins, at which point the minimum version could be expressed as a Python package dependency. For now, we do provide platform-specific extra requirements that install a compatible jaxlib version,
e.g., `jax[cuda]`.
###### How can I safely make changes to the API of `jaxlib`?[#](#how-can-i-safely-make-changes-to-the-api-of-jaxlib)
* `jax` may drop compatibility with older `jaxlib` releases at any time, so long as the minimum `jaxlib` version is increased to a compatible version. However,
note that the minimum `jaxlib`, even for unreleased versions of `jax`, must be a released version! This allows us to use released `jaxlib` wheels in our CI builds, and allows Python developers to work on `jax` at HEAD without ever needing to build `jaxlib`.
For example, to remove an old backwards compatibility path in the `jax` Python code, it is sufficient to bump the minimum jaxlib version and then delete the compatibility path.
* `jaxlib` may drop compatibility with older `jax` releases lower than its own release version number. The version constraints enforced by `jax`
would forbid the use of an incompatible `jaxlib`.
For example, for `jaxlib` to drop a Python binding API used by an older `jax`
version, the `jaxlib` minor or major version number must be incremented.
* If possible, changes to the `jaxlib` should be made in a backwards-compatible way.
In general `jaxlib` may freely change its API, so long as the rules about `jax` being compatible with all `jaxlib`s at least as new as the minimum version are followed. This implies that
`jax` must always be compatible with at least two versions of `jaxlib`,
namely, the last release, and the tip-of-tree version, effectively the next release. This is easier to do if compatibility is maintained,
although incompatible changes can be made using version tests from `jax`; see below.
For example, it is usually safe to add a new function to `jaxlib`, but unsafe to remove an existing function or to change its signature if current `jax` is still using it. Changes to `jax` must work or degrade gracefully for all `jaxlib` releases greater than the minimum up to HEAD.
Note that the compatibility rules here only apply to *released* versions of
`jax` and `jaxlib`. They do not apply to unreleased versions; that is, it is ok to introduce and then remove an API from `jaxlib` if it is never released, or if no released `jax` version uses that API.
###### How is the source to `jaxlib` laid out?[#](#how-is-the-source-to-jaxlib-laid-out)
`jaxlib` is split across two main repositories, namely the
[`jaxlib/` subdirectory in the main JAX repository](https://github.com/google/jax/tree/main/jaxlib)
and in the
[XLA source tree, which lives inside the XLA repository](https://github.com/openxla/xla).
The JAX-specific pieces inside XLA are primarily in the
[`xla/python` subdirectory](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/xla/python).
The reason that C++ pieces of JAX, such as Python bindings and runtime components, are inside the XLA tree is partially historical and partially technical.
The historical reason is that originally the
`xla/python` bindings were envisaged as general purpose Python bindings that might be shared with other frameworks. In practice this is increasingly less true, and `xla/python` incorporates a number of JAX-specific pieces and is likely to incorporate more. So it is probably best to simply think of
`xla/python` as part of JAX.
The technical reason is that the XLA C++ API is not stable. By keeping the XLA:Python bindings in the XLA tree, their C++ implementation can be updated atomically with the C++ API of XLA. It is easier to maintain backward and forward compatibility of Python APIs than C++ ones, so `xla/python` exposes Python APIs and is responsible for maintaining backward compatibility at the Python level.
`jaxlib` is built using Bazel out of the `jax` repository. The pieces of
`jaxlib` from the XLA repository are incorporated into the build
[as a Bazel submodule](https://github.com/google/jax/blob/main/WORKSPACE).
To update the version of XLA used during the build, one must update the pinned version in the Bazel `WORKSPACE`. This is done manually on an as-needed basis, but can be overridden on a build-by-build basis.
###### How do we make changes across the `jax` and `jaxlib` boundary between releases?[#](#how-do-we-make-changes-across-the-jax-and-jaxlib-boundary-between-releases)
The jaxlib version is a coarse instrument: it only lets us reason about
*releases*.
However, since the `jax` and `jaxlib` code is split across repositories that cannot be updated atomically in a single change, we need to manage compatibility at a finer granularity than our release cycle. To manage fine-grained compatibility, we have additional versioning that is independent of the `jaxlib`
release version numbers.
We maintain an additional version number (`_version`) in
[`xla_client.py` in the XLA repository](https://github.com/openxla/xla/blob/main/xla/python/xla_client.py).
The idea is that this version number, which is defined in `xla/python`
together with the C++ parts of JAX, is also accessible to JAX Python as
`jax._src.lib.xla_extension_version`, and must be incremented every time that a change is made to the XLA/Python code that has backwards compatibility implications for `jax`. The JAX Python code can then use this version number to maintain backwards compatibility, e.g.:
```
from jax._src.lib import xla_extension_version

# 123 is the new version number for _version in xla_client.py
if xla_extension_version >= 123:
  # Use new code path
  ...
else:
  # Use old code path.
  ...
```
Note that this version number is in *addition* to the constraints on the released version numbers, that is, this version number exists to help manage compatibility during development for unreleased code. Releases must also follow the compatibility rules given above.
##### Sequencing side-effects in JAX[#](#sequencing-side-effects-in-jax)
*sharadmv@*
*May 9 2022*
###### Overview[#](#overview)
When we write JAX code, we can usually pretend we’re writing single-threaded, eagerly-executed Python even though underneath the hood, JAX and its runtime may execute it asynchronously in the background.
As long as we write pure (side-effect-free) code, these performance optimizations are usually invisible to us and don’t interfere with our single-threaded mental model.
Asynchronous execution is great – we get performant, parallel code without having to think about it at all!
However, in the presence of side-effects, the illusion begins to break down and the cracks in our mental model start to show. Specifically, these differences show up when we think about the *order* in which side-effects happen.
In this design note, we explore the interaction between JAX’s execution model,
and the ordering of side-effects. We also provide a way of enforcing a
“single-threaded” ordering of effects.
###### Background[#](#background)
When we write the following Python code
```
def f():
  print("hello")
  return 2

def g():
  print("world")
  return 3

f()
g()
```
we expect `"hello"` to be printed before `"world"`. This might seem obvious but consider the following JAX code:
```
@partial(jax.jit, device=<device 0>)
def f():
  return 2

@partial(jax.jit, device=<device 1>)
def g():
  return 3

f()
g()
```
In many cases, JAX will execute `f` and `g` *in parallel*, dispatching the computations onto different threads – `g` might actually be executed before `f`. Parallel execution is a nice performance optimization, especially if copying to and from a device is expensive (see the [asynchronous dispatch note](https://jax.readthedocs.io/en/latest/async_dispatch.html) for more details).
In practice, however, we often don’t need to think about asynchronous dispatch because we’re writing pure functions and only care about the inputs and outputs of functions – we’ll naturally block on future values.
However, now imagine that we have a `jax.print` function that works inside of JIT-ted JAX functions (`host_callback.id_print` is an example of this). Let’s return to the previous example except with prints in the mix.
```
@partial(jax.jit, device=<device 0>)
def f():
  jax.print("hello")
  return 2

@partial(jax.jit, device=<device 1>)
def g():
  jax.print("world")
  return 3

f()
g()
```
Thanks to asynchronous dispatch, we could actually see `"world"` being printed before `"hello"`. The reordering of the print side-effects breaks the illusion of a single-threaded execution model.
Another example of where side-effects can “reveal” out-of-order execution is when we we compile JAX programs. Consider the following JAX code:
```
@jax.jit
def f(x):
  jax.print("hello")
  jax.print("world")
  return x
```
Even though in Python, we’ve written the `"hello"` print before the `"world"` print,
a compiler like XLA is free to reorder them because there’s no explicit data-dependence between the prints.
###### Motivation[#](#motivation)
We’d like to support “ordered” effects. When we say ordered, we mean that the effects occur in the same order as we would if we were executing a single-threaded Python program.
This is our main desideratum. In the presence of explicit parallelism like `pmap` or user threads, we don’t need to maintain this behavior but at least if the user is not explicitly requesting parallelism, we’d like to preserve a single-threaded ordering.
Before we dive in more, let's first step back and ask ourselves whether it is okay to reorder effects in the name of performance, and conversely, whether we need to enforce an ordering on effects at all. In some cases we don't need ordering, and some side-effects shouldn't adversely affect the performance of a JAX program. However, for other side-effects, we may want to enforce a single-threaded program order so users don't get counterintuitive behavior. Consider a logging effect.
```
@jax.jit
def f(x, y):
  log_value(x)
  log_value(y)

f(1, 2)
```
If `log_value` is mutating a global list, we might expect that we add `x` before adding `y`.
For a stricter effect like this, we may want the option to order the effects.
###### Enforcing ordered effects[#](#enforcing-ordered-effects)
The main tool we have to enforce the ordering of computations is *data-dependence*.
Simply put, if a function `g` has an input that is the output of a function `f`,
`f` must be executed before `g`.
However, we may have side effects like prints that have no inputs at all so naively we couldn’t sequence them. We thus use *tokens* as a means of injecting artificial data-dependence into a computation.
What is a token? A token is just a dummy value that can be threaded in and out of a computation.
By threading the same token in and out and several computations, we enforce that they have to happen in a certain order. Let’s take the previous print example and see what it would look like with tokens in the mix:
```
@jax.jit
def f(token, x):
  token = jax.print(token, "hello")
  token = jax.print(token, "world")
  return token, x
```
If we rewrite `jax.print` to take in and return a token, we have now sequenced the two prints, since the input to the second print is the output of the first print.
The actual value of `token` can be anything really, but we’ll see in practice that the tokens are invisible to users.
###### Runtime tokens vs. compiler tokens[#](#runtime-tokens-vs-compiler-tokens)
Here we will actually start talking about implementation details. In practice, we’ll need two separate types of tokens to sequence effects: one for each of the aforementioned sources of reordering. We’ll need *runtime tokens* to sequence asynchronously dispatched side-effecting computations and we’ll need *compiler tokens* to sequence effects within computations.
In practice, our computation will be rewritten to look like this:
```
@jax.jit
def f(runtime_token, x):
  compiler_token = new_compiler_token()
  compiler_token = jax.print(compiler_token, "hello")
  compiler_token = jax.print(compiler_token, "world")
  return runtime_token, x
```
Notice how the runtime tokens are only used at the JIT boundary and the compiler tokens are only within the compiled code. Compiler tokens are created during
“lowering” (we convert Python code to a lower level representation like HLO or MHLO)
but runtime tokens need to be managed in Python since they’re being threaded in and out of JIT-ted functions.
Furthermore, notice that the runtime tokens are “disconnected” from the compiler tokens, meaning there’s no data dependency between them. This could potentially be dangerous, since it seems we could lose the data dependence between the bodies of two dispatched function calls. However, if we assume “strict execution” – i.e.
a dispatched function will only start executing when all of its inputs are ready, and all of its outputs become ready at the same time – we are safe to create a fresh compiler token and return a non-output-dependent runtime token.
###### Managing runtime tokens[#](#managing-runtime-tokens)
To manage runtime tokens on behalf of the user, we’ll need to hook into JAX’s dispatch machinery.
Whenever we call a JIT-ted function, we eventually bottom out in a function that looks like this:
```
def _execute(compiled_computation, *args):
  outputs = compiled_computation.execute(*args)
  return outputs
```
At this point we need to “inject” the runtime tokens into the computation and “extract” them from the computation’s outputs:
```
def _execute(compiled_computation, *args):
  runtime_token = get_runtime_token()  # Grab global token
  runtime_token, *outputs = compiled_computation.execute(runtime_token, *args)
  update_runtime_token(runtime_token)  # Update global token
  return outputs
```
What is `runtime_token` exactly? Well, we need to be able to pass it into a `compiled_computation`,
which means it needs to be some sort of array (for now, since there’s no shared token representation inside and outside compiled JAX code). In practice we can use a `(0,)`-shaped array to minimize overheads.
We also need to think about the multiple device use case, e.g. the first example where we first call a JIT-ted function on device 0 and then one on device 1.
In that case, we need to also *copy* the runtime token returned from the first computation (which lives on device 0)
to device 1 so we can pass it into the second computation. If two subsequent computations share the same device,
this copy is not necessary.
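A rough sketch of that copy step (`jax.device_put` is a real JAX API; the token handling itself is illustrative):

```
import jax
import jax.numpy as jnp

devices = jax.devices()
token = jnp.zeros((0,))  # a (0,)-shaped array standing in for the runtime token
if len(devices) > 1:
  # Copy the token across devices before dispatching the second computation.
  token = jax.device_put(token, devices[1])
```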
###### Adding compiler tokens[#](#adding-compiler-tokens)
When we lower Python code to HLO or MHLO we need to create a token at the start of the computation and ensure it is available when we have side-effecting computations that need to be ordered. The side-effecting computations will take the token as input and return it as an output.
The implementation of this token threading involves upgrading the JAX lowering machinery to do this bookkeeping automatically.
The main challenges involve dealing with higher-order primitives like call primitives and control-flow primitives. We won’t go into details in how to handle those in this design note.
###### Blocking on output tokens[#](#blocking-on-output-tokens)
Adding support for runtime and compiler tokens for side-effecting computations is important for sequencing but there’s also another subtle use-case for tokens, which is blocking on side-effecting computations.
Even if we don’t want a side-effecting computation to be *ordered* we may still want to wait on its completion. Currently we have `jax.block_until_ready`, which waits until a future value has its result ready. However, with side-effecting computations, we may have functions that don’t have a return value but are still executing a side-effect. Take the simple example here:
```
@jax.jit
def f():
  jax.print("hello world")
  return

f()  # Executed asynchronously
```
This compiled computation takes no explicit inputs and has no explicit outputs. If it was an ordered print effect,
we could block on the returned runtime token. However,
when this is an unordered computation, we don’t do any token threading. How do we wait for `f()` to finish executing when we have no output value to call `block_until_ready` on? Well, we could apply our same token strategy, except we only return runtime tokens and don’t take them as inputs. This will give us a value to block on that will only be ready once `f()` is done being executed. We’ll call these tokens
*output tokens*. We end up with a function that looks like this:
```
@jax.jit
def f():
  jax.print("hello world")
  return new_runtime_token()

f()  # Executed asynchronously
```
Underneath the hood, we’ll manage the output tokens in the same way we manage the runtime tokens but provide a method for users to block on the current set of output tokens. Unlike runtime tokens,
output tokens need to be *device-specific*.
Consider a single device use-case:
```
@jax.jit
def f():
  jax.print("hello")

@jax.jit
def g():
  jax.print("world")

f()
g()
```
Since `f()` and `g()` are executed on the same device, blocking on `g()`’s output token effectively blocks on `f()` since (as of now!), the JAX runtime does not interleave computations executed on the same device. We’ll have to revise this entire design if that changes, of course.
However, consider the two device use-case:
```
@partial(jax.jit, device=<device 0>)
def f():
  jax.print("hello")

@partial(jax.jit, device=<device 1>)
def g():
  jax.print("world")

f()
g()
```
Here we don’t want to explicitly sequence `f()` and `g()` but want to wait for both of them to finish.
We’ll need one output token for `f()` and one for `g()` and we’ll block on both of those tokens:
```
@partial(jax.jit, device=<device 0>)
def f():
  jax.print("hello")
  return new_runtime_token()

@partial(jax.jit, device=<device 1>)
def g():
  jax.print("world")
  return new_runtime_token()

t0 = f()
t1 = g()
block_until_ready((t0, t1))
```
We’ll thus need a per-device output token so we can avoid sequencing computations on different devices while offering the ability to block on side-effecting computations. We end up with the following
(approximate) change to the JAX dispatch machinery:
```
def _execute(compiled_computation, *args):
  output_token, *outputs = compiled_computation.execute(runtime_token, *args)
  update_output_token(output_token, compiled_computation.device)
  return outputs
```
We’ll also need to expose a function that blocks on the output token:
```
def effects_barrier():
  output_token.block_until_ready()
```
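Putting the pieces together, a user-level sketch (with the hypothetical `jax.print` and the `effects_barrier` defined above):

```
@jax.jit
def f():
  jax.print("hello world")

f()                # dispatched asynchronously; nothing to block on directly
effects_barrier()  # waits until f's print side-effect has completed
```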
Note that blocking on output tokens may not be very common, since most JAX computations will return a value to block on. However, output tokens are helpful for testing and profiling, and are good to support so that we have a consistent and cohesive effect system.
###### Some more details[#](#some-more-details)
* All of the aforementioned token management infrastructure will be *thread-local*. This means that each user thread will have their own independent stream of runtime tokens. Sequencing is only promised at a user thread level.
* In practice, we have one runtime token per effect. Different instances of that effect will be sequenced. This is to avoid sequencing effectful computations that may not have any relation to each other. Technically, though, this goes against our original goal of enforcing a single-threaded Python program ordering, but this is a tradeoff that could be modulated by having both “effect”-specific tokens and “global” tokens.
##### `jax.remat` / `jax.checkpoint` changes: what you need to know[#](#jax-remat-jax-checkpoint-changes-what-you-need-to-know)
###### Contents[#](#contents)
* [What’s going on?](#whats-going-on)
* [How can I disable the change, and go back to the old behavior for now?](#how-can-i-disable-the-change-and-go-back-to-the-old-behavior-for-now)
* [Why are we doing this?](#why-are-we-doing-this)
* [What are the possible issues after the upgrade?](#what-are-the-possible-issues-after-the-upgrade)
###### What’s going on?[#](#whats-going-on)
As of [#11830](https://github.com/google/jax/pull/11830) we’re switching on a new implementation of [`jax.checkpoint()`](index.html#jax.checkpoint), aka `jax.remat()` (the two names are aliases of one another). **For most code, there will be no changes.** But there may be some observable differences in edge cases; see [What are the possible issues after the upgrade?](#what-are-the-possible-issues-after-the-upgrade)
###### How can I disable the change, and go back to the old behavior for now?[#](#how-can-i-disable-the-change-and-go-back-to-the-old-behavior-for-now)
In case you have a problem with this change, **through version `jax==0.3.16`** it is possible to switch off the new implementation by setting the `jax_new_checkpoint` config option to be False, in any one of these ways:
1. set the shell environment variable `JAX_NEW_CHECKPOINT=0`;
2. execute `jax.config.update('jax_new_checkpoint', False)`;
3. if you parse flags with `absl`, pass the `--jax_new_checkpoint=False` option.
If you need to revert to the old implementation, **please reach out** on a GitHub issue so that we can make the new implementation work for you.
As of `jax==0.3.17` the `jax_new_checkpoint` config option is no longer available. If you have an issue, please reach out on [the issue tracker](https://github.com/google/jax/issues) so we can help fix it!
###### Why are we doing this?[#](#why-are-we-doing-this)
At the time of writing, JAX has two parallel implementations of `jax.checkpoint`. The new one has been used for months (e.g. by Pax and Flaxformer/T5X) on an opt-in basis. But it hasn’t been on-by-default.
We want to switch the new implementation to be on-by-default, and then delete the old implementation. Using the new implementation, and removing the old implementation, gives users several benefits.
###### User-customizable rematerialization policies[#](#user-customizable-rematerialization-policies)
The main upside of the new implementation is a new feature corresponding to the `policy` argument. The idea is to give precise user control over what intermediates get saved (versus rematerialized) during the forward pass of automatic differentiation. By exercising this control over the memory-usage vs recomputation tradeoff, users can get significant performance wins, especially in large models and in our LLM MLPerf submission!
The full documentation for this feature is still forthcoming, but here’s a quick example:
```
from functools import partial

import jax
import jax.numpy as jnp

def apply_layer(W, x):
  return jnp.sin(jnp.dot(W, x))

@partial(jax.checkpoint, policy=jax.checkpoint_policies.checkpoint_dots)
def predict(params, x):
  for W in params[:-1]:
    x = apply_layer(W, x)
  return jnp.dot(params[-1], x)
```
By applying `jax.checkpoint` with `policy=jax.checkpoint_policies.checkpoint_dots` here, we ensure that only the results of matrix multiplies are allowed to be saved during the forward pass. The Jacobian coefficient values from `cos` applications, and the values of `sin` applications needed to compute them, are not saved from the forward pass and are instead recomputed during the backward pass. (Policies like this one can be effective on TPUs, where elementwise computations are effectively free but results from the matrix unit are worth saving.)
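As a quick, illustrative way to exercise the policy, one can differentiate through the `predict` function from the example above (the shapes here are made up):

```
params = [jnp.ones((4, 4)) for _ in range(3)]
x = jnp.ones(4)
# Differentiating through `predict` triggers the forward/backward passes
# that the checkpoint policy governs.
grads = jax.grad(lambda p: predict(p, x).sum())(params)
```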
###### Ability to rematerialize constants, not just operations with data dependence on arguments[#](#ability-to-rematerialize-constants-not-just-operations-with-data-dependence-on-arguments)
The old `jax.checkpoint` implementation couldn’t actually rematerialize computations without a data dependence on arguments to the decorated function. Consider this toy example:
```
@jax.checkpoint
def f(x):
  a = some_function(jnp.arange(10_000_000))  # `a` does not depend on `x`
  return a * x
```
The old `jax.checkpoint` implementation was forced to save the value of `a`, which could require a lot of memory. The new `jax.checkpoint` implementation can rematerialize rather than save the value of `a`.
###### Significantly less Python overhead in some cases[#](#significantly-less-python-overhead-in-some-cases)
The new `jax.checkpoint` incurs significantly less Python overhead in some cases. [Simple overhead benchmarks](https://github.com/google/jax/blob/88636d2b649bfa31fa58a30ea15c925f35637397/benchmarks/api_benchmark.py#L511-L539) got 10x faster. These overheads only arise in eager op-by-op execution, so in the common case of using a `jax.checkpoint` under a `jax.jit` or similar the speedups aren’t relevant. But still, nice!
###### Enabling new JAX features by simplifying internals[#](#enabling-new-jax-features-by-simplifying-internals)
This change unlocks big future user benefits too, like custom batching rules (the `vmap` analogue of `custom_vjp`) and a forward-differentiable upgrade to `custom_vjp`. It also significantly reduces complexity in parts of the JAX codebase, which will be good for maintainability and bug-fixing in general.
###### What are the possible issues after the upgrade?[#](#what-are-the-possible-issues-after-the-upgrade)
###### Innocuous numerical changes[#](#innocuous-numerical-changes)
Because the new implementation can rematerialize more computations, including those of potentially large constants, some code may see small numerical changes. The magnitude of any numerical changes should be within the range we expect from changing compiler optimizations, like reordering of floating point operations. But some overly tight test tolerances may need to be slightly relaxed.
###### The `concrete=True` option is removed.[#](#the-concrete-true-option-is-removed)
The old `jax.checkpoint` implementation had a boolean `concrete` option, which allowed tracing on concrete Python values (rather than delaying all computations and only tracing on abstracted values). That option was seldom used, and in the cases where it was used there were much simpler alternatives. So we removed the option in the new `jax.checkpoint`.
For example, the overwhelmingly common use of `concrete=True` in Google code was to support passing an argument like `is_training`:
```
@partial(jax.checkpoint, concrete=True)  # OLD jax.checkpoint API
def foo(x, is_training):
  if is_training:
    return g(x)
  else:
    return h(x)
```
With the new `jax.checkpoint` implementation, we can accomplish the same using the `static_argnums` option:
```
@partial(jax.checkpoint, static_argnums=(1,))  # NEW jax.checkpoint API
def foo(x, is_training):
  if is_training:
    ...
```
If `jax.numpy` operations need to be performed on static arguments, with their numerical results computed during Python tracing rather than delayed, we can use `static_argnums` with `jax.ensure_compile_time_eval()`. But it seems unlikely that you’d need this!
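For completeness, here is a hypothetical sketch of that combination (`jax.ensure_compile_time_eval` is a real context manager; `foo` and its `scale` argument are made up):

```
from functools import partial
import jax
import jax.numpy as jnp

@partial(jax.checkpoint, static_argnums=(1,))
def foo(x, scale):
  with jax.ensure_compile_time_eval():
    factor = jnp.log2(scale)  # `scale` is static, so this runs during tracing
  return x * factor
```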
##### Type Annotation Roadmap for JAX[#](#type-annotation-roadmap-for-jax)
* *Author: jakevdp*
* *Date: August 2022*
###### Background[#](#background)
Python 3.0 introduced optional function annotations ([PEP 3107](https://peps.python.org/pep-3107/)), which were later codified for use in static type checking around the release of Python 3.5 ([PEP 484](https://peps.python.org/pep-0484/)).
To some degree, type annotations and static type checking have become an integral part of many Python development workflows, and to this end we have added annotations in a number of places throughout the JAX API.
The current state of type annotations in JAX is a bit patchwork, and efforts to add more have been hampered by more fundamental design questions.
This doc attempts to summarize those issues and generate a roadmap for the goals and non-goals of type annotations in JAX.
Why do we need such a roadmap? Better/more comprehensive type annotations are a frequent request from users, both internally and externally.
In addition, we frequently receive pull requests from external users (for example, [PR #9917](https://github.com/google/jax/pull/9917), [PR #10322](https://github.com/google/jax/pull/10322)) seeking to improve JAX’s type annotations: it’s not always clear to the JAX team member reviewing the code whether such contributions are beneficial, particularly when they introduce complex Protocols to address the challenges inherent to full-fledged annotation of JAX’s use of Python.
This document details JAX’s goals and recommendations for type annotations within the package.
###### Why type annotations?[#](#why-type-annotations)
There are a number of reasons that a Python project might wish to annotate their code-base; we’ll summarize them in this document as Level 1, Level 2, and Level 3.
###### Level 1: Annotations as documentation[#](#level-1-annotations-as-documentation)
When originally introduced in [PEP 3107](https://peps.python.org/pep-3107/), type annotations were motivated partly by the ability to use them as concise, inline documentation of function parameter types and return types. JAX has long utilized annotations in this manner; one example is the common pattern of creating type names aliased to `Any`, as can be found in `lax/slicing.py` [[source](https://github.com/google/jax/blob/2bc3e39cd9104071ee39dacac22abd51b94eb27e/jax/_src/lax/slicing.py#L47-L58)]:
```
Array = Any
Shape = core.Shape

def slice(operand: Array, start_indices: Sequence[int],
          limit_indices: Sequence[int],
          strides: Optional[Sequence[int]] = None) -> Array:
  ...
```
For the purposes of static type checking, this use of `Array = Any` for array type annotations puts no constraint on the argument values (`Any` is equivalent to no annotation at all), but it does serve as a form of useful in-code documentation for the developer.
For the sake of generated documentation, the name of the alias gets lost (the [HTML docs](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.slice.html) for `jax.lax.slice` report operand as type `Any`), so the documentation benefit does not go beyond the source code (though we could enable some `sphinx-autodoc` options to improve this: See [autodoc_type_aliases](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#confval-autodoc_type_aliases)).
A benefit of this level of type annotation is that it is never wrong to annotate a value with `Any`, so it will provide a concrete benefit to developers and users in the form of documentation, without added complexity of satisfying the stricter needs of any particular static type checker.
###### Level 2: Annotations for intelligent autocomplete[#](#level-2-annotations-for-intelligent-autocomplete)
Many modern IDEs take advantage of type annotations as inputs to [intelligent code completion](https://en.wikipedia.org/wiki/Intelligent_code_completion) systems. One example of this is the [Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance) extension for VSCode, which uses Microsoft’s [pyright](https://github.com/microsoft/pyright) static type checker as a source of information for VSCode’s [IntelliSense](https://code.visualstudio.com/docs/editor/intellisense) completions.
This use of type checking requires going further than the simple aliases used above; for example, knowing that the `slice` function returns an alias of `Any` named `Array` does not add any useful information to the code completion engine. However, were we to annotate the function with a `DeviceArray` return type, the autocomplete would know how to populate the namespace of the result, and thus be able to suggest more relevant autocompletions during the course of development.
JAX has begun to add this level of type annotation in a few places; one example is the `jnp.ndarray` return type within the `jax.random` package [[source](https://github.com/google/jax/blob/2bc3e39cd9104071ee39dacac22abd51b94eb27e/jax/_src/random.py#L359)]:
```
def shuffle(key: KeyArray, x: Array, axis: int = 0) -> jnp.ndarray:
  ...
```
In this case `jnp.ndarray` is an abstract base class that forward-declares the attributes and methods of JAX arrays ([see source](https://github.com/google/jax/blob/2bc3e39cd9104071ee39dacac22abd51b94eb27e/jax/_src/numpy/ndarray.py#L41)), and so Pylance in VSCode can offer the full set of autocompletions on results from this function: the autocomplete field lists all methods and attributes declared by the abstract `ndarray` class.
We’ll discuss further below why it was necessary to create this abstract class rather than annotating with `DeviceArray` directly.
###### Level 3: Annotations for static type-checking[#](#level-3-annotations-for-static-type-checking)
These days, static type-checking often is the first thing people think of when considering the purpose of type annotations in Python code.
While Python does not do any runtime checking of types, several mature static type checking tools exist that can do this as part of a CI test suite.
The most important ones for JAX are the following:
* [python/mypy](https://github.com/python/mypy) is more or less the standard in the open Python world. JAX currently runs mypy on a subset of source files within the Github Actions CI checks.
* [google/pytype](https://github.com/google/pytype) is Google’s static type checker, and projects which depend on JAX within Google frequently use this.
* [microsoft/pyright](https://github.com/microsoft/pyright) is important as the static type checker used within VSCode for the Pylance completions mentioned previously.
Full static type checking is the strictest of all the type annotation applications, because it will surface an error any time your type annotations are not precisely correct.
On the one hand, this is nice because your static type analysis may catch faulty type annotations (for example, a case where a `DeviceArray` method is missing from the `jnp.ndarray` abstract class).
On the other hand, this strictness can make the type checking process very brittle in packages that often rely on duck-typing rather than strict type-safe APIs.
You’ll currently find code comments like `#type: ignore` (for mypy) or `#pytype: disable` (for pytype) peppered throughout the JAX codebase in several hundred places.
These typically represent cases where typing problems have arisen; they may be inaccuracies in JAX type annotations, or inaccuracies in the static type checker’s ability to correctly follow the control flow in the code.
On occasion, they are due to real & subtle bugs in the behavior of pytype or mypy.
In rare cases, they may be due to the fact that JAX uses Python patterns that are difficult or even impossible to express in terms of Python’s static type annotation syntax.
###### Type annotation challenges for JAX[#](#type-annotation-challenges-for-jax)
JAX currently has type annotations that are a mixture of different styles, and aimed at all three levels of type annotation discussed above.
Partly, this comes from the fact that JAX’s source code poses a number of unique challenges for Python’s type annotation system. We’ll outline them here.
###### Challenge 1: pytype, mypy and developer friction[#](#challenge-1-pytype-mypy-and-developer-friction)
One challenge JAX currently faces is that package development must satisfy the constraints of two different static type checking systems, `pytype` (used by internal CI and internal Google projects) and `mypy` (used by external CI and external dependencies).
Although the two type checkers have broad overlap in their behavior, each presents its own unique corner cases, as evidenced by the numerous `#type: ignore` and `#pytype: disable` statements throughout the JAX codebase.
This creates friction in development: internal contributors may iterate until tests pass, only to find that on export their pytype-approved code falls afoul of mypy.
For external contributors, it’s often the opposite: a recent example is [#9596](https://github.com/google/jax/issues/9596) which had to be rolled-back after it failed internal Google pytype checks.
Each time we move a type annotation from Level 1 (`Any` everywhere) to Level 2 or 3 (stricter annotations), it introduces more potential for such frustrating developer experiences.
###### Challenge 2: array duck-typing[#](#challenge-2-array-duck-typing)
One particular challenge for annotating JAX code is its heavy use of duck-typing. An input to a function marked `Array` in general could be one of many different types: a JAX `DeviceArray`, a NumPy `np.ndarray`, a NumPy scalar, a Python scalar, a Python sequence, an object with an `__array__` attribute, an object with a `__jax_array__` attribute, or any flavor of `jax.Tracer`.
For this reason, simple annotations like `def func(x: DeviceArray)` will not be sufficient, and will lead to false positives for many valid uses.
This means that type annotations for JAX functions will not be short or trivial, but we would have to effectively develop a set of JAX-specific typing extensions similar to those in the [`numpy.typing` package](https://github.com/numpy/numpy/blob/main/numpy/_typing/_array_like.py).
###### Challenge 3: transformations and decorators[#](#challenge-3-transformations-and-decorators)
JAX’s Python API relies heavily on function transformations ([`jit()`](index.html#jax.jit), [`vmap()`](index.html#jax.vmap), [`grad()`](index.html#jax.grad), etc.), and this type of API poses a particular challenge for static type analysis.
Flexible annotation for decorators has been a [long-standing issue](https://github.com/python/mypy/issues/1927) in the mypy package, which was only recently resolved by the introduction of `ParamSpec`, discussed in [PEP 612](https://peps.python.org/pep-0612/) and added in Python 3.10.
Because JAX follows [NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html), it cannot rely on Python 3.10 features until sometime after mid-2024.
In the meantime, Protocols can be used as a partial solution to this (JAX added this for jit and other methods in [#9950](https://github.com/google/jax/issues/9950)) and ParamSpec is possible to use via the `typing_extensions` package (a prototype is in [#9999](https://github.com/google/jax/issues/9999)) though this currently reveals fundamental bugs in mypy (see [python/mypy#12593](https://github.com/python/mypy/issues/12593)).
All that to say: it’s not yet clear that the API of JAX’s function transforms can be suitably annotated within the current constraints of Python type annotation tools.
###### Challenge 4: array annotation lack of granularity[#](#challenge-4-array-annotation-lack-of-granularity)
Another challenge here is common to all array-oriented APIs in Python, and has been part of the JAX discussion for several years (see [#943](https://github.com/google/jax/issues/943)).
Type annotations have to do with the Python class or type of an object, whereas in array-based languages often the attributes of the class are more important.
In the case of NumPy, JAX, and similar packages, often we would wish to annotate particular array shapes and data types.
For example, the arguments to the `jnp.linspace` function must be scalar values, but in JAX scalars are represented by zero-dimensional arrays.
So in order for annotations to not raise false positives, we must allow these arguments to be *arbitrary* arrays.
Another example is the second argument to `jax.random.choice`, which must have `dtype=int` when `shape=()`.
Python has a plan to enable type annotations with this level of granularity via Variadic Type Generics (see [PEP 646](https://peps.python.org/pep-0646/), slated for Python 3.11) but like `ParamSpec`, support for this feature will take a while to stabilize.
There are some third-party projects that may help in the meantime, in particular [google/jaxtyping](https://github.com/google/jaxtyping), but this uses non-standard annotations and may not be suitable for annotating the core JAX library itself.
All told, the array-type-granularity challenge is less of an issue than the other challenges, because the main effect is that array-like annotations will be less specific than they otherwise could be.
###### Challenge 5: imprecise APIs inherited from NumPy[#](#challenge-5-imprecise-apis-inherited-from-numpy)
A large part of JAX’s user-facing API is inherited from NumPy within the [`jax.numpy`](index.html#module-jax.numpy) submodule.
NumPy’s API was developed years before static type checking became part of the Python language, and follows Python’s historic recommendations to use a [duck-typing](https://docs.python.org/3/glossary.html#term-duck-typing)/[EAFP](https://docs.python.org/3/glossary.html#term-eafp) coding style, in which strict type-checking at runtime is discouraged. As a concrete example of this, consider the [`numpy.tile()`](https://numpy.org/doc/stable/reference/generated/numpy.tile.html#numpy.tile) function, which is defined like this:
```
def tile(A, reps):
  try:
    tup = tuple(reps)
  except TypeError:
    tup = (reps,)
  d = len(tup)
  ...
```
Here the *intent* is that `reps` would contain either an `int` or a sequence of `int` values, but the *implementation* allows `tup` to be any iterable.
When adding annotations to this kind of duck-typed code, we could take one of two routes:
1. We may choose to annotate the *intent* of the function’s API, which here might be something like `reps: Union[int, Sequence[int]]`.
2. Conversely, we may choose to annotate the *implementation* of the function, which here might look something like `reps: Union[ConvertibleToInt, Iterable[ConvertibleToInt]]` where `ConvertibleToInt` is a special protocol that covers the exact mechanism by which our function converts the inputs to integers (i.e. via `__int__`, via `__index__`, via `__array__`, etc.). Note also here that in a strict sense, `Iterable` is not sufficient here because there are objects in Python that duck-type as iterables but do not satisfy a static type check against `Iterable` (namely, an object that is iterable via `__getitem__` rather than `__iter__`.)
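For concreteness, option 2's `ConvertibleToInt` might be sketched as a typing Protocol (illustrative only; not an actual JAX or NumPy type):

```
from typing import Iterable, Protocol, Union

class ConvertibleToInt(Protocol):
  # The real mechanism would also cover __int__, __array__, etc.
  def __index__(self) -> int: ...

RepsLike = Union[ConvertibleToInt, Iterable[ConvertibleToInt]]
```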
The advantage of #1, annotating intent, is that the annotations are more useful to the user in communicating the API contract; while for the developer the flexibility leaves room for refactoring when necessary. The down-side (particularly for gradually-typed APIs like JAX’s) is that it’s quite likely that user code exists which runs correctly, but would be flagged as incorrect by a type checker.
Gradual typing of an existing duck-typed API means that the current annotation is implicitly `Any`, so changing this to a stricter type may present to users as a breaking change.
Broadly speaking, annotating intent better serves Level 1 type checking, while annotating implementation better serves Level 3, while Level 2 is more of a mixed bag (both intent and implementation are important when it comes to annotations in IDEs).
###### JAX type annotation roadmap[#](#jax-type-annotation-roadmap)
With this framing (Level 1/2/3) and JAX-specific challenges in mind, we can begin to develop our roadmap for implementing consistent type annotations across the JAX project.
###### Guiding Principles[#](#guiding-principles)
For JAX type annotation, we will be guided by the following principles:
###### Purpose of type annotations[#](#purpose-of-type-annotations)
We would like to support full, *Level 1, 2, and 3* type annotation as far as possible. In particular, this means that we should have restrictive type annotations on both inputs and outputs to public API functions.
###### Annotate for intent[#](#annotate-for-intent)
JAX type annotations should in general indicate the **intent** of APIs, rather than the implementation, so that the annotations become useful to communicate the contract of the API. This means that at times inputs that are valid at runtime may not be recognized as valid by the static type checker (one example might be an arbitrary iterator passed in place of a shape that is annotated as `Shape = Sequence[int]`).
###### Inputs should be permissively-typed[#](#inputs-should-be-permissively-typed)
Inputs to JAX functions and methods should be typed as permissively as is reasonable: for example, while shapes are typically tuples, functions that accept a shape should accept arbitrary sequences. Similarly, functions that accept a dtype need not require an instance of class `np.dtype`, but rather any dtype-convertible object. This might include strings, built-in scalar types, or scalar object constructors such as `np.float64` and `jnp.float64`. In order to make this as uniform as possible across the package, we will add a [`jax.typing`](index.html#module-jax.typing) module with common type specifications, starting with broad categories such as:
* `ArrayLike` would be a union of anything that can be implicitly converted into an array: for example, jax arrays, numpy arrays, JAX tracers, and python or numpy scalars
* `DTypeLike` would be a union of anything that can be implicitly converted into a dtype: for example, numpy dtypes, numpy dtype objects, jax dtype objects, strings, and built-in types.
* `ShapeLike` would be a union of anything that could be converted into a shape: for example, sequences of integer or integer-like objects.
* etc.
Note that these will in general be simpler than the equivalent protocols used in [`numpy.typing`](https://numpy.org/doc/stable/reference/typing.html#module-numpy.typing). For example, in the case of `DTypeLike`, JAX does not support structured dtypes, so JAX can use a simpler implementation. Similarly, in `ArrayLike`, JAX generally does not support list or tuple inputs in place of arrays, so the type definition will be simpler than the NumPy analog.
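A rough sketch of what such aliases might look like (illustrative; the eventual definitions in `jax.typing` may differ):

```
from typing import Union
import numpy as np
import jax

# Assuming jax.Array exists as the runtime array type (see discussion below).
ArrayLike = Union[jax.Array, np.ndarray, np.bool_, np.number, bool, int, float, complex]
DTypeLike = Union[str, type, np.dtype]
```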
###### Outputs should be strictly-typed[#](#outputs-should-be-strictly-typed)
Conversely, outputs of functions and methods should be typed as strictly as possible: for example, for a JAX function that returns an array, the output should be annotated with something similar to `jnp.ndarray` rather than `ArrayLike`. Functions returning a dtype should always be annotated `np.dtype`, and functions returning a shape should always be `Tuple[int]` or a strictly-typed NamedShape equivalent. For this purpose, we will implement in [`jax.typing`](index.html#module-jax.typing) several strictly-typed analogs of the permissive types mentioned above, namely:
* `Array` or `NDArray` (see below) for type annotation purposes is effectively equivalent to `Union[Tracer, jnp.ndarray]` and should be used to annotate array outputs.
* `DType` is an alias of `np.dtype`, perhaps with the ability to also represent key types and other generalizations used within JAX.
* `Shape` is essentially `Tuple[int, ...]`, perhaps with some additional flexibility to account for dynamic shapes.
* `NamedShape` is an extension of `Shape` that allows for named shapes as used internally in JAX.
* etc.
We will also explore whether the current implementation of `jax.numpy.ndarray` can be dropped in favor of making `ndarray` an alias of `Array` or similar.
###### Err toward simplicity[#](#err-toward-simplicity)
Aside from common typing protocols gathered in `jax.typing`, we should err on the side of simplicity. We should avoid constructing overly-complex protocols for arguments passed to API functions, and instead use simple unions such as `Union[simple_type, Any]` in the case that the full type specification of the API cannot be succinctly specified. This is a compromise that achieves the goals of Level 1 and 2 annotations, while punting on Level 3 in favor of avoiding unnecessary complexity.
###### Avoid unstable typing mechanisms[#](#avoid-unstable-typing-mechanisms)
In order to not add undue development friction (due to the internal/external CI differences), we would like to be conservative in the type annotation constructs we use: in particular, when it comes to recently-introduced mechanisms such as `ParamSpec` ([PEP 612](https://peps.python.org/pep-0612/)) and Variadic Type Generics ([PEP 646](https://peps.python.org/pep-0646/)), we would like to wait until support in mypy and other tools matures and stabilizes before relying on them.
One impact of this is that for the time being, when functions are decorated by JAX transformations like `jit`, `vmap`, `grad`, etc. JAX will effectively **strip all annotations** from the decorated function.
While this is unfortunate, at the time of this writing mypy has a laundry-list of incompatibilities with the potential solution offered by `ParamSpec` (see [`ParamSpec` mypy bug tracker](https://github.com/python/mypy/issues?q=is%3Aissue+is%3Aopen++label%3Atopic-paramspec+)), and we therefore judge it as not ready for full adoption in JAX at this time.
We will revisit this question in the future once support for such features stabilizes.
Similarly, for the time being we will avoid adding the more complex & granular array type annotations offered by the [jaxtyping](http://github.com/google/jaxtyping) project. This is a decision we could revisit at a future date.
###### `Array` Type Design Considerations[#](#array-type-design-considerations)
As mentioned above, type annotation of arrays in JAX poses a unique challenge because of JAX’s extensive use of duck-typing, i.e. passing and returning `Tracer` objects in place of actual arrays within jax transformations.
This becomes increasingly confusing because objects used for type annotation often overlap with objects used for runtime instance checking, and may or may not correspond to the actual type hierarchy of the objects in question.
For JAX, we need to provide duck-typed objects for use in two contexts: **static type annotations** and **runtime instance checks**.
The following discussion will assume that `jax.Array` is the runtime type of on-device arrays, which is not yet the case but will be once the work in [#12016](https://github.com/google/jax/issues/12016) is complete.
###### Static type annotations[#](#static-type-annotations)
We need to provide an object that can be used for duck-typed type annotations.
Assuming for the moment that we call this object `ArrayAnnotation`, we need a solution which satisfies `mypy` and `pytype` for a case like the following:
```
@jit
def f(x: ArrayAnnotation) -> ArrayAnnotation:
  assert isinstance(x, core.Tracer)
  return x
```
This could be accomplished via a number of approaches, for example:
* Use a type union: `ArrayAnnotation = Union[Array, Tracer]`
* Create an interface file that declares `Tracer` and `Array` should be treated as subclasses of `ArrayAnnotation`.
* Restructure `Array` and `Tracer` so that `ArrayAnnotation` is a true base class of both.
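For example, the union approach is a one-liner (the names are stand-ins for the real classes, left as forward references):

```
from typing import Union

ArrayAnnotation = Union["Array", "Tracer"]
```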
###### Runtime instance checks[#](#runtime-instance-checks)
We also must provide an object that can be used for duck-typed runtime `isinstance` checks.
Assuming for the moment that we call this object `ArrayInstance`, we need a solution that passes the following runtime check:
```
def f(x):
return isinstance(x, ArrayInstance)
x = jnp.array([1, 2, 3])
assert f(x)       # x will be an array
assert jit(f)(x)  # x will be a tracer
```
Again, there are a couple mechanisms that could be used for this:
* override `type(ArrayInstance).__instancecheck__` to return `True` for both `Array` and `Tracer` objects; this is how `jnp.ndarray` is currently implemented ([source](https://github.com/google/jax/blob/jax-v0.3.17/jax/_src/numpy/ndarray.py#L24-L49)). A minimal sketch of this mechanism follows this list.
* define `ArrayInstance` as an abstract base class and dynamically register it to `Array` and `Tracer`
* restructure `Array` and `Tracer` so that `ArrayInstance` is a true base class of both `Array` and `Tracer`
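As a minimal sketch of the first mechanism, an `__instancecheck__` override on a metaclass might look like the following; again `Array` and `Tracer` are stand-ins, not the actual JAX classes:

```
# Minimal sketch of the metaclass __instancecheck__ mechanism; Array and
# Tracer are stand-ins for the real classes.
class Array: ...   # stand-in for the concrete on-device array type
class Tracer: ...  # stand-in for jax.core.Tracer

class _ArrayMeta(type):
  def __instancecheck__(cls, instance):
    # Report both concrete arrays and tracers as "array instances".
    return isinstance(instance, (Array, Tracer))

class ArrayInstance(metaclass=_ArrayMeta): ...

assert isinstance(Array(), ArrayInstance)
assert isinstance(Tracer(), ArrayInstance)
```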
A decision we need to make is whether `ArrayAnnotation` and `ArrayInstance` should be the same or different objects. There is some precedent here; for example in the core Python language spec, `typing.Dict` and `typing.List` exist for the sake of annotation, while the built-in `dict` and `list` serve the purposes of instance checks.
However, `Dict` and `List` are [deprecated](https://peps.python.org/pep-0585/#implementation) in newer Python versions in favor of using `dict` and `list` for both annotation and instance checks.
###### Following NumPy’s lead[#](#following-numpy-s-lead)
In NumPy’s case, `np.typing.NDArray` serves the purpose of type annotations, while `np.ndarray` serves the purpose of instance checks (as well as array type identity).
Given this, it may be reasonable to conform to NumPy’s precedent and implement the following:
* `jax.Array` is the actual type of on-device arrays.
* `jax.typing.NDArray` is the object used for duck-typed array annotations.
* `jax.numpy.ndarray` is the object used for duck-typed array instance checks.
This might feel somewhat natural to NumPy power-users, however this trifurcation would likely be a source of confusion: the choice of which to use for instance checks and annotations is not immediately clear.
###### Unifying instance checks and annotation[#](#unifying-instance-checks-and-annotation)
Another approach would be to unify type checking and annotation via override mechanisms mentioned above.
###### Option 1: Partial unification[#](#option-1-partial-unification)
A partial unification might look like this:
* `jax.Array` is the actual type of on-device arrays.
* `jax.typing.Array` is the object used for duck-typed array annotations (via `.pyi` interfaces on `Array` and `Tracer`).
* `jax.typing.Array` is also the object used for duck-typed instance checks (via an `__instancecheck__` override in its metaclass)
In this approach, `jax.numpy.ndarray` would become a simple alias of `jax.typing.Array` for backward compatibility.
###### Option 2: Full unification via overrides[#](#option-2-full-unification-via-overrides)
Alternatively, we could opt for full unification via overrides:
* `jax.Array` is the actual type of on-device arrays.
* `jax.Array` is also the object used for duck-typed array annotations (via a `.pyi` interface on `Tracer`)
* `jax.Array` is also the object used for duck-typed instance checks (via an `__instancecheck__` override in its metaclass)
Here, `jax.numpy.ndarray` would become a simple alias of `jax.Array` for backward compatibility.
###### Option 3: Full unification via class hierarchy[#](#option-3-full-unification-via-class-hierarchy)
Finally, we could opt for full unification via restructuring of the class hierarchy and replacing duck-typing with OOP object hierarchies:
* `jax.Array` is the actual type of on-device arrays
* `jax.Array` is also the object used for array type annotations, by ensuring that `Tracer` inherits from `jax.Array`
* `jax.Array` is also the object used for instance checks, via the same mechanism
Here `jnp.ndarray` could be an alias for `jax.Array`.
This final approach is in some senses the most pure, but it is somewhat forced from an OOP design standpoint (`Tracer` *is an* `Array`?).
###### Option 4: Partial unification via class hierarchy[#](#option-4-partial-unification-via-class-hierarchy)
We could make the class hierarchy more sensible by making `Tracer` and the class for on-device arrays inherit from a common base class. So, for example:
* `jax.Array` is a base class for `Tracer` as well as the actual type of on-device arrays,
which might be `jax._src.ArrayImpl` or similar.
* `jax.Array` is the object used for array type annotations
* `jax.Array` is also the object used for instance checks
Here `jnp.ndarray` would be an alias for `Array`.
This may be purer from an OOP perspective, but compared to Options 2 and 3 it drops the notion that `type(x) is jax.Array` will evaluate to True.
###### Evaluation[#](#evaluation)
Considering the overall strengths and weaknesses of each potential approach:
* From a user perspective, the unified approaches (options 2 and 3) are arguably best, because they remove the cognitive overhead involved in remembering which objects to use for instance checks or annotations: `jax.Array` is all you need to know.
* However, both options 2 and 3 introduce some strange and/or confusing behavior. Option 2 depends on potentially confusing overrides of instance checks, which are [not well supported](https://github.com/pybind/pybind11/issues/2696) for classes defined in pybind11. Option 3 requires `Tracer` to be a subclass of `Array`. This breaks the inheritance model, because it would require `Tracer` objects to carry all the baggage of `Array` objects (data buffers, sharding, devices, etc.)
* Option 4 is purer in an OOP sense, and avoids the need for any overrides of typical instance check or type annotation behavior. The tradeoff is that the actual type of on-device arrays becomes something separate (here `jax._src.ArrayImpl`). But the vast majority of users would never have to touch this private implementation directly.
There are different tradeoffs here, but after discussion we’ve landed on Option 4 as our way forward.
###### Implementation Plan[#](#implementation-plan)
To move forward with type annotations, we will do the following:
1. Iterate on this JEP doc until developers and stakeholders are bought-in.
2. Create a private `jax._src.typing` (not providing any public APIs for now) and put in it the first level of simple types mentioned above:
* Alias `Array = Any` for the time being, as this will take a bit more thought.
* `ArrayLike`: a Union of types valid as inputs to normal `jax.numpy` functions
* `DType` / `DTypeLike` (Note: numpy uses camel-cased `DType`; we should follow this convention for ease of use)
* `Shape` / `NamedShape` / `ShapeLike`
The beginnings of this are done in [#12300](https://github.com/google/jax/issues/12300). A rough sketch of these aliases appears after this list.
3. Begin work on a `jax.Array` base class that follows Option 4 from the previous section. Initially this will be defined in Python, and use the dynamic registration mechanism currently found in the `jnp.ndarray` implementation to ensure correct behavior of `isinstance` checks. A `pyi` override for each tracer and array-like class would ensure correct behavior for type annotations. `jnp.ndarray` could then be made into an alias of `jax.Array`.
4. As a test, use these new typing definitions to comprehensively annotate functions within `jax.lax` according to the guidelines above.
5. Continue adding additional annotations one module at a time, focusing on public API functions.
6. In parallel, begin re-implementing a `jax.Array` base class in pybind11, so that `ArrayImpl` and `Tracer` can inherit from it. Use a `pyi` definition to ensure static type checkers recognize the appropriate attributes of the class.
7. Once `jax.Array` and `jax._src.ArrayImpl` have fully landed, remove these temporary Python implementations.
8. When all is finalized, create a public `jax.typing` module that makes the above types available to users, along with documentation of annotation best practices for code using JAX.
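As a rough sketch (the exact unions are illustrative, not final definitions), the private `jax._src.typing` module from step 2 might initially contain something like:

```
# Rough sketch of jax._src.typing contents; unions shown are illustrative.
from typing import Any, Sequence, Union
import numpy as np

Array = Any  # temporary alias, pending jax.Array
ArrayLike = Union[Array, np.ndarray, np.number, int, float, complex, bool]
DType = np.dtype
DTypeLike = Any          # anything np.dtype(...) accepts
Shape = Sequence[int]
ShapeLike = Union[int, Sequence[int]]
```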
We will track this work in [#12049](https://github.com/google/jax/issues/12049), from which this JEP gets its number.
##### `shmap` (`shard_map`) for simple per-device code[#](#shmap-shard-map-for-simple-per-device-code)
*sholto@, sharadmv@, jekbradbury@, zhangqiaorjc@, mattjj@*
*January 2023*
###### Motivation[#](#motivation)
JAX supports two schools of thought for multi-device programming:
1. **Compiler, take the wheel!** Let the compiler automatically partition bulk array functions over devices.
2. **Just let me write what I mean, damnit!** Give me per-device code and explicit communication collectives.
We need great APIs for both, and rather than being mutually exclusive alternatives, they need to compose with each other.
With `pjit` (now just `jit`) we have [a next-gen API](https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html)
for the first school. But we haven’t quite leveled-up the second school. `pmap`
follows the second school, but over time we found it has [fatal flaws](#why-don-t-pmap-or-xmap-already-solve-this). `xmap` solved those flaws,
but it doesn’t quite give us per-device shapes, and it includes several other big ideas too. Meanwhile, new demands for per-device explicit-collectives programming have emerged, like in [Efficiently Scaling Transformer Inference](https://arxiv.org/abs/2211.05102).
We can level-up the second school with `shmap`. `shmap` is:
* a simple multi-device parallelism API which lets us write per-device code with explicit collectives, where logical shapes match per-device physical buffer shapes and collectives correspond exactly to cross-device communication;
* a specialization of `xmap` with scaled-back features and a few tweaks;
* a fairly direct surfacing of the XLA SPMD Partitioner’s ‘manual’ mode;
* a fun-to-say Seussian name which could stand for `shard_map`,
`shpecialized_xmap`, `sholto_map`, or `sharad_map`.
**For `pjit` users**, `shmap` is a complementary tool. It can be used inside a
`pjit` computation to drop temporarily into a “manual collectives” mode, like an escape hatch from the compiler’s automatic partitioning. That way, users get the convenience and familiar just-NumPy programming model of `pjit` for most of their code, along with the ability to hand-optimize collective communication with
`shmap` wherever it’s needed. It’s the best of both worlds!
**For `pmap` users**, `shmap` is a strict upgrade. It’s more expressive,
performant, and composable with other JAX APIs, without making basic batch data parallelism any harder.
For more on practical use, you can jump to [When should you use `shmap` and when should you use `pjit`?](#when-should-you-use-shmap-and-when-should-you-use-pjit).
If you’re wondering why we need a new thing at all, or what the problems with `pmap` are, jump to [Why don’t `pmap` or `xmap` already solve this?](#why-don-t-pmap-or-xmap-already-solve-this).
Or keep reading the next section to see some `shmap` examples and the API spec.
###### So, let’s see `shmap`![#](#so-let-s-see-shmap)
###### TL;DR example (with a more detailed explanation to follow)[#](#tl-dr-example-with-a-more-detailed-explanation-to-follow)
Sho shick:
```
from functools import partial
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental import mesh_utils
from jax.experimental.shard_map import shard_map
devices = mesh_utils.create_device_mesh((4, 2))
mesh = Mesh(devices, axis_names=('i', 'j'))
a = jnp.arange(8 * 16.).reshape(8, 16)
b = jnp.arange(16 * 32.).reshape(16, 32)
@partial(shard_map, mesh=mesh, in_specs=(P('i', 'j'), P('j', None)),
out_specs=P('i', None))
def matmul_basic(a_block, b_block):
# a_block: f32[2, 8]
# b_block: f32[8, 32]
z_partialsum = jnp.dot(a_block, b_block)
z_block = jax.lax.psum(z_partialsum, 'j')
return z_block
c = matmul_basic(a, b) # c: f32[8, 32]
```
Notice:
* no nesting needed (or `axis_index_groups`) for multiple axes of parallelism,
unlike `pmap`;
* no reshapes in the caller, unlike `pmap` and hard-`xmap`, and logical shapes correspond to per-device physical shapes, unlike (non-hard) `xmap`;
* precise device placement control by using `mesh`, unlike `pmap`;
* there’s only one set of axis names for logical and physical, unlike `xmap`;
* the result is a `jax.Array` which could be efficiently passed to a `pjit`,
unlike `pmap`;
* this same code works efficiently inside a `pjit`/`jit`, unlike `pmap`;
* this code works eagerly, so we can `pdb` in the middle and print values,
unlike `xmap`’s current implementation (though by design `xmap` without the sequential schedule can in principle work eagerly too).
Here’s another matmul variant with a fully sharded result:
```
@partial(shard_map, mesh=mesh, in_specs=(P('i', 'j'), P('j', None)),
out_specs=P('i', 'j'))
def matmul_reduce_scatter(a_block, b_block):
# c_partialsum: f32[8/X, 32]
c_partialsum = jnp.matmul(a_block, b_block)
# c_block: f32[8/X, 32/Y]
c_block = jax.lax.psum_scatter(c_partialsum, 'j', scatter_dimension=1, tiled=True)
return c_block
c = matmul_reduce_scatter(a, b)
```
###### Slow down, start with the basics![#](#slow-down-start-with-the-basics)
###### Rank-reducing vs rank-preserving maps over array axes[#](#rank-reducing-vs-rank-preserving-maps-over-array-axes)
We can think of `pmap` (and `vmap` and `xmap`) as unstacking each array input along an axis (e.g. unpacking a 2D matrix into its 1D rows), applying its body function to each piece, and stacking the results back together, at least when collectives aren’t involved:
```
pmap(f, in_axes=[0], out_axes=0)(xs) == jnp.stack([f(x) for x in xs])
```
For example, if `xs` had shape `f32[8,5]` then each `x` has shape `f32[5]`, and if each `f(x)` has shape `f32[3,7]` then the final stacked result `pmap(f)(xs)`
has shape `f32[8,3,7]`. That is, each application of the body function `f` takes as argument inputs with one fewer axis than the corresponding argument to
`pmap(f)`. We can say these are *rank-reducing maps* with unstacking/stacking of inputs/outputs.
The number of logical applications of `f` is determined by the size of the input axis being mapped over: for example, if we map over an input axis of size 8,
semantically we get 8 logical applications of the function, which for `pmap` always correspond to 8 devices physically computing them.
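To check the shapes concretely without needing 8 devices, we can use `vmap`, which shares the same rank-reducing unstack/stack semantics:

```
import jax
import jax.numpy as jnp

xs = jnp.zeros((8, 5))                      # 8 rows, each x: f32[5]
f = lambda x: jnp.zeros((3, 7)) + x.sum()   # each f(x): f32[3,7]
ys = jax.vmap(f)(xs)
assert ys.shape == (8, 3, 7)                # one fewer axis inside the body
```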
In contrast, `shmap` does not have this rank-reducing behavior. Instead, we can think of it as slicing (or “unconcatenating”) along input axes into blocks,
applying the body function, and concatenating the results back together (again when collectives aren’t involved):
```
devices = np.array(jax.devices()[:4])
m = Mesh(devices, ('i',)) # mesh.shape['i'] = 4
shard_map(f, m, in_specs=P('i'), out_specs=P('i'))(y)
==
jnp.concatenate([f(y_blk) for y_blk in jnp.split(y, 4)])
```
Recall that `jnp.split` slices its input into equally-sized blocks with the same rank, so that if in the above example `y` has shape `f32[8,5]` then each `y_blk`
has shape `f32[2,5]`, and if each `f(y_blk)` has shape `f32[3,7]` then the final concatenated result `shard_map(f, ...)(y)` has shape `f32[12,7]`. So `shmap`
(`shard_map`) maps over shards, or blocks, of its inputs. We can say it’s a
*rank-preserving map* with unconcatenating/concatenating of its inputs/outputs.
The number of logical applications of `f` is determined by the mesh size, not by any input axis size: for example, if we have a mesh of total size 4 (i.e. over 4 devices) then semantically we get 4 logical applications of the function,
corresponding to the 4 devices physically computing them.
###### Controlling how each input is split (unconcatenated) and tiled with `in_specs`[#](#controlling-how-each-input-is-split-unconcatenated-and-tiled-with-in-specs)
Each of the `in_specs` identifies some of the corresponding input array’s axes with mesh axes by name using `PartitionSpec`s, representing how to split (or unconcatenate) that input into the blocks to which the body function is applied.
That identification determines the shard sizes; when an input axis is identified with a mesh axis, the input is split (unconcatenated) along that logical axis into a number of pieces equal to the corresponding mesh axis size. (It’s an error if the corresponding mesh axis size does not evenly divide the input array axis size.) If an input’s pspec does not mention a mesh axis name, then there’s no splitting over that mesh axis. For example:
```
devices = np.array(jax.devices())
m = Mesh(devices.reshape(4, 2), ('i', 'j'))
@partial(shard_map, mesh=m, in_specs=P('i', None), out_specs=P('i', 'j'))
def f1(x_block):
print(x_block.shape)
return x_block
x1 = np.arange(12 * 12).reshape(12, 12)
y = f1(x1) # prints (3,12)
```
Here, because the input pspec did not mention the mesh axis name `'j'`, no input array axis is split over that mesh axis; similarly, because the second axis of the input array is not identified with (and hence split over) any mesh axis,
application of `f1` gets a full view of the input along that axis.
When a mesh axis is not mentioned in an input pspec, we can always rewrite to a less efficient program where all mesh axes are mentioned but the caller performs a `jnp.tile`, for example:
```
@partial(shard_map, mesh=m, in_specs=P('i', 'j'), out_specs=P('i', 'j'))
def f2(x_block):
print(x_block.shape)
return x_block
x = np.arange(12 * 12).reshape(12, 12)
x_ = jnp.tile(x, (1, m.shape['j'])) # x_ has shape (12, 24)
y = f2(x_) # prints (3,12), and f1(x) == f2(x_)
```
In other words, because each input pspec can mention each mesh axis name zero or one times, rather than having to mention each name exactly once, we can say that in addition to the `jnp.split` built into its input, `shard_map` also has a
`jnp.tile` built into its input, at least logically (though the tiling may not need to be carried out physically, depending on the arguments’ physical sharding layout). The tiling to use is not unique; we could also have tiled along the first axis, and used the pspec `P(('j', 'i'), None)`.
Physical data movement is possible on inputs, as each device needs to have a copy of the appropriate data.
###### Controlling how each output is assembled by concatenation, block transposition, and untiling using `out_specs`[#](#controlling-how-each-output-assembled-by-concatenation-block-transposition-and-untiling-using-out-specs)
Analogously to the input side, each of the `out_specs` identifies some of the corresponding output array’s axes with mesh axes by name, representing how the output blocks (one for each application of the body function, or equivalently one for each physical device) should be assembled back together to form the final output value. For example, in both the `f1` and `f2` examples above the
`out_specs` indicate we should form the final output by concatenating together the block results along both axes, resulting in both cases an array `y` of shape
`(12,24)`. (It’s an error if an output shape of the body function, i.e. an output block shape, has a rank too small for the concatenation described by the corresponding output pspec.)
When a mesh axis name is not mentioned in an output pspec, it represents an
*un-tiling*: when the user writes an output pspec which does not mention one of the mesh axis names, they promise that the output blocks are equal along that mesh axis, and so only one block along that axis is used in the output (rather than concatenating all the blocks together along that mesh axis). For example,
using the same mesh as above:
```
x = jnp.array([[3.]])
z = shard_map(lambda: x, mesh=m, in_specs=(), out_specs=P('i', 'j'))()
print(z) # prints the same as jnp.tile(x, (4, 2))
z = shard_map(lambda: x, mesh=m, in_specs=(), out_specs=P('i', None))()
print(z) # prints the same as jnp.tile(x, (4, 1)), or just jnp.tile(x, (4,))
z = shard_map(lambda: x, mesh=m, in_specs=(), out_specs=P(None, None))()
print(z) # prints the same as jnp.tile(x, (1, 1)), or just x
```
Notice that the body function closing over an array value is equivalent to passing it as an argument with a corresponding input pspec of `P(None, None)`. As another example, following the other examples above more closely:
```
@partial(shard_map, mesh=m, in_specs=P('i', 'j'), out_specs=P('i', None))
def f3(x_block):
return jax.lax.psum(x_block, 'j')
x = np.arange(12 * 12).reshape(12, 12)
y3 = f3(x)
print(y3.shape) # (12,6)
```
Notice that the result has a second axis size of 6, half the size of the input’s second axis. In this case, the un-tile expressed by not mentioning the mesh axis name `'j'` in the output pspec was safe because of the collective `psum`, which ensures each output block is equal along the corresponding mesh axis. Here are two more examples where we vary which mesh axes are mentioned in the output pspec:
```
@partial(shard_map, mesh=m, in_specs=P('i', 'j'), out_specs=P(None, 'j'))
def f4(x_block):
return jax.lax.psum(x_block, 'i')
x = np.arange(12 * 12).reshape(12, 12)
y4 = f4(x)
print(y4.shape) # (3,12)
@partial(shard_map, mesh=m, in_specs=P('i', 'j'), out_specs=P(None, None))
def f5(x_block):
return jax.lax.psum(x_block, ('i', 'j'))
y5 = f5(x)
print(y5.shape) # (3,6)
```
On the physical side, not mentioning a mesh axis name in an output pspec assembles an `Array` from the output device buffers with replicated layout along that mesh axis.
There is no runtime check that the output blocks are actually equal along a mesh axis to be un-tiled along, or equivalently that the corresponding physical buffers have equal values and thus can be interpreted as a replicated layout for a single logical array. But we can provide a static check mechanism which raises an error on all potentially-incorrect programs.
Because the `out_specs` can mention mesh axis names zero or one times, and because they can be mentioned in any order, we can say that in addition to the
`jnp.concatenate` built into its output, `shard_map` also has both an untile and a block transpose built into its output.
Physical data movement is not possible on outputs, no matter the output pspec.
Instead, `out_specs` just encodes how to assemble the block outputs into
`Array`s, or physically how to interpret the buffers across devices as the physical layout of a single logical `Array`.
###### API Specification[#](#api-specification)
```
from jax.sharding import Mesh

Specs = PyTree[PartitionSpec]
def shard_map(f: Callable, mesh: Mesh, in_specs: Specs, out_specs: Specs
) -> Callable:
...
```
where:
* `mesh` encodes devices arranged in an array and with associated axis names,
just like it does for `xmap` and for `sharding.NamedSharding`;
* `in_specs` and `out_specs` are `PartitionSpec`s which can
[affinely](https://en.wikipedia.org/wiki/Substructural_type_system) mention axis names from `mesh` (not separate logical names as in `xmap`) to express slicing/unconcatenation and concatenation of inputs and outputs, respectively
(not unstacking and stacking like `pmap` and `xmap` do), with unmentioned names corresponding to replication and untiling
(assert-replicated-so-give-me-one-copy), respectively;
* the shapes of the arguments passed to `f` have the same ranks as the arguments passed to `shard_map`-of-`f` (unlike `pmap` and `xmap` where the ranks are reduced), and the shape of an argument to `f` is computed from the shape
`shape` of the corresponding argument to `shard_map`-of-`f` and the corresponding `PartitionSpec` spec as roughly
`tuple(sz // (1 if n is None else mesh.shape[n]) for sz, n in zip(shape, spec))` (a small worked example follows this list);
* the body of `f` can apply collectives using names from `mesh`.
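Here is the small worked example promised above; `block_shape` is an illustrative helper, not a JAX API, evaluated under an assumed mesh of shape `{'i': 4, 'j': 2}`:

```
# Illustrative helper implementing the shard-shape rule quoted above.
def block_shape(shape, spec, mesh_shape):
  return tuple(sz // (1 if n is None else mesh_shape[n])
               for sz, n in zip(shape, spec))

mesh_shape = {'i': 4, 'j': 2}
assert block_shape((8, 16), ('i', 'j'), mesh_shape) == (2, 8)   # as in matmul_basic
assert block_shape((16, 32), ('j', None), mesh_shape) == (8, 32)
```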
`shmap` is eager by default, meaning that we dispatch computations primitive-by-primitive, so that the user can employ Python control flow on fully replicated values and interactive `pdb` debugging to print any values. To stage out and end-to-end compile a `shmap`ped function, just put a `jit` around it. A consequence is that `shmap` doesn’t have its own dispatch and compilation paths like `xmap` and `pmap` currently do; it’s just the `jit` path.
When it’s staged out by e.g. an enclosing `jit`, the lowering of `shmap` to MHLO is trivial: it just involves switching into ‘manual SPMD mode’ on the inputs,
and switching back on the outputs. (We don’t currently plan to support partially-manual-partially-automatic modes.)
The interaction with effects is the same as with `pmap`.
The interaction with autodiff is also just like `pmap` (rather than attempting the new semantics that `xmap` did, corresponding to having unmapped intermediates and hence `grad`’s `reduce_axes` as well as making `psum`
transpose to `pbroadcast` rather than `psum`). But it thus inherits an unsolved problem from `pmap`: in some cases, instead of transposing `psum` to `psum`, and thus performing a backward pass `psum` corresponding to the forward pass `psum`,
it can be beneficial to move the backward pass `psum` to elsewhere in the backward pass, exploiting linearity. Many advanced `pmap` users addressed this challenge by using `custom_vjp` to implement `psum_idrev` and `id_psumrev`
functions, but since it’s easy to accidentally leave those imbalanced, that technique is a foot-cannon. We have some ideas on how to provide this functionality in a safer way.
###### When should you use `shmap` and when should you use `pjit`?[#](#when-should-you-use-shmap-and-when-should-you-use-pjit)
One philosophy is: it is almost always simpler to write a program in `jit==pjit`
— but if a given part of the program is less optimized by the compiler than it could be, drop into `shmap`!
###### A realistic transformer example[#](#a-realistic-transformer-example)
In fact, we can implement a simple version of the [“collective matmul”](https://dl.acm.org/doi/pdf/10.1145/3567955.3567959) algorithm recently introduced in XLA to overlap communication and computation using `shmap`
and 30 lines of Python. The basic idea of the algorithm can be grasped with a simple example.
Suppose we want to compute `C = A @ B` where `A` is sharded by a 1D mesh on the 0-th dimension while `B` and `C` are replicated.
```
from jax.sharding import NamedSharding

M, K, N = 4096, 2048, 1024
A = jnp.arange(np.prod((M, K))).reshape((M, K))
B = jnp.arange(np.prod((K, N))).reshape((K, N))
mesh = Mesh(np.array(jax.devices()), axis_names=('i',))
A_x = jax.device_put(A, NamedSharding(mesh, P('i', None)))

@jax.jit
def f(lhs, rhs):
return lhs @ rhs
C = f(A_x, B)
```
A profile shows the blocking all-gather across 8 devices before the matmul can start. This is suboptimal because `A` is sharded on a non-contracting dimension,
and each shard of `A` can be matmul’ed with `B` independently and this chunked computation can be overlapped with fetching of the next shard of `A` from another device.
This overlap can be implemented using `shmap` and explicit collectives.
```
def collective_matmul_allgather_lhs_non_contracting(lhs, rhs):
# lhs is the looped operand; rhs is the local operand
axis_size = jax.lax.psum(1, axis_name='i')
axis_index = jax.lax.axis_index(axis_name='i')
chunk_size = lhs.shape[0]
def f(i, carrys):
accum, lhs = carrys
# matmul for a chunk
update = lhs @ rhs
# circular shift to the left
lhs = jax.lax.ppermute(
lhs,
axis_name='i',
perm=[(j, (j - 1) % axis_size) for j in range(axis_size)]
)
# device 0 computes chunks 0, 1, ...
# device 1 computes chunks 1, 2, ...
update_index = (((axis_index + i) % axis_size) * chunk_size, 0)
accum = jax.lax.dynamic_update_slice(accum, update, update_index)
return accum, lhs
accum = jnp.zeros((lhs.shape[0] * axis_size, rhs.shape[1]), dtype=lhs.dtype)
# fori_loop cause a crash: hlo_sharding.cc:817 Check failed: !IsManual()
# accum, lhs = jax.lax.fori_loop(0, axis_size - 1, f, (accum, lhs))
for i in range(0, axis_size - 1):
accum, lhs = f(i, (accum, lhs))
# compute the last chunk, without the ppermute
update = lhs @ rhs
i = axis_size - 1
update_index = (((axis_index + i) % axis_size) * chunk_size, 0)
accum = jax.lax.dynamic_update_slice(accum, update, update_index)
return accum
```
```
jit_sharded_f = jax.jit(shard_map(
collective_matmul_allgather_lhs_non_contracting, mesh,
in_specs=(P('i', None), P()), out_specs=P()))
C = jit_sharded_f(A_x, B)
```
A profile shows that the all-gather is gone, and replaced with overlapped matmul with async collective permute. This profile matches very closely with the collective matmul paper result.
This collective matmul technique can be used to speed up feedforward blocks in transformer layers. This typically consists of two matrix multiplications followed by a `ReduceScatter` (to resolve partial sums from a parallelized matrix multiplication) and preceded by an `AllGather` (to collect the sharded dimensions along some axes and allow partial sum computation). Together, the
`ReduceScatter` from one layer and the `AllGather` for the next amount to an
`AllReduce`.
In a typical profile, the two matmuls will be followed by an `AllReduce`, and they will not be overlapped. Collective matmul can be used to achieve the overlap, but is difficult to trigger, has a minimum slice size and does not yet cover all topologies, tensor shapes and variants of collective matmul (i.e. latency and throughput optimized variants). [In a recent paper](https://arxiv.org/abs/2211.05102), we found a ~40% gain in many circumstances from manually implementing collective matmul variants in `shmap`
style.
But it isn’t always more complex! We expect this to be a much more natural way to think about pipelined computation, and plan to do some demos of that soon!
###### Another realistic example[#](#another-realistic-example)
Here’s how `shmap` might look in a transformer layer pass with a 2D weight gathered pattern ([paper](https://arxiv.org/abs/2211.05102), Sec 3.2.3 on p. 5):
```
def matmul_2D_wg_manual(xnorm, q_wi, layer):
'''Calls a custom manual implementation of matmul_reducescatter'''
# [batch, maxlen, embed.X] @ [heads.YZ, embed.X, q_wi_per_head]
# -> (matmul)
# -> [batch, maxlen, heads.YZ, q_wi_per_head]{x unreduced}
# -> (reducescatter over x into X heads, B batches)
# -> [batch, maxlen, heads.YZX, q_wi_per_head]
with jax.named_scope('q_wi'):
xnorm = intermediate_dtype(xnorm)
q_wi = matmul_reducescatter(
'bte,hed->bthd',
xnorm,
q_wi,
scatter_dimension=(0, 2),
axis_name='i',
layer=layer)
return q_wi
import partitioning.logical_to_physical as l2phys
def pjit_transformer_layer(
hparams: HParams, layer: int, params: weights.Layer, sin: jnp.ndarray,
cos: jnp.ndarray, kv_caches: Sequence[attention.KVCache],
x: jnp.ndarray) -> Tuple[jnp.ndarray, jnp.ndarray, jnp.ndarray]:
"""Forward pass through a single layer, returning output, K, V."""
def my_layer(t, axis=0):
"""Gets the parameters corresponding to a given layer."""
return lax.dynamic_index_in_dim(t, layer, axis=axis, keepdims=False)
# 2D: [batch.Z, time, embed.XY]
x = _with_sharding_constraint(
x, ('residual_batch', 'residual_time', 'residual_embed'))
xnorm = _layernorm(x)
# 2D: [batch, time, embed.X]
xnorm = _with_sharding_constraint(
xnorm, ('post_norm_batch', 'time', 'post_norm_embed'))
# jump into manual mode where you want to optimise
if manual:
q_wi = shard_map(matmul_2D_wg_manual, mesh,
in_specs=(l2phys('post_norm_batch', 'time', 'post_norm_embed'),
l2phys('layers', 'heads', 'embed', 'q_wi_per_head')),
out_specs=l2phys('post_norm_batch', 'time', 'heads', 'q_wi_per_head'))(xnorm, q_wi, layer)
else:
q_wi = jnp.einsum('bte,hed->bthd', xnorm, my_layer(params.q_wi))
# 2D: [batch, time, heads.YZX, None]
q_wi = _with_sharding_constraint(q_wi,
('post_norm_batch', 'time', 'heads', 'qkv'))
q = q_wi[:, :, :, :hparams.qkv]
q = _rope(sin, cos, q)
# unlike in https://arxiv.org/pdf/2002.05202.pdf, PaLM implements
# swiGLU with full d_ff dimension, rather than 2/3 scaled
wi0 = q_wi[:, :, :, hparams.qkv:hparams.qkv + (hparams.ff // hparams.heads)]
wi1 = q_wi[:, :, :, hparams.qkv + (hparams.ff // hparams.heads):]
kv = jnp.einsum('bte,ezd->btzd', xnorm, my_layer(params.kv))
k = kv[:, :, 0, :hparams.qkv]
v = kv[:, :, 0, hparams.qkv:]
k = _rope(sin, cos, k)
y_att = jnp.bfloat16(attention.attend(q, k, v, kv_caches, layer))
y_mlp = special2.swish2(wi0) * wi1
# 2D: [batch, time, heads.YZX, None]
y_mlp = _with_sharding_constraint(y_mlp,
('post_norm_batch', 'time', 'heads', None))
y_fused = jnp.concatenate([y_att, y_mlp], axis=-1)
# do the second half of the mlp and the self-attn projection in parallel
y_out = jnp.einsum('bthd,hde->bte', y_fused, my_layer(params.o_wo))
# 2D: [batch.Z, time, embed.XY]
y_out = _with_sharding_constraint(
y_out, ('residual_batch', 'residual_time', 'residual_embed'))
z = y_out + x
z = _with_sharding_constraint(
z, ('residual_batch', 'residual_time', 'residual_embed'))
return z, k, v
```
In the profile below, both the first and second matmul were replaced by manually lowered versions, where the compute (fusions) are fully overlapped with the communication (ppermute)! One fun hint that we are using a latency optimised variant is that the ppermute pixels are jittered — because there are two overlapping ppermutes using opposite ICI axes at the same time!
All-to-all is much harder to overlap, so was left on the table.
###### Why don’t `pmap` or `xmap` already solve this?[#](#why-don-t-pmap-or-xmap-already-solve-this)
`pmap` was our first multi-device parallelism API. It follows the per-device-code-and-explicit-collectives school. But it had major shortcomings which make it unsuitable for today’s programs:
* **Mapping multiple axes required nested `pmap`s.** Not only are nested `pmap`s cumbersome to write, but also they make it difficult to control (or even predict) the device placement of data and computation, and difficult to preserve data sharding (see the next two bullets). Today’s programs require multiple axes of parallelism.
* **Controlling device placement was impossible.** Especially with multiple axes of parallelism, programmers need to control how those axes are aligned with hardware resources and their communication topologies. But (nested) `pmap`
doesn’t offer control over how mapped program instances are placed on hardware; there’s just an automatic device order which the user can’t control.
([Gopher](https://arxiv.org/abs/2112.11446)’s use of `axis_index_groups` and a single un-nested `pmap` was essentially a hack to get around this by flattening multiple axes of parallelism down to one.)
* **`jit`/`pjit` composability.** `jit`-of-`pmap` is a performance footgun, as is nesting `pmap`s, as is e.g. `scan`-of-`pmap`, because sharding is not preserved when returning from an inner `pmap`. To preserve sharding we would need pattern matching on jaxprs to ensure we’re working with perfectly nested pmaps, or a pmap just inside a `jit`. Moreover, `pjit` was no help here because `pmap` targets XLA replicas while `pjit` targets the XLA SPMD Partitioner, and composing those two is hard.
* **`jax.Array` compatibility (and hence `pjit` compatibility).** Because the sharding of `pmap` outputs can’t be expressed as `Shardings` / `OpShardings`,
due to `pmap`’s stacking rather than concatenative semantics, the output of a
`pmap` computation can’t currently be passed to a `pjit` computation without bouncing to host (or dispatching a reshaping computation).
* **Multi-controller semantics (and hence `pjit` compatibility).**
Multi-controller `pmap` concatenates values across controllers, which works well but differs from single-controller `pmap`’s stacking semantics. More practically, it precludes the use of non-fully-addressable `jax.Array` inputs and outputs as we use with multi-controller `pjit`.
* **Eager mode.** We didn’t make `pmap` eager-first, and though we eventually
(after 4+ years!) added eager operation with `disable_jit()`, the fact that
`pmap` has `jit` fused into it means it has its own compilation and dispatch path (actually two dispatch paths: in Python for handling `Tracer`s, and in C++ for performance on raw `Array` inputs!), a heavy implementation burden.
* **Reshapes needed in the caller.** A typical use case with `pmap` on 8 devices might look like starting with a batch axis of size 128, reshaping it to split into two axes with sizes (8, 16), and then `pmap`ping over the first. These reshapes are awkward and the compiler often interprets them as copies instead of views — increasing memory and time usage.
These shortcomings aren’t so bad when only doing batch data parallelism. But when more parallelism is involved, `pmap` just can’t cut it!
`xmap` paved the way as a next-gen evolution of `pmap` and solved (almost) all these issues. `shmap` follows in `xmap`’s footsteps and solves these problems in essentially the same ways; indeed, `shmap` is like a specialized subset of `xmap`
(what some call the “hard `xmap`” subset), with a few tweaks.
For the initial prototype, we chose to implement `shmap` as a separate primitive from `xmap`, because limiting the set of features it supports makes it easier to focus on the core functionality. For example, `shmap` doesn’t allow unmapped intermediates, making it easier not to worry about the interactions between named axes and autodiff. Furthermore, not having to reason about interactions of all pairs of features makes it easier to add capabilities beyond what’s implemented in `xmap` today, such as support for eager mode.
Both `shmap` and `xmap` share significant portions of the lowering code. We could consider merging both in the future, or even focusing solely on `shmap`,
depending on how the usage will evolve.
##### `jax.extend`: a module for extensions[#](#jax-extend-a-module-for-extensions)
[@froystig](https://github.com/froystig),
[@sharadmv](https://github.com/sharadmv),
[@jakevdp](https://github.com/jakevdp),
[@yashk2810](https://github.com/yashk2810)
May 2023
```
import jax.extend as jex
```
Several projects depend on JAX’s codebase internals, often to use its core machinery (e.g. to write a
[transformation over its IR](https://jax.readthedocs.io/en/latest/notebooks/Writing_custom_interpreters_in_Jax.html))
or to extend it (e.g. to
[define new primitives](https://github.com/dfm/extending-jax)).
Two challenges for these dependencies are (a) that our internals aren’t all solidly designed for external use, and (b) that circumventing JAX’s public API is
[unsupported](https://jax.readthedocs.io/en/latest/api_compatibility.html).
In other words, our internals are often used like a library, but are neither structured nor updated like one.
This proposal considers **introducing a `jax.extend` module that defines a library view of some of JAX’s internal components**. We would treat this as a second-tier API, still guaranteeing essentially [no compatibility policy](#no-compatibility-policy), but hopefully making it easier to spot changes when they happen.
The audience for `jax.extend` includes JAX-adjacent Python libraries like [Oryx](https://github.com/jax-ml/oryx),
[jax-triton](https://github.com/jax-ml/jax-triton), and many others,
as well as projects experimenting with function transformations,
autodiff systems, compiler frontends for numerical programming, etc.
This note gives an overview of how `jax.extend` might look, now and eventually. It doesn’t lay things out in great detail, instead proposing that we begin [iteratively developing](#iterative-development)
the module.
Note that `jax.extend` differs from `jax.experimental`, which is a staging ground for new features and ideas in progress. Typically, work in `jax.experimental` eventually makes it into another JAX module or is removed altogether.
###### No compatibility policy[#](#no-compatibility-policy)
To keep development overhead low, `jax.extend` would not follow the public
[API compatibility](https://jax.readthedocs.io/en/latest/api_compatibility.html)
policy. It would promise no deprecation windows nor backwards compatibility between releases. Every release may break existing callers without simple recourse (e.g. without a flag reintroducing prior behavior). We would rely on the
[changelog](https://jax.readthedocs.io/en/latest/changelog.html)
to call out such changes.
Callers of `jax.extend` that need to upgrade their code regularly alongside JAX releases might find it useful to pin JAX versions as an intermediate step between releases. This is a common habit among projects that rely on JAX’s internals today. The difference is that it would now come with the help of changelog announcements and better intentions regarding library design and naming.
###### Iterative development[#](#iterative-development)
Having no compatibility policy makes it easier to get started on implementation: on day one, we can move a handful of symbols over from internal packages such as `jax._src` and today’s `jax.core` and
`jax.interpreters`. Then we can iterate to improve things from there.
###### Possible module overview[#](#possible-module-overview)
We can imagine that eventually `jax.extend` would include the following modules:
* `core` – primitives, the Jaxpr IR, etc.
* `interpreters` – core transformations (e.g. autodiff, batching)
and lowerings.
* `random` – random bit generation, key splitting and folding, key arrays.
* `sharding` – extra functionality around distributed arrays.
We might also have other symbols in the module at first, such as
`jex.api_util`, as we work to remove or replace them. Others will be decided in time. For instance, `jex.lib` could offer an entry point to jaxlib (and would do so in the immediate term), but it’s not clear whether we want to keep it for long.
Some preliminary thoughts on what each of these might comprise follow.
###### `jax.extend.core`[#](#jax-extend-core)
This should enable callers at least to define new JAX primitives and to process the Jaxpr IR (the output of
`jax.make_jaxpr(...)`). Supporting this might involve providing:
* Access to existing core system primitives, such as today’s
`jax._src.lax.add_p`.
* Access to IR types, such as the current `jax._src.core.ShapedArray`.
* Functions for checking and pretty-printing jaxprs.
* Functions for building jaxprs explicitly, rather than by staging Python functions via `jax.make_jaxpr` (or not!).
At initialization, this module will contain many more symbols than what’s needed to define primitives and rules, including various names used in setting up
[“final-style transformations”](https://jax.readthedocs.io/en/latest/autodidax.html#on-the-fly-final-style-and-staged-initial-style-processing),
such as the current `jax._src.core.Trace` and `Tracer` classes. We can revisit whether `jex.core` should also support final-style extensions alongside initial style approaches, and whether it can do so by a more narrow API than exposing `Trace` and `Tracer` entirely.
[Oryx](https://github.com/jax-ml/oryx) might help guide these decisions.
We can also consider relocating `make_jaxpr` itself to `jex.core`.
###### `jax.extend.interpreters`[#](#jax-extend-interpreters)
This module would provide a means of registering various transformation rules for primitives—defining their behavior under AD, batching, lowering, etc.
It would initially reflect `jax._src.interpreters` in providing the modules `ad`, `batching`, `partial_eval` (for staging Python to Jaxpr, and for linearization in AD), `mlir`, `pxla`, and `xla`. The first three might be replaceable by a single primitive extension API in `jex.core`. The latter three, used for lowering, could be simplified into one module, maybe.
Today, to write transformation rules, e.g. for AD and batching,
callers may need symbols relating to tracers, e.g. `JVPTracer` and
`BatchTracer`. This may be avoidable later on, and allow us to remove tracer types from `jex`.
This module plus `jex.core` ought to suffice for replicating today’s custom primitive tutorials (e.g.
[ours](https://jax.readthedocs.io/en/latest/notebooks/How_JAX_primitives_work.html)
and
[dfm’s](https://github.com/dfm/extending-jax)).
For instance, defining a primitive and its behavior under `jax.jit`
would be possible as follows (in the immediate term):
```
from jax.extend import core               # Previously: from jax import core
from jax.extend.interpreters import mlir  # ... and similarly

mul_add_p = core.Primitive('mul_add')
mul_add_p.def_impl(lambda x, y, z: x * y + z)

@mul_add_p.def_abstract_eval
def mul_add_abstract(x_sa, y_sa, z_sa):
  return core.ShapedArray(x_sa.shape, x_sa.dtype)

def mul_add_mlir(ctx, xc, yc, zc):
  add = mlir.hlo.AddOp
  mul = mlir.hlo.MulOp
  return add(mul(xc, yc), zc).results

mlir.register_lowering(mul_add_p, mul_add_mlir)

import jax
print(mul_add_p.bind(2, 3, 4))            # -> 10
print(jax.jit(mul_add_p.bind)(2, 3, 4))   # -> Array(10, dtype=int32)
```
###### `jax.extend.random`[#](#jax-extend-random)
This module could expose our mechanism for defining new RNG implementations, and functions for working with PRNG key internals
(see issue [#9263](https://github.com/google/jax/issues/9263)),
such as the current `jax._src.prng.random_wrap` and
`random_unwrap`.
It could also expose the keyed hash functions that underlie the built-in RNG implementations, such as `jax._src.prng.threefry_2x32`.
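As a rough, hedged illustration only (these are internal names today, and their signatures may change), direct use of the keyed hash might look like:

```
# Hedged sketch only: jax._src.prng is internal and subject to change.
from jax._src import prng
import jax.numpy as jnp

key = jnp.array([0, 42], dtype=jnp.uint32)   # a 2x32-bit Threefry key
counts = jnp.arange(4, dtype=jnp.uint32)     # inputs to hash
bits = prng.threefry_2x32(key, counts)       # pseudorandom uint32 bits
```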
###### `jax.extend.sharding`[#](#jax-extend-sharding)
This module could expose low-level utilities for sharding distributed arrays.
We have only one item in mind for now. The XLA compiler’s array sharding format is more expressive than [those provided by JAX](https://jax.readthedocs.io/en/latest/jax.sharding.html). We could provide this as `jex.sharding.XlaOpShardingProto`, corresponding to today’s `jax._src.lib.xla_client.OpSharding` internally.
##### Efficient transposition of replication-inducing collectives[#](#efficient-transposition-of-replication-inducing-collectives)
*mattjj@*, *dougalm@*
*August 2023*
###### Motivation[#](#motivation)
We have an efficiency problem in automatically transposing `shmap`s containing certain collectives. The issue arises with `psum` and `all_gather`, specifically when the output of the collective is returned to the caller as an unmapped output. And it’s not an edge case: for example, it arises when applying `grad`
to a `shmap`-based batch data parallel neural network loss function which uses
`psum` to compute the total loss.
We’ve known about this problem for some time. An analogous issue exists with
`pmap`, though it’s been worked around by keeping `grad` inside `pmap` rather than outside. A primary goal of the incomplete avals-with-names work was to address a version of this transpose efficiency problem. This doc draws on those ideas,
while extending and revising them to handle more cases and to be much easier to land. Indeed the solution proposed here only affects the `shmap` implementation.
The rest of the system need not be changed (yet).
The main purpose of this doc is to define this transpose efficiency problem and propose an easy-to-land solution.
This doc is not about:
* logical axis names on arrays (the only axis names here are just like in
`shmap` and OG `pmap`);
* changing autodiff semantics (all the numbers and (non)errors are staying the same, we’re just making things more efficient);
* allowing user code to reflect on any new information, or really affecting user code at all.
###### Problem: efficient transpose of `psum` or `all_gather` depends on whether cotangents are invariant across devices[#](#problem-efficient-transpose-of-psum-or-all-gather-depends-on-whether-cotangents-are-invariant-across-devices)
Consider this semi-realistic example, meant to resemble a replicated-parameter batch data parallel loss function:
```
devices = jax.devices() # 8 devices
@partial(shmap, mesh=Mesh(devices, ('batch',)),
in_specs=(P(None, None), P('batch', None)),
out_specs=P())
def loss(params, batch):
inputs, targets = batch
predictions = predict(params, inputs)
local_loss = jnp.mean(jnp.sum(predictions - targets, -1))
global_loss = lax.pmean(local_loss, 'batch')
return global_loss
```
Notice the `out_specs=P()`, which indicates an unmapped output. If you’re not familiar with the notion of unmapped outputs, see the appendix at the bottom of this document.
Most of the details in the `loss` example aren’t important. All that matters for our purposes is that we’re applying `psum` (or rather `pmean = lambda x, name: psum(x, name) / psum(1, name)`) at the end. So a distilled version looks like this:
```
# Example 1: shmap involving psum and unmapped output with inefficient transpose
f1 = shmap(lambda x: psum(g(x), 'i'),
           in_specs=P('i'), out_specs=P())
```
We even simplified notation by suppressing the `mesh` argument. In the examples to follow it can be inferred from context.
What does the transpose look like? Writing `t` to mean function transpose, we could evaluate `t(f1)(ybar)` for any `ybar` efficiently by applying the function
`¿f1_transpose?` below:
```
# An efficient "transpose" of Example 1 (but don't transpose this again!)
¿f1_transpose? = shmap(t(g), in_specs=P(), out_specs=P('i'))
```
But that’s not the transpose we currently get as `t(f1)`.
Instead, the current recipe for transposition is roughly that we switch
`in_specs` and `out_specs`, do some division rescaling for unmapped outputs, and transpose the body. Because `psum` is its own transpose (as an all-reduce sum),
we end up producing this transpose:
```
# The transpose we currently get for Example 1 (which is fine to transpose again)
t(f1) = shmap(lambda ybar: t(g)(psum(ybar / 8, 'i')),
in_specs=P(), out_specs=P('i'))
```
This transpose gets the numbers right, but it’s wasteful. We know statically from the transpose’s `in_specs=P()` that `ybar` has the same value for each function instance, i.e. that its value is device-invariant for devices along the mesh axis named `i`, and yet we apply a `psum` to it! That uses expensive communication just to multiply the value on each device by 8. (Here 8 refers to the size of axis i. The division by 8 comes from the original function’s `out_specs=P()`; it and the trivial `psum` basically cancel each other out.)
What are we doing wrong? We’re not exploiting the fact that cotangents `ybar`
corresponding to `f1`’s unmapped outputs are guaranteed to be device-invariant;
instead, we’re defensively `psum`ming them as if they weren’t because `psum`’s transpose can’t be sure given the local information it has. Sometimes the `psum`
is necessary, as in transposing `f2` with respect to its first argument:
```
# Example 2: shmap involving psum and *mapped* output with efficient transpose
f2 = shmap(lambda x, y: psum(g(x), 'i') * y,
           in_specs=(P('i'), P('i')), out_specs=P('i'))

# The transpose we currently get for Example 2 is efficient
t(f2, 0) = shmap(lambda y, zbar: t(g)(psum(zbar * y, 'i')),
                 in_specs=(P('i'), P('i')), out_specs=P('i'))
```
Intuitively, if our transpose machinery could tell the difference between Example 1 and Example 2, we could do better by avoiding the psum and division where possible.
The inefficient examples can be even smaller. Consider transposing this cursed identity function:
```
# Example 3: cursed identity
cursed_identity = shmap(lambda x: x, P(), P())

# Currently we get these inefficient transposes
t(cursed_identity) = shmap(lambda x: psum(x / 8, 'i'), P(), P())
t(t(cursed_identity)) = shmap(lambda x: psum(psum(x / 8 / 8, 'i'), 'i'), P(), P())
...
```
It keeps getting bigger the more we transpose. How embarrassing!
And `psum` isn’t the only culprit. Something analogous holds true for
`all_gather`:
```
# Example 4: all_gather to an unmapped output
f4 = shmap(lambda x: all_gather(x, 'i'), P('i'), P())

# Currently we get this inefficient transpose
t(f4) = shmap(lambda ybar: psum_scatter(ybar / 8, 'i'), P(), P('i'))
```
This program is a bit artificial. Why do an `all_gather` and feed the result into an unmapped output, rather than skipping the `all_gather` in the body and just using `out_specs=P('i')` to collect the results? But even though it’s cooked-up,
this example nevertheless exhibits a transpose which unnecessarily performs communication (we could have just performed a non-communicating slice),
analogous to Example 1 for `psum`.
Also analogously to the `psum` examples, the defensive `psum_scatter` is necessary in some cases:
```
# Example 5: all_gather to a mapped output
f5 = shmap(lambda x, y: all_gather(x, 'i') * y,
           in_specs=(P('i'), P('i')), out_specs=P('i'))

# Currently we get this efficient transpose
t(f5, 0) = shmap(lambda y, zbar: psum_scatter(zbar * y, 'i'),
                 in_specs=(P('i'), P('i')), out_specs=P('i'))
```
So how do we avoid these inefficient transposes?
###### Solutions[#](#solutions)
Here are two solution ideas. They aren’t mutually exclusive. But (spoilers) the second one is better, and it’s all we need.
###### Partial solution “P-sum”: build the ability to express a `psum` into `out_specs`[#](#partial-solution-p-sum-build-the-ability-to-express-a-psum-into-out-specs)
This solution is a bit of a strawperson because it would offer only an awkward way to write programs. And it wouldn’t even fix everything! But it’s worth considering, if only to motivate a more complete solution.
Example 4 above is artificial because we could have just used `out_specs` instead of an `all_gather` in the body:
```
# Example 4 again
f4 = shmap(lambda x: all_gather(x, 'i'), P('i'), P())
# Why didn't we just write it like this?
f4_better = shmap(lambda x: x, P('i'), P('i'))
```
The `f4_better` version doesn’t have any transposition problems, since the transpose problems arise from collectives in the body.
Analogously, we could fix Example 1 by extending `out_specs` so that they can express summing:
```
# Example 1 again
f1 = shmap(lambda x: psum(g(x), 'i'),
           in_specs=P('i'), out_specs=P())
# What if we could write an output sum like this?
f1_better = shmap(g, in_specs=P('i'), out_specs=P(sum='i')) # sum='i' means sum over that axis
# Then it could transpose like this:
t(f1_better) = shmap(t(g), in_specs=P(), out_specs=P('i'))
t(t(f1_better)) = shmap(t(t(g)), in_specs=P('i'), out_specs=P(sum='i'))
```
So offering `psum`s built into `out_specs` fixes the transpose problem of Example 1. But it doesn’t fully fix the cursed identity transpose in Example 3:
```
# Example 3 again
cursed_identity = shmap(lambda x: x, P(), P())
# How it would transpose with the P-sum partial solution:
t(cursed_identity) = shmap(lambda x: x / 8, P(), P(sum='i'))
t(t(cursed_identity)) = shmap(lambda x: x / 8, P(), P(sum='i'))
```
It’s an improvement since the program doesn’t continue to get bigger as we keep transposing, but we’re still doing wasteful communication.
###### Full solution: statically track device-varying vs device-invariant intermediates, plus new primitives[#](#full-solution-statically-track-device-varying-vs-device-invariant-intermediates-plus-new-primitives)
This solution has two components:
1. track when values are guaranteed to be device-invariant vs device-varying over particular mesh axes, and
2. decompose `psum` into a two-step process, introducing a new `pbroadcast` primitive, and introduce new primitives for `all_gather` and its transposes.
Morally, the tracking of device-invariant vs device-varying information is a type-level consideration. But for the expedience of our first implementation, we don’t need to literally add the information to abstract values or jaxpr types.
Before we get to implementation, we’ll first introduce the idea using types.
Also to follow is a discussion of making the user API convenient and backward compatible. But to first introduce the idea, we’ll ignore convenience and instead write code that is as explicit as possible.
###### Tracking device invariance in avals (a.k.a. avals-with-names, revived)[#](#tracking-device-invariance-in-avals-a-k-a-avals-with-names-revived)
We can sometimes tell from static information alone that the values of some intermediate variables in the body of a `shmap` are guaranteed to be invariant along a mesh axis, in the sense that the function instances (and their corresponding devices) along the mesh axis must all be computing with the same value. We’ll call such values device-invariant. For values that are not device-invariant, we’ll say they’re device-varying, though really we mean potentially device-varying from the point of view of the type system.
To encode device variance in types, we’ll extend the syntax of types for arrays.
We’ll write things like `x:f32[3,4]{i}` to indicate that `x` is (potentially)
device-varying along mesh axis `i` (and device-invariant over any other mesh axes of the `shmap`). More generally, we’ll say the grammar for array type syntax is something like
```
shaped_array ::= <dtype>[<int_literal>, ...]<device_variance_type>
device_variance_type ::= {<axis_name>, ...}
```
We’ll also update the typing rules to handle device variance types:
* for first-order primitives other than collectives
+ for multi-arity primitives, the operand device variance types must be equal
where shapes must be equal, e.g. `mul x:f32[s1]{r1} y:f32[s2]{r2}` requires
`r1 == r2` in addition to `s1 == s2`
+ the output device variance type must be the same as the operand(s)
* for higher-order primitives
+ we just instantiate any type variables including the device variance type
(and checking types for equality checks their device variance types are
equal)
+ (when performing type inference, e.g. for branches of a `cond`, we take the
union of the sets of axis names in device variance types)
* for first-order collectives
+ a collective can either accept a device-varying or device-invariant input
(along a mesh axis corresponding to its axis name parameter); it’s an error
to pass a device-invariant operand to a collective which accepts
device-varying operands and vice-versa
+ a collective can either produce a device-varying or device-invariant output
+ see the table below
As a side benefit, whatever logic implements this type checking can subsume
`shmap`’s “static analysis” check for whether a `shmap` body function is
compatible with any unmapped `out_specs`.
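To make the rules above concrete, here is a tiny illustrative model of the multi-arity rule; `ShapedArrayTy` and `check_multi_arity` are hypothetical names for this sketch, not JAX code:
```
from dataclasses import dataclass

@dataclass(frozen=True)
class ShapedArrayTy:
    dtype: str
    shape: tuple
    variance: frozenset  # mesh axis names, e.g. frozenset({'i'})

def check_multi_arity(*operands: ShapedArrayTy) -> ShapedArrayTy:
    # Shapes and device variance types must agree; the output inherits them.
    first = operands[0]
    for op in operands[1:]:
        if op.shape != first.shape or op.variance != first.variance:
            raise TypeError(f"type mismatch: {op} vs {first}")
    return first
```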
Here’s a table summarizing the device variance typing for collective primitives:
| Name | Device variance type | Example | Lowers to HLO | Transpose |
| --- | --- | --- | --- | --- |
| `psum2` | `Varying -> Invariant` | `y:f32[3]{j} = psum(x:f32[3]{i,j}, axis='i')` | `AllReduceSum` (communication) | `pbroadcast` |
| `pbroadcast` | `Invariant -> Varying` | `y:f32[3]{i} = pbroadcast(x:f32[3], 'i')` | no-op (no communication) | `psum` |
| `all_to_all` | `Varying -> Varying` | `y:f32[16]{i} = all_to_all(x:f32[16]{i}, 'i', 0, 0)` | `AllToAll` (communication) | `all_to_all` |
| `axis_index` | `() -> Varying` | `idx:i32[]{i} = axis_index('i')` | `ReplicaId` and some arithmetic (no communication) | n/a |
| `psum_scatter` | `Varying -> Varying` | `y:f32[2]{i} = psum_scatter(x:f32[16]{i}, 'i')` | `ReduceScatterSum` (communication) | `all_gather` |
| `all_gather` | `Varying -> Varying` | `y:f32[16]{i} = all_gather(x:f32[2]{i}, 'i')` | `AllGather` (communication) | `psum_scatter` |
| `pscatter` | `Invariant -> Varying` | `y:f32[2]{i} = pscatter(x:f32[16], 'i')` | `lambda x: x[axis_index('i'), None]` (no communication) | `all_gather_invariant` |
| `all_gather_invariant` | `Varying -> Invariant` | `y:f32[16] = all_gather_invariant(x:f32[2]{i}, 'i')` | `AllGather` (communication) | `pscatter` |
There are some surprising things here!
* We introduced several new primitives, including
+ `pbroadcast`, which interestingly lowers to a no-op
+ `all_gather_invariant`, which lowers to the same thing as `all_gather` but
has a different device variance type (essentially `all_gather` has a
`pbroadcast` fused into it, whereas `all_gather_invariant` does not)
+ `pscatter` which is the dual (transpose) of `all_gather_invariant`
* `all_gather` has a device-varying result
Intuitively, the reason to introduce `pbroadcast` (other than to make the typing rules work) is so that `psum` can transpose to a physical no-op. The reason we need `all_gather` to have a device-varying result is so that we can transpose it to `psum_scatter`; if we instead left it with a device-invariant result, we might need a downstream `pbroadcast`, and that composition would transpose to an inefficient `psum` followed by slicing / `pscatter`. So instead we have a
`pbroadcast` “fused into” the `all_gather`, thus allowing for an efficient transpose into `psum_scatter`. We provide `all_gather_invariant` and its transpose `pscatter` mainly for completeness; it’s unlikely users will need it
(it corresponds to the situation in Example 4, which is easy to write differently using `out_specs`).
Interestingly, the `psum` and `pbroadcast` transpose pair correspond to the
`psum_idrev` and `id_psumrev` that users introduced while training LLMs with
`pmap`.
###### How this system solves the inefficient transpose examples[#](#how-this-system-solves-the-inefficient-transpose-examples)
Consider again the simplified motivating example:
```
# Example 1 again
f1 = shmap(lambda x: psum(g(x), 'i'),
           in_specs=P('i'), out_specs=P())
# Example 1 with intermediate device variance types annotated
@partial(shmap, in_specs=P('i'), out_specs=P())
def f1(x: f32[3,4]{i}):
w:f32[]{i} = g(x)
y:f32[]{} = psum(w, 'i')
return y
```
With these new rules, the transpose is:
```
# Example 1 transpose using device variance types (go ahead and transpose this again!)
t(f1) = shmap(lambda ybar: t(g)(pbroadcast(ybar, 'i')),
in_specs=P(), out_specs=P('i'))
# Example 1 transpose with intermediate device variance types annotated
@partial(shmap, in_specs=P(), out_specs=P('i'))
def f1_transpose(ybar: f32[]):
wbar:f32[]{i} = pbroadcast(ybar, 'i')
xbar:f32[3,4]{i} = transpose(g)(wbar)
return xbar
```
where evaluating the `pbroadcast` application involves no communication or FLOPs at all; it’s a no-op. Notice that if we keep transposing the body does not grow in size; indeed `t(t(f1)) == f1`. Efficiency achieved!
And we wouldn’t mess up the other examples either, so long as we `pbroadcast` to make the types check where needed:
```
# Example 2 rewritten with explicit pbroadcast
f2 = shmap(lambda x, y: pbroadcast(psum(g(x), 'i'), 'i') * y,
           in_specs=(P('i'), P('i')), out_specs=P('i'))

# Example 2 transpose using device variance types
t(f2, 0) = shmap(lambda y, zbar: t(g)(pbroadcast(psum(zbar * y, 'i'), 'i')),
                 in_specs=(P('i'), P('i')), out_specs=P('i'))
# Example 3 again
cursed_identity = shmap(lambda x: x, P(), P())
# Notice here the body is `f32[...] -> f32[...]`, i.e. no device varying type.

# Example 3 transpose using device variance types
t(cursed_identity) = shmap(lambda x: x, P(), P())
t(t(cursed_identity)) = shmap(lambda x: x, P(), P())
```
Intuitively, in Example 1 we now only have “half the original psum”, whereas in Example 2 we get both “halves”. For Example 3 we never need any operations in the body at all.
For the `all_gather` examples, Example 4 would need to use
`all_gather_invariant` to have an efficient transpose (though it'd be better to use `out_specs` instead of the collective in the body):
```
# Example 4 rewritten with explicit all_gather_invariant
f4 = shmap(lambda x: all_gather_invariant(x, 'i'), P('i'), P())
# Example 4 with intermediate device variance types annotated
@partial(shmap, in_specs=P('i'), out_specs=P())
def f4(x:f32[1]{i}):
y:f32[8]{} = all_gather_invariant(x, 'i')
return y
# Example 4 transpose with intermediate device variance types annotated
@partial(shmap, in_specs=P(), out_specs=P('i'))
def f4_transpose(ybar:f32[8]):
xbar:f32[1]{i} = pscatter(ybar, 'i')
return xbar
```
For Example 5, using the device-varying `all_gather` works as we’d want:
```
# Example 5 with intermediate device variance types annotated
@partial(shmap, in_specs=(P('i'), P('i')), out_specs=P('i'))
def f5(x:f32[1]{i}, y:f32[8]{i}):
z:f32[8]{i} = all_gather(x, 'i')
w:f32[8]{i} = z * y
return w
# Transpose with respect to first argument
@partial(shmap, in_specs=(P('i'), P('i')), out_specs=P('i'))
def f5_transpose(y:f32[8]{i}, wbar:f32[8]{i}):
zbar:f32[8]{i} = wbar * y
xbar:f32[1]{i} = psum_scatter(zbar, 'i')
return xbar
```
###### How to make the API convenient for users (and backward compatible)[#](#how-to-make-the-api-convenient-for-users-and-backward-compatible)
But what user wants to write `pbroadcast`s? And what developer wants to break lots of existing user code involving `psum`s which are not fed into unmapped outputs? Not me!
Instead we can automatically insert the `pbroadcast`s. It’s a bit analogous to how we do automatic rank promotion at the `jax.numpy` layer, inserting broadcasts to avoid rank mismatch errors in binary operators. But it’s much simpler since we don’t need to contend with shape tuples. The typical rule is: whenever we see a multi-arity operation where the operands disagree in their device variance types, take the union of operands’ device variance types’ axis name sets and insert `pbroadcast`s to lift each operand to the resulting device variance type.
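Here's an illustrative sketch of that insertion rule; `Val`, the standalone `pbroadcast`, and `lift_operands` are hypothetical stand-ins for this sketch, not JAX APIs:
```
from dataclasses import dataclass

@dataclass
class Val:
    variance: frozenset  # mesh axes this value (potentially) varies over

def pbroadcast(v, axes):
    # Lowers to a no-op; only the device variance type changes.
    return Val(v.variance | axes)

def lift_operands(*operands):
    # Union of the operands' axis-name sets; pbroadcast each operand up to it.
    target = frozenset().union(*(v.variance for v in operands))
    return [v if v.variance == target else pbroadcast(v, target - v.variance)
            for v in operands]
```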
Automatically inserting `pbroadcast`s just before they’re needed may mean we apply the same `pbroadcast` to the same operand multiple times, creating common subexpressions. When we transpose, those could turn into a sum-of-`psum`s rather than a `psum`-of-sum. We’ll rely on the compiler to clean that up as appropriate.
If it’s a problem then we could add some simple memoization to the
`pbroadcast`-insertion pass.
The user API for `all_gather` will mean `all_gather_p` by default (not
`all_gather_invariant_p`), covering the common case and meaning no `pbroadcast`s must be inserted.
We can provide an option on `shmap` to disable this automatic insertion of
`pbroadcast`s, in which case it’ll be up to the user to ensure type-correctness.
This explicit option may be appealing to some who want to be explicit about where the `psum`s occur in the backward pass.
###### How to implement the solution[#](#how-to-implement-the-solution)
The key to making the implementation lightweight is that **we aren’t going to add these types to avals or jaxprs**. At least, not at first. That can be expensive because it requires updating the rest of JAX, e.g. all consumers of avals and jaxprs may need to handle the new types. We’re not falling for that again!
Instead we’re going to keep these extended types as metadata internal to
`shmap`, just like the current “replication checking for `out_specs`” machinery is internal to `shmap`. Indeed this solution amounts to a relatively small extension to that existing machinery: it was already tracking the same information; now we’re just adding the `pbroadcast`s.
We have at least two options for where to perform the `pbroadcast` insertion:
1. just before transposition, in the transpose rule, where we have a jaxpr of the computation to be transposed;
2. in every `shmap` body, whether eagerly executed or staged out, like the current “replication checking for `out_specs`” machinery.
The former may end up being easier since we only have to handle the jaxpr case,
and only linear primitives. But we’ll start by trying the latter so the implementation here is a strict revision/extension to the existing replication-checking logic.
###### Appendix: defining and motivating maps with unmapped inputs and outputs[#](#appendix-defining-and-motivating-maps-with-unmapped-inputs-and-outputs)
For concreteness, we’ll mostly focus on `shmap`, though these same ideas apply to e.g. `pmap` and probably `xmap`.
An argument/input is *unmapped* along a mesh axis when the corresponding entry of `in_specs` doesn’t mention that mesh axis’s name. Logically it means that each function instance along that mesh axis gets the same value for the argument. To the caller, each operand is sliced according to the mesh axes over which the operand is mapped, whereas there is no slicing for mesh axes over which the operand is unmapped.
An output is *unmapped* along a mesh axis when the corresponding entry of
`out_specs` doesn’t mention that mesh axis’s name. Logically it means each function instance along that mesh axis must return the same value. To the caller, each result of the `shmap` is formed by concatenating the return values of every function instance along which the outputs are mapped, whereas for mesh axes over which the output is unmapped only one copy of the value is used.
See [the `shmap`
JEP](https://jax.readthedocs.io/en/latest/jep/14273-shard-map.html) for examples of unmapped inputs and outputs. For comparison, in `vmap` unmapped inputs/outputs are indicated by using `in_axes` / `out_axes` of `None` (rather than an `int`).
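As a concrete illustration (a minimal sketch, assuming a single-device 1-D mesh named `'i'` so it runs anywhere), here is a `shard_map` with an unmapped input and an unmapped output:
```
from functools import partial

import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental.shard_map import shard_map

# A 1-D mesh over a single device so the shapes divide evenly.
mesh = Mesh(np.array(jax.devices()[:1]), ('i',))

@partial(shard_map, mesh=mesh, in_specs=(P('i'), P()), out_specs=P())
def f(x_block, w):
    # w is unmapped: every function instance along 'i' gets the same value.
    # The unmapped output must be device-invariant, which psum guarantees.
    return jax.lax.psum(jnp.sum(x_block @ w), 'i')

x = jnp.ones((8, 4))  # sliced along axis 0 over mesh axis 'i'
w = jnp.ones(4)       # unmapped: passed whole to every instance
print(f(x, w))        # a single scalar, identical on every device
```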
Here are reasons we like unmapped inputs and outputs for `shmap`:
* **Same expressiveness as `pjit`.** Anything `pjit` can do, the `shmap` escape hatch should be able to do too. Or else we’d have a lacking escape hatch! If we didn’t have unmapped outputs in `shmap` then we couldn’t express the same batch-parallel loss function computations as `pjit`.
* **Closed-over inputs.** Closed-over inputs essentially correspond to unmapped inputs, and…
* **Closure under transposition.** Once we have unmapped inputs, it’s natural to be able to transpose to unmapped outputs.
So unmapped outputs are both canonical and useful!
Building on JAX[#](#building-on-jax)
---
A great way to learn advanced JAX usage is to see how other libraries use JAX:
how they integrate it into their APIs,
what functionality it adds mathematically,
and how they use it for computational speedup.
Below are examples of how JAX’s features can be used to define accelerated computation across numerous domains and software packages.
### Gradient Computation[#](#gradient-computation)
Easy gradient calculation is a key feature of JAX.
In the [JaxOpt library](https://github.com/google/jaxopt), value-and-grad computations are used directly in multiple optimization algorithms in [its source code](https://github.com/google/jaxopt/blob/main/jaxopt/_src/base.py#LL87C30-L87C44).
Similarly, the Dynamax and Optax pairing discussed below is an example of gradients enabling estimation methods that were historically challenging; see
[Maximum Likelihood Expectation using Optax](https://probml.github.io/dynamax/notebooks/linear_gaussian_ssm/lgssm_learning.html).
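For reference, here is a minimal sketch of the `jax.value_and_grad` pattern these libraries build on; the loss function and shapes are illustrative:
```
import jax
import jax.numpy as jnp

def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

value_and_grad_fn = jax.value_and_grad(loss)
w = jnp.zeros(3)
x = jnp.ones((10, 3))
y = jnp.ones(10)
val, grads = value_and_grad_fn(w, x, y)  # loss value and dloss/dw together
```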
### Computational Speedup on a Single Core across Multiple Devices[#](#computational-speedup-on-a-single-core-across-multiple-devices)
Models defined in JAX can be JIT-compiled to speed up a single computation.
The same compiled code can then be sent to a CPU device,
to a GPU or TPU device for additional speedup,
typically with no additional changes needed.
This allows for a smooth workflow from development into production.
In Dynamax the computationally expensive portion of a Linear State Space Model solver has been [jitted](https://github.com/probml/dynamax/blob/main/dynamax/linear_gaussian_ssm/models.py#L579).
A more complex example comes from PyTensor which compiles a JAX function dynamically and then [jits the constructed function](https://github.com/pymc-devs/pytensor/blob/main/pytensor/link/jax/linker.py#L64).
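A minimal sketch of this workflow (the function and shapes here are illustrative):
```
import jax
import jax.numpy as jnp

@jax.jit
def step(x):
    return jnp.tanh(x) @ x.T

x = jnp.ones((512, 512))
step(x).block_until_ready()  # compiled on first call; subsequent calls are fast
```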
### Single and Multi Computer Speedup Using Parallelization[#](#single-and-multi-computer-speedup-using-parallelization)
Another benefit of JAX is the simplicity of parallelizing computation using
`pmap` and `vmap` function calls or decorators.
In Dynamax, state space models are parallelized with a [VMAP decorator](https://github.com/probml/dynamax/blob/main/dynamax/linear_gaussian_ssm/parallel_inference.py#L89);
a practical example of this use case is multi-object tracking.
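A small sketch of the `vmap` batching pattern (the model here is illustrative, not Dynamax's):
```
import jax
import jax.numpy as jnp

def predict(params, x):
    return jnp.dot(params, x)

params = jnp.ones(3)
batch_x = jnp.ones((32, 3))

# Map over the leading batch axis of x; broadcast params to every element.
batched_predict = jax.vmap(predict, in_axes=(None, 0))
out = batched_predict(params, batch_x)  # shape (32,)
```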
### Incorporating JAX code into your, or your users, workflows[#](#incorporating-jax-code-into-your-or-your-users-workflows)
JAX is quite composable and can be used in multiple ways.
JAX can be used in a standalone pattern, where the user defines all the calculations themselves.
There are also other patterns, such as using libraries built on JAX that provide specific functionality.
These can be libraries that define specific types of models,
such as Neural Networks or State Space models or others,
or provide specific functionality such as optimization.
Here are more specific examples of each pattern.
#### Direct Usage[#](#direct-usage)
JAX can be directly imported and utilized to build models “from scratch” as shown across this website,
for example in [JAX 101](https://jax.readthedocs.io/en/latest/jax-101/index.html)
or [Neural Network with JAX](https://jax.readthedocs.io/en/latest/notebooks/neural_network_with_tfds_data.html).
This may be the best option if you are unable to find prebuilt code for your particular challenge, or if you’re looking to reduce the number of dependencies in your codebase.
#### Composable Domain Specific Libraries with JAX exposed[#](#composable-domain-specific-libraries-with-jax-exposed)
Another common approach is to use packages that provide prebuilt functionality,
whether it be model definition, or computation of some type.
Combinations of these packages can then be mixed and matched for a full end to end workflow where a model is defined and its parameters are estimated.
One example is [Flax](https://github.com/google/flax) which simplifies the construction of Neural Networks.
Flax is then typically paired with [Optax](https://github.com/deepmind/optax)
where Flax defines the neural network architecture and Optax supplies the optimization & model-fitting capabilities.
Another is [Dynamax](https://github.com/probml/dynamax) which allows easy definition of state space models.
With Dynamax, parameters can be estimated using
[Maximum Likelihood using Optax](https://probml.github.io/dynamax/notebooks/linear_gaussian_ssm/lgssm_learning.html)
or a full Bayesian posterior can be estimated using [MCMC from Blackjax](https://probml.github.io/dynamax/notebooks/linear_gaussian_ssm/lgssm_hmc.html).
#### JAX Totally Hidden from Users[#](#jax-totally-hidden-from-users)
Other libraries opt to completely wrap JAX in their model-specific API.
An example is PyMC and [Pytensor](https://github.com/pymc-devs/pytensor),
where a user may never “see” JAX directly; instead they work with [JAX functions](https://pytensor.readthedocs.io/en/latest/extending/creating_a_numba_jax_op.html)
wrapped in a PyMC-specific API.
Notes[#](#notes)
---
This section contains shorter notes on topics relevant to using JAX; see also the longer design discussions in [JAX Enhancement Proposals (JEPs)](index.html#document-jep/index).
Dependencies and version compatibility:

* [API compatibility](index.html#document-api_compatibility) outlines JAX’s policies with regard to API compatibility across releases.
* [Python and NumPy version support policy](index.html#document-deprecation) outlines JAX’s policies with regard to compatibility with Python and NumPy.

Migrations and deprecations:

* [jax.Array migration](index.html#document-jax_array_migration) summarizes the changes to the default array type in jax v0.4.1.

Memory and computation usage:

* [Asynchronous dispatch](index.html#document-async_dispatch) describes JAX’s asynchronous dispatch model.
* [Concurrency](index.html#document-concurrency) describes how JAX interacts with other Python concurrency.
* [GPU memory allocation](index.html#document-gpu_memory_allocation) describes how JAX interacts with memory allocation on GPU.

Programmer guardrails:

* [Rank promotion warning](index.html#document-rank_promotion_warning) describes how to configure [`jax.numpy`](index.html#module-jax.numpy) to avoid implicit rank promotion.
### API compatibility[#](#api-compatibility)
JAX is constantly evolving, and we want to be able to make improvements to its APIs. That said, we want to minimize churn for the JAX user community, and we try to make breaking changes rarely.
JAX follows a 3 month deprecation policy. When an incompatible change is made to an API, we will make our best effort to obey the following procedure:
* the change will be announced in `CHANGELOG.md` and in the doc string for the deprecated API, and the old API will issue a `DeprecationWarning`.
* three months after the `jax` release that deprecated an API, we may remove the deprecated API at any time. Note that three months is a *lower* bound, and is intentionally chosen to be faster than that of many more mature projects. In practice, deprecations may take considerably longer, particularly if there are many users of a feature. If a three month deprecation period becomes problematic, please raise this with us.
We reserve the right to change this policy at any time.
#### What is covered?[#](#what-is-covered)
Only public JAX APIs are covered, which includes the following modules:
* `jax`
* `jax.dlpack`
* `jax.image`
* `jax.lax`
* `jax.nn`
* `jax.numpy`
* `jax.ops`
* `jax.profiler`
* `jax.random` (see [details below](#numerics-and-randomness))
* `jax.scipy`
* `jax.tree_util`
* `jax.test_util`
Not everything in these modules is public. Over time, we are working to separate public and private APIs. Public APIs are documented in the JAX documentation.
Additionally, our goal is that all non-public APIs should have names prefixed with underscores, although we do not entirely comply with this yet.
#### What is not covered?[#](#what-is-not-covered)
* anything prefixed with an underscore.
* `jax._src`
* `jax.core`
* `jax.linear_util`
* `jax.lib`
* `jax.prng`
* `jax.interpreters`
* `jax.experimental`
* `jax.example_libraries`
* `jax.extend` (see [details](https://jax.readthedocs.io/en/latest/jax.extend.html))
This list is not exhaustive.
#### Numerics and randomness[#](#numerics-and-randomness)
The *exact* values of numerical operations are not guaranteed to be stable across JAX releases. In fact, exact numerics are not necessarily stable at a given JAX version, across accelerator platforms, within or without `jax.jit`, and more.
For a fixed PRNG key input, the outputs of pseudorandom functions in
`jax.random` may vary across JAX versions. The compatibility policy applies only to the output *distribution*. For example, the expression
`jax.random.gumbel(jax.random.key(72))` may return a different value across JAX releases, but `jax.random.gumbel` will remain a pseudorandom generator for the Gumbel distribution.
We try to make such changes to pseudorandom values infrequently. When they happen, the changes are announced in the changelog, but do not follow a deprecation cycle. In some situations, JAX might expose a transient configuration flag that reverts the new behavior, to help users diagnose and update affected code. Such flags will last a deprecation window’s amount of time.
### Python and NumPy version support policy[#](#python-and-numpy-version-support-policy)
JAX follows NumPy’s [NEP-29 deprecation policy](https://numpy.org/neps/nep-0029-deprecation_policy.html). JAX supports at least:
* All minor versions of Python released 42 months prior to the project, and at minimum the two latest minor versions.
* All minor versions of numpy released in the 24 months prior to the project, and at minimum the last three minor versions.
JAX may support older versions of Python and NumPy, but support for older versions may be dropped at any time.
### jax.Array migration[#](#jax-array-migration)
**yashkatariya@**
#### TL;DR[#](#tl-dr)
JAX switched its default array implementation to the new `jax.Array` as of version 0.4.1.
This guide explains the reasoning behind this, the impact it might have on your code,
and how to (temporarily) switch back to the old behavior.
##### What’s going on?[#](#whats-going-on)
`jax.Array` is a unified array type that subsumes `DeviceArray`, `ShardedDeviceArray`,
and `GlobalDeviceArray` types in JAX. The `jax.Array` type helps make parallelism a core feature of JAX, simplifies and unifies JAX internals, and allows us to unify jit and pjit. If your code doesn’t mention `DeviceArray` vs
`ShardedDeviceArray` vs `GlobalDeviceArray`, no changes are needed. But code that depends on details of these separate classes may need to be tweaked to work with the unified `jax.Array`.
After the migration is complete `jax.Array` will be the only type of array in JAX.
This doc explains how to migrate existing codebases to `jax.Array`. For more information on using `jax.Array` and JAX parallelism APIs, see the [Distributed arrays and automatic parallelization](https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html) tutorial.
##### How to enable jax.Array?[#](#how-to-enable-jax-array)
You can enable `jax.Array` by:
* setting the shell environment variable `JAX_ARRAY` to something true-like
(e.g., `1`);
* setting the boolean flag `jax_array` to something true-like if your code parses flags with absl;
* using this statement at the top of your main file:
```
import jax
jax.config.update('jax_array', True)
```
##### How do I know if jax.Array broke my code?[#](#how-do-i-know-if-jax-array-broke-my-code)
The easiest way to tell if `jax.Array` is responsible for any problems is to disable `jax.Array` and see if the issues go away.
##### How can I disable jax.Array for now?[#](#how-can-i-disable-jax-array-for-now)
Through **March 15, 2023** it will be possible to disable jax.Array by:
* setting the shell environment variable `JAX_ARRAY` to something falsey
(e.g., `0`);
* setting the boolean flag `jax_array` to something falsey if your code parses flags with absl;
* using this statement at the top of your main file:
```
import jax
jax.config.update('jax_array', False)
```
#### Why create jax.Array?[#](#why-create-jax-array)
Currently JAX has three types: `DeviceArray`, `ShardedDeviceArray` and
`GlobalDeviceArray`. `jax.Array` merges these three types and cleans up JAX’s internals while adding new parallelism features.
We also introduce a new `Sharding` abstraction that describes how a logical Array is physically sharded out across one or more devices, such as TPUs or GPUs. The change also upgrades, simplifies and merges the parallelism features of `pjit` into `jit`. Functions decorated with `jit` will be able to operate over sharded arrays without copying data onto a single device.
Features you get with `jax.Array`:
* C++ `pjit` dispatch path
* Op-by-op parallelism (even if the array is distributed across multiple devices on multiple hosts)
* Simpler batch data parallelism with `pjit`/`jit`.
* Ways to create `Sharding`s that are not necessarily made of a mesh and partition spec; you can fully utilize the flexibility of `OpSharding`, or any other `Sharding` you want.
* and many more
Example:
```
import jax
import jax.numpy as jnp
from jax.sharding import PartitionSpec as P
import numpy as np

x = jnp.arange(8)

# Let's say there are 8 devices in jax.devices()
mesh = jax.sharding.Mesh(np.array(jax.devices()).reshape(4, 2), ('x', 'y'))
sharding = jax.sharding.NamedSharding(mesh, P('x'))

sharded_x = jax.device_put(x, sharding)

# `matmul_sharded_x` and `sin_sharded_x` are sharded. `jit` is able to operate over a
# sharded array without copying data to a single device.
matmul_sharded_x = sharded_x @ sharded_x.T
sin_sharded_x = jnp.sin(sharded_x)

# Even jnp.copy preserves the sharding on the output.
copy_sharded_x = jnp.copy(sharded_x)

# double_out is also sharded
double_out = jax.jit(lambda x: x * 2)(sharded_x)
```
#### What issues can arise when jax.Array is switched on?[#](#what-issues-can-arise-when-jax-array-is-switched-on)
##### New public type named jax.Array[#](#new-public-type-named-jax-array)
All `isinstance(..., jnp.DeviceArray)` or `isinstance(..., jax.xla.DeviceArray)`
and other variants of `DeviceArray` should be switched to using `isinstance(..., jax.Array)`.
Since `jax.Array` can represent DA, SDA and GDA, you can differentiate those 3 types in `jax.Array` via:
* `x.is_fully_addressable and len(x.sharding.device_set) == 1` – this means that `jax.Array` is like a DA
* `x.is_fully_addressable and len(x.sharding.device_set) > 1` – this means that `jax.Array` is like a SDA
* `not x.is_fully_addressable` – this means that `jax.Array` is like a GDA and spans across multiple processes
For `ShardedDeviceArray`, you can move `isinstance(..., pxla.ShardedDeviceArray)` to `isinstance(..., jax.Array) and x.is_fully_addressable and len(x.sharding.device_set) > 1`.
In general it is not possible to differentiate a `ShardedDeviceArray` on 1 device from any other kind of single-device Array.
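These checks could be wrapped in a small helper, sketched here; the helper name is hypothetical:
```
import jax

def classify(x: jax.Array) -> str:
    # Apply the three rules listed above, most restrictive first.
    if not x.is_fully_addressable:
        return "GDA-like (spans multiple processes)"
    if len(x.sharding.device_set) > 1:
        return "SDA-like (multiple local devices)"
    return "DA-like (single device)"
```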
##### GDA’s API name changes[#](#gdas-api-name-changes)
GDA’s `local_shards` and `local_data` have been deprecated.
Please use `addressable_shards` and `addressable_data` which are compatible with
`jax.Array` and `GDA`.
##### Creating jax.Array[#](#creating-jax-array)
All JAX functions will output `jax.Array` when the `jax_array` flag is True. If you were using `GlobalDeviceArray.from_callback` or `make_sharded_device_array`
or `make_device_array` functions to explicitly create the respective JAX data types, you will need to switch them to use [`jax.make_array_from_callback()`](index.html#jax.make_array_from_callback)
or [`jax.make_array_from_single_device_arrays()`](index.html#jax.make_array_from_single_device_arrays).
**For GDA:**
`GlobalDeviceArray.from_callback(shape, mesh, pspec, callback)` can become
`jax.make_array_from_callback(shape, jax.sharding.NamedSharding(mesh, pspec), callback)`
in a 1:1 switch.
If you were using the raw GDA constructor to create GDAs, then do this:
`GlobalDeviceArray(shape, mesh, pspec, buffers)` can become
`jax.make_array_from_single_device_arrays(shape, jax.sharding.NamedSharding(mesh, pspec), buffers)`
**For SDA:**
`make_sharded_device_array(aval, sharding_spec, device_buffers, indices)` can become `jax.make_array_from_single_device_arrays(shape, sharding, device_buffers)`.
To decide what the sharding should be, it depends on why you were creating the SDAs:
If it was created to give as an input to `pmap`, then sharding can be:
`jax.sharding.PmapSharding(devices, sharding_spec)`.
If it was created to give as an input to `pjit`, then sharding can be `jax.sharding.NamedSharding(mesh, pspec)`.
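Here's a minimal runnable sketch of the `jax.make_array_from_callback` switch described above; the mesh and data are illustrative, and a single-device mesh is used so the shapes divide evenly:
```
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

mesh = Mesh(np.array(jax.devices()[:1]), ('x',))
shape = (8,)
data = np.arange(8)

def callback(index):
    # `index` is the tuple of slices selecting this shard's piece of the data.
    return data[index]

arr = jax.make_array_from_callback(shape, NamedSharding(mesh, P('x')), callback)
```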
##### Breaking change for pjit after switching to jax.Array for host local inputs[#](#breaking-change-for-pjit-after-switching-to-jax-array-for-host-local-inputs)
**If you are exclusively using GDA arguments to pjit, you can skip this section!
🎉**
With `jax.Array` enabled, all inputs to `pjit` must be globally shaped. This is a breaking change from the previous behavior where `pjit` would concatenate process-local arguments into a global value; this concatenation no longer occurs.
Why are we making this breaking change? Each array now says explicitly how its local shards fit into a global whole, rather than leaving it implicit. The more explicit representation also unlocks additional flexibility, for example the use of non-contiguous meshes with `pjit` which can improve efficiency on some TPU models.
Running **multi-process pjit computation** and passing host-local inputs when
`jax.Array` is enabled can lead to an error similar to this:
Example:
Mesh = `{'x': 2, 'y': 2, 'chips': 2}` and host local input shape == `(4,)` and pspec = `P(('x', 'y', 'chips'))`
Since `pjit` doesn’t lift host local shapes to global shapes with `jax.Array`,
you get the following error:
Note: You will only see this error if your host local shape is smaller than the shape of the mesh.
```
ValueError: One of pjit arguments was given the sharding of NamedSharding(mesh={'x': 2, 'y': 2, 'chips': 2}, partition_spec=PartitionSpec(('x', 'y', 'chips'),)),
which implies that the global size of its dimension 0 should be divisible by 8,
but it is equal to 4
```
The error makes sense because you can’t shard dimension 0 eight ways when the size of dimension `0` is `4`.
How can you migrate if you still pass host local inputs to `pjit`? We are providing transitional APIs to help you migrate:
Note: You don’t need these utilities if you run your pjitted computation on a single process.
```
from jax.experimental import multihost_utils
global_inps = multihost_utils.host_local_array_to_global_array(
local_inputs, mesh, in_pspecs)
global_outputs = pjit(f, in_shardings=in_pspecs,
out_shardings=out_pspecs)(global_inps)
local_outs = multihost_utils.global_array_to_host_local_array(
global_outputs, mesh, out_pspecs)
```
`host_local_array_to_global_array` is a type cast that looks at a value with only local shards and changes its local shape to the shape that `pjit` would have previously assumed if that value was passed before the change.
Passing in fully replicated inputs, i.e. the same shape on each process, with
`P(None)` as `in_axis_resources` is still supported. In this case you do not have to use `host_local_array_to_global_array` because the shape is already global.
```
key = jax.random.PRNGKey(1)
# As you can see, using host_local_array_to_global_array is not required since in_axis_resources says
# that the input is fully replicated via P(None)
pjit(f, in_shardings=None, out_shardings=None)(key)
# Mixing inputs
global_inp = multihost_utils.host_local_array_to_global_array(
local_inp, mesh, P('data'))
global_out = pjit(f, in_shardings=(P(None), P('data')),
out_shardings=...)(key, global_inp)
```
##### FROM_GDA and jax.Array[#](#from-gda-and-jax-array)
If you were using `FROM_GDA` in `in_axis_resources` argument to `pjit`, then with `jax.Array` there is no need to pass anything to `in_axis_resources` as
`jax.Array` will follow **computation follows sharding** semantics.
For example:
```
# Before:
pjit(f, in_shardings=FROM_GDA, out_shardings=...)
# After:
pjit(f, out_shardings=...)
```
If you have PartitionSpecs mixed in with `FROM_GDA` for inputs like numpy arrays, etc, then use `host_local_array_to_global_array` to convert them to
`jax.Array`.
For example:
If you had this:
```
pjitted_f = pjit(
f, in_shardings=(FROM_GDA, P('x'), FROM_GDA, P(None)),
out_shardings=...)
pjitted_f(gda1, np_array1, gda2, np_array2)
```
then you can replace it with:
```
pjitted_f = pjit(f, out_shardings=...)
array2, array3 = multihost_utils.host_local_array_to_global_array(
(np_array1, np_array2), mesh, (P('x'), P(None)))
pjitted_f(gda1, array2, gda2, array3)
```
##### live_buffers replaced with live_arrays[#](#live-buffers-replaced-with-live-arrays)
`live_buffers` attribute on jax `Device`
has been deprecated. Please use `jax.live_arrays()` instead which is compatible with `jax.Array`.
##### Handling of host local inputs to pjit like batch, etc[#](#handling-of-host-local-inputs-to-pjit-like-batch-etc)
If you are passing host local inputs to `pjit` in a **multi-process environment**, then please use
`multihost_utils.host_local_array_to_global_array` to convert the batch to a global `jax.Array` and then pass that to `pjit`.
The most common example of such a host local input is a **batch of input data**.
This will work for any host local input (not just a batch of input data).
```
from jax.experimental import multihost_utils
batch = multihost_utils.host_local_array_to_global_array(
batch, mesh, batch_partition_spec)
```
See the pjit section above for more details about this change and more examples.
##### RecursionError: Recursively calling jit[#](#recursionerror-recursively-calling-jit)
This happens when some part of your code has `jax.Array` disabled and then you enable it only for some other part. For example, if you use some third_party code which has `jax.Array` disabled and you get a `DeviceArray` from that library and then you enable `jax.Array` in your library and pass that
`DeviceArray` to JAX functions, it will lead to a RecursionError.
This error should go away when `jax.Array` is enabled by default so that all libraries return `jax.Array` unless they explicitly disable it.
### Asynchronous dispatch[#](#asynchronous-dispatch)
JAX uses asynchronous dispatch to hide Python overheads. Consider the following program:
```
>>> import numpy as np
>>> import jax.numpy as jnp
>>> from jax import random
>>> x = random.uniform(random.PRNGKey(0), (1000, 1000))
>>> # Printing the result (i.e. evaluating `repr(result)` or `str(result)`)
>>> # will block until the value is ready.
>>> jnp.dot(x, x) + 3.
Array([[258.01971436, 249.64862061, 257.13372803, ...,
236.67948914, 250.68939209, 241.36853027],
[265.65979004, 256.28912354, 262.18252563, ...,
242.03181458, 256.16757202, 252.44122314],
[262.38916016, 255.72747803, 261.23059082, ...,
240.83563232, 255.41094971, 249.62471008],
...,
[259.15814209, 253.09197998, 257.72174072, ...,
242.23876953, 250.72680664, 247.16642761],
[271.22662354, 261.91204834, 265.33398438, ...,
248.26651001, 262.05389404, 261.33700562],
[257.16134644, 254.7543335, 259.08300781, ..., 241.59848022,
248.62597656, 243.22348022]], dtype=float32)
```
When an operation such as `jnp.dot(x, x)` is executed, JAX does not wait for the operation to complete before returning control to the Python program.
Instead, JAX returns a [`jax.Array`](index.html#jax.Array) value, which is a future,
i.e., a value that will be produced in the future on an accelerator device but isn’t necessarily available immediately. We can inspect the shape or type of a
[`jax.Array`](index.html#jax.Array) without waiting for the computation that produced it to complete, and we can even pass it to another JAX computation, as we do with the addition operation here. Only if we actually inspect the value of the array from the host, for example by printing it or by converting it into a plain old
[`numpy.ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray) will JAX force the Python code to wait for the computation to complete.
Asynchronous dispatch is useful since it allows Python code to “run ahead” of an accelerator device, keeping Python code out of the critical path.
Provided the Python code enqueues work on the device faster than it can be executed, and provided that the Python code does not actually need to inspect the output of a computation on the host, then a Python program can enqueue arbitrary amounts of work and avoid having the accelerator wait.
Asynchronous dispatch has a slightly surprising consequence for microbenchmarks.
```
>>> %time jnp.dot(x, x)
CPU times: user 267 µs, sys: 93 µs, total: 360 µs
Wall time: 269 µs
Array([[255.01972961, 246.64862061, 254.13371277, ...,
233.67948914, 247.68939209, 238.36853027],
[262.65979004, 253.28910828, 259.18252563, ...,
239.03181458, 253.16757202, 249.44122314],
[259.38916016, 252.72747803, 258.23059082, ...,
237.83563232, 252.41094971, 246.62471008],
...,
[256.15814209, 250.09197998, 254.72172546, ...,
239.23876953, 247.72680664, 244.16642761],
[268.22662354, 258.91204834, 262.33398438, ...,
245.26651001, 259.05389404, 258.33700562],
[254.16134644, 251.7543335, 256.08300781, ..., 238.59848022,
245.62597656, 240.22348022]], dtype=float32)
```
269µs is a surprisingly small time for a 1000x1000 matrix multiplication on CPU!
However it turns out that asynchronous dispatch is misleading us and we are not timing the execution of the matrix multiplication, only the time to dispatch the work. To measure the true cost of the operation we must either read the value on the host (e.g., convert it to a plain old host-side numpy array), or use the `block_until_ready()` method on a
[`jax.Array`](index.html#jax.Array) value to wait for the computation that produced it to complete.
```
>>> %time np.asarray(jnp.dot(x, x))
CPU times: user 61.1 ms, sys: 0 ns, total: 61.1 ms
Wall time: 8.09 ms
array([[255.01973, 246.64862, 254.13371, ..., 233.67949, 247.68939,
238.36853],
[262.6598 , 253.28911, 259.18253, ..., 239.03181, 253.16757,
249.44122],
[259.38916, 252.72748, 258.2306 , ..., 237.83563, 252.41095,
246.62471],
...,
[256.15814, 250.09198, 254.72173, ..., 239.23877, 247.7268 ,
244.16643],
[268.22662, 258.91205, 262.33398, ..., 245.26651, 259.0539 ,
258.337 ],
[254.16135, 251.75433, 256.083 , ..., 238.59848, 245.62598,
240.22348]], dtype=float32)
>>> %time jnp.dot(x, x).block_until_ready()
CPU times: user 50.3 ms, sys: 928 µs, total: 51.2 ms
Wall time: 4.92 ms
Array([[255.01972961, 246.64862061, 254.13371277, ...,
233.67948914, 247.68939209, 238.36853027],
[262.65979004, 253.28910828, 259.18252563, ...,
239.03181458, 253.16757202, 249.44122314],
[259.38916016, 252.72747803, 258.23059082, ...,
237.83563232, 252.41094971, 246.62471008],
...,
[256.15814209, 250.09197998, 254.72172546, ...,
239.23876953, 247.72680664, 244.16642761],
[268.22662354, 258.91204834, 262.33398438, ...,
245.26651001, 259.05389404, 258.33700562],
[254.16134644, 251.7543335, 256.08300781, ..., 238.59848022,
245.62597656, 240.22348022]], dtype=float32)
```
Blocking without transferring the result back to Python is usually faster, and is often the best choice when writing microbenchmarks of computation times.
### Concurrency[#](#concurrency)
JAX has limited support for Python concurrency.
Clients may call JAX APIs (e.g., [`jit()`](index.html#jax.jit) or [`grad()`](index.html#jax.grad))
concurrently from separate Python threads.
It is not permitted to manipulate JAX trace values concurrently from multiple threads. In other words, while it is permissible to call functions that use JAX tracing (e.g., [`jit()`](index.html#jax.jit)) from multiple threads, you must not use threading to manipulate JAX values inside the implementation of the function f that is passed to [`jit()`](index.html#jax.jit). The most likely outcome if you do this is a mysterious error from JAX.
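A small sketch of the permitted pattern, calling a jitted function from several threads (illustrative only):
```
import threading

import jax
import jax.numpy as jnp

f = jax.jit(lambda x: x * 2)

def worker():
    # OK: each thread calls JAX APIs on its own values; no JAX tracing state
    # is manipulated across threads.
    f(jnp.arange(4)).block_until_ready()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```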
### GPU memory allocation[#](#gpu-memory-allocation)
**JAX will preallocate 75% of the total GPU memory when the first JAX operation is run.** Preallocating minimizes allocation overhead and memory fragmentation, but can sometimes cause out-of-memory (OOM) errors. If your JAX process fails with OOM, the following environment variables can be used to override the default behavior:
`XLA_PYTHON_CLIENT_PREALLOCATE=false`
This disables the preallocation behavior. JAX will instead allocate GPU memory as needed, potentially decreasing the overall memory usage. However,
this behavior is more prone to GPU memory fragmentation, meaning a JAX program that uses most of the available GPU memory may OOM with preallocation disabled.

`XLA_PYTHON_CLIENT_MEM_FRACTION=.XX`
If preallocation is enabled, this makes JAX preallocate XX% of the total GPU memory, instead of the default 75%. Lowering the amount preallocated can fix OOMs that occur when the JAX program starts.

`XLA_PYTHON_CLIENT_ALLOCATOR=platform`
This makes JAX allocate exactly what is needed on demand, and deallocate memory that is no longer needed (note that this is the only configuration that will deallocate GPU memory, instead of reusing it). This is very slow, so is not recommended for general use, but may be useful for running with the minimal possible GPU memory footprint or debugging OOM failures.
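Since these are environment variables read when the allocator initializes, one way to apply them (a sketch; the values are illustrative) is to set them in Python before any JAX operation runs:
```
import os

# Must be set before the first JAX operation runs (ideally before importing jax).
os.environ.setdefault("XLA_PYTHON_CLIENT_PREALLOCATE", "false")
os.environ.setdefault("XLA_PYTHON_CLIENT_MEM_FRACTION", ".50")

import jax  # the GPU allocator reads these variables when it initializes
```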
#### Common causes of OOM failures[#](#common-causes-of-oom-failures)
**Running multiple JAX processes concurrently.**
Either use `XLA_PYTHON_CLIENT_MEM_FRACTION` to give each process an appropriate amount of memory, or set
`XLA_PYTHON_CLIENT_PREALLOCATE=false`.
**Running JAX and GPU TensorFlow concurrently.**
TensorFlow also preallocates by default, so this is similar to running multiple JAX processes concurrently.
One solution is to use CPU-only TensorFlow (e.g. if you’re only doing data loading with TF). You can prevent TensorFlow from using the GPU with the command
`tf.config.experimental.set_visible_devices([], "GPU")`
Alternatively, use `XLA_PYTHON_CLIENT_MEM_FRACTION` or
`XLA_PYTHON_CLIENT_PREALLOCATE`. There are also similar options to configure TensorFlow’s GPU memory allocation
([gpu_memory_fraction](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/config.proto#L36)
and [allow_growth](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/config.proto#L40)
in TF1, which should be set in a `tf.ConfigProto` passed to
`tf.Session`. See
[Using GPUs: Limiting GPU memory growth](https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth)
for TF2).
**Running JAX on the display GPU.**
Use `XLA_PYTHON_CLIENT_MEM_FRACTION` or
`XLA_PYTHON_CLIENT_PREALLOCATE`.
### Rank promotion warning[#](#rank-promotion-warning)
[NumPy broadcasting rules](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html#general-broadcasting-rules)
allow the automatic promotion of arguments from one rank (number of array axes)
to another. This behavior can be convenient when intended but can also lead to surprising bugs where a silent rank promotion masks an underlying shape error.
Here’s an example of rank promotion:
```
>>> import numpy as np
>>> x = np.arange(12).reshape(4, 3)
>>> y = np.array([0, 1, 0])
>>> x + y
array([[ 0, 2, 2],
[ 3, 5, 5],
[ 6, 8, 8],
[ 9, 11, 11]])
```
To avoid potential surprises, `jax.numpy` is configurable so that expressions requiring rank promotion can lead to a warning, error, or can be allowed just like regular NumPy. The configuration option is named
`jax_numpy_rank_promotion` and it can take on string values
`allow`, `warn`, and `raise`. The default setting is
`allow`, which allows rank promotion without warning or error.
The `raise` setting raises an error on rank promotion, and `warn`
raises a warning on the first occurrence of rank promotion.
Rank promotion can be enabled or disabled locally with the [`jax.numpy_rank_promotion()`](index.html#jax.numpy_rank_promotion)
context manager:
```
with jax.numpy_rank_promotion("warn"):
z = x + y
```
This configuration can also be set globally in several ways.
One is by using `jax.config` in your code:
```
from jax import config
config.update("jax_numpy_rank_promotion", "warn")
```
You can also set the option using the environment variable
`JAX_NUMPY_RANK_PROMOTION`, for example as
`JAX_NUMPY_RANK_PROMOTION='warn'`. Finally, when using `absl-py`
the option can be set with a command-line flag.
Public API: jax package[#](#public-api-jax-package)
---
### Subpackages[#](#subpackages)
#### `jax.numpy` module[#](#module-jax.numpy)
Implements the NumPy API, using the primitives in [`jax.lax`](index.html#module-jax.lax).
While JAX tries to follow the NumPy API as closely as possible, sometimes JAX cannot follow NumPy exactly.
* Notably, since JAX arrays are immutable, NumPy APIs that mutate arrays in-place cannot be implemented in JAX. However, often JAX is able to provide an alternative API that is purely functional. For example, instead of in-place array updates (`x[i] = y`), JAX provides an alternative pure indexed update function `x.at[i].set(y)` (see [`ndarray.at`](index.html#jax.numpy.ndarray.at)).
* Relatedly, some NumPy functions often return views of arrays when possible
(examples are [`transpose()`](index.html#jax.numpy.transpose) and [`reshape()`](index.html#jax.numpy.reshape)). JAX versions of such functions will return copies instead, although such are often optimized away by XLA when sequences of operations are compiled using [`jax.jit()`](index.html#jax.jit).
* NumPy is very aggressive at promoting values to `float64` type. JAX sometimes is less aggressive about type promotion (See [Type promotion semantics](index.html#type-promotion)).
* Some NumPy routines have data-dependent output shapes (examples include
[`unique()`](index.html#jax.numpy.unique) and [`nonzero()`](index.html#jax.numpy.nonzero)). Because the XLA compiler requires array shapes to be known at compile time, such operations are not compatible with JIT. For this reason, JAX adds an optional `size` argument to such functions which may be specified statically in order to use them with JIT (see the sketch after this list).
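Here is a short sketch of the functional-update and static-`size` patterns from the list above:
```
import jax.numpy as jnp

x = jnp.zeros(5)
y = x.at[2].set(7.0)            # pure indexed update; x itself is unchanged

vals = jnp.array([0, 1, 1, 2])
u = jnp.unique(vals, size=4)    # static `size` makes unique() usable under jit
```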
Nearly all applicable NumPy functions are implemented in the `jax.numpy`
namespace; they are listed below.
| Function | Description |
| --- | --- |
| [`ndarray.at`](index.html#jax.numpy.ndarray.at) | Helper property for index update functionality. |
| [`abs`](index.html#jax.numpy.abs)(x, /) | Calculate the absolute value element-wise. |
| [`absolute`](index.html#jax.numpy.absolute)(x, /) | Calculate the absolute value element-wise. |
| [`add`](index.html#jax.numpy.add)(x1, x2, /) | Add arguments element-wise. |
| [`all`](index.html#jax.numpy.all)(a[, axis, out, keepdims, where]) | Test whether all array elements along a given axis evaluate to True. |
| [`allclose`](index.html#jax.numpy.allclose)(a, b[, rtol, atol, equal_nan]) | Returns True if two arrays are element-wise equal within a tolerance. |
| [`amax`](index.html#jax.numpy.amax)(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis. |
| [`amin`](index.html#jax.numpy.amin)(a[, axis, out, keepdims, initial, where]) | Return the minimum of an array or minimum along an axis. |
| [`angle`](index.html#jax.numpy.angle)(z[, deg]) | Return the angle of the complex argument. |
| [`any`](index.html#jax.numpy.any)(a[, axis, out, keepdims, where]) | Test whether any array element along a given axis evaluates to True. |
| [`append`](index.html#jax.numpy.append)(arr, values[, axis]) | Append values to the end of an array. |
| [`apply_along_axis`](index.html#jax.numpy.apply_along_axis)(func1d, axis, arr, *args, ...) | Apply a function to 1-D slices along the given axis. |
| [`apply_over_axes`](index.html#jax.numpy.apply_over_axes)(func, a, axes) | Apply a function repeatedly over multiple axes. |
| [`arange`](index.html#jax.numpy.arange)(start[, stop, step, dtype]) | Return evenly spaced values within a given interval. |
| [`arccos`](index.html#jax.numpy.arccos)(x, /) | Trigonometric inverse cosine, element-wise. |
| [`arccosh`](index.html#jax.numpy.arccosh)(x, /) | Inverse hyperbolic cosine, element-wise. |
| [`arcsin`](index.html#jax.numpy.arcsin)(x, /) | Inverse sine, element-wise. |
| [`arcsinh`](index.html#jax.numpy.arcsinh)(x, /) | Inverse hyperbolic sine element-wise. |
| [`arctan`](index.html#jax.numpy.arctan)(x, /) | Trigonometric inverse tangent, element-wise. |
| [`arctan2`](index.html#jax.numpy.arctan2)(x1, x2, /) | Element-wise arc tangent of `x1/x2` choosing the quadrant correctly. |
| [`arctanh`](index.html#jax.numpy.arctanh)(x, /) | Inverse hyperbolic tangent element-wise. |
| [`argmax`](index.html#jax.numpy.argmax)(a[, axis, out, keepdims]) | Returns the indices of the maximum values along an axis. |
| [`argmin`](index.html#jax.numpy.argmin)(a[, axis, out, keepdims]) | Returns the indices of the minimum values along an axis. |
| [`argpartition`](index.html#jax.numpy.argpartition)(a, kth[, axis]) | Perform an indirect partition along the given axis. |
| [`argsort`](index.html#jax.numpy.argsort)(a[, axis, kind, order]) | Returns the indices that would sort an array. |
| [`argwhere`](index.html#jax.numpy.argwhere)(a, *[, size, fill_value]) | Find the indices of array elements that are non-zero, grouped by element. |
| [`around`](index.html#jax.numpy.around)(a[, decimals, out]) | Round an array to the given number of decimals. |
| [`array`](index.html#jax.numpy.array)(object[, dtype, copy, order, ndmin]) | Create an array. |
| [`array_equal`](index.html#jax.numpy.array_equal)(a1, a2[, equal_nan]) | True if two arrays have the same shape and elements, False otherwise. |
| [`array_equiv`](index.html#jax.numpy.array_equiv)(a1, a2) | Returns True if input arrays are shape consistent and all elements equal. |
| [`array_repr`](index.html#jax.numpy.array_repr)(arr[, max_line_width, precision, ...]) | Return the string representation of an array. |
| [`array_split`](index.html#jax.numpy.array_split)(ary, indices_or_sections[, axis]) | Split an array into multiple sub-arrays. |
| [`array_str`](index.html#jax.numpy.array_str)(a[, max_line_width, precision, ...]) | Return a string representation of the data in an array. |
| [`asarray`](index.html#jax.numpy.asarray)(a[, dtype, order]) | Convert the input to an array. |
| [`atleast_1d`](index.html#jax.numpy.atleast_1d)(*arys) | Convert inputs to arrays with at least one dimension. |
| [`atleast_2d`](index.html#jax.numpy.atleast_2d)(*arys) | View inputs as arrays with at least two dimensions. |
| [`atleast_3d`](index.html#jax.numpy.atleast_3d)(*arys) | View inputs as arrays with at least three dimensions. |
| [`average`](index.html#jax.numpy.average)(a[, axis, weights, returned, keepdims]) | Compute the weighted average along the specified axis. |
| [`bartlett`](index.html#jax.numpy.bartlett)(M) | Return the Bartlett window. |
| [`bincount`](index.html#jax.numpy.bincount)(x[, weights, minlength, length]) | Count number of occurrences of each value in array of non-negative ints. |
| [`bitwise_and`](index.html#jax.numpy.bitwise_and)(x1, x2, /) | Compute the bit-wise AND of two arrays element-wise. |
| [`bitwise_count`](index.html#jax.numpy.bitwise_count)(x, /) | Compute the number of 1-bits in the absolute value of each element. |
| [`bitwise_not`](index.html#jax.numpy.bitwise_not)(x, /) | Compute bit-wise inversion, or bit-wise NOT, element-wise. |
| [`bitwise_or`](index.html#jax.numpy.bitwise_or)(x1, x2, /) | Compute the bit-wise OR of two arrays element-wise. |
| [`bitwise_xor`](index.html#jax.numpy.bitwise_xor)(x1, x2, /) | Compute the bit-wise XOR of two arrays element-wise. |
| [`blackman`](index.html#jax.numpy.blackman)(M) | Return the Blackman window. |
| [`block`](index.html#jax.numpy.block)(arrays) | Assemble an nd-array from nested lists of blocks. |
| [`bool_`](index.html#jax.numpy.bool_)(x) | |
| [`broadcast_arrays`](index.html#jax.numpy.broadcast_arrays)(*args) | Broadcast any number of arrays against each other. |
| [`broadcast_shapes`](index.html#jax.numpy.broadcast_shapes)(*shapes) | Broadcast the input shapes into a single shape. |
| [`broadcast_to`](index.html#jax.numpy.broadcast_to)(array, shape) | Broadcast an array to a new shape. |
| [`c_`](index.html#jax.numpy.c_) | Concatenate slices, scalars and array-like objects along the last axis. |
| [`can_cast`](index.html#jax.numpy.can_cast)(from_, to[, casting]) | Returns True if cast between data types can occur according to the casting rule. |
| [`cbrt`](index.html#jax.numpy.cbrt)(x, /) | Return the cube-root of an array, element-wise. |
| [`cdouble`](index.html#jax.numpy.cdouble) | alias of [`complex128`](index.html#jax.numpy.complex128) |
| [`ceil`](index.html#jax.numpy.ceil)(x, /) | Return the ceiling of the input, element-wise. |
| [`character`](index.html#jax.numpy.character)() | Abstract base class of all character string scalar types. |
| [`choose`](index.html#jax.numpy.choose)(a, choices[, out, mode]) | Construct an array from an index array and a list of arrays to choose from. |
| [`clip`](index.html#jax.numpy.clip)(a[, a_min, a_max, out]) | Clip (limit) the values in an array. |
| [`column_stack`](index.html#jax.numpy.column_stack)(tup) | Stack 1-D arrays as columns into a 2-D array. |
| [`complex_`](index.html#jax.numpy.complex_) | alias of [`complex128`](index.html#jax.numpy.complex128) |
| [`complex128`](index.html#jax.numpy.complex128)(x) | |
| [`complex64`](index.html#jax.numpy.complex64)(x) | |
| [`complexfloating`](index.html#jax.numpy.complexfloating)() | Abstract base class of all complex number scalar types that are made up of floating-point numbers. |
| [`ComplexWarning`](index.html#jax.numpy.ComplexWarning) | The warning raised when casting a complex dtype to a real dtype. |
| [`compress`](index.html#jax.numpy.compress)(condition, a[, axis, out]) | Return selected slices of an array along given axis. |
| [`concatenate`](index.html#jax.numpy.concatenate)(arrays[, axis, dtype]) | Join a sequence of arrays along an existing axis. |
| [`conj`](index.html#jax.numpy.conj)(x, /) | Return the complex conjugate, element-wise. |
| [`conjugate`](index.html#jax.numpy.conjugate)(x, /) | Return the complex conjugate, element-wise. |
| [`convolve`](index.html#jax.numpy.convolve)(a, v[, mode, precision, ...]) | Returns the discrete, linear convolution of two one-dimensional sequences. |
| [`copy`](index.html#jax.numpy.copy)(a[, order]) | Return an array copy of the given object. |
| [`copysign`](index.html#jax.numpy.copysign)(x1, x2, /) | Change the sign of x1 to that of x2, element-wise. |
| [`corrcoef`](index.html#jax.numpy.corrcoef)(x[, y, rowvar]) | Return Pearson product-moment correlation coefficients. |
| [`correlate`](index.html#jax.numpy.correlate)(a, v[, mode, precision, ...]) | Cross-correlation of two 1-dimensional sequences. |
| [`cos`](index.html#jax.numpy.cos)(x, /) | Cosine element-wise. |
| [`cosh`](index.html#jax.numpy.cosh)(x, /) | Hyperbolic cosine, element-wise. |
| [`count_nonzero`](index.html#jax.numpy.count_nonzero)(a[, axis, keepdims]) | Counts the number of non-zero values in the array `a`. |
| [`cov`](index.html#jax.numpy.cov)(m[, y, rowvar, bias, ddof, fweights, ...]) | Estimate a covariance matrix, given data and weights. |
| [`cross`](index.html#jax.numpy.cross)(a, b[, axisa, axisb, axisc, axis]) | Return the cross product of two (arrays of) vectors. |
| [`csingle`](index.html#jax.numpy.csingle) | alias of [`complex64`](index.html#jax.numpy.complex64) |
| [`cumprod`](index.html#jax.numpy.cumprod)(a[, axis, dtype, out]) | Return the cumulative product of elements along a given axis. |
| [`cumsum`](index.html#jax.numpy.cumsum)(a[, axis, dtype, out]) | Return the cumulative sum of the elements along a given axis. |
| [`deg2rad`](index.html#jax.numpy.deg2rad)(x, /) | Convert angles from degrees to radians. |
| [`degrees`](index.html#jax.numpy.degrees)(x, /) | Convert angles from radians to degrees. |
| [`delete`](index.html#jax.numpy.delete)(arr, obj[, axis, assume_unique_indices]) | Return a new array with sub-arrays along an axis deleted. |
| [`diag`](index.html#jax.numpy.diag)(v[, k]) | Extract a diagonal or construct a diagonal array. |
| [`diag_indices`](index.html#jax.numpy.diag_indices)(n[, ndim]) | Return the indices to access the main diagonal of an array. |
| [`diag_indices_from`](index.html#jax.numpy.diag_indices_from)(arr) | Return the indices to access the main diagonal of an n-dimensional array. |
| [`diagflat`](index.html#jax.numpy.diagflat)(v[, k]) | Create a two-dimensional array with the flattened input as a diagonal. |
| [`diagonal`](index.html#jax.numpy.diagonal)(a[, offset, axis1, axis2]) | Return specified diagonals. |
| [`diff`](index.html#jax.numpy.diff)(a[, n, axis, prepend, append]) | Calculate the n-th discrete difference along the given axis. |
| [`digitize`](index.html#jax.numpy.digitize)(x, bins[, right]) | Return the indices of the bins to which each value in input array belongs. |
| [`divide`](index.html#jax.numpy.divide)(x1, x2, /) | Divide arguments element-wise. |
| [`divmod`](index.html#jax.numpy.divmod)(x1, x2, /) | Return element-wise quotient and remainder simultaneously. |
| [`dot`](index.html#jax.numpy.dot)(a, b, *[, precision, preferred_element_type]) | Dot product of two arrays. |
| [`double`](index.html#jax.numpy.double) | alias of [`float64`](index.html#jax.numpy.float64) |
| [`dsplit`](index.html#jax.numpy.dsplit)(ary, indices_or_sections) | Split array into multiple sub-arrays along the 3rd axis (depth). |
| [`dstack`](index.html#jax.numpy.dstack)(tup[, dtype]) | Stack arrays in sequence depth wise (along third axis). |
| [`dtype`](index.html#jax.numpy.dtype)(dtype[, align, copy]) | Create a data type object. |
| [`ediff1d`](index.html#jax.numpy.ediff1d)(ary[, to_end, to_begin]) | The differences between consecutive elements of an array. |
| [`einsum`](index.html#jax.numpy.einsum)(subscripts, /, *operands[, out, ...]) | Evaluates the Einstein summation convention on the operands. |
| [`einsum_path`](index.html#jax.numpy.einsum_path)(subscripts, *operands[, optimize]) | Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays. |
| [`empty`](index.html#jax.numpy.empty)(shape[, dtype]) | Return a new array of given shape and type, without initializing entries. |
| [`empty_like`](index.html#jax.numpy.empty_like)(prototype[, dtype, shape]) | Return a new array with the same shape and type as a given array. |
| [`equal`](index.html#jax.numpy.equal)(x1, x2, /) | Return (x1 == x2) element-wise. |
| [`exp`](index.html#jax.numpy.exp)(x, /) | Calculate the exponential of all elements in the input array. |
| [`exp2`](index.html#jax.numpy.exp2)(x, /) | Calculate 2**p for all p in the input array. |
| [`expand_dims`](index.html#jax.numpy.expand_dims)(a, axis) | Expand the shape of an array. |
| [`expm1`](index.html#jax.numpy.expm1)(x, /) | Calculate `exp(x) - 1` for all elements in the array. |
| [`extract`](index.html#jax.numpy.extract)(condition, arr) | Return the elements of an array that satisfy some condition. |
| [`eye`](index.html#jax.numpy.eye)(N[, M, k, dtype]) | Return a 2-D array with ones on the diagonal and zeros elsewhere. |
| [`fabs`](index.html#jax.numpy.fabs)(x, /) | Compute the absolute values element-wise. |
| [`finfo`](index.html#jax.numpy.finfo)(dtype) | Machine limits for floating point types. |
| [`fix`](index.html#jax.numpy.fix)(x[, out]) | Round to nearest integer towards zero. |
| [`flatnonzero`](index.html#jax.numpy.flatnonzero)(a, *[, size, fill_value]) | Return indices that are non-zero in the flattened version of a. |
| [`flexible`](index.html#jax.numpy.flexible)() | Abstract base class of all scalar types without predefined length. |
| [`flip`](index.html#jax.numpy.flip)(m[, axis]) | Reverse the order of elements in an array along the given axis. |
| [`fliplr`](index.html#jax.numpy.fliplr)(m) | Reverse the order of elements along axis 1 (left/right). |
| [`flipud`](index.html#jax.numpy.flipud)(m) | Reverse the order of elements along axis 0 (up/down). |
| [`float_`](index.html#jax.numpy.float_) | alias of [`float64`](index.html#jax.numpy.float64) |
| [`float_power`](index.html#jax.numpy.float_power)(x1, x2, /) | First array elements raised to powers from second array, element-wise. |
| [`float16`](index.html#jax.numpy.float16)(x) | |
| [`float32`](index.html#jax.numpy.float32)(x) | |
| [`float64`](index.html#jax.numpy.float64)(x) | |
| [`floating`](index.html#jax.numpy.floating)() | Abstract base class of all floating-point scalar types. |
| [`floor`](index.html#jax.numpy.floor)(x, /) | Return the floor of the input, element-wise. |
| [`floor_divide`](index.html#jax.numpy.floor_divide)(x1, x2, /) | Return the largest integer smaller or equal to the division of the inputs. |
| [`fmax`](index.html#jax.numpy.fmax)(x1, x2) | Element-wise maximum of array elements. |
| [`fmin`](index.html#jax.numpy.fmin)(x1, x2) | Element-wise minimum of array elements. |
| [`fmod`](index.html#jax.numpy.fmod)(x1, x2, /) | Returns the element-wise remainder of division. |
| [`frexp`](index.html#jax.numpy.frexp)(x, /) | Decompose the elements of x into mantissa and twos exponent. |
| [`frombuffer`](index.html#jax.numpy.frombuffer)(buffer[, dtype, count, offset]) | Interpret a buffer as a 1-dimensional array. |
| [`fromfile`](index.html#jax.numpy.fromfile)(*args, **kwargs) | Unimplemented JAX wrapper for jnp.fromfile. |
| [`fromfunction`](index.html#jax.numpy.fromfunction)(function, shape, *[, dtype]) | Construct an array by executing a function over each coordinate. |
| [`fromiter`](index.html#jax.numpy.fromiter)(*args, **kwargs) | Unimplemented JAX wrapper for jnp.fromiter. |
| [`frompyfunc`](index.html#jax.numpy.frompyfunc)(func, /, nin, nout, *[, identity]) | Create a JAX ufunc from an arbitrary JAX-compatible scalar function. |
| [`fromstring`](index.html#jax.numpy.fromstring)(string[, dtype, count]) | A new 1-D array initialized from text data in a string. |
| [`from_dlpack`](index.html#jax.numpy.from_dlpack)(x) | Create a NumPy array from an object implementing the `__dlpack__` protocol. |
| [`full`](index.html#jax.numpy.full)(shape, fill_value[, dtype]) | Return a new array of given shape and type, filled with fill_value. |
| [`full_like`](index.html#jax.numpy.full_like)(a, fill_value[, dtype, shape]) | Return a full array with the same shape and type as a given array. |
| [`gcd`](index.html#jax.numpy.gcd)(x1, x2) | Returns the greatest common divisor of `|x1|` and `|x2|` |
| [`generic`](index.html#jax.numpy.generic)() | Base class for numpy scalar types. |
| [`geomspace`](index.html#jax.numpy.geomspace)(start, stop[, num, endpoint, ...]) | Return numbers spaced evenly on a log scale (a geometric progression). |
| [`get_printoptions`](index.html#jax.numpy.get_printoptions)() | Return the current print options. |
| [`gradient`](index.html#jax.numpy.gradient)(f, *varargs[, axis, edge_order]) | Return the gradient of an N-dimensional array. |
| [`greater`](index.html#jax.numpy.greater)(x1, x2, /) | Return the truth value of (x1 > x2) element-wise. |
| [`greater_equal`](index.html#jax.numpy.greater_equal)(x1, x2, /) | Return the truth value of (x1 >= x2) element-wise. |
| [`hamming`](index.html#jax.numpy.hamming)(M) | Return the Hamming window. |
| [`hanning`](index.html#jax.numpy.hanning)(M) | Return the Hanning window. |
| [`heaviside`](index.html#jax.numpy.heaviside)(x1, x2, /) | Compute the Heaviside step function. |
| [`histogram`](index.html#jax.numpy.histogram)(a[, bins, range, weights, density]) | Compute the histogram of a dataset. |
| [`histogram_bin_edges`](index.html#jax.numpy.histogram_bin_edges)(a[, bins, range, weights]) | Function to calculate only the edges of the bins used by the `histogram` function. |
| [`histogram2d`](index.html#jax.numpy.histogram2d)(x, y[, bins, range, weights, ...]) | Compute the bi-dimensional histogram of two data samples. |
| [`histogramdd`](index.html#jax.numpy.histogramdd)(sample[, bins, range, weights, ...]) | Compute the multidimensional histogram of some data. |
| [`hsplit`](index.html#jax.numpy.hsplit)(ary, indices_or_sections) | Split an array into multiple sub-arrays horizontally (column-wise). |
| [`hstack`](index.html#jax.numpy.hstack)(tup[, dtype]) | Stack arrays in sequence horizontally (column wise). |
| [`hypot`](index.html#jax.numpy.hypot)(x1, x2, /) | Given the "legs" of a right triangle, return its hypotenuse. |
| [`i0`](index.html#jax.numpy.i0)(x) | Modified Bessel function of the first kind, order 0. |
| [`identity`](index.html#jax.numpy.identity)(n[, dtype]) | Return the identity array. |
| [`iinfo`](index.html#jax.numpy.iinfo)(int_type) | |
| [`imag`](index.html#jax.numpy.imag)(val, /) | Return the imaginary part of the complex argument. |
| [`in1d`](index.html#jax.numpy.in1d)(ar1, ar2[, assume_unique, invert]) | Test whether each element of a 1-D array is also present in a second array. |
| [`index_exp`](index.html#jax.numpy.index_exp) | A nicer way to build up index tuples for arrays. |
| [`indices`](index.html#jax.numpy.indices)(dimensions[, dtype, sparse]) | Return an array representing the indices of a grid. |
| [`inexact`](index.html#jax.numpy.inexact)() | Abstract base class of all numeric scalar types with a (potentially) inexact representation of the values in its range, such as floating-point numbers. |
| [`inner`](index.html#jax.numpy.inner)(a, b, *[, precision, ...]) | Inner product of two arrays. |
| [`insert`](index.html#jax.numpy.insert)(arr, obj, values[, axis]) | Insert values along the given axis before the given indices. |
| [`int_`](index.html#jax.numpy.int_) | alias of [`int64`](index.html#jax.numpy.int64) |
| [`int16`](index.html#jax.numpy.int16)(x) | |
| [`int32`](index.html#jax.numpy.int32)(x) | |
| [`int64`](index.html#jax.numpy.int64)(x) | |
| [`int8`](index.html#jax.numpy.int8)(x) | |
| [`integer`](index.html#jax.numpy.integer)() | Abstract base class of all integer scalar types. |
| [`interp`](index.html#jax.numpy.interp)(x, xp, fp[, left, right, period]) | One-dimensional linear interpolation for monotonically increasing sample points. |
| [`intersect1d`](index.html#jax.numpy.intersect1d)(ar1, ar2[, assume_unique, ...]) | Find the intersection of two arrays. |
| [`invert`](index.html#jax.numpy.invert)(x, /) | Compute bit-wise inversion, or bit-wise NOT, element-wise. |
| [`isclose`](index.html#jax.numpy.isclose)(a, b[, rtol, atol, equal_nan]) | Returns a boolean array where two arrays are element-wise equal within a tolerance. |
| [`iscomplex`](index.html#jax.numpy.iscomplex)(x) | Returns a bool array, where True if input element is complex. |
| [`iscomplexobj`](index.html#jax.numpy.iscomplexobj)(x) | Check for a complex type or an array of complex numbers. |
| [`isfinite`](index.html#jax.numpy.isfinite)(x, /) | Test element-wise for finiteness (not infinity and not Not a Number). |
| [`isin`](index.html#jax.numpy.isin)(element, test_elements[, ...]) | Calculates `element in test_elements`, broadcasting over element only. |
| [`isinf`](index.html#jax.numpy.isinf)(x, /) | Test element-wise for positive or negative infinity. |
| [`isnan`](index.html#jax.numpy.isnan)(x, /) | Test element-wise for NaN and return result as a boolean array. |
| [`isneginf`](index.html#jax.numpy.isneginf)(x, /[, out]) | Test element-wise for negative infinity, return result as bool array. |
| [`isposinf`](index.html#jax.numpy.isposinf)(x, /[, out]) | Test element-wise for positive infinity, return result as bool array. |
| [`isreal`](index.html#jax.numpy.isreal)(x) | Returns a bool array, where True if input element is real. |
| [`isrealobj`](index.html#jax.numpy.isrealobj)(x) | Return True if x is a not complex type or an array of complex numbers. |
| [`isscalar`](index.html#jax.numpy.isscalar)(element) | Returns True if the type of element is a scalar type. |
| [`issubdtype`](index.html#jax.numpy.issubdtype)(arg1, arg2) | Returns True if first argument is a typecode lower/equal in type hierarchy. |
| [`iterable`](index.html#jax.numpy.iterable)(y) | Check whether or not an object can be iterated over. |
| [`ix_`](index.html#jax.numpy.ix_)(*args) | Construct an open mesh from multiple sequences. |
| [`kaiser`](index.html#jax.numpy.kaiser)(M, beta) | Return the Kaiser window. |
| [`kron`](index.html#jax.numpy.kron)(a, b) | Kronecker product of two arrays. |
| [`lcm`](index.html#jax.numpy.lcm)(x1, x2) | Returns the lowest common multiple of `|x1|` and `|x2|` |
| [`ldexp`](index.html#jax.numpy.ldexp)(x1, x2, /) | Returns x1 * 2**x2, element-wise. |
| [`left_shift`](index.html#jax.numpy.left_shift)(x1, x2, /) | Shift the bits of an integer to the left. |
| [`less`](index.html#jax.numpy.less)(x1, x2, /) | Return the truth value of (x1 < x2) element-wise. |
| [`less_equal`](index.html#jax.numpy.less_equal)(x1, x2, /) | Return the truth value of (x1 <= x2) element-wise. |
| [`lexsort`](index.html#jax.numpy.lexsort)(keys[, axis]) | Perform an indirect stable sort using a sequence of keys. |
| [`linspace`](index.html#jax.numpy.linspace)(start, stop[, num, endpoint, ...]) | Return evenly spaced numbers over a specified interval. |
| [`load`](index.html#jax.numpy.load)(*args, **kwargs) | Load arrays or pickled objects from `.npy`, `.npz` or pickled files. |
| [`log`](index.html#jax.numpy.log)(x, /) | Natural logarithm, element-wise. |
| [`log10`](index.html#jax.numpy.log10)(x, /) | Return the base 10 logarithm of the input array, element-wise. |
| [`log1p`](index.html#jax.numpy.log1p)(x, /) | Return the natural logarithm of one plus the input array, element-wise. |
| [`log2`](index.html#jax.numpy.log2)(x, /) | Base-2 logarithm of x. |
| [`logaddexp`](index.html#jax.numpy.logaddexp)(x1, x2, /) | Logarithm of the sum of exponentiations of the inputs. |
| [`logaddexp2`](index.html#jax.numpy.logaddexp2)(x1, x2, /) | Logarithm of the sum of exponentiations of the inputs in base-2. |
| [`logical_and`](index.html#jax.numpy.logical_and)(*args) | Compute the truth value of x1 AND x2 element-wise. |
| [`logical_not`](index.html#jax.numpy.logical_not)(*args) | Compute the truth value of NOT x element-wise. |
| [`logical_or`](index.html#jax.numpy.logical_or)(*args) | Compute the truth value of x1 OR x2 element-wise. |
| [`logical_xor`](index.html#jax.numpy.logical_xor)(*args) | Compute the truth value of x1 XOR x2, element-wise. |
| [`logspace`](index.html#jax.numpy.logspace)(start, stop[, num, endpoint, base, ...]) | Return numbers spaced evenly on a log scale. |
| [`mask_indices`](index.html#jax.numpy.mask_indices)(*args, **kwargs) | Return the indices to access (n, n) arrays, given a masking function. |
| [`matmul`](index.html#jax.numpy.matmul)(a, b, *[, precision, ...]) | Matrix product of two arrays. |
| [`matrix_transpose`](index.html#jax.numpy.matrix_transpose)(x, /) | Transposes the last two dimensions of x. |
| [`max`](index.html#jax.numpy.max)(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis. |
| [`maximum`](index.html#jax.numpy.maximum)(x1, x2, /) | Element-wise maximum of array elements. |
| [`mean`](index.html#jax.numpy.mean)(a[, axis, dtype, out, keepdims, where]) | Compute the arithmetic mean along the specified axis. |
| [`median`](index.html#jax.numpy.median)(a[, axis, out, overwrite_input, keepdims]) | Compute the median along the specified axis. |
| [`meshgrid`](index.html#jax.numpy.meshgrid)(*xi[, copy, sparse, indexing]) | Return a list of coordinate matrices from coordinate vectors. |
| [`mgrid`](index.html#jax.numpy.mgrid) | Return dense multi-dimensional "meshgrid". |
| [`min`](index.html#jax.numpy.min)(a[, axis, out, keepdims, initial, where]) | Return the minimum of an array or minimum along an axis. |
| [`minimum`](index.html#jax.numpy.minimum)(x1, x2, /) | Element-wise minimum of array elements. |
| [`mod`](index.html#jax.numpy.mod)(x1, x2, /) | Returns the element-wise remainder of division. |
| [`modf`](index.html#jax.numpy.modf)(x, /[, out]) | Return the fractional and integral parts of an array, element-wise. |
| [`moveaxis`](index.html#jax.numpy.moveaxis)(a, source, destination) | Move axes of an array to new positions. |
| [`multiply`](index.html#jax.numpy.multiply)(x1, x2, /) | Multiply arguments element-wise. |
| [`nan_to_num`](index.html#jax.numpy.nan_to_num)(x[, copy, nan, posinf, neginf]) | Replace NaN with zero and infinity with large finite numbers (default behaviour). |
| [`nanargmax`](index.html#jax.numpy.nanargmax)(a[, axis, out, keepdims]) | Return the indices of the maximum values in the specified axis, ignoring NaNs. |
| [`nanargmin`](index.html#jax.numpy.nanargmin)(a[, axis, out, keepdims]) | Return the indices of the minimum values in the specified axis, ignoring NaNs. |
| [`nancumprod`](index.html#jax.numpy.nancumprod)(a[, axis, dtype, out]) | Return the cumulative product of array elements over a given axis, treating NaNs as one. |
| [`nancumsum`](index.html#jax.numpy.nancumsum)(a[, axis, dtype, out]) | Return the cumulative sum of array elements over a given axis, treating NaNs as zero. |
| [`nanmax`](index.html#jax.numpy.nanmax)(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis, ignoring any NaNs. |
| [`nanmean`](index.html#jax.numpy.nanmean)(a[, axis, dtype, out, keepdims, where]) | Compute the arithmetic mean along the specified axis, ignoring NaNs. |
| [`nanmedian`](index.html#jax.numpy.nanmedian)(a[, axis, out, overwrite_input, ...]) | Compute the median along the specified axis, while ignoring NaNs. |
| [`nanmin`](index.html#jax.numpy.nanmin)(a[, axis, out, keepdims, initial, where]) | Return minimum of an array or minimum along an axis, ignoring any NaNs. |
| [`nanpercentile`](index.html#jax.numpy.nanpercentile)(a, q[, axis, out, ...]) | Compute the qth percentile of the data along the specified axis, ignoring NaN values. |
| [`nanprod`](index.html#jax.numpy.nanprod)(a[, axis, dtype, out, keepdims, ...]) | Return the product of array elements over a given axis, treating NaNs as one. |
| [`nanquantile`](index.html#jax.numpy.nanquantile)(a, q[, axis, out, ...]) | Compute the qth quantile of the data along the specified axis, ignoring NaN values. |
| [`nanstd`](index.html#jax.numpy.nanstd)(a[, axis, dtype, out, ddof, ...]) | Compute the standard deviation along the specified axis, while ignoring NaNs. |
| [`nansum`](index.html#jax.numpy.nansum)(a[, axis, dtype, out, keepdims, ...]) | Return the sum of array elements over a given axis, treating NaNs as zero. |
| [`nanvar`](index.html#jax.numpy.nanvar)(a[, axis, dtype, out, ddof, ...]) | Compute the variance along the specified axis, while ignoring NaNs. |
| [`ndarray`](index.html#jax.numpy.ndarray) | alias of [`Array`](index.html#jax.Array) |
| [`ndim`](index.html#jax.numpy.ndim)(a) | Return the number of dimensions of an array. |
| [`negative`](index.html#jax.numpy.negative)(x, /) | Numerical negative, element-wise. |
| [`nextafter`](index.html#jax.numpy.nextafter)(x1, x2, /) | Return the next floating-point value after x1 towards x2, element-wise. |
| [`nonzero`](index.html#jax.numpy.nonzero)(a, *[, size, fill_value]) | Return the indices of the elements that are non-zero. |
| [`not_equal`](index.html#jax.numpy.not_equal)(x1, x2, /) | Return (x1 != x2) element-wise. |
| [`number`](index.html#jax.numpy.number)() | Abstract base class of all numeric scalar types. |
| [`object_`](index.html#jax.numpy.object_) | Any Python object. |
| [`ogrid`](index.html#jax.numpy.ogrid) | Return open multi-dimensional "meshgrid". |
| [`ones`](index.html#jax.numpy.ones)(shape[, dtype]) | Return a new array of given shape and type, filled with ones. |
| [`ones_like`](index.html#jax.numpy.ones_like)(a[, dtype, shape]) | Return an array of ones with the same shape and type as a given array. |
| [`outer`](index.html#jax.numpy.outer)(a, b[, out]) | Compute the outer product of two vectors. |
| [`packbits`](index.html#jax.numpy.packbits)(a[, axis, bitorder]) | Packs the elements of a binary-valued array into bits in a uint8 array. |
| [`pad`](index.html#jax.numpy.pad)(array, pad_width[, mode]) | Pad an array. |
| [`partition`](index.html#jax.numpy.partition)(a, kth[, axis]) | Return a partitioned copy of an array. |
| [`percentile`](index.html#jax.numpy.percentile)(a, q[, axis, out, ...]) | Compute the q-th percentile of the data along the specified axis. |
| [`piecewise`](index.html#jax.numpy.piecewise)(x, condlist, funclist, *args, **kw) | Evaluate a piecewise-defined function. |
| [`place`](index.html#jax.numpy.place)(arr, mask, vals, *[, inplace]) | Change elements of an array based on conditional and input values. |
| [`poly`](index.html#jax.numpy.poly)(seq_of_zeros) | Find the coefficients of a polynomial with the given sequence of roots. |
| [`polyadd`](index.html#jax.numpy.polyadd)(a1, a2) | Find the sum of two polynomials. |
| [`polyder`](index.html#jax.numpy.polyder)(p[, m]) | Return the derivative of the specified order of a polynomial. |
| [`polydiv`](index.html#jax.numpy.polydiv)(u, v, *[, trim_leading_zeros]) | Returns the quotient and remainder of polynomial division. |
| [`polyfit`](index.html#jax.numpy.polyfit)(x, y, deg[, rcond, full, w, cov]) | Least squares polynomial fit. |
| [`polyint`](index.html#jax.numpy.polyint)(p[, m, k]) | Return an antiderivative (indefinite integral) of a polynomial. |
| [`polymul`](index.html#jax.numpy.polymul)(a1, a2, *[, trim_leading_zeros]) | Find the product of two polynomials. |
| [`polysub`](index.html#jax.numpy.polysub)(a1, a2) | Difference (subtraction) of two polynomials. |
| [`polyval`](index.html#jax.numpy.polyval)(p, x, *[, unroll]) | Evaluate a polynomial at specific values. |
| [`positive`](index.html#jax.numpy.positive)(x, /) | Numerical positive, element-wise. |
| [`power`](index.html#jax.numpy.power)(x1, x2, /) | First array elements raised to powers from second array, element-wise. |
| [`printoptions`](index.html#jax.numpy.printoptions)(*args, **kwargs) | Context manager for setting print options. |
| [`prod`](index.html#jax.numpy.prod)(a[, axis, dtype, out, keepdims, ...]) | Return the product of array elements over a given axis. |
| [`promote_types`](index.html#jax.numpy.promote_types)(a, b) | Returns the type to which a binary operation should cast its arguments. |
| [`ptp`](index.html#jax.numpy.ptp)(a[, axis, out, keepdims]) | Range of values (maximum - minimum) along an axis. |
| [`put`](index.html#jax.numpy.put)(a, ind, v[, mode, inplace]) | Replaces specified elements of an array with given values. |
| [`quantile`](index.html#jax.numpy.quantile)(a, q[, axis, out, overwrite_input, ...]) | Compute the q-th quantile of the data along the specified axis. |
| [`r_`](index.html#jax.numpy.r_) | Concatenate slices, scalars and array-like objects along the first axis. |
| [`rad2deg`](index.html#jax.numpy.rad2deg)(x, /) | Convert angles from radians to degrees. |
| [`radians`](index.html#jax.numpy.radians)(x, /) | Convert angles from degrees to radians. |
| [`ravel`](index.html#jax.numpy.ravel)(a[, order]) | Return a contiguous flattened array. |
| [`ravel_multi_index`](index.html#jax.numpy.ravel_multi_index)(multi_index, dims[, mode, ...]) | Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index. |
| [`real`](index.html#jax.numpy.real)(val, /) | Return the real part of the complex argument. |
| [`reciprocal`](index.html#jax.numpy.reciprocal)(x, /) | Return the reciprocal of the argument, element-wise. |
| [`remainder`](index.html#jax.numpy.remainder)(x1, x2, /) | Returns the element-wise remainder of division. |
| [`repeat`](index.html#jax.numpy.repeat)(a, repeats[, axis, total_repeat_length]) | Repeat each element of an array after itself. |
| [`reshape`](index.html#jax.numpy.reshape)(a, newshape[, order]) | Gives a new shape to an array without changing its data. |
| [`resize`](index.html#jax.numpy.resize)(a, new_shape) | Return a new array with the specified shape. |
| [`result_type`](index.html#jax.numpy.result_type)(*args) | Returns the type that results from applying the NumPy type promotion rules to the arguments. |
| [`right_shift`](index.html#jax.numpy.right_shift)(x1, x2, /) | Shift the bits of an integer to the right. |
| [`rint`](index.html#jax.numpy.rint)(x, /) | Round elements of the array to the nearest integer. |
| [`roll`](index.html#jax.numpy.roll)(a, shift[, axis]) | Roll array elements along a given axis. |
| [`rollaxis`](index.html#jax.numpy.rollaxis)(a, axis[, start]) | Roll the specified axis backwards, until it lies in a given position. |
| [`roots`](index.html#jax.numpy.roots)(p, *[, strip_zeros]) | Return the roots of a polynomial with coefficients given in p. |
| [`rot90`](index.html#jax.numpy.rot90)(m[, k, axes]) | Rotate an array by 90 degrees in the plane specified by axes. |
| [`round`](index.html#jax.numpy.round)(a[, decimals, out]) | Round an array to the given number of decimals. |
| [`round_`](index.html#jax.numpy.round_)(a[, decimals, out]) | Round an array to the given number of decimals. |
| [`s_`](index.html#jax.numpy.s_) | A nicer way to build up index tuples for arrays. |
| [`save`](index.html#jax.numpy.save)(file, arr[, allow_pickle, fix_imports]) | Save an array to a binary file in NumPy `.npy` format. |
| [`savez`](index.html#jax.numpy.savez)(file, *args, **kwds) | Save several arrays into a single file in uncompressed `.npz` format. |
| [`searchsorted`](index.html#jax.numpy.searchsorted)(a, v[, side, sorter, method]) | Find indices where elements should be inserted to maintain order. |
| [`select`](index.html#jax.numpy.select)(condlist, choicelist[, default]) | Return an array drawn from elements in choicelist, depending on conditions. |
| [`set_printoptions`](index.html#jax.numpy.set_printoptions)([precision, threshold, ...]) | Set printing options. |
| [`setdiff1d`](index.html#jax.numpy.setdiff1d)(ar1, ar2[, assume_unique, size, ...]) | Find the set difference of two arrays. |
| [`setxor1d`](index.html#jax.numpy.setxor1d)(ar1, ar2[, assume_unique]) | Find the set exclusive-or of two arrays. |
| [`shape`](index.html#jax.numpy.shape)(a) | Return the shape of an array. |
| [`sign`](index.html#jax.numpy.sign)(x, /) | Returns an element-wise indication of the sign of a number. |
| [`signbit`](index.html#jax.numpy.signbit)(x, /) | Returns element-wise True where signbit is set (less than zero). |
| [`signedinteger`](index.html#jax.numpy.signedinteger)() | Abstract base class of all signed integer scalar types. |
| [`sin`](index.html#jax.numpy.sin)(x, /) | Trigonometric sine, element-wise. |
| [`sinc`](index.html#jax.numpy.sinc)(x, /) | Return the normalized sinc function. |
| [`single`](index.html#jax.numpy.single) | alias of [`float32`](index.html#jax.numpy.float32) |
| [`sinh`](index.html#jax.numpy.sinh)(x, /) | Hyperbolic sine, element-wise. |
| [`size`](index.html#jax.numpy.size)(a[, axis]) | Return the number of elements along a given axis. |
| [`sort`](index.html#jax.numpy.sort)(a[, axis, kind, order]) | Return a sorted copy of an array. |
| [`sort_complex`](index.html#jax.numpy.sort_complex)(a) | Sort a complex array using the real part first, then the imaginary part. |
| [`split`](index.html#jax.numpy.split)(ary, indices_or_sections[, axis]) | Split an array into multiple sub-arrays as views into ary. |
| [`sqrt`](index.html#jax.numpy.sqrt)(x, /) | Return the non-negative square-root of an array, element-wise. |
| [`square`](index.html#jax.numpy.square)(x, /) | Return the element-wise square of the input. |
| [`squeeze`](index.html#jax.numpy.squeeze)(a[, axis]) | Remove axes of length one from a. |
| [`stack`](index.html#jax.numpy.stack)(arrays[, axis, out, dtype]) | Join a sequence of arrays along a new axis. |
| [`std`](index.html#jax.numpy.std)(a[, axis, dtype, out, ddof, keepdims, where]) | Compute the standard deviation along the specified axis. |
| [`subtract`](index.html#jax.numpy.subtract)(x1, x2, /) | Subtract arguments, element-wise. |
| [`sum`](index.html#jax.numpy.sum)(a[, axis, dtype, out, keepdims, ...]) | Sum of array elements over a given axis. |
| [`swapaxes`](index.html#jax.numpy.swapaxes)(a, axis1, axis2) | Interchange two axes of an array. |
| [`take`](index.html#jax.numpy.take)(a, indices[, axis, out, mode, ...]) | Take elements from an array along an axis. |
| [`take_along_axis`](index.html#jax.numpy.take_along_axis)(arr, indices, axis[, mode]) | Take values from the input array by matching 1d index and data slices. |
| [`tan`](index.html#jax.numpy.tan)(x, /) | Compute tangent element-wise. |
| [`tanh`](index.html#jax.numpy.tanh)(x, /) | Compute hyperbolic tangent element-wise. |
| [`tensordot`](index.html#jax.numpy.tensordot)(a, b[, axes, precision, ...]) | Compute tensor dot product along specified axes. |
| [`tile`](index.html#jax.numpy.tile)(A, reps) | Construct an array by repeating A the number of times given by reps. |
| [`trace`](index.html#jax.numpy.trace)(a[, offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. |
| [`transpose`](index.html#jax.numpy.transpose)(a[, axes]) | Returns an array with axes transposed. |
| [`trapz`](index.html#jax.numpy.trapz)(y[, x, dx, axis]) | Integrate along the given axis using the composite trapezoidal rule. |
| [`tri`](index.html#jax.numpy.tri)(N[, M, k, dtype]) | An array with ones at and below the given diagonal and zeros elsewhere. |
| [`tril`](index.html#jax.numpy.tril)(m[, k]) | Lower triangle of an array. |
| [`tril_indices`](index.html#jax.numpy.tril_indices)(n[, k, m]) | Return the indices for the lower-triangle of an (n, m) array. |
| [`tril_indices_from`](index.html#jax.numpy.tril_indices_from)(arr[, k]) | Return the indices for the lower-triangle of arr. |
| [`trim_zeros`](index.html#jax.numpy.trim_zeros)(filt[, trim]) | Trim the leading and/or trailing zeros from a 1-D array or sequence. |
| [`triu`](index.html#jax.numpy.triu)(m[, k]) | Upper triangle of an array. |
| [`triu_indices`](index.html#jax.numpy.triu_indices)(n[, k, m]) | Return the indices for the upper-triangle of an (n, m) array. |
| [`triu_indices_from`](index.html#jax.numpy.triu_indices_from)(arr[, k]) | Return the indices for the upper-triangle of arr. |
| [`true_divide`](index.html#jax.numpy.true_divide)(x1, x2, /) | Divide arguments element-wise. |
| [`trunc`](index.html#jax.numpy.trunc)(x) | Return the truncated value of the input, element-wise. |
| [`ufunc`](index.html#jax.numpy.ufunc)(func, /, nin, nout, *[, name, nargs, ...]) | Functions that operate element-by-element on whole arrays. |
| [`uint`](index.html#jax.numpy.uint) | alias of [`uint64`](index.html#jax.numpy.uint64) |
| [`uint16`](index.html#jax.numpy.uint16)(x) | |
| [`uint32`](index.html#jax.numpy.uint32)(x) | |
| [`uint64`](index.html#jax.numpy.uint64)(x) | |
| [`uint8`](index.html#jax.numpy.uint8)(x) | |
| [`union1d`](index.html#jax.numpy.union1d)(ar1, ar2, *[, size, fill_value]) | Find the union of two arrays. |
| [`unique`](index.html#jax.numpy.unique)(ar[, return_index, return_inverse, ...]) | Find the unique elements of an array. |
| [`unpackbits`](index.html#jax.numpy.unpackbits)(a[, axis, count, bitorder]) | Unpacks elements of a uint8 array into a binary-valued output array. |
| [`unravel_index`](index.html#jax.numpy.unravel_index)(indices, shape) | Converts a flat index or array of flat indices into a tuple of coordinate arrays. |
| [`unsignedinteger`](index.html#jax.numpy.unsignedinteger)() | Abstract base class of all unsigned integer scalar types. |
| [`unwrap`](index.html#jax.numpy.unwrap)(p[, discont, axis, period]) | Unwrap by taking the complement of large deltas with respect to the period. |
| [`vander`](index.html#jax.numpy.vander)(x[, N, increasing]) | Generate a Vandermonde matrix. |
| [`var`](index.html#jax.numpy.var)(a[, axis, dtype, out, ddof, keepdims, where]) | Compute the variance along the specified axis. |
| [`vdot`](index.html#jax.numpy.vdot)(a, b, *[, precision, ...]) | Return the dot product of two vectors. |
| [`vectorize`](index.html#jax.numpy.vectorize)(pyfunc, *[, excluded, signature]) | Define a vectorized function with broadcasting. |
| [`vsplit`](index.html#jax.numpy.vsplit)(ary, indices_or_sections) | Split an array into multiple sub-arrays vertically (row-wise). |
| [`vstack`](index.html#jax.numpy.vstack)(tup[, dtype]) | Stack arrays in sequence vertically (row wise). |
| [`where`](index.html#jax.numpy.where)(condition[, x, y, size, fill_value]) | Return elements chosen from x or y depending on condition. |
| [`zeros`](index.html#jax.numpy.zeros)(shape[, dtype]) | Return a new array of given shape and type, filled with zeros. |
| [`zeros_like`](index.html#jax.numpy.zeros_like)(a[, dtype, shape]) | Return an array of zeros with the same shape and type as a given array. |
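For orientation, a minimal sketch exercising a few of the functions listed above; the variable names are illustrative and the values in the comments show the expected results:

```python
import jax.numpy as jnp

x = jnp.arange(6.0).reshape(2, 3)    # [[0. 1. 2.], [3. 4. 5.]]
col_sums = jnp.sum(x, axis=0)        # column sums: [3. 5. 7.]
clipped = jnp.where(x > 2, x, 0.0)   # elementwise select: [[0. 0. 0.], [3. 4. 5.]]
```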
##### jax.numpy.fft[#](#module-jax.numpy.fft)
| | |
| --- | --- |
| [`fft`](index.html#jax.numpy.fft.fft)(a[, n, axis, norm]) | Compute the one-dimensional discrete Fourier Transform. |
| [`fft2`](index.html#jax.numpy.fft.fft2)(a[, s, axes, norm]) | Compute the 2-dimensional discrete Fourier Transform. |
| [`fftfreq`](index.html#jax.numpy.fft.fftfreq)(n[, d, dtype]) | Return the Discrete Fourier Transform sample frequencies. |
| [`fftn`](index.html#jax.numpy.fft.fftn)(a[, s, axes, norm]) | Compute the N-dimensional discrete Fourier Transform. |
| [`fftshift`](index.html#jax.numpy.fft.fftshift)(x[, axes]) | Shift the zero-frequency component to the center of the spectrum. |
| [`hfft`](index.html#jax.numpy.fft.hfft)(a[, n, axis, norm]) | Compute the FFT of a signal that has Hermitian symmetry, i.e., a real spectrum. |
| [`ifft`](index.html#jax.numpy.fft.ifft)(a[, n, axis, norm]) | Compute the one-dimensional inverse discrete Fourier Transform. |
| [`ifft2`](index.html#jax.numpy.fft.ifft2)(a[, s, axes, norm]) | Compute the 2-dimensional inverse discrete Fourier Transform. |
| [`ifftn`](index.html#jax.numpy.fft.ifftn)(a[, s, axes, norm]) | Compute the N-dimensional inverse discrete Fourier Transform. |
| [`ifftshift`](index.html#jax.numpy.fft.ifftshift)(x[, axes]) | The inverse of fftshift. |
| [`ihfft`](index.html#jax.numpy.fft.ihfft)(a[, n, axis, norm]) | Compute the inverse FFT of a signal that has Hermitian symmetry. |
| [`irfft`](index.html#jax.numpy.fft.irfft)(a[, n, axis, norm]) | Computes the inverse of rfft. |
| [`irfft2`](index.html#jax.numpy.fft.irfft2)(a[, s, axes, norm]) | Computes the inverse of rfft2. |
| [`irfftn`](index.html#jax.numpy.fft.irfftn)(a[, s, axes, norm]) | Computes the inverse of rfftn. |
| [`rfft`](index.html#jax.numpy.fft.rfft)(a[, n, axis, norm]) | Compute the one-dimensional discrete Fourier Transform for real input. |
| [`rfft2`](index.html#jax.numpy.fft.rfft2)(a[, s, axes, norm]) | Compute the 2-dimensional FFT of a real array. |
| [`rfftfreq`](index.html#jax.numpy.fft.rfftfreq)(n[, d, dtype]) | Return the Discrete Fourier Transform sample frequencies (for usage with rfft, irfft). |
| [`rfftn`](index.html#jax.numpy.fft.rfftn)(a[, s, axes, norm]) | Compute the N-dimensional discrete Fourier Transform for real input. |
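As a quick usage sketch (assuming a real-valued input, for which the one-sided `rfft`/`irfft` pair applies), a signal round-trips through its spectrum:

```python
import jax.numpy as jnp

t = jnp.linspace(0.0, 1.0, 8, endpoint=False)
signal = jnp.sin(2 * jnp.pi * t)           # one period of a sine wave
spectrum = jnp.fft.rfft(signal)            # one-sided spectrum of a real signal
recovered = jnp.fft.irfft(spectrum, n=8)   # approximately equal to `signal`
```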
##### jax.numpy.linalg[#](#module-jax.numpy.linalg)
| | |
| --- | --- |
| [`cholesky`](index.html#jax.numpy.linalg.cholesky)(a) | Cholesky decomposition. |
| [`cond`](index.html#jax.numpy.linalg.cond)(x[, p]) | Compute the condition number of a matrix. |
| [`det`](index.html#jax.numpy.linalg.det)(a) | Compute the determinant of an array. |
| [`eig`](index.html#jax.numpy.linalg.eig)(a) | Compute the eigenvalues and right eigenvectors of a square array. |
| [`eigh`](index.html#jax.numpy.linalg.eigh)(a[, UPLO, symmetrize_input]) | Return the eigenvalues and eigenvectors of a complex Hermitian or real symmetric matrix. |
| [`eigvals`](index.html#jax.numpy.linalg.eigvals)(a) | Compute the eigenvalues of a general matrix. |
| [`eigvalsh`](index.html#jax.numpy.linalg.eigvalsh)(a[, UPLO]) | Compute the eigenvalues of a complex Hermitian or real symmetric matrix. |
| [`inv`](index.html#jax.numpy.linalg.inv)(a) | Compute the (multiplicative) inverse of a matrix. |
| [`lstsq`](index.html#jax.numpy.linalg.lstsq)(a, b[, rcond, numpy_resid]) | Return the least-squares solution to a linear matrix equation. |
| [`matrix_power`](index.html#jax.numpy.linalg.matrix_power)(a, n) | Raise a square matrix to the (integer) power n. |
| [`matrix_rank`](index.html#jax.numpy.linalg.matrix_rank)(M[, tol]) | Return matrix rank of array using the SVD method. |
| [`multi_dot`](index.html#jax.numpy.linalg.multi_dot)(arrays, *[, precision]) | Compute the dot product of two or more arrays in a single function call, automatically selecting the fastest evaluation order. |
| [`norm`](index.html#jax.numpy.linalg.norm)(x[, ord, axis, keepdims]) | Matrix or vector norm. |
| [`pinv`](index.html#jax.numpy.linalg.pinv)(a[, rcond, hermitian]) | Compute the (Moore-Penrose) pseudo-inverse of a matrix. |
| [`qr`](index.html#jax.numpy.linalg.qr)(a[, mode]) | Compute the qr factorization of a matrix. |
| [`slogdet`](index.html#jax.numpy.linalg.slogdet)(a, *[, method]) | Compute the sign and (natural) logarithm of the determinant of an array. |
| [`solve`](index.html#jax.numpy.linalg.solve)(a, b) | Solve a linear matrix equation, or system of linear scalar equations. |
| [`svd`](index.html#jax.numpy.linalg.svd)(a[, full_matrices, compute_uv, hermitian]) | Singular Value Decomposition. |
| [`tensorinv`](index.html#jax.numpy.linalg.tensorinv)(a[, ind]) | Compute the 'inverse' of an N-dimensional array. |
| [`tensorsolve`](index.html#jax.numpy.linalg.tensorsolve)(a, b[, axes]) | Solve the tensor equation `a x = b` for x. |
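A minimal sketch of solving a small linear system with the functions above (the matrix and right-hand side are illustrative):

```python
import jax.numpy as jnp

a = jnp.array([[3.0, 1.0],
               [1.0, 2.0]])
b = jnp.array([9.0, 8.0])
x = jnp.linalg.solve(a, b)             # solves a @ x = b
residual = jnp.linalg.norm(a @ x - b)  # close to zero
```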
##### JAX Array[#](#jax-array)
The JAX [`Array`](index.html#jax.Array) (along with its alias, [`jax.numpy.ndarray`](index.html#jax.numpy.ndarray)) is the core array object in JAX: you can think of it as JAX’s equivalent of a
[`numpy.ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray). Like [`numpy.ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray), most users will not need to instantiate [`Array`](index.html#jax.Array) objects manually, but rather will create them via
[`jax.numpy`](#module-jax.numpy) functions like [`array()`](index.html#jax.numpy.array), [`arange()`](index.html#jax.numpy.arange),
[`linspace()`](index.html#jax.numpy.linspace), and others listed above.
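For example (a minimal sketch; the printed dtype assumes the default 32-bit mode):

```python
import jax
import jax.numpy as jnp

x = jnp.linspace(0.0, 1.0, 5)     # created via a jax.numpy function
print(isinstance(x, jax.Array))   # True
print(x.dtype, x.shape)           # float32 (5,)
```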
###### Copying and Serialization[#](#copying-and-serialization)
JAX [`Array`](index.html#jax.Array) objects are designed to work seamlessly with Python standard library tools where appropriate.
With the built-in [`copy`](https://docs.python.org/3/library/copy.html#module-copy) module, when [`copy.copy()`](https://docs.python.org/3/library/copy.html#copy.copy) or [`copy.deepcopy()`](https://docs.python.org/3/library/copy.html#copy.deepcopy)
encounters an [`Array`](index.html#jax.Array), it is equivalent to calling the
`copy()` method, which creates a copy of the buffer on the same device as the original array. This works correctly within traced/JIT-compiled code, though copy operations may be elided by the compiler in this context.
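A minimal sketch of the copy behavior described above:

```python
import copy
import jax.numpy as jnp

x = jnp.arange(4)
y = copy.deepcopy(x)                 # same values, new buffer on the same device
print(x is y, bool((x == y).all()))  # False True
```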
When the built-in [`pickle`](https://docs.python.org/3/library/pickle.html#module-pickle) module encounters an [`Array`](index.html#jax.Array),
it will be serialized via a compact bit representation in a similar manner to pickled
[`numpy.ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray) objects. When unpickled, the result will be a new
[`Array`](index.html#jax.Array) object *on the default device.*
This is because in general, pickling and unpickling may take place in different runtime environments, and there is no general way to map the device IDs of one runtime to the device IDs of another. If [`pickle`](https://docs.python.org/3/library/pickle.html#module-pickle) is used in traced/JIT-compiled code,
it will result in a [`ConcretizationTypeError`](index.html#jax.errors.ConcretizationTypeError).
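A minimal sketch of the pickling behavior described above (outside of traced code):

```python
import pickle
import jax.numpy as jnp

x = jnp.arange(4)
y = pickle.loads(pickle.dumps(x))  # a new Array on the default device
print(bool((x == y).all()))        # True
```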
#### `jax.scipy` module[#](#jax-scipy-module)
##### jax.scipy.fft[#](#module-jax.scipy.fft)
| | |
| --- | --- |
| [`dct`](index.html#jax.scipy.fft.dct)(x[, type, n, axis, norm]) | Return the Discrete Cosine Transform of arbitrary type sequence x. |
| [`dctn`](index.html#jax.scipy.fft.dctn)(x[, type, s, axes, norm]) | Return multidimensional Discrete Cosine Transform along the specified axes. |
| [`idct`](index.html#jax.scipy.fft.idct)(x[, type, n, axis, norm]) | Return the Inverse Discrete Cosine Transform of arbitrary type sequence x. |
| [`idctn`](index.html#jax.scipy.fft.idctn)(x[, type, s, axes, norm]) | Return multidimensional Inverse Discrete Cosine Transform along the specified axes. |
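A minimal round-trip sketch with the orthonormal type-II DCT (the default transform type):

```python
import jax.numpy as jnp
from jax.scipy import fft

x = jnp.array([1.0, 2.0, 3.0, 4.0])
c = fft.dct(x, norm='ortho')        # type-II DCT coefficients
x_back = fft.idct(c, norm='ortho')  # approximately equal to x
```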
##### jax.scipy.integrate[#](#module-jax.scipy.integrate)
| | |
| --- | --- |
| [`trapezoid`](index.html#jax.scipy.integrate.trapezoid)(y[, x, dx, axis]) | Integrate along the given axis using the composite trapezoidal rule. |
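For example, approximating the integral of sin on [0, π], whose exact value is 2:

```python
import jax.numpy as jnp
from jax.scipy.integrate import trapezoid

x = jnp.linspace(0.0, jnp.pi, 101)
approx = trapezoid(jnp.sin(x), x)   # close to 2.0
```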
##### jax.scipy.linalg[#](#module-jax.scipy.linalg)
| | |
| --- | --- |
| [`block_diag`](index.html#jax.scipy.linalg.block_diag)(*arrs) | Create a block diagonal matrix from provided arrays. |
| [`cho_factor`](index.html#jax.scipy.linalg.cho_factor)(a[, lower, overwrite_a, check_finite]) | Compute the Cholesky decomposition of a matrix, for use in cho_solve. |
| [`cho_solve`](index.html#jax.scipy.linalg.cho_solve)(c_and_lower, b[, overwrite_b, ...]) | Solve the linear equations A x = b, given the Cholesky factorization of A. |
| [`cholesky`](index.html#jax.scipy.linalg.cholesky)(a[, lower, overwrite_a, check_finite]) | Compute the Cholesky decomposition of a matrix. |
| [`det`](index.html#jax.scipy.linalg.det)(a[, overwrite_a, check_finite]) | Compute the determinant of a matrix. |
| [`eigh`](index.html#jax.scipy.linalg.eigh)(a[, b, lower, eigvals_only, ...]) | Solve a standard or generalized eigenvalue problem for a complex Hermitian or real symmetric matrix. |
| [`eigh_tridiagonal`](index.html#jax.scipy.linalg.eigh_tridiagonal)(d, e, *[, eigvals_only, ...]) | Solve eigenvalue problem for a real symmetric tridiagonal matrix. |
| [`expm`](index.html#jax.scipy.linalg.expm)(A, *[, upper_triangular, max_squarings]) | Compute the matrix exponential of an array. |
| [`expm_frechet`](index.html#jax.scipy.linalg.expm_frechet)(A, E, *[, method, compute_expm]) | Frechet derivative of the matrix exponential of A in the direction E. |
| [`funm`](index.html#jax.scipy.linalg.funm)(A, func[, disp]) | Evaluate a matrix function specified by a callable. |
| [`hessenberg`](index.html#jax.scipy.linalg.hessenberg)(a, *[, calc_q, overwrite_a, ...]) | Compute Hessenberg form of a matrix. |
| [`inv`](index.html#jax.scipy.linalg.inv)(a[, overwrite_a, check_finite]) | Compute the inverse of a matrix. |
| [`lu`](index.html#jax.scipy.linalg.lu)(a[, permute_l, overwrite_a, check_finite]) | Compute LU decomposition of a matrix with partial pivoting. |
| [`lu_factor`](index.html#jax.scipy.linalg.lu_factor)(a[, overwrite_a, check_finite]) | Compute pivoted LU decomposition of a matrix. |
| [`lu_solve`](index.html#jax.scipy.linalg.lu_solve)(lu_and_piv, b[, trans, ...]) | Solve an equation system, a x = b, given the LU factorization of a. |
| [`polar`](index.html#jax.scipy.linalg.polar)(a[, side, method, eps, max_iterations]) | Computes the polar decomposition. |
| [`qr`](index.html#jax.scipy.linalg.qr)(a[, overwrite_a, lwork, mode, pivoting, ...]) | Compute QR decomposition of a matrix. |
| [`rsf2csf`](index.html#jax.scipy.linalg.rsf2csf)(T, Z[, check_finite]) | Convert real Schur form to complex Schur form. |
| [`schur`](index.html#jax.scipy.linalg.schur)(a[, output]) | Compute Schur decomposition of a matrix. |
| [`solve`](index.html#jax.scipy.linalg.solve)(a, b[, sym_pos, lower, overwrite_a, ...]) | Solves the linear equation set `a @ x == b` for the unknown `x`, for a square matrix `a`. |
| [`solve_triangular`](index.html#jax.scipy.linalg.solve_triangular)(a, b[, trans, lower, ...]) | Solve the equation a x = b for x, assuming a is a triangular matrix. |
| [`sqrtm`](index.html#jax.scipy.linalg.sqrtm)(A[, blocksize]) | Matrix square root. |
| [`svd`](index.html#jax.scipy.linalg.svd)(a[, full_matrices, compute_uv, ...]) | Singular Value Decomposition. |
| [`toeplitz`](index.html#jax.scipy.linalg.toeplitz)(c[, r]) | Construct a Toeplitz matrix. |
| [`tril`](index.html#jax.scipy.linalg.tril)(m[, k]) | Deprecated since version 1.11.0. |
| [`triu`](index.html#jax.scipy.linalg.triu)(m[, k]) | Deprecated since version 1.11.0. |
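A minimal sketch of solving a symmetric positive-definite system via a Cholesky factorization (matrix values are illustrative):

```python
import jax.numpy as jnp
from jax.scipy.linalg import cho_factor, cho_solve

a = jnp.array([[4.0, 2.0],
               [2.0, 3.0]])    # symmetric positive definite
b = jnp.array([1.0, 2.0])
c, low = cho_factor(a)         # factor once ...
x = cho_solve((c, low), b)     # ... then solve a @ x = b
```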
##### jax.scipy.ndimage[#](#module-jax.scipy.ndimage)
| | |
| --- | --- |
| [`map_coordinates`](index.html#jax.scipy.ndimage.map_coordinates)(input, coordinates, order[, ...]) | Map the input array to new coordinates by interpolation. |
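A minimal sketch; coordinates are given as one array per input dimension, and at the time of writing JAX supports interpolation orders 0 and 1:

```python
import jax.numpy as jnp
from jax.scipy.ndimage import map_coordinates

image = jnp.arange(12.0).reshape(3, 4)
rows = jnp.array([0.5, 1.5])
cols = jnp.array([0.5, 2.5])
values = map_coordinates(image, [rows, cols], order=1)  # bilinear interpolation
```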
##### jax.scipy.optimize[#](#module-jax.scipy.optimize)
| | |
| --- | --- |
| [`minimize`](index.html#jax.scipy.optimize.minimize)(fun, x0[, args, tol, options]) | Minimization of scalar function of one or more variables. |
| [`OptimizeResults`](index.html#jax.scipy.optimize.OptimizeResults)(x, success, status, fun, ...) | Object holding optimization results. |
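A minimal sketch; note that `method` must be given explicitly, and BFGS is the method this module provides:

```python
import jax.numpy as jnp
from jax.scipy.optimize import minimize

def quadratic(x):
    return jnp.sum((x - 3.0) ** 2)   # minimized at x = [3., 3.]

res = minimize(quadratic, x0=jnp.zeros(2), method='BFGS')
print(res.x, res.success)            # approximately [3. 3.] True
```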
##### jax.scipy.signal[#](#module-jax.scipy.signal)
| | |
| --- | --- |
| [`fftconvolve`](index.html#jax.scipy.signal.fftconvolve)(in1, in2[, mode, axes]) | Convolve two N-dimensional arrays using FFT. |
| [`convolve`](index.html#jax.scipy.signal.convolve)(in1, in2[, mode, method, precision]) | Convolve two N-dimensional arrays. |
| [`convolve2d`](index.html#jax.scipy.signal.convolve2d)(in1, in2[, mode, boundary, ...]) | Convolve two 2-dimensional arrays. |
| [`correlate`](index.html#jax.scipy.signal.correlate)(in1, in2[, mode, method, precision]) | Cross-correlate two N-dimensional arrays. |
| [`correlate2d`](index.html#jax.scipy.signal.correlate2d)(in1, in2[, mode, boundary, ...]) | Cross-correlate two 2-dimensional arrays. |
| [`csd`](index.html#jax.scipy.signal.csd)(x, y[, fs, window, nperseg, noverlap, ...]) | Estimate the cross power spectral density, Pxy, using Welch's method. |
| [`istft`](index.html#jax.scipy.signal.istft)(Zxx[, fs, window, nperseg, noverlap, ...]) | Perform the inverse Short Time Fourier transform (iSTFT). |
| [`stft`](index.html#jax.scipy.signal.stft)(x[, fs, window, nperseg, noverlap, ...]) | Compute the Short Time Fourier Transform (STFT). |
| [`welch`](index.html#jax.scipy.signal.welch)(x[, fs, window, nperseg, noverlap, ...]) | Estimate power spectral density using Welch's method. |
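A minimal sketch of 1-D convolution with the SciPy-compatible interface (input values are illustrative):

```python
import jax.numpy as jnp
from jax.scipy import signal

a = jnp.array([1.0, 2.0, 3.0])
k = jnp.array([0.0, 1.0, 0.5])
full = signal.convolve(a, k, mode='full')   # length len(a) + len(k) - 1 = 5
same = signal.convolve(a, k, mode='same')   # length 3, centered
```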
##### jax.scipy.spatial.transform[#](#module-jax.scipy.spatial.transform)
| | |
| --- | --- |
| [`Rotation`](index.html#jax.scipy.spatial.transform.Rotation)(quat) | Rotation in 3 dimensions. |
| [`Slerp`](index.html#jax.scipy.spatial.transform.Slerp)(times, timedelta, rotations, rotvecs) | Spherical Linear Interpolation of Rotations. |
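A minimal sketch, assuming the SciPy-compatible `from_euler`/`apply` interface:

```python
import jax.numpy as jnp
from jax.scipy.spatial.transform import Rotation

r = Rotation.from_euler('z', jnp.array(90.0), degrees=True)  # 90 degrees about z
v = r.apply(jnp.array([1.0, 0.0, 0.0]))                      # approximately [0. 1. 0.]
```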
##### jax.scipy.sparse.linalg[#](#module-jax.scipy.sparse.linalg)
| | |
| --- | --- |
| [`bicgstab`](index.html#jax.scipy.sparse.linalg.bicgstab)(A, b[, x0, tol, atol, maxiter, M]) | Use Bi-Conjugate Gradient Stable iteration to solve `Ax = b`. |
| [`cg`](index.html#jax.scipy.sparse.linalg.cg)(A, b[, x0, tol, atol, maxiter, M]) | Use Conjugate Gradient iteration to solve `Ax = b`. |
| [`gmres`](index.html#jax.scipy.sparse.linalg.gmres)(A, b[, x0, tol, atol, restart, ...]) | GMRES solves the linear system A x = b for x, given A and b. |
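A minimal sketch; `A` may be supplied as a matrix or, as here, as a function computing the matrix-vector product:

```python
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg

a = jnp.array([[4.0, 1.0],
               [1.0, 3.0]])        # symmetric positive definite
b = jnp.array([1.0, 2.0])
x, info = cg(lambda v: a @ v, b)   # conjugate gradient solve of a @ x = b
```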
##### jax.scipy.special[#](#module-jax.scipy.special)
| | |
| --- | --- |
| [`bernoulli`](index.html#jax.scipy.special.bernoulli)(n) | Bernoulli numbers B0..Bn (inclusive). |
| [`betainc`](index.html#jax.scipy.special.betainc)(a, b, x) | Regularized incomplete beta function. |
| [`betaln`](index.html#jax.scipy.special.betaln)(a, b) | Natural logarithm of absolute value of beta function. |
| [`digamma`](index.html#jax.scipy.special.digamma)(x) | The digamma function. |
| [`entr`](index.html#jax.scipy.special.entr)(x) | Elementwise function for computing entropy. |
| [`erf`](index.html#jax.scipy.special.erf)(x) | Returns the error function of complex argument. |
| [`erfc`](index.html#jax.scipy.special.erfc)(x) | Complementary error function, `1 - erf(x)`. |
| [`erfinv`](index.html#jax.scipy.special.erfinv)(x) | Inverse of the error function. |
| [`exp1`](index.html#jax.scipy.special.exp1)(x[, module]) | Exponential integral E1. |
| [`expi`](index.html#jax.scipy.special.expi)(x) | Exponential integral Ei. |
| [`expit`](index.html#jax.scipy.special.expit)(x) | Expit (a.k.a. logistic sigmoid) ufunc for ndarrays. |
| [`expn`](index.html#jax.scipy.special.expn)(n, x) | Generalized exponential integral En. |
| [`gamma`](index.html#jax.scipy.special.gamma)(x) | The gamma function. |
| [`gammainc`](index.html#jax.scipy.special.gammainc)(a, x) | Regularized lower incomplete gamma function. |
| [`gammaincc`](index.html#jax.scipy.special.gammaincc)(a, x) | Regularized upper incomplete gamma function. |
| [`gammaln`](index.html#jax.scipy.special.gammaln)(x) | Logarithm of the absolute value of the gamma function. |
| [`i0`](index.html#jax.scipy.special.i0)(x) | Modified Bessel function of order 0. |
| [`i0e`](index.html#jax.scipy.special.i0e)(x) | Exponentially scaled modified Bessel function of order 0. |
| [`i1`](index.html#jax.scipy.special.i1)(x) | Modified Bessel function of order 1. |
| [`i1e`](index.html#jax.scipy.special.i1e)(x) | Exponentially scaled modified Bessel function of order 1. |
| [`log_ndtr`](index.html#jax.scipy.special.log_ndtr)(x[, series_order]) | Log Normal distribution function. |
| [`logit`](index.html#jax.scipy.special.logit)(x) | Logit ufunc for ndarrays. |
| [`logsumexp`](index.html#jax.scipy.special.logsumexp)(a[, axis, b, keepdims, return_sign]) | Compute the log of the sum of exponentials of input elements. |
| [`lpmn`](index.html#jax.scipy.special.lpmn)(m, n, z) | The associated Legendre functions (ALFs) of the first kind. |
| [`lpmn_values`](index.html#jax.scipy.special.lpmn_values)(m, n, z, is_normalized) | The associated Legendre functions (ALFs) of the first kind. |
| [`multigammaln`](index.html#jax.scipy.special.multigammaln)(a, d) | Returns the log of multivariate gamma, also sometimes called the generalized gamma. |
| [`ndtr`](index.html#jax.scipy.special.ndtr)(x) | Normal distribution function. |
| [`ndtri`](index.html#jax.scipy.special.ndtri)(p) | The inverse of the CDF of the Normal distribution function. |
| [`polygamma`](index.html#jax.scipy.special.polygamma)(n, x) | Polygamma functions. |
| [`spence`](index.html#jax.scipy.special.spence)(x) | Spence's function, also known as the dilogarithm for real values. |
| [`sph_harm`](index.html#jax.scipy.special.sph_harm)(m, n, theta, phi[, n_max]) | Computes the spherical harmonics. |
| [`xlog1py`](index.html#jax.scipy.special.xlog1py)(x, y) | Compute `x*log1p(y)` so that the result is 0 if `x = 0`. |
| [`xlogy`](index.html#jax.scipy.special.xlogy)(x, y) | Compute `x*log(y)` so that the result is 0 if `x = 0`. |
| [`zeta`](index.html#jax.scipy.special.zeta)(x[, q]) | Riemann or Hurwitz zeta function. |
| [`kl_div`](index.html#jax.scipy.special.kl_div)(p, q) | Elementwise function for computing Kullback-Leibler divergence. |
| [`rel_entr`](index.html#jax.scipy.special.rel_entr)(p, q) | Elementwise function for computing relative entropy. |
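A common pattern: normalizing log-probabilities stably with `logsumexp` (the input values are illustrative):

```python
import jax.numpy as jnp
from jax.scipy.special import logsumexp

log_p = jnp.array([-1.0, -2.0, -3.0])
log_z = logsumexp(log_p)      # log of the normalizing constant, computed stably
log_p_norm = log_p - log_z    # exp(log_p_norm) now sums to 1
```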
##### jax.scipy.stats[#](#module-jax.scipy.stats)
| | |
| --- | --- |
| [`mode`](index.html#jax.scipy.stats.mode)(a[, axis, nan_policy, keepdims]) | LAX-backend implementation of `scipy.stats._stats_py.mode()`. |
| [`rankdata`](index.html#jax.scipy.stats.rankdata)(a[, method, axis, nan_policy]) | Assign ranks to data, dealing with ties appropriately. |
###### jax.scipy.stats.bernoulli[#](#module-jax.scipy.stats.bernoulli)
| | |
| --- | --- |
| [`logpmf`](index.html#jax.scipy.stats.bernoulli.logpmf)(k, p[, loc]) | Log of the probability mass function at k of the given RV. |
| [`pmf`](index.html#jax.scipy.stats.bernoulli.pmf)(k, p[, loc]) | Probability mass function at k of the given RV. |
| [`cdf`](index.html#jax.scipy.stats.bernoulli.cdf)(k, p) | Cumulative distribution function of the given RV. |
| [`ppf`](index.html#jax.scipy.stats.bernoulli.ppf)(q, p) | Percent point function (inverse of cdf) at q of the given RV. |
###### jax.scipy.stats.beta[#](#module-jax.scipy.stats.beta)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.beta.logpdf)(x, a, b[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.beta.pdf)(x, a, b[, loc, scale]) | Probability density function at x of the given RV. |
| [`cdf`](index.html#jax.scipy.stats.beta.cdf)(x, a, b[, loc, scale]) | Cumulative distribution function of the given RV. |
| [`logcdf`](index.html#jax.scipy.stats.beta.logcdf)(x, a, b[, loc, scale]) | Log of the cumulative distribution function at x of the given RV. |
| [`sf`](index.html#jax.scipy.stats.beta.sf)(x, a, b[, loc, scale]) | Survival function (1 - cdf) at x of the given RV. |
| [`logsf`](index.html#jax.scipy.stats.beta.logsf)(x, a, b[, loc, scale]) | Log of the survival function of the given RV. |
###### jax.scipy.stats.betabinom[#](#module-jax.scipy.stats.betabinom)
| | |
| --- | --- |
| [`logpmf`](index.html#jax.scipy.stats.betabinom.logpmf)(k, n, a, b[, loc]) | Log of the probability mass function at k of the given RV. |
| [`pmf`](index.html#jax.scipy.stats.betabinom.pmf)(k, n, a, b[, loc]) | Probability mass function at k of the given RV. |
###### jax.scipy.stats.binom[#](#module-jax.scipy.stats.binom)
| | |
| --- | --- |
| [`logpmf`](index.html#jax.scipy.stats.binom.logpmf)(k, n, p[, loc]) | Log of the probability mass function at k of the given RV. |
| [`pmf`](index.html#jax.scipy.stats.binom.pmf)(k, n, p[, loc]) | Probability mass function at k of the given RV. |
###### jax.scipy.stats.cauchy[#](#module-jax.scipy.stats.cauchy)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.cauchy.logpdf)(x[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.cauchy.pdf)(x[, loc, scale]) | Probability density function at x of the given RV. |
| [`cdf`](index.html#jax.scipy.stats.cauchy.cdf)(x[, loc, scale]) | Cumulative distribution function of the given RV. |
| [`logcdf`](index.html#jax.scipy.stats.cauchy.logcdf)(x[, loc, scale]) | Log of the cumulative distribution function at x of the given RV. |
| [`sf`](index.html#jax.scipy.stats.cauchy.sf)(x[, loc, scale]) | Survival function (1 - cdf) at x of the given RV. |
| [`logsf`](index.html#jax.scipy.stats.cauchy.logsf)(x[, loc, scale]) | Log of the survival function of the given RV. |
| [`isf`](index.html#jax.scipy.stats.cauchy.isf)(q[, loc, scale]) | Inverse survival function (inverse of sf) at q of the given RV. |
| [`ppf`](index.html#jax.scipy.stats.cauchy.ppf)(q[, loc, scale]) | Percent point function (inverse of cdf) at q of the given RV. |
###### jax.scipy.stats.chi2[#](#module-jax.scipy.stats.chi2)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.chi2.logpdf)(x, df[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.chi2.pdf)(x, df[, loc, scale]) | Probability density function at x of the given RV. |
| [`cdf`](index.html#jax.scipy.stats.chi2.cdf)(x, df[, loc, scale]) | Cumulative distribution function of the given RV. |
| [`logcdf`](index.html#jax.scipy.stats.chi2.logcdf)(x, df[, loc, scale]) | Log of the cumulative distribution function at x of the given RV. |
| [`sf`](index.html#jax.scipy.stats.chi2.sf)(x, df[, loc, scale]) | Survival function (1 - cdf) at x of the given RV. |
| [`logsf`](index.html#jax.scipy.stats.chi2.logsf)(x, df[, loc, scale]) | Log of the survival function of the given RV. |
###### jax.scipy.stats.dirichlet[#](#module-jax.scipy.stats.dirichlet)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.dirichlet.logpdf)(x, alpha) | Log of the Dirichlet probability density function. |
| [`pdf`](index.html#jax.scipy.stats.dirichlet.pdf)(x, alpha) | The Dirichlet probability density function. |
###### jax.scipy.stats.expon[#](#module-jax.scipy.stats.expon)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.expon.logpdf)(x[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.expon.pdf)(x[, loc, scale]) | Probability density function at x of the given RV. |
###### jax.scipy.stats.gamma[#](#module-jax.scipy.stats.gamma)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.gamma.logpdf)(x, a[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.gamma.pdf)(x, a[, loc, scale]) | Probability density function at x of the given RV. |
| [`cdf`](index.html#jax.scipy.stats.gamma.cdf)(x, a[, loc, scale]) | Cumulative distribution function of the given RV. |
| [`logcdf`](index.html#jax.scipy.stats.gamma.logcdf)(x, a[, loc, scale]) | Log of the cumulative distribution function at x of the given RV. |
| [`sf`](index.html#jax.scipy.stats.gamma.sf)(x, a[, loc, scale]) | Survival function (1 - cdf) at x of the given RV. |
| [`logsf`](index.html#jax.scipy.stats.gamma.logsf)(x, a[, loc, scale]) | Log of the survival function of the given RV. |
###### jax.scipy.stats.gennorm[#](#module-jax.scipy.stats.gennorm)
| | |
| --- | --- |
| [`cdf`](index.html#jax.scipy.stats.gennorm.cdf)(x, p) | Cumulative distribution function of the given RV. |
| [`logpdf`](index.html#jax.scipy.stats.gennorm.logpdf)(x, p) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.gennorm.pdf)(x, p) | Probability density function at x of the given RV. |
###### jax.scipy.stats.geom[#](#module-jax.scipy.stats.geom)
| | |
| --- | --- |
| [`logpmf`](index.html#jax.scipy.stats.geom.logpmf)(k, p[, loc]) | Log of the probability mass function at k of the given RV. |
| [`pmf`](index.html#jax.scipy.stats.geom.pmf)(k, p[, loc]) | Probability mass function at k of the given RV. |
###### jax.scipy.stats.laplace[#](#module-jax.scipy.stats.laplace)
| | |
| --- | --- |
| [`cdf`](index.html#jax.scipy.stats.laplace.cdf)(x[, loc, scale]) | Cumulative distribution function of the given RV. |
| [`logpdf`](index.html#jax.scipy.stats.laplace.logpdf)(x[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.laplace.pdf)(x[, loc, scale]) | Probability density function at x of the given RV. |
###### jax.scipy.stats.logistic[#](#module-jax.scipy.stats.logistic)
| | |
| --- | --- |
| [`cdf`](index.html#jax.scipy.stats.logistic.cdf)(x[, loc, scale]) | Cumulative distribution function of the given RV. |
| [`isf`](index.html#jax.scipy.stats.logistic.isf)(x[, loc, scale]) | Inverse survival function (inverse of sf) at q of the given RV. |
| [`logpdf`](index.html#jax.scipy.stats.logistic.logpdf)(x[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.logistic.pdf)(x[, loc, scale]) | Probability density function at x of the given RV. |
| [`ppf`](index.html#jax.scipy.stats.logistic.ppf)(x[, loc, scale]) | Percent point function (inverse of cdf) at q of the given RV. |
| [`sf`](index.html#jax.scipy.stats.logistic.sf)(x[, loc, scale]) | Survival function (1 - cdf) at x of the given RV. |
###### jax.scipy.stats.multinomial[#](#module-jax.scipy.stats.multinomial)
| | |
| --- | --- |
| [`logpmf`](index.html#jax.scipy.stats.multinomial.logpmf)(x, n, p) | Log of the Multinomial probability mass function. |
| [`pmf`](index.html#jax.scipy.stats.multinomial.pmf)(x, n, p) | Multinomial probability mass function. |
###### jax.scipy.stats.multivariate_normal[#](#module-jax.scipy.stats.multivariate_normal)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.multivariate_normal.logpdf)(x, mean, cov[, allow_singular]) | Log of the multivariate normal probability density function. |
| [`pdf`](index.html#jax.scipy.stats.multivariate_normal.pdf)(x, mean, cov) | Multivariate normal probability density function. |
###### jax.scipy.stats.nbinom[#](#module-jax.scipy.stats.nbinom)
| | |
| --- | --- |
| [`logpmf`](index.html#jax.scipy.stats.nbinom.logpmf)(k, n, p[, loc]) | Log of the probability mass function at k of the given RV. |
| [`pmf`](index.html#jax.scipy.stats.nbinom.pmf)(k, n, p[, loc]) | Probability mass function at k of the given RV. |
###### jax.scipy.stats.norm[#](#module-jax.scipy.stats.norm)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.norm.logpdf)(x[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.norm.pdf)(x[, loc, scale]) | Probability density function at x of the given RV. |
| [`cdf`](index.html#jax.scipy.stats.norm.cdf)(x[, loc, scale]) | Cumulative distribution function of the given RV. |
| [`logcdf`](index.html#jax.scipy.stats.norm.logcdf)(x[, loc, scale]) | Log of the cumulative distribution function at x of the given RV. |
| [`ppf`](index.html#jax.scipy.stats.norm.ppf)(q[, loc, scale]) | Percent point function (inverse of cdf) at q of the given RV. |
| [`sf`](index.html#jax.scipy.stats.norm.sf)(x[, loc, scale]) | Survival function (1 - cdf) at x of the given RV. |
| [`logsf`](index.html#jax.scipy.stats.norm.logsf)(x[, loc, scale]) | Log of the survival function of the given RV. |
| [`isf`](index.html#jax.scipy.stats.norm.isf)(q[, loc, scale]) | Inverse survival function (inverse of sf) at q of the given RV. |
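For example, a minimal sketch of using the `norm` functions above; because they are ordinary JAX functions, they compose with transformations such as `jax.grad`:

```
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

x = jnp.linspace(-2.0, 2.0, 5)

# Densities of the standard normal; mirrors scipy.stats.norm.pdf.
print(norm.pdf(x, loc=0.0, scale=1.0))

# The score function d/dx log p(x) of the standard normal at x = 0.5 is -0.5.
print(jax.grad(norm.logpdf)(0.5))
```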
###### jax.scipy.stats.pareto[#](#module-jax.scipy.stats.pareto)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.pareto.logpdf)(x, b[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.pareto.pdf)(x, b[, loc, scale]) | Probability density function at x of the given RV. |
###### jax.scipy.stats.poisson[#](#module-jax.scipy.stats.poisson)
| | |
| --- | --- |
| [`logpmf`](index.html#jax.scipy.stats.poisson.logpmf)(k, mu[, loc]) | Log of the probability mass function at k of the given RV. |
| [`pmf`](index.html#jax.scipy.stats.poisson.pmf)(k, mu[, loc]) | Probability mass function at k of the given RV. |
###### jax.scipy.stats.t[#](#module-jax.scipy.stats.t)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.t.logpdf)(x, df[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.t.pdf)(x, df[, loc, scale]) | Probability density function at x of the given RV. |
###### jax.scipy.stats.truncnorm[#](#module-jax.scipy.stats.truncnorm)
| | |
| --- | --- |
| [`cdf`](index.html#jax.scipy.stats.truncnorm.cdf)(x, a, b[, loc, scale]) | Cumulative distribution function of the given RV. |
| [`logcdf`](index.html#jax.scipy.stats.truncnorm.logcdf)(x, a, b[, loc, scale]) | Log of the cumulative distribution function at x of the given RV. |
| [`logpdf`](index.html#jax.scipy.stats.truncnorm.logpdf)(x, a, b[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`logsf`](index.html#jax.scipy.stats.truncnorm.logsf)(x, a, b[, loc, scale]) | Log of the survival function of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.truncnorm.pdf)(x, a, b[, loc, scale]) | Probability density function at x of the given RV. |
| [`sf`](index.html#jax.scipy.stats.truncnorm.sf)(x, a, b[, loc, scale]) | Survival function (1 - cdf) at x of the given RV. |
###### jax.scipy.stats.uniform[#](#module-jax.scipy.stats.uniform)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.uniform.logpdf)(x[, loc, scale]) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.uniform.pdf)(x[, loc, scale]) | Probability density function at x of the given RV. |
###### jax.scipy.stats.gaussian_kde[#](#jax-scipy-stats-gaussian-kde)
| | |
| --- | --- |
| [`gaussian_kde`](index.html#jax.scipy.stats.gaussian_kde)(dataset[, bw_method, weights]) | Representation of a kernel-density estimate using Gaussian kernels. |
| [`gaussian_kde.evaluate`](index.html#jax.scipy.stats.gaussian_kde.evaluate)(points) | Evaluate the estimated pdf on a set of points. |
| [`gaussian_kde.integrate_gaussian`](index.html#jax.scipy.stats.gaussian_kde.integrate_gaussian)(mean, cov) | Multiply estimated density by a multivariate Gaussian and integrate over the whole space. |
| [`gaussian_kde.integrate_box_1d`](index.html#jax.scipy.stats.gaussian_kde.integrate_box_1d)(low, high) | Computes the integral of a 1D pdf between two bounds. |
| [`gaussian_kde.integrate_kde`](index.html#jax.scipy.stats.gaussian_kde.integrate_kde)(other) | Computes the integral of the product of this kernel density estimate with another. |
| [`gaussian_kde.resample`](index.html#jax.scipy.stats.gaussian_kde.resample)(key[, shape]) | Randomly sample a dataset from the estimated pdf. |
| [`gaussian_kde.pdf`](index.html#jax.scipy.stats.gaussian_kde.pdf)(x) | Evaluate the estimated pdf on a provided set of points. |
| [`gaussian_kde.logpdf`](index.html#jax.scipy.stats.gaussian_kde.logpdf)(x) | Evaluate the log of the estimated pdf on a provided set of points. |
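A minimal usage sketch, assuming a 1-D dataset as in SciPy (the default bandwidth rule follows SciPy's):

```
import jax.numpy as jnp
from jax import random
from jax.scipy.stats import gaussian_kde

key = random.PRNGKey(0)
data = random.normal(key, (200,))            # 1-D sample to estimate a density from

kde = gaussian_kde(data)
print(kde.evaluate(jnp.array([0.0, 1.0])))   # estimated pdf at two points
print(kde.integrate_box_1d(-1.0, 1.0))       # estimated mass between -1 and 1
```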
###### jax.scipy.stats.vonmises[#](#module-jax.scipy.stats.vonmises)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.vonmises.logpdf)(x, kappa) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.vonmises.pdf)(x, kappa) | Probability density function at x of the given RV. |
###### jax.scipy.stats.wrapcauchy[#](#module-jax.scipy.stats.wrapcauchy)
| | |
| --- | --- |
| [`logpdf`](index.html#jax.scipy.stats.wrapcauchy.logpdf)(x, c) | Log of the probability density function at x of the given RV. |
| [`pdf`](index.html#jax.scipy.stats.wrapcauchy.pdf)(x, c) | Probability density function at x of the given RV. |
#### `jax.lax` module[#](#module-jax.lax)
[`jax.lax`](#module-jax.lax) is a library of primitive operations that underpins libraries such as [`jax.numpy`](index.html#module-jax.numpy). Transformation rules, such as JVP and batching rules,
are typically defined as transformations on [`jax.lax`](#module-jax.lax) primitives.
Many of the primitives are thin wrappers around equivalent XLA operations,
described by the [XLA operation semantics](https://www.tensorflow.org/xla/operation_semantics) documentation. In a few cases JAX diverges from XLA, usually to ensure that the set of operations is closed under the operation of JVP and transpose rules.
Where possible, prefer to use libraries such as [`jax.numpy`](index.html#module-jax.numpy) instead of using [`jax.lax`](#module-jax.lax) directly. The [`jax.numpy`](index.html#module-jax.numpy) API follows NumPy, and is therefore more stable and less likely to change than the [`jax.lax`](#module-jax.lax) API.
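A small sketch of the stricter typing contract this implies: `jax.numpy` follows NumPy's implicit promotion rules, while `jax.lax` expects operands of matching type.

```
import jax.numpy as jnp
from jax import lax

# jax.numpy promotes mixed types implicitly, like NumPy:
print(jnp.add(1, 1.0))

# jax.lax is stricter: promote explicitly before calling the primitive.
print(lax.add(jnp.float32(1), 1.0))   # OK once both sides agree on float32
# lax.add(1, 1.0) would raise a type error.
```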
##### Operators[#](#operators)
| | |
| --- | --- |
| [`abs`](index.html#jax.lax.abs)(x) | Elementwise absolute value: \(|x|\). |
| [`acos`](index.html#jax.lax.acos)(x) | Elementwise arc cosine: \(\mathrm{acos}(x)\). |
| [`acosh`](index.html#jax.lax.acosh)(x) | Elementwise inverse hyperbolic cosine: \(\mathrm{acosh}(x)\). |
| [`add`](index.html#jax.lax.add)(x, y) | Elementwise addition: \(x + y\). |
| [`after_all`](index.html#jax.lax.after_all)(*operands) | Merges one or more XLA token values. |
| [`approx_max_k`](index.html#jax.lax.approx_max_k)(operand, k[, ...]) | Returns max `k` values and their indices of the `operand` in an approximate manner. |
| [`approx_min_k`](index.html#jax.lax.approx_min_k)(operand, k[, ...]) | Returns min `k` values and their indices of the `operand` in an approximate manner. |
| [`argmax`](index.html#jax.lax.argmax)(operand, axis, index_dtype) | Computes the index of the maximum element along `axis`. |
| [`argmin`](index.html#jax.lax.argmin)(operand, axis, index_dtype) | Computes the index of the minimum element along `axis`. |
| [`asin`](index.html#jax.lax.asin)(x) | Elementwise arc sine: \(\mathrm{asin}(x)\). |
| [`asinh`](index.html#jax.lax.asinh)(x) | Elementwise inverse hyperbolic sine: \(\mathrm{asinh}(x)\). |
| [`atan`](index.html#jax.lax.atan)(x) | Elementwise arc tangent: \(\mathrm{atan}(x)\). |
| [`atan2`](index.html#jax.lax.atan2)(x, y) | Elementwise arc tangent of two variables: \(\mathrm{atan}({x \over y})\). |
| [`atanh`](index.html#jax.lax.atanh)(x) | Elementwise inverse hyperbolic tangent: \(\mathrm{atanh}(x)\). |
| [`batch_matmul`](index.html#jax.lax.batch_matmul)(lhs, rhs[, precision]) | Batch matrix multiplication. |
| [`bessel_i0e`](index.html#jax.lax.bessel_i0e)(x) | Exponentially scaled modified Bessel function of order 0: \(\mathrm{i0e}(x) = e^{-|x|} \mathrm{i0}(x)\) |
| [`bessel_i1e`](index.html#jax.lax.bessel_i1e)(x) | Exponentially scaled modified Bessel function of order 1: \(\mathrm{i1e}(x) = e^{-|x|} \mathrm{i1}(x)\) |
| [`betainc`](index.html#jax.lax.betainc)(a, b, x) | Elementwise regularized incomplete beta integral. |
| [`bitcast_convert_type`](index.html#jax.lax.bitcast_convert_type)(operand, new_dtype) | Elementwise bitcast. |
| [`bitwise_and`](index.html#jax.lax.bitwise_and)(x, y) | Elementwise AND: \(x \wedge y\). |
| [`bitwise_not`](index.html#jax.lax.bitwise_not)(x) | Elementwise NOT: \(\neg x\). |
| [`bitwise_or`](index.html#jax.lax.bitwise_or)(x, y) | Elementwise OR: \(x \vee y\). |
| [`bitwise_xor`](index.html#jax.lax.bitwise_xor)(x, y) | Elementwise exclusive OR: \(x \oplus y\). |
| [`population_count`](index.html#jax.lax.population_count)(x) | Elementwise popcount, count the number of set bits in each element. |
| [`broadcast`](index.html#jax.lax.broadcast)(operand, sizes) | Broadcasts an array, adding new leading dimensions |
| [`broadcast_in_dim`](index.html#jax.lax.broadcast_in_dim)(operand, shape, ...) | Wraps XLA's [BroadcastInDim](https://www.tensorflow.org/xla/operation_semantics#broadcastindim) operator. |
| [`broadcast_shapes`](index.html#jax.lax.broadcast_shapes)(*shapes) | Returns the shape that results from NumPy broadcasting of shapes. |
| [`broadcast_to_rank`](index.html#jax.lax.broadcast_to_rank)(x, rank) | Adds leading dimensions of `1` to give `x` rank `rank`. |
| [`broadcasted_iota`](index.html#jax.lax.broadcasted_iota)(dtype, shape, dimension) | Convenience wrapper around `iota`. |
| [`cbrt`](index.html#jax.lax.cbrt)(x) | Elementwise cube root: \(\sqrt[3]{x}\). |
| [`ceil`](index.html#jax.lax.ceil)(x) | Elementwise ceiling: \(\left\lceil x \right\rceil\). |
| [`clamp`](index.html#jax.lax.clamp)(min, x, max) | Elementwise clamp. |
| [`clz`](index.html#jax.lax.clz)(x) | Elementwise count-leading-zeros. |
| [`collapse`](index.html#jax.lax.collapse)(operand, start_dimension[, ...]) | Collapses dimensions of an array into a single dimension. |
| [`complex`](index.html#jax.lax.complex)(x, y) | Elementwise make complex number: \(x + jy\). |
| [`concatenate`](index.html#jax.lax.concatenate)(operands, dimension) | Concatenates a sequence of arrays along dimension. |
| [`conj`](index.html#jax.lax.conj)(x) | Elementwise complex conjugate function: \(\overline{x}\). |
| [`conv`](index.html#jax.lax.conv)(lhs, rhs, window_strides, padding[, ...]) | Convenience wrapper around conv_general_dilated. |
| [`convert_element_type`](index.html#jax.lax.convert_element_type)(operand, new_dtype) | Elementwise cast. |
| [`conv_dimension_numbers`](index.html#jax.lax.conv_dimension_numbers)(lhs_shape, rhs_shape, ...) | Converts convolution dimension_numbers to a ConvDimensionNumbers. |
| [`conv_general_dilated`](index.html#jax.lax.conv_general_dilated)(lhs, rhs, ...[, ...]) | General n-dimensional convolution operator, with optional dilation. |
| [`conv_general_dilated_local`](index.html#jax.lax.conv_general_dilated_local)(lhs, rhs, ...[, ...]) | General n-dimensional unshared convolution operator with optional dilation. |
| [`conv_general_dilated_patches`](index.html#jax.lax.conv_general_dilated_patches)(lhs, ...[, ...]) | Extract patches subject to the receptive field of conv_general_dilated. |
| [`conv_transpose`](index.html#jax.lax.conv_transpose)(lhs, rhs, strides, padding[, ...]) | Convenience wrapper for calculating the N-d convolution "transpose". |
| [`conv_with_general_padding`](index.html#jax.lax.conv_with_general_padding)(lhs, rhs, ...[, ...]) | Convenience wrapper around conv_general_dilated. |
| [`cos`](index.html#jax.lax.cos)(x) | Elementwise cosine: \(\mathrm{cos}(x)\). |
| [`cosh`](index.html#jax.lax.cosh)(x) | Elementwise hyperbolic cosine: \(\mathrm{cosh}(x)\). |
| [`cumlogsumexp`](index.html#jax.lax.cumlogsumexp)(operand[, axis, reverse]) | Computes a cumulative logsumexp along axis. |
| [`cummax`](index.html#jax.lax.cummax)(operand[, axis, reverse]) | Computes a cumulative maximum along axis. |
| [`cummin`](index.html#jax.lax.cummin)(operand[, axis, reverse]) | Computes a cumulative minimum along axis. |
| [`cumprod`](index.html#jax.lax.cumprod)(operand[, axis, reverse]) | Computes a cumulative product along axis. |
| [`cumsum`](index.html#jax.lax.cumsum)(operand[, axis, reverse]) | Computes a cumulative sum along axis. |
| [`digamma`](index.html#jax.lax.digamma)(x) | Elementwise digamma: \(\psi(x)\). |
| [`div`](index.html#jax.lax.div)(x, y) | Elementwise division: \(x \over y\). |
| [`dot`](index.html#jax.lax.dot)(lhs, rhs[, precision, ...]) | Vector/vector, matrix/vector, and matrix/matrix multiplication. |
| [`dot_general`](index.html#jax.lax.dot_general)(lhs, rhs, dimension_numbers[, ...]) | General dot product/contraction operator. |
| [`dynamic_index_in_dim`](index.html#jax.lax.dynamic_index_in_dim)(operand, index[, axis, ...]) | Convenience wrapper around dynamic_slice to perform int indexing. |
| [`dynamic_slice`](index.html#jax.lax.dynamic_slice)(operand, start_indices, ...) | Wraps XLA's [DynamicSlice](https://www.tensorflow.org/xla/operation_semantics#dynamicslice) operator. |
| [`dynamic_slice_in_dim`](index.html#jax.lax.dynamic_slice_in_dim)(operand, start_index, ...) | Convenience wrapper around `lax.dynamic_slice()` applied to one dimension. |
| [`dynamic_update_index_in_dim`](index.html#jax.lax.dynamic_update_index_in_dim)(operand, update, ...) | Convenience wrapper around [`dynamic_update_slice()`](index.html#jax.lax.dynamic_update_slice) to update a slice of size 1 in a single `axis`. |
| [`dynamic_update_slice`](index.html#jax.lax.dynamic_update_slice)(operand, update, ...) | Wraps XLA's [DynamicUpdateSlice](https://www.tensorflow.org/xla/operation_semantics#dynamicupdateslice) operator. |
| [`dynamic_update_slice_in_dim`](index.html#jax.lax.dynamic_update_slice_in_dim)(operand, update, ...) | Convenience wrapper around [`dynamic_update_slice()`](index.html#jax.lax.dynamic_update_slice) to update a slice in a single `axis`. |
| [`eq`](index.html#jax.lax.eq)(x, y) | Elementwise equals: \(x = y\). |
| [`erf`](index.html#jax.lax.erf)(x) | Elementwise error function: \(\mathrm{erf}(x)\). |
| [`erfc`](index.html#jax.lax.erfc)(x) | Elementwise complementary error function: \(\mathrm{erfc}(x) = 1 - \mathrm{erf}(x)\). |
| [`erf_inv`](index.html#jax.lax.erf_inv)(x) | Elementwise inverse error function: \(\mathrm{erf}^{-1}(x)\). |
| [`exp`](index.html#jax.lax.exp)(x) | Elementwise exponential: \(e^x\). |
| [`expand_dims`](index.html#jax.lax.expand_dims)(array, dimensions) | Insert any number of size 1 dimensions into an array. |
| [`expm1`](index.html#jax.lax.expm1)(x) | Elementwise \(e^{x} - 1\). |
| [`fft`](index.html#jax.lax.fft)(x, fft_type, fft_lengths) | |
| [`floor`](index.html#jax.lax.floor)(x) | Elementwise floor: \(\left\lfloor x \right\rfloor\). |
| [`full`](index.html#jax.lax.full)(shape, fill_value[, dtype]) | Returns an array of shape filled with fill_value. |
| [`full_like`](index.html#jax.lax.full_like)(x, fill_value[, dtype, shape]) | Create a full array like np.full based on the example array x. |
| [`gather`](index.html#jax.lax.gather)(operand, start_indices, ...[, ...]) | Gather operator. |
| [`ge`](index.html#jax.lax.ge)(x, y) | Elementwise greater-than-or-equals: \(x \geq y\). |
| [`gt`](index.html#jax.lax.gt)(x, y) | Elementwise greater-than: \(x > y\). |
| [`igamma`](index.html#jax.lax.igamma)(a, x) | Elementwise regularized incomplete gamma function. |
| [`igammac`](index.html#jax.lax.igammac)(a, x) | Elementwise complementary regularized incomplete gamma function. |
| [`imag`](index.html#jax.lax.imag)(x) | Elementwise extract imaginary part: \(\mathrm{Im}(x)\). |
| [`index_in_dim`](index.html#jax.lax.index_in_dim)(operand, index[, axis, keepdims]) | Convenience wrapper around `lax.slice()` to perform int indexing. |
| [`index_take`](index.html#jax.lax.index_take)(src, idxs, axes) | |
| [`integer_pow`](index.html#jax.lax.integer_pow)(x, y) | Elementwise power: \(x^y\), where \(y\) is a fixed integer. |
| [`iota`](index.html#jax.lax.iota)(dtype, size) | Wraps XLA's [Iota](https://www.tensorflow.org/xla/operation_semantics#iota) operator. |
| [`is_finite`](index.html#jax.lax.is_finite)(x) | Elementwise \(\mathrm{isfinite}\). |
| [`le`](index.html#jax.lax.le)(x, y) | Elementwise less-than-or-equals: \(x \leq y\). |
| [`lgamma`](index.html#jax.lax.lgamma)(x) | Elementwise log gamma: \(\mathrm{log}(\Gamma(x))\). |
| [`log`](index.html#jax.lax.log)(x) | Elementwise natural logarithm: \(\mathrm{log}(x)\). |
| [`log1p`](index.html#jax.lax.log1p)(x) | Elementwise \(\mathrm{log}(1 + x)\). |
| [`logistic`](index.html#jax.lax.logistic)(x) | Elementwise logistic (sigmoid) function: \(\frac{1}{1 + e^{-x}}\). |
| [`lt`](index.html#jax.lax.lt)(x, y) | Elementwise less-than: \(x < y\). |
| [`max`](index.html#jax.lax.max)(x, y) | Elementwise maximum: \(\mathrm{max}(x, y)\) |
| [`min`](index.html#jax.lax.min)(x, y) | Elementwise minimum: \(\mathrm{min}(x, y)\) |
| [`mul`](index.html#jax.lax.mul)(x, y) | Elementwise multiplication: \(x \times y\). |
| [`ne`](index.html#jax.lax.ne)(x, y) | Elementwise not-equals: \(x \neq y\). |
| [`neg`](index.html#jax.lax.neg)(x) | Elementwise negation: \(-x\). |
| [`nextafter`](index.html#jax.lax.nextafter)(x1, x2) | Returns the next representable value after x1 in the direction of x2. |
| [`pad`](index.html#jax.lax.pad)(operand, padding_value, padding_config) | Applies low, high, and/or interior padding to an array. |
| [`polygamma`](index.html#jax.lax.polygamma)(m, x) | Elementwise polygamma: \(\psi^{(m)}(x)\). |
| [`population_count`](index.html#jax.lax.population_count)(x) | Elementwise popcount, count the number of set bits in each element. |
| [`pow`](index.html#jax.lax.pow)(x, y) | Elementwise power: \(x^y\). |
| [`random_gamma_grad`](index.html#jax.lax.random_gamma_grad)(a, x) | Elementwise derivative of samples from Gamma(a, 1). |
| [`real`](index.html#jax.lax.real)(x) | Elementwise extract real part: \(\mathrm{Re}(x)\). |
| [`reciprocal`](index.html#jax.lax.reciprocal)(x) | Elementwise reciprocal: \(1 \over x\). |
| [`reduce`](index.html#jax.lax.reduce)(operands, init_values, computation, ...) | Wraps XLA's [Reduce](https://www.tensorflow.org/xla/operation_semantics#reduce) operator. |
| [`reduce_precision`](index.html#jax.lax.reduce_precision)(operand, exponent_bits, ...) | Wraps XLA's [ReducePrecision](https://www.tensorflow.org/xla/operation_semantics#reduceprecision) operator. |
| [`reduce_window`](index.html#jax.lax.reduce_window)(operand, init_value, ...[, ...]) | Wraps XLA's [ReduceWindowWithGeneralPadding](https://www.tensorflow.org/xla/operation_semantics#reducewindow) operator. |
| [`rem`](index.html#jax.lax.rem)(x, y) | Elementwise remainder: \(x \bmod y\). |
| [`reshape`](index.html#jax.lax.reshape)(operand, new_sizes[, dimensions]) | Wraps XLA's [Reshape](https://www.tensorflow.org/xla/operation_semantics#reshape) operator. |
| [`rev`](index.html#jax.lax.rev)(operand, dimensions) | Wraps XLA's [Rev](https://www.tensorflow.org/xla/operation_semantics#rev_reverse) operator. |
| [`rng_bit_generator`](index.html#jax.lax.rng_bit_generator)(key, shape[, dtype, algorithm]) | Stateless PRNG bit generator. |
| [`rng_uniform`](index.html#jax.lax.rng_uniform)(a, b, shape) | Stateful PRNG generator. |
| [`round`](index.html#jax.lax.round)(x[, rounding_method]) | Elementwise round. |
| [`rsqrt`](index.html#jax.lax.rsqrt)(x) | Elementwise reciprocal square root: \(1 \over \sqrt{x}\). |
| [`scatter`](index.html#jax.lax.scatter)(operand, scatter_indices, updates, ...) | Scatter-update operator. |
| [`scatter_add`](index.html#jax.lax.scatter_add)(operand, scatter_indices, ...[, ...]) | Scatter-add operator. |
| [`scatter_apply`](index.html#jax.lax.scatter_apply)(operand, scatter_indices, ...) | Scatter-apply operator. |
| [`scatter_max`](index.html#jax.lax.scatter_max)(operand, scatter_indices, ...[, ...]) | Scatter-max operator. |
| [`scatter_min`](index.html#jax.lax.scatter_min)(operand, scatter_indices, ...[, ...]) | Scatter-min operator. |
| [`scatter_mul`](index.html#jax.lax.scatter_mul)(operand, scatter_indices, ...[, ...]) | Scatter-multiply operator. |
| [`shift_left`](index.html#jax.lax.shift_left)(x, y) | Elementwise left shift: \(x \ll y\). |
| [`shift_right_arithmetic`](index.html#jax.lax.shift_right_arithmetic)(x, y) | Elementwise arithmetic right shift: \(x \gg y\). |
| [`shift_right_logical`](index.html#jax.lax.shift_right_logical)(x, y) | Elementwise logical right shift: \(x \gg y\). |
| [`sign`](index.html#jax.lax.sign)(x) | Elementwise sign. |
| [`sin`](index.html#jax.lax.sin)(x) | Elementwise sine: \(\mathrm{sin}(x)\). |
| [`sinh`](index.html#jax.lax.sinh)(x) | Elementwise hyperbolic sine: \(\mathrm{sinh}(x)\). |
| [`slice`](index.html#jax.lax.slice)(operand, start_indices, limit_indices) | Wraps XLA's [Slice](https://www.tensorflow.org/xla/operation_semantics#slice) operator. |
| [`slice_in_dim`](index.html#jax.lax.slice_in_dim)(operand, start_index, limit_index) | Convenience wrapper around `lax.slice()` applying to only one dimension. |
| [`sort`](index.html#jax.lax.sort)(operand[, dimension, is_stable, num_keys]) | Wraps XLA's [Sort](https://www.tensorflow.org/xla/operation_semantics#sort) operator. |
| [`sort_key_val`](index.html#jax.lax.sort_key_val)(keys, values[, dimension, ...]) | Sorts `keys` along `dimension` and applies the same permutation to `values`. |
| [`sqrt`](index.html#jax.lax.sqrt)(x) | Elementwise square root: \(\sqrt{x}\). |
| [`square`](index.html#jax.lax.square)(x) | Elementwise square: \(x^2\). |
| [`squeeze`](index.html#jax.lax.squeeze)(array, dimensions) | Squeeze any number of size 1 dimensions from an array. |
| [`sub`](index.html#jax.lax.sub)(x, y) | Elementwise subtraction: \(x - y\). |
| [`tan`](index.html#jax.lax.tan)(x) | Elementwise tangent: \(\mathrm{tan}(x)\). |
| [`tanh`](index.html#jax.lax.tanh)(x) | Elementwise hyperbolic tangent: \(\mathrm{tanh}(x)\). |
| [`tie_in`](index.html#jax.lax.tie_in)(x, y) | Deprecated. |
| [`top_k`](index.html#jax.lax.top_k)(operand, k) | Returns top `k` values and their indices along the last axis of `operand`. |
| [`transpose`](index.html#jax.lax.transpose)(operand, permutation) | Wraps XLA's [Transpose](https://www.tensorflow.org/xla/operation_semantics#transpose) operator. |
| [`zeros_like_array`](index.html#jax.lax.zeros_like_array)(x) | |
| [`zeta`](index.html#jax.lax.zeta)(x, q) | Elementwise Hurwitz zeta function: \(\zeta(x, q)\) |
##### Control flow operators[#](#control-flow-operators)
| | |
| --- | --- |
| [`associative_scan`](index.html#jax.lax.associative_scan)(fn, elems[, reverse, axis]) | Performs a scan with an associative binary operation, in parallel. |
| [`cond`](index.html#jax.lax.cond)(pred, true_fun, false_fun, *operands[, ...]) | Conditionally apply `true_fun` or `false_fun`. |
| [`fori_loop`](index.html#jax.lax.fori_loop)(lower, upper, body_fun, init_val) | Loop from `lower` to `upper` by reduction to [`jax.lax.while_loop()`](index.html#jax.lax.while_loop). |
| [`map`](index.html#jax.lax.map)(f, xs) | Map a function over leading array axes. |
| [`scan`](index.html#jax.lax.scan)(f, init, xs[, length, reverse, unroll]) | Scan a function over leading array axes while carrying along state. |
| [`select`](index.html#jax.lax.select)(pred, on_true, on_false) | Selects between two branches based on a boolean predicate. |
| [`select_n`](index.html#jax.lax.select_n)(which, *cases) | Selects array values from multiple cases. |
| [`switch`](index.html#jax.lax.switch)(index, branches, *operands[, operand]) | Apply exactly one of `branches` given by `index`. |
| [`while_loop`](index.html#jax.lax.while_loop)(cond_fun, body_fun, init_val) | Call `body_fun` repeatedly in a loop while `cond_fun` is True. |
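For example, a minimal sketch of a cumulative sum written with `lax.scan`, where the carried state is the running total:

```
import jax
import jax.numpy as jnp

def cumsum_scan(xs):
    def step(carry, x):
        carry = carry + x
        return carry, carry        # (new carry, output for this step)
    _, ys = jax.lax.scan(step, 0.0, xs)
    return ys

print(cumsum_scan(jnp.arange(5.0)))   # [ 0.  1.  3.  6. 10.]
```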
##### Custom gradient operators[#](#custom-gradient-operators)
| | |
| --- | --- |
| [`stop_gradient`](index.html#jax.lax.stop_gradient)(x) | Stops gradient computation. |
| [`custom_linear_solve`](index.html#jax.lax.custom_linear_solve)(matvec, b, solve[, ...]) | Perform a matrix-free linear solve with implicitly defined gradients. |
| [`custom_root`](index.html#jax.lax.custom_root)(f, initial_guess, solve, ...[, ...]) | Differentiably solve for the roots of a function. |
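A minimal sketch of `stop_gradient`, which blocks the gradient contribution of part of a computation while leaving its value intact:

```
import jax
import jax.numpy as jnp

def f(x):
    # The cubic term contributes to the value but is treated as a
    # constant by differentiation; only x**2 carries gradient.
    return x ** 2 + jax.lax.stop_gradient(x ** 3)

print(f(2.0))            # 12.0
print(jax.grad(f)(2.0))  # 4.0
```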
##### Parallel operators[#](#parallel-operators)
Parallelism support is experimental.
| | |
| --- | --- |
| [`all_gather`](index.html#jax.lax.all_gather)(x, axis_name, *[, ...]) | Gather values of x across all replicas. |
| [`all_to_all`](index.html#jax.lax.all_to_all)(x, axis_name, split_axis, ...[, ...]) | Materialize the mapped axis and map a different axis. |
| [`pdot`](index.html#jax.lax.pdot)(x, y, axis_name[, pos_contract, ...]) | |
| [`psum`](index.html#jax.lax.psum)(x, axis_name, *[, axis_index_groups]) | Compute an all-reduce sum on `x` over the pmapped axis `axis_name`. |
| [`pmax`](index.html#jax.lax.pmax)(x, axis_name, *[, axis_index_groups]) | Compute an all-reduce max on `x` over the pmapped axis `axis_name`. |
| [`pmin`](index.html#jax.lax.pmin)(x, axis_name, *[, axis_index_groups]) | Compute an all-reduce min on `x` over the pmapped axis `axis_name`. |
| [`pmean`](index.html#jax.lax.pmean)(x, axis_name, *[, axis_index_groups]) | Compute an all-reduce mean on `x` over the pmapped axis `axis_name`. |
| [`ppermute`](index.html#jax.lax.ppermute)(x, axis_name, perm) | Perform a collective permutation according to the permutation `perm`. |
| [`pshuffle`](index.html#jax.lax.pshuffle)(x, axis_name, perm) | Convenience wrapper of jax.lax.ppermute with alternate permutation encoding. |
| [`pswapaxes`](index.html#jax.lax.pswapaxes)(x, axis_name, axis, *[, ...]) | Swap the pmapped axis `axis_name` with the unmapped axis `axis`. |
| [`axis_index`](index.html#jax.lax.axis_index)(axis_name) | Return the index along the mapped axis `axis_name`. |
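A minimal sketch using `psum` under `jax.pmap`; it runs on however many local devices are available (with a single device the all-reduce is a no-op):

```
import jax
import jax.numpy as jnp

n = jax.local_device_count()
xs = jnp.arange(1.0, n + 1.0)   # one value per device

# psum all-reduces across the axis named "i" that pmap maps over.
fractions = jax.pmap(lambda x: x / jax.lax.psum(x, axis_name="i"),
                     axis_name="i")(xs)
print(fractions)                # entries sum to 1 across devices
```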
##### Sharding-related operators[#](#sharding-related-operators)
| | |
| --- | --- |
| [`with_sharding_constraint`](index.html#jax.lax.with_sharding_constraint)(x, shardings) | Mechanism to constrain the sharding of an Array inside a jitted computation. |
##### Linear algebra operators (jax.lax.linalg)[#](#module-jax.lax.linalg)
| | |
| --- | --- |
| [`cholesky`](index.html#jax.lax.linalg.cholesky)(x, *[, symmetrize_input]) | Cholesky decomposition. |
| [`eig`](index.html#jax.lax.linalg.eig)(x, *[, compute_left_eigenvectors, ...]) | Eigendecomposition of a general matrix. |
| [`eigh`](index.html#jax.lax.linalg.eigh)(x, *[, lower, symmetrize_input, ...]) | Eigendecomposition of a Hermitian matrix. |
| [`hessenberg`](index.html#jax.lax.linalg.hessenberg)(a) | Reduces a square matrix to upper Hessenberg form. |
| [`lu`](index.html#jax.lax.linalg.lu)(x) | LU decomposition with partial pivoting. |
| [`householder_product`](index.html#jax.lax.linalg.householder_product)(a, taus) | Product of elementary Householder reflectors. |
| [`qdwh`](index.html#jax.lax.linalg.qdwh)(x, *[, is_hermitian, max_iterations, ...]) | QR-based dynamically weighted Halley iteration for polar decomposition. |
| [`qr`](index.html#jax.lax.linalg.qr)(x, *[, full_matrices]) | QR decomposition. |
| [`schur`](index.html#jax.lax.linalg.schur)(x, *[, compute_schur_vectors, ...]) | |
| [`svd`](index.html#jax.lax.linalg.svd)(x, *[, full_matrices, compute_uv]) | Singular value decomposition. |
| [`triangular_solve`](index.html#jax.lax.linalg.triangular_solve)(a, b, *[, left_side, ...]) | Triangular solve. |
| [`tridiagonal`](index.html#jax.lax.linalg.tridiagonal)(a, *[, lower]) | Reduces a symmetric/Hermitian matrix to tridiagonal form. |
| [`tridiagonal_solve`](index.html#jax.lax.linalg.tridiagonal_solve)(dl, d, du, b) | Computes the solution of a tridiagonal linear system. |
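A minimal sketch combining `cholesky` and `triangular_solve` on a small symmetric positive definite matrix:

```
import jax.numpy as jnp
from jax.lax import linalg

a = jnp.array([[4.0, 2.0],
               [2.0, 3.0]])              # symmetric positive definite

l = linalg.cholesky(a)                   # lower-triangular factor
print(jnp.allclose(l @ l.T, a))          # True: L @ L^T reconstructs a

# Solve l @ x = b for x using the triangular structure.
x = linalg.triangular_solve(l, jnp.ones((2, 1)), left_side=True, lower=True)
print(l @ x)                             # ~[[1.], [1.]]
```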
##### Argument classes[#](#argument-classes)
*class* jax.lax.ConvDimensionNumbers(*lhs_spec: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[int](https://docs.python.org/3/library/functions.html#int)]*, *rhs_spec: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[int](https://docs.python.org/3/library/functions.html#int)]*, *out_spec: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[int](https://docs.python.org/3/library/functions.html#int)]*)[[source]](_modules/jax/_src/lax/convolution.html#ConvDimensionNumbers)[#](#jax.lax.ConvDimensionNumbers)
Describes batch, spatial, and feature dimensions of a convolution.
Parameters:
* **lhs_spec** – a tuple of nonnegative integer dimension numbers containing
(batch dimension, feature dimension, spatial dimensions…).
* **rhs_spec** – a tuple of nonnegative integer dimension numbers containing
(out feature dimension, in feature dimension, spatial dimensions…).
* **out_spec** – a tuple of nonnegative integer dimension numbers containing
(batch dimension, feature dimension, spatial dimensions…).
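A minimal sketch of building dimension numbers with `conv_dimension_numbers` and passing them to `conv_general_dilated`, assuming NCHW inputs and OIHW kernels:

```
import jax.numpy as jnp
from jax import lax

lhs = jnp.ones((1, 3, 8, 8))     # batch, features, height, width (NCHW)
rhs = jnp.ones((4, 3, 3, 3))     # out-features, in-features, h, w (OIHW)

dn = lax.conv_dimension_numbers(lhs.shape, rhs.shape,
                                ('NCHW', 'OIHW', 'NCHW'))
out = lax.conv_general_dilated(lhs, rhs, window_strides=(1, 1),
                               padding='SAME', dimension_numbers=dn)
print(out.shape)                 # (1, 4, 8, 8)
```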
jax.lax.ConvGeneralDilatedDimensionNumbers[#](#jax.lax.ConvGeneralDilatedDimensionNumbers)
alias of [`Union`](https://docs.python.org/3/library/typing.html#typing.Union)[[`None`](https://docs.python.org/3/library/constants.html#None), [`ConvDimensionNumbers`](#jax.lax.ConvDimensionNumbers), [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`str`](https://docs.python.org/3/library/stdtypes.html#str), [`str`](https://docs.python.org/3/library/stdtypes.html#str), [`str`](https://docs.python.org/3/library/stdtypes.html#str)]]
*class* jax.lax.GatherDimensionNumbers(*offset_dims: [tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[int](https://docs.python.org/3/library/functions.html#int), ...]*, *collapsed_slice_dims: [tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[int](https://docs.python.org/3/library/functions.html#int), ...]*, *start_index_map: [tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[int](https://docs.python.org/3/library/functions.html#int), ...]*)[[source]](_modules/jax/_src/lax/slicing.html#GatherDimensionNumbers)[#](#jax.lax.GatherDimensionNumbers)
Describes the dimension number arguments to an [XLA’s Gather operator](https://www.tensorflow.org/xla/operation_semantics#gather). See the XLA documentation for more details of what the dimension numbers mean.
Parameters:
* **offset_dims** – the set of dimensions in the gather output that offset into an array sliced from operand. Must be a tuple of integers in ascending order, each representing a dimension number of the output.
* **collapsed_slice_dims** – the set of dimensions i in operand that have slice_sizes[i] == 1 and that should not have a corresponding dimension in the output of the gather. Must be a tuple of integers in ascending order.
* **start_index_map** – for each dimension in start_indices, gives the corresponding dimension in operand that is to be sliced. Must be a tuple of integers with size equal to start_indices.shape[-1].
Unlike XLA’s GatherDimensionNumbers structure, index_vector_dim is implicit; there is always an index vector dimension and it must always be the last dimension. To gather scalar indices, add a trailing dimension of size 1.
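A minimal sketch: gathering rows 0 and 2 of a 3×3 operand, with the trailing size-1 index-vector dimension described above:

```
import jax.numpy as jnp
from jax import lax

operand = jnp.arange(9.0).reshape(3, 3)
indices = jnp.array([[0], [2]])          # trailing index-vector dim of size 1

dnums = lax.GatherDimensionNumbers(
    offset_dims=(1,),                    # output dim 1 offsets into each slice
    collapsed_slice_dims=(0,),           # the size-1 sliced dim is dropped
    start_index_map=(0,))                # indices select along operand dim 0
rows = lax.gather(operand, indices, dnums, slice_sizes=(1, 3))
print(rows)                              # rows 0 and 2 of operand, shape (2, 3)
```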
*class* jax.lax.GatherScatterMode(*value*)[[source]](_modules/jax/_src/lax/slicing.html#GatherScatterMode)[#](#jax.lax.GatherScatterMode)
Describes how to handle out-of-bounds indices in a gather or scatter.
Possible values are:
CLIP: Indices will be clamped to the nearest in-range value, i.e., such that the entire window to be gathered is in-range.
FILL_OR_DROP: If any part of a gathered window is out of bounds, the entire window that is returned, even those elements that were otherwise in-bounds, will be filled with a constant.
If any part of a scattered window is out of bounds, the entire window will be discarded.
PROMISE_IN_BOUNDS: The user promises that indices are in bounds. No additional checking will be performed. In practice, with the current XLA implementation this means that out-of-bounds gathers will be clamped but out-of-bounds scatters will be discarded. Gradients will not be correct if indices are out-of-bounds.
*class* jax.lax.Precision(*arg0*)[[source]](_modules/jax/_src/lax/lax.html#Precision)[#](#jax.lax.Precision)
Precision enum for lax functions
The precision argument to JAX functions generally controls the tradeoff between speed and accuracy for array computations on accelerator backends (i.e. TPU and GPU). Members are:
DEFAULT: Fastest mode, but least accurate. Performs computations in bfloat16.
Aliases: `'default'`, `'fastest'`, `'bfloat16'`.
HIGH: Slower but more accurate. Performs float32 computations in 3 bfloat16 passes, or using tensorfloat32 where available. Aliases: `'high'`,
`'bfloat16_3x'`, `'tensorfloat32'`.
HIGHEST: Slowest but most accurate. Performs computations in float32 or float64 as applicable. Aliases: `'highest'`, `'float32'`.
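A minimal sketch of requesting a specific precision, here via `jax.numpy.dot`, which forwards a `precision` argument to the underlying lax operation; on CPU the modes coincide, while on TPU they generally do not:

```
import jax.numpy as jnp
from jax import lax

a = jnp.ones((256, 256))
b = jnp.ones((256, 256))

fast = jnp.dot(a, b)                                     # Precision.DEFAULT
exact = jnp.dot(a, b, precision=lax.Precision.HIGHEST)   # full float32
print(jnp.max(jnp.abs(fast - exact)))   # 0 on CPU; may be nonzero on TPU
```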
*class* jax.lax.RoundingMethod(*value*)[[source]](_modules/jax/_src/lax/lax.html#RoundingMethod)[#](#jax.lax.RoundingMethod)
An enumeration.
*class* jax.lax.ScatterDimensionNumbers(*update_window_dims: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[int](https://docs.python.org/3/library/functions.html#int)]*, *inserted_window_dims: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[int](https://docs.python.org/3/library/functions.html#int)]*, *scatter_dims_to_operand_dims: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[int](https://docs.python.org/3/library/functions.html#int)]*)[[source]](_modules/jax/_src/lax/slicing.html#ScatterDimensionNumbers)[#](#jax.lax.ScatterDimensionNumbers)
Describes the dimension number arguments to an [XLA’s Scatter operator](https://www.tensorflow.org/xla/operation_semantics#scatter). See the XLA documentation for more details of what the dimension numbers mean.
Parameters:
* **update_window_dims** – the set of dimensions in the updates that are window dimensions. Must be a tuple of integers in ascending order, each representing a dimension number.
* **inserted_window_dims** – the set of size 1 window dimensions that must be inserted into the shape of updates. Must be a tuple of integers in ascending order, each representing a dimension number of the output. These are the mirror image of collapsed_slice_dims in the case of gather.
* **scatter_dims_to_operand_dims** – for each dimension in scatter_indices, gives the corresponding dimension in operand. Must be a sequence of integers with size equal to indices.shape[-1].
Unlike XLA’s ScatterDimensionNumbers structure, index_vector_dim is implicit; there is always an index vector dimension and it must always be the last dimension. To scatter scalar indices, add a trailing dimension of size 1.
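A minimal sketch mirroring the gather example above: adding a row of ones to rows 0 and 2 of a 3×3 operand with `scatter_add`:

```
import jax.numpy as jnp
from jax import lax

operand = jnp.zeros((3, 3))
indices = jnp.array([[0], [2]])          # trailing index-vector dim of size 1
updates = jnp.ones((2, 3))               # one window of updates per index

dnums = lax.ScatterDimensionNumbers(
    update_window_dims=(1,),             # updates dim 1 is the window
    inserted_window_dims=(0,),           # mirror image of collapsed_slice_dims
    scatter_dims_to_operand_dims=(0,))   # indices address operand dim 0
out = lax.scatter_add(operand, indices, updates, dnums)
print(out)                               # rows 0 and 2 incremented by 1
```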
#### `jax.random` module[#](#module-jax.random)
Utilities for pseudo-random number generation.
The [`jax.random`](#module-jax.random) package provides a number of routines for deterministic generation of sequences of pseudorandom numbers.
##### Basic usage[#](#basic-usage)
```
>>> import jax
>>> seed = 1701
>>> num_steps = 100
>>> key = jax.random.PRNGKey(seed)
>>> for i in range(num_steps):
...   key, subkey = jax.random.split(key)
...   params = compiled_update(subkey, params, next(batches))
```
(Here `compiled_update`, `params`, and `batches` stand in for user-defined training code.)
##### PRNG Keys[#](#prng-keys)
Unlike the *stateful* pseudorandom number generators (PRNGs) that users of NumPy and SciPy may be accustomed to, JAX random functions all require an explicit PRNG state to be passed as a first argument.
The random state is described by two unsigned 32-bit integers that we call a **key**,
usually generated by the [`jax.random.PRNGKey()`](index.html#jax.random.PRNGKey) function:
```
>>> from jax import random
>>> key = random.PRNGKey(0)
>>> key
Array([0, 0], dtype=uint32)
```
This key can then be used in any of JAX’s random number generation routines:
```
>>> random.uniform(key)
Array(0.41845703, dtype=float32)
```
Note that using a key does not modify it, so reusing the same key will lead to the same result:
```
>>> random.uniform(key)
Array(0.41845703, dtype=float32)
```
If you need a new random number, you can use [`jax.random.split()`](index.html#jax.random.split) to generate new subkeys:
```
>>> key, subkey = random.split(key)
>>> random.uniform(subkey)
Array(0.10536897, dtype=float32)
```
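Subkeys from a single `split` can also drive independent draws in parallel; `jax.vmap` maps a sampler over the key axis:

```
>>> import jax
>>> key = random.PRNGKey(42)
>>> keys = random.split(key, 3)            # three independent subkeys
>>> samples = jax.vmap(random.normal)(keys)
>>> samples.shape
(3,)
```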
##### Advanced[#](#advanced)
###### Design and Context[#](#design-and-context)
**TLDR**: JAX PRNG = [Threefry counter PRNG](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf)
+ a functional array-oriented [splitting model](https://dl.acm.org/citation.cfm?id=2503784)
See [docs/jep/263-prng.md](https://github.com/google/jax/blob/main/docs/jep/263-prng.md)
for more details.
To summarize, among other requirements, the JAX PRNG aims to:
1. ensure reproducibility,
2. parallelize well, both in terms of vectorization (generating array values)
and multi-replica, multi-core computation. In particular it should not use sequencing constraints between random function calls.
###### Advanced RNG configuration[#](#advanced-rng-configuration)
JAX provides several PRNG implementations (controlled by the jax_default_prng_impl flag).
* **default**
[A counter-based PRNG built around the Threefry hash function](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf).
* *experimental* A PRNG that thinly wraps the XLA Random Bit Generator (RBG) algorithm. See
[TF doc](https://www.tensorflow.org/xla/operation_semantics#rngbitgenerator).
+ “rbg” uses ThreeFry for splitting, and XLA RBG for data generation.
+ “unsafe_rbg” exists only for demonstration purposes, using RBG both for
splitting (using an untested, made-up algorithm) and generating.
The random streams generated by these experimental implementations haven’t been subject to any empirical randomness testing (e.g. Big Crush). The random bits generated may change between JAX versions.
The possible reasons not to use the default RNG are:
1. it may be slow to compile (specifically for Google Cloud TPUs)
2. it’s slower to execute on TPUs
3. it doesn’t support efficient automatic sharding / partitioning
Here is a short summary:
| Property | Threefry | Threefry* | rbg | unsafe_rbg | rbg** | unsafe_rbg** |
| --- | --- | --- | --- | --- | --- | --- |
| Fastest on TPU | | | ✅ | ✅ | ✅ | ✅ |
| efficiently shardable (w/ pjit) | | ✅ | | | ✅ | ✅ |
| identical across shardings | ✅ | ✅ | ✅ | ✅ | | |
| identical across CPU/GPU/TPU | ✅ | ✅ | | | | |
| identical across JAX/XLA versions | ✅ | ✅ | | | | |
(*): with jax_threefry_partitionable=1 set
(**): with XLA_FLAGS=--xla_tpu_spmd_rng_bit_generator_unsafe=1 set
The difference between “rbg” and “unsafe_rbg” is that while “rbg” uses a less robust/studied hash function for random value generation (but not for jax.random.split or jax.random.fold_in), “unsafe_rbg” additionally uses less robust hash functions for jax.random.split and jax.random.fold_in. It is therefore less safe in the sense that the quality of random streams it generates from different keys is less well understood.
For more about jax_threefry_partitionable, see
<https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html#generating-random-numbers>
##### API Reference[#](#api-reference)
###### Key Creation & Manipulation[#](#key-creation-manipulation)
| | |
| --- | --- |
| [`PRNGKey`](index.html#jax.random.PRNGKey)(seed, *[, impl]) | Create a pseudo-random number generator (PRNG) key given an integer seed. |
| [`key`](index.html#jax.random.key)(seed, *[, impl]) | Create a pseudo-random number generator (PRNG) key given an integer seed. |
| [`key_data`](index.html#jax.random.key_data)(keys) | Recover the bits of key data underlying a PRNG key array. |
| [`wrap_key_data`](index.html#jax.random.wrap_key_data)(key_bits_array, *[, impl]) | Wrap an array of key data bits into a PRNG key array. |
| [`fold_in`](index.html#jax.random.fold_in)(key, data) | Folds in data to a PRNG key to form a new PRNG key. |
| [`split`](index.html#jax.random.split)(key[, num]) | Splits a PRNG key into num new keys by adding a leading axis. |
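A minimal sketch of `fold_in`, which derives a reproducible per-step stream from one root key without threading split keys through a loop:

```
from jax import random

root = random.PRNGKey(0)
for step in range(3):
    # Each iteration gets a distinct, deterministic stream.
    step_key = random.fold_in(root, step)
    print(random.uniform(step_key))
```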
###### Random Samplers[#](#random-samplers)
| | |
| --- | --- |
| [`ball`](index.html#jax.random.ball)(key, d[, p, shape, dtype]) | Sample uniformly from the unit Lp ball. |
| [`bernoulli`](index.html#jax.random.bernoulli)(key[, p, shape]) | Sample Bernoulli random values with given shape and mean. |
| [`beta`](index.html#jax.random.beta)(key, a, b[, shape, dtype]) | Sample Beta random values with given shape and float dtype. |
| [`bits`](index.html#jax.random.bits)(key[, shape, dtype]) | Sample uniform bits in the form of unsigned integers. |
| [`categorical`](index.html#jax.random.categorical)(key, logits[, axis, shape]) | Sample random values from categorical distributions. |
| [`cauchy`](index.html#jax.random.cauchy)(key[, shape, dtype]) | Sample Cauchy random values with given shape and float dtype. |
| [`chisquare`](index.html#jax.random.chisquare)(key, df[, shape, dtype]) | Sample Chisquare random values with given shape and float dtype. |
| [`choice`](index.html#jax.random.choice)(key, a[, shape, replace, p, axis]) | Generates a random sample from a given array. |
| [`dirichlet`](index.html#jax.random.dirichlet)(key, alpha[, shape, dtype]) | Sample Dirichlet random values with given shape and float dtype. |
| [`double_sided_maxwell`](index.html#jax.random.double_sided_maxwell)(key, loc, scale[, ...]) | Sample from a double sided Maxwell distribution. |
| [`exponential`](index.html#jax.random.exponential)(key[, shape, dtype]) | Sample Exponential random values with given shape and float dtype. |
| [`f`](index.html#jax.random.f)(key, dfnum, dfden[, shape, dtype]) | Sample F-distribution random values with given shape and float dtype. |
| [`gamma`](index.html#jax.random.gamma)(key, a[, shape, dtype]) | Sample Gamma random values with given shape and float dtype. |
| [`generalized_normal`](index.html#jax.random.generalized_normal)(key, p[, shape, dtype]) | Sample from the generalized normal distribution. |
| [`geometric`](index.html#jax.random.geometric)(key, p[, shape, dtype]) | Sample Geometric random values with given shape and float dtype. |
| [`gumbel`](index.html#jax.random.gumbel)(key[, shape, dtype]) | Sample Gumbel random values with given shape and float dtype. |
| [`laplace`](index.html#jax.random.laplace)(key[, shape, dtype]) | Sample Laplace random values with given shape and float dtype. |
| [`loggamma`](index.html#jax.random.loggamma)(key, a[, shape, dtype]) | Sample log-gamma random values with given shape and float dtype. |
| [`logistic`](index.html#jax.random.logistic)(key[, shape, dtype]) | Sample logistic random values with given shape and float dtype. |
| [`lognormal`](index.html#jax.random.lognormal)(key[, sigma, shape, dtype]) | Sample lognormal random values with given shape and float dtype. |
| [`maxwell`](index.html#jax.random.maxwell)(key[, shape, dtype]) | Sample from a one sided Maxwell distribution. |
| [`multivariate_normal`](index.html#jax.random.multivariate_normal)(key, mean, cov[, shape, ...]) | Sample multivariate normal random values with given mean and covariance. |
| [`normal`](index.html#jax.random.normal)(key[, shape, dtype]) | Sample standard normal random values with given shape and float dtype. |
| [`orthogonal`](index.html#jax.random.orthogonal)(key, n[, shape, dtype]) | Sample uniformly from the orthogonal group O(n). |
| [`pareto`](index.html#jax.random.pareto)(key, b[, shape, dtype]) | Sample Pareto random values with given shape and float dtype. |
| [`permutation`](index.html#jax.random.permutation)(key, x[, axis, independent]) | Returns a randomly permuted array or range. |
| [`poisson`](index.html#jax.random.poisson)(key, lam[, shape, dtype]) | Sample Poisson random values with given shape and integer dtype. |
| [`rademacher`](index.html#jax.random.rademacher)(key, shape[, dtype]) | Sample from a Rademacher distribution. |
| [`randint`](index.html#jax.random.randint)(key, shape, minval, maxval[, dtype]) | Sample uniform random values in [minval, maxval) with given shape/dtype. |
| [`rayleigh`](index.html#jax.random.rayleigh)(key, scale[, shape, dtype]) | Sample Rayleigh random values with given shape and float dtype. |
| [`shuffle`](index.html#jax.random.shuffle)(key, x[, axis]) | Shuffle the elements of an array uniformly at random along an axis. |
| [`t`](index.html#jax.random.t)(key, df[, shape, dtype]) | Sample Student's t random values with given shape and float dtype. |
| [`triangular`](index.html#jax.random.triangular)(key, left, mode, right[, shape, ...]) | Sample Triangular random values with given shape and float dtype. |
| [`truncated_normal`](index.html#jax.random.truncated_normal)(key, lower, upper[, shape, ...]) | Sample truncated standard normal random values with given shape and dtype. |
| [`uniform`](index.html#jax.random.uniform)(key[, shape, dtype, minval, maxval]) | Sample uniform random values in [minval, maxval) with given shape/dtype. |
| [`wald`](index.html#jax.random.wald)(key, mean[, shape, dtype]) | Sample Wald random values with given shape and float dtype. |
| [`weibull_min`](index.html#jax.random.weibull_min)(key, scale, concentration[, ...]) | Sample from a Weibull distribution. |
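A minimal sketch drawing from several of the samplers above, with one subkey per sampler:

```
import jax.numpy as jnp
from jax import random

key = random.PRNGKey(1)
k_norm, k_bern, k_int = random.split(key, 3)

x = random.normal(k_norm, shape=(2, 3), dtype=jnp.float32)
coins = random.bernoulli(k_bern, p=0.25, shape=(4,))
ints = random.randint(k_int, shape=(5,), minval=0, maxval=10)
print(x.shape, coins, ints)
```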
#### `jax.sharding` module[#](#module-jax.sharding)
##### Classes[#](#classes)
*class* jax.sharding.Sharding[#](#jax.sharding.Sharding)
Describes how a [`jax.Array`](index.html#jax.Array) is laid out across devices.
*property* addressable_devices*: [set](https://docs.python.org/3/library/stdtypes.html#set)[[jaxlib.xla_extension.Device](index.html#jax.Device)]*[#](#jax.sharding.Sharding.addressable_devices)
The set of devices in the [`Sharding`](#jax.sharding.Sharding) that are addressable by the current process.
addressable_devices_indices_map(*global_shape*)[[source]](_modules/jax/_src/sharding.html#Sharding.addressable_devices_indices_map)[#](#jax.sharding.Sharding.addressable_devices_indices_map)
A mapping from addressable devices to the slice of array data each contains.
`addressable_devices_indices_map` contains that part of
`devices_indices_map` that applies to the addressable devices.
Parameters:
**global_shape** (*Shape*) –
Return type:
Mapping[[Device](index.html#jax.Device), Index | None]
*property* device_set*: [set](https://docs.python.org/3/library/stdtypes.html#set)[[jaxlib.xla_extension.Device](index.html#jax.Device)]*[#](#jax.sharding.Sharding.device_set)
The set of devices that this [`Sharding`](#jax.sharding.Sharding) spans.
In multi-controller JAX, the set of devices is global, i.e., includes non-addressable devices from other processes.
devices_indices_map(*global_shape*)[[source]](_modules/jax/_src/sharding.html#Sharding.devices_indices_map)[#](#jax.sharding.Sharding.devices_indices_map)
Returns a mapping from devices to the array slices each contains.
The mapping includes all global devices, i.e., including non-addressable devices from other processes.
Parameters:
**global_shape** (*Shape*) –
Return type:
Mapping[[Device](index.html#jax.Device), Index | None]
is_equivalent_to(*other*, *ndim*)[[source]](_modules/jax/_src/sharding.html#Sharding.is_equivalent_to)[#](#jax.sharding.Sharding.is_equivalent_to)
Returns `True` if two shardings are equivalent.
Two shardings are equivalent if they place the same logical array shards on the same devices.
For example, a [`NamedSharding`](#jax.sharding.NamedSharding) may be equivalent to a [`PositionalSharding`](#jax.sharding.PositionalSharding) if both place the same shards of the array on the same devices.
Parameters:
* **other** ([`Sharding`](#jax.sharding.Sharding)) –
* **ndim** ([`int`](https://docs.python.org/3/library/functions.html#int)) –
Return type:
[`bool`](https://docs.python.org/3/library/functions.html#bool)
*property* is_fully_addressable*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.Sharding.is_fully_addressable)
Is this sharding fully addressable?
A sharding is fully addressable if the current process can address all of the devices named in the [`Sharding`](#jax.sharding.Sharding). `is_fully_addressable` is equivalent to “is_local” in multi-process JAX.
*property* is_fully_replicated*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.Sharding.is_fully_replicated)
Is this sharding fully replicated?
A sharding is fully replicated if each device has a complete copy of the entire data.
*property* memory_kind*: [str](https://docs.python.org/3/library/stdtypes.html#str) | [None](https://docs.python.org/3/library/constants.html#None)*[#](#jax.sharding.Sharding.memory_kind)
Returns the memory kind of the sharding.
shard_shape(*global_shape*)[[source]](_modules/jax/_src/sharding.html#Sharding.shard_shape)[#](#jax.sharding.Sharding.shard_shape)
Returns the shape of the data on each device.
The shard shape returned by this function is calculated from
`global_shape` and the properties of the sharding.
Parameters:
**global_shape** ([`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]) –
Return type:
[`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]
with_memory_kind(*kind*)[[source]](_modules/jax/_src/sharding.html#Sharding.with_memory_kind)[#](#jax.sharding.Sharding.with_memory_kind)
Returns a new Sharding instance with the specified memory kind.
Parameters:
**kind** ([`str`](https://docs.python.org/3/library/stdtypes.html#str)) –
Return type:
[`Sharding`](#jax.sharding.Sharding)
*class* jax.sharding.XLACompatibleSharding[#](#jax.sharding.XLACompatibleSharding)
Bases: [`Sharding`](#jax.sharding.Sharding)
A [`Sharding`](#jax.sharding.Sharding) that describes shardings expressible to XLA.
Subclasses of [`XLACompatibleSharding`](#jax.sharding.XLACompatibleSharding) work with all JAX APIs and transformations that use XLA.
devices_indices_map(*global_shape*)[[source]](_modules/jax/_src/sharding_impls.html#XLACompatibleSharding.devices_indices_map)[#](#jax.sharding.XLACompatibleSharding.devices_indices_map)
Returns a mapping from devices to the array slices each contains.
The mapping includes all global devices, i.e., including non-addressable devices from other processes.
Parameters:
**global_shape** ([`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]) –
Return type:
[`Mapping`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Mapping)[[`Device`](index.html#jax.Device), [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`slice`](https://docs.python.org/3/library/functions.html#slice), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]]
is_equivalent_to(*other*, *ndim*)[[source]](_modules/jax/_src/sharding_impls.html#XLACompatibleSharding.is_equivalent_to)[#](#jax.sharding.XLACompatibleSharding.is_equivalent_to)
Returns `True` if two shardings are equivalent.
Two shardings are equivalent if they place the same logical array shards on the same devices.
For example, a [`NamedSharding`](#jax.sharding.NamedSharding) may be equivalent to a [`PositionalSharding`](#jax.sharding.PositionalSharding) if both place the same shards of the array on the same devices.
Parameters:
* **self** ([`XLACompatibleSharding`](#jax.sharding.XLACompatibleSharding)) –
* **other** ([`XLACompatibleSharding`](#jax.sharding.XLACompatibleSharding)) –
* **ndim** ([`int`](https://docs.python.org/3/library/functions.html#int)) –
Return type:
[`bool`](https://docs.python.org/3/library/functions.html#bool)
shard_shape(*global_shape*)[[source]](_modules/jax/_src/sharding_impls.html#XLACompatibleSharding.shard_shape)[#](#jax.sharding.XLACompatibleSharding.shard_shape)
Returns the shape of the data on each device.
The shard shape returned by this function is calculated from
`global_shape` and the properties of the sharding.
Parameters:
**global_shape** ([`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]) –
Return type:
[`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]
*class* jax.sharding.SingleDeviceSharding[#](#jax.sharding.SingleDeviceSharding)
Bases: [`XLACompatibleSharding`](#jax.sharding.XLACompatibleSharding)
A [`Sharding`](#jax.sharding.Sharding) that places its data on a single device.
Parameters:
**device** – A single `Device`.
Example
```
>>> import jax
>>> single_device_sharding = jax.sharding.SingleDeviceSharding(
... jax.devices()[0])
```
*property* device_set*: [set](https://docs.python.org/3/library/stdtypes.html#set)[[jaxlib.xla_extension.Device](index.html#jax.Device)]*[#](#jax.sharding.SingleDeviceSharding.device_set)
The set of devices that this [`Sharding`](#jax.sharding.Sharding) spans.
In multi-controller JAX, the set of devices is global, i.e., includes non-addressable devices from other processes.
devices_indices_map(*global_shape*)[[source]](_modules/jax/_src/sharding_impls.html#SingleDeviceSharding.devices_indices_map)[#](#jax.sharding.SingleDeviceSharding.devices_indices_map)
Returns a mapping from devices to the array slices each contains.
The mapping includes all global devices, i.e., including non-addressable devices from other processes.
Parameters:
**global_shape** ([`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]) –
Return type:
[`Mapping`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Mapping)[[`Device`](index.html#jax.Device), [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`slice`](https://docs.python.org/3/library/functions.html#slice), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]]
*property* is_fully_addressable*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.SingleDeviceSharding.is_fully_addressable)
Is this sharding fully addressable?
A sharding is fully addressable if the current process can address all of the devices named in the [`Sharding`](#jax.sharding.Sharding). `is_fully_addressable` is equivalent to “is_local” in multi-process JAX.
*property* is_fully_replicated*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.SingleDeviceSharding.is_fully_replicated)
Is this sharding fully replicated?
A sharding is fully replicated if each device has a complete copy of the entire data.
*property* memory_kind*: [str](https://docs.python.org/3/library/stdtypes.html#str) | [None](https://docs.python.org/3/library/constants.html#None)*[#](#jax.sharding.SingleDeviceSharding.memory_kind)
Returns the memory kind of the sharding.
with_memory_kind(*kind*)[[source]](_modules/jax/_src/sharding_impls.html#SingleDeviceSharding.with_memory_kind)[#](#jax.sharding.SingleDeviceSharding.with_memory_kind)
Returns a new Sharding instance with the specified memory kind.
Parameters:
**kind** ([`str`](https://docs.python.org/3/library/stdtypes.html#str)) –
Return type:
[`SingleDeviceSharding`](#jax.sharding.SingleDeviceSharding)
*class* jax.sharding.NamedSharding[#](#jax.sharding.NamedSharding)
Bases: [`XLACompatibleSharding`](#jax.sharding.XLACompatibleSharding)
A [`NamedSharding`](#jax.sharding.NamedSharding) expresses sharding using named axes.
A [`NamedSharding`](#jax.sharding.NamedSharding) is a pair of a [`Mesh`](#jax.sharding.Mesh) of devices and
[`PartitionSpec`](#jax.sharding.PartitionSpec) which describes how to shard an array across that mesh.
A [`Mesh`](#jax.sharding.Mesh) is a multidimensional NumPy array of JAX devices,
where each axis of the mesh has a name, e.g. `'x'` or `'y'`.
A [`PartitionSpec`](#jax.sharding.PartitionSpec) is a tuple, whose elements can be a `None`,
a mesh axis, or a tuple of mesh axes. Each element describes how an input dimension is partitioned across zero or more mesh dimensions. For example,
`PartitionSpec('x', 'y')` says that the first dimension of data is sharded across `x` axis of the mesh, and the second dimension is sharded across `y` axis of the mesh.
The Distributed arrays and automatic parallelization
(<https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html#namedsharding-gives-a-way-to-express-shardings-with-names>)
tutorial has more details and diagrams that explain how
[`Mesh`](#jax.sharding.Mesh) and [`PartitionSpec`](#jax.sharding.PartitionSpec) are used.
Parameters:
* **mesh** – A [`jax.sharding.Mesh`](#jax.sharding.Mesh) object.
* **spec** – A [`jax.sharding.PartitionSpec`](#jax.sharding.PartitionSpec) object.
Example
```
>>> from jax.sharding import Mesh
>>> from jax.sharding import PartitionSpec as P
>>> mesh = Mesh(np.array(jax.devices()).reshape(2, 4), ('x', 'y'))
>>> spec = P('x', 'y')
>>> named_sharding = jax.sharding.NamedSharding(mesh, spec)
```
*property* addressable_devices*: [set](https://docs.python.org/3/library/stdtypes.html#set)[[jaxlib.xla_extension.Device](index.html#jax.Device)]*[#](#jax.sharding.NamedSharding.addressable_devices)
The set of devices in the [`Sharding`](#jax.sharding.Sharding) that are addressable by the current process.
*property* device_set*: [set](https://docs.python.org/3/library/stdtypes.html#set)[[jaxlib.xla_extension.Device](index.html#jax.Device)]*[#](#jax.sharding.NamedSharding.device_set)
The set of devices that this [`Sharding`](#jax.sharding.Sharding) spans.
In multi-controller JAX, the set of devices is global, i.e., includes non-addressable devices from other processes.
*property* is_fully_addressable*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.NamedSharding.is_fully_addressable)
Is this sharding fully addressable?
A sharding is fully addressable if the current process can address all of the devices named in the [`Sharding`](#jax.sharding.Sharding). `is_fully_addressable` is equivalent to “is_local” in multi-process JAX.
*property* is_fully_replicated*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.NamedSharding.is_fully_replicated)
Is this sharding fully replicated?
A sharding is fully replicated if each device has a complete copy of the entire data.
*property* memory_kind*: [str](https://docs.python.org/3/library/stdtypes.html#str) | [None](https://docs.python.org/3/library/constants.html#None)*[#](#jax.sharding.NamedSharding.memory_kind)
Returns the memory kind of the sharding.
with_memory_kind(*kind*)[[source]](_modules/jax/_src/sharding_impls.html#NamedSharding.with_memory_kind)[#](#jax.sharding.NamedSharding.with_memory_kind)
Returns a new Sharding instance with the specified memory kind.
Parameters:
**kind** ([`str`](https://docs.python.org/3/library/stdtypes.html#str)) –
Return type:
[`NamedSharding`](#jax.sharding.NamedSharding)
*class* jax.sharding.PositionalSharding(*devices*, ***, *memory_kind=None*)[[source]](_modules/jax/_src/sharding_impls.html#PositionalSharding)[#](#jax.sharding.PositionalSharding)
Bases: [`XLACompatibleSharding`](#jax.sharding.XLACompatibleSharding)
Parameters:
* **devices** (*Sequence[xc.Device] | np.ndarray*) –
* **memory_kind** ([*str*](https://docs.python.org/3/library/stdtypes.html#str) *|* *None*) –
*property* device_set*: [set](https://docs.python.org/3/library/stdtypes.html#set)[[jaxlib.xla_extension.Device](index.html#jax.Device)]*[#](#jax.sharding.PositionalSharding.device_set)
The set of devices that this [`Sharding`](#jax.sharding.Sharding) spans.
In multi-controller JAX, the set of devices is global, i.e., includes non-addressable devices from other processes.
*property* is_fully_addressable*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.PositionalSharding.is_fully_addressable)
Is this sharding fully addressable?
A sharding is fully addressable if the current process can address all of the devices named in the [`Sharding`](#jax.sharding.Sharding). `is_fully_addressable` is equivalent to “is_local” in multi-process JAX.
*property* is_fully_replicated*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.PositionalSharding.is_fully_replicated)
Is this sharding fully replicated?
A sharding is fully replicated if each device has a complete copy of the entire data.
*property* memory_kind*: [str](https://docs.python.org/3/library/stdtypes.html#str) | [None](https://docs.python.org/3/library/constants.html#None)*[#](#jax.sharding.PositionalSharding.memory_kind)
Returns the memory kind of the sharding.
with_memory_kind(*kind*)[[source]](_modules/jax/_src/sharding_impls.html#PositionalSharding.with_memory_kind)[#](#jax.sharding.PositionalSharding.with_memory_kind)
Returns a new Sharding instance with the specified memory kind.
Parameters:
**kind** ([`str`](https://docs.python.org/3/library/stdtypes.html#str)) –
Return type:
[`PositionalSharding`](#jax.sharding.PositionalSharding)
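Example (a hedged sketch; the device count and the `reshape`/`replicate` helpers are assumptions based on the released JAX API, not stated in this reference):

```
>>> import jax
>>> from jax.sharding import PositionalSharding
>>> sharding = PositionalSharding(jax.devices())  # 1-D placement over all devices
>>> sharding_2d = sharding.reshape(4, 2)          # assumes 8 available devices
>>> replicated = sharding_2d.replicate(1)         # replicate along the second axis
```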
*class* jax.sharding.PmapSharding[#](#jax.sharding.PmapSharding)
Bases: [`XLACompatibleSharding`](#jax.sharding.XLACompatibleSharding)
Describes a sharding used by [`jax.pmap()`](index.html#jax.pmap).
*classmethod* default(*shape*, *sharded_dim=0*, *devices=None*)[[source]](_modules/jax/_src/sharding_impls.html#PmapSharding.default)[#](#jax.sharding.PmapSharding.default)
Creates a [`PmapSharding`](#jax.sharding.PmapSharding) which matches the default placement used by [`jax.pmap()`](index.html#jax.pmap).
Parameters:
* **shape** (*Shape*) – The shape of the input array.
* **sharded_dim** ([*int*](https://docs.python.org/3/library/functions.html#int)) – Dimension the input array is sharded on. Defaults to 0.
* **devices** (*Sequence[xc.Device] | None*) – Optional sequence of devices to use. If omitted, the implicit device order used by pmap is used, which is the order of [`jax.local_devices()`](index.html#jax.local_devices).
Return type:
[PmapSharding](index.html#jax.sharding.PmapSharding)
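A minimal sketch (the shown shard shape assumes exactly eight local devices; that count is an assumption, not part of the API):

```
>>> from jax.sharding import PmapSharding
>>> sharding = PmapSharding.default(shape=(8, 2), sharded_dim=0)
>>> sharding.shard_shape((8, 2))  # one row of the array per device
(1, 2)
```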
*property* device_set*: [set](https://docs.python.org/3/library/stdtypes.html#set)[[jaxlib.xla_extension.Device](index.html#jax.Device)]*[#](#jax.sharding.PmapSharding.device_set)
The set of devices that this [`Sharding`](#jax.sharding.Sharding) spans.
In multi-controller JAX, the set of devices is global, i.e., includes non-addressable devices from other processes.
devices_indices_map(*global_shape*)[[source]](_modules/jax/_src/sharding_impls.html#PmapSharding.devices_indices_map)[#](#jax.sharding.PmapSharding.devices_indices_map)
Returns a mapping from devices to the array slices each contains.
The mapping includes all global devices, i.e., including non-addressable devices from other processes.
Parameters:
**global_shape** ([`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]) –
Return type:
[`Mapping`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Mapping)[[`Device`](index.html#jax.Device), [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`slice`](https://docs.python.org/3/library/functions.html#slice), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]]
is_equivalent_to(*other*, *ndim*)[[source]](_modules/jax/_src/sharding_impls.html#PmapSharding.is_equivalent_to)[#](#jax.sharding.PmapSharding.is_equivalent_to)
Returns `True` if two shardings are equivalent.
Two shardings are equivalent if they place the same logical array shards on the same devices.
For example, a [`NamedSharding`](#jax.sharding.NamedSharding) may be equivalent to a [`PositionalSharding`](#jax.sharding.PositionalSharding) if both place the same shards of the array on the same devices.
Parameters:
* **self** ([`PmapSharding`](#jax.sharding.PmapSharding)) –
* **other** ([`PmapSharding`](#jax.sharding.PmapSharding)) –
* **ndim** ([`int`](https://docs.python.org/3/library/functions.html#int)) –
Return type:
[`bool`](https://docs.python.org/3/library/functions.html#bool)
*property* is_fully_addressable*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.PmapSharding.is_fully_addressable)
Is this sharding fully addressable?
A sharding is fully addressable if the current process can address all of the devices named in the [`Sharding`](#jax.sharding.Sharding). `is_fully_addressable` is equivalent to “is_local” in multi-process JAX.
*property* is_fully_replicated*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.PmapSharding.is_fully_replicated)
Is this sharding fully replicated?
A sharding is fully replicated if each device has a complete copy of the entire data.
*property* memory_kind*: [str](https://docs.python.org/3/library/stdtypes.html#str) | [None](https://docs.python.org/3/library/constants.html#None)*[#](#jax.sharding.PmapSharding.memory_kind)
Returns the memory kind of the sharding.
shard_shape(*global_shape*)[[source]](_modules/jax/_src/sharding_impls.html#PmapSharding.shard_shape)[#](#jax.sharding.PmapSharding.shard_shape)
Returns the shape of the data on each device.
The shard shape returned by this function is calculated from
`global_shape` and the properties of the sharding.
Parameters:
**global_shape** ([`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]) –
Return type:
[`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]
with_memory_kind(*kind*)[[source]](_modules/jax/_src/sharding_impls.html#PmapSharding.with_memory_kind)[#](#jax.sharding.PmapSharding.with_memory_kind)
Returns a new Sharding instance with the specified memory kind.
Parameters:
**kind** ([`str`](https://docs.python.org/3/library/stdtypes.html#str)) –
*class* jax.sharding.GSPMDSharding[#](#jax.sharding.GSPMDSharding)
Bases: [`XLACompatibleSharding`](#jax.sharding.XLACompatibleSharding)
*property* device_set*: [set](https://docs.python.org/3/library/stdtypes.html#set)[[jaxlib.xla_extension.Device](index.html#jax.Device)]*[#](#jax.sharding.GSPMDSharding.device_set)
The set of devices that this [`Sharding`](#jax.sharding.Sharding) spans.
In multi-controller JAX, the set of devices is global, i.e., includes non-addressable devices from other processes.
devices_indices_map(*global_shape*)[[source]](_modules/jax/_src/sharding_impls.html#GSPMDSharding.devices_indices_map)[#](#jax.sharding.GSPMDSharding.devices_indices_map)
Returns a mapping from devices to the array slices each contains.
The mapping includes all global devices, i.e., including non-addressable devices from other processes.
Parameters:
**global_shape** ([`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`int`](https://docs.python.org/3/library/functions.html#int), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]) –
Return type:
[`Mapping`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Mapping)[[`Device`](index.html#jax.Device), [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`slice`](https://docs.python.org/3/library/functions.html#slice), [`...`](https://docs.python.org/3/library/constants.html#Ellipsis)]]
*property* is_fully_addressable*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.GSPMDSharding.is_fully_addressable)
Is this sharding fully addressable?
A sharding is fully addressable if the current process can address all of the devices named in the [`Sharding`](#jax.sharding.Sharding). `is_fully_addressable` is equivalent to “is_local” in multi-process JAX.
*property* is_fully_replicated*: [bool](https://docs.python.org/3/library/functions.html#bool)*[#](#jax.sharding.GSPMDSharding.is_fully_replicated)
Is this sharding fully replicated?
A sharding is fully replicated if each device has a complete copy of the entire data.
*property* memory_kind*: [str](https://docs.python.org/3/library/stdtypes.html#str) | [None](https://docs.python.org/3/library/constants.html#None)*[#](#jax.sharding.GSPMDSharding.memory_kind)
Returns the memory kind of the sharding.
with_memory_kind(*kind*)[[source]](_modules/jax/_src/sharding_impls.html#GSPMDSharding.with_memory_kind)[#](#jax.sharding.GSPMDSharding.with_memory_kind)
Returns a new Sharding instance with the specified memory kind.
Parameters:
**kind** ([`str`](https://docs.python.org/3/library/stdtypes.html#str)) –
Return type:
[`GSPMDSharding`](#jax.sharding.GSPMDSharding)
*class* jax.sharding.PartitionSpec(**partitions*)[[source]](_modules/jax/_src/partition_spec.html#PartitionSpec)[#](#jax.sharding.PartitionSpec)
Tuple describing how to partition an array across a mesh of devices.
Each element is either `None`, a string, or a tuple of strings.
See the documentation of [`jax.sharding.NamedSharding`](#jax.sharding.NamedSharding) for more details.
This class exists so JAX’s pytree utilities can distinguish a partition specification from tuples that should be treated as pytrees.
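For instance:

```
>>> from jax.sharding import PartitionSpec as P
>>> spec = P('x', None)    # dim 0 sharded along mesh axis 'x'; dim 1 replicated
>>> spec2 = P(('x', 'y'))  # dim 0 sharded along both 'x' and 'y'
```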
*class* jax.sharding.Mesh(*devices: np.ndarray | Sequence[xc.Device]*, *axis_names: [str](https://docs.python.org/3/library/stdtypes.html#str) | Sequence[MeshAxisName]*)[[source]](_modules/jax/_src/mesh.html#Mesh)[#](#jax.sharding.Mesh)
Declare the hardware resources available in the scope of this manager.
In particular, all `axis_names` become valid resource names inside the managed block and can be used e.g. in the `in_axis_resources` argument of
[`jax.experimental.pjit.pjit()`](index.html#jax.experimental.pjit.pjit). Also see JAX’s multi-process programming model (<https://jax.readthedocs.io/en/latest/multi_process.html>)
and the Distributed arrays and automatic parallelization tutorial
(<https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html>).
If you are compiling in multiple threads, make sure that the
`with Mesh` context manager is inside the function that the threads will execute.
Parameters:
* **devices** – A NumPy ndarray object containing JAX device objects (as obtained e.g. from [`jax.devices()`](index.html#jax.devices)).
* **axis_names** – A sequence of resource axis names to be assigned to the dimensions of the `devices` argument. Its length should match the rank of `devices`.
Example
```
>>> from jax.experimental.pjit import pjit
>>> from jax.sharding import Mesh
>>> from jax.sharding import PartitionSpec as P
>>> import numpy as np
...
>>> inp = np.arange(16).reshape((8, 2))
>>> devices = np.array(jax.devices()).reshape(4, 2)
...
>>> # Declare a 2D mesh with axes `x` and `y`.
>>> global_mesh = Mesh(devices, ('x', 'y'))
>>> # Use the mesh object directly as a context manager.
>>> with global_mesh:
... out = pjit(lambda x: x, in_shardings=None, out_shardings=None)(inp)
```
```
>>> # Initialize the Mesh and use the mesh as the context manager.
>>> with Mesh(devices, ('x', 'y')) as global_mesh:
... out = pjit(lambda x: x, in_shardings=None, out_shardings=None)(inp)
```
```
>>> # Also you can use it as `with ... as ...`.
>>> global_mesh = Mesh(devices, ('x', 'y'))
>>> with global_mesh as m:
... out = pjit(lambda x: x, in_shardings=None, out_shardings=None)(inp)
```
```
>>> # You can also use it as `with Mesh(...)`.
>>> with Mesh(devices, ('x', 'y')):
... out = pjit(lambda x: x, in_shardings=None, out_shardings=None)(inp)
```
#### `jax.debug` module[#](#module-jax.debug)
##### Runtime value debugging utilities[#](#runtime-value-debugging-utilities)
[jax.debug.print and jax.debug.breakpoint](index.html#document-debugging/print_breakpoint) describes how to make use of JAX’s runtime value debugging features.
| | |
| --- | --- |
| [`callback`](index.html#jax.debug.callback)(callback, *args[, ordered]) | Calls a stageable Python callback. |
| [`print`](index.html#jax.debug.print)(fmt, *args[, ordered]) | Prints values and works in staged out JAX functions. |
| [`breakpoint`](index.html#jax.debug.breakpoint)(*[, backend, filter_frames, ...]) | Enters a breakpoint at a point in a program. |
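As an illustrative sketch of [`print`](index.html#jax.debug.print) inside a staged function:

```
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    jax.debug.print("x = {x}", x=x)  # printed at runtime, even under jit
    return x * 2

f(jnp.arange(3))  # prints: x = [0 1 2]
```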
##### Sharding debugging utilities[#](#sharding-debugging-utilities)
Functions that enable inspecting and visualizing array shardings inside (and outside)
staged functions.
| | |
| --- | --- |
| [`inspect_array_sharding`](index.html#jax.debug.inspect_array_sharding)(value, *, callback) | Enables inspecting array sharding inside JIT-ted functions. |
| [`visualize_array_sharding`](index.html#jax.debug.visualize_array_sharding)(arr, **kwargs) | Visualizes an array's sharding. |
| [`visualize_sharding`](index.html#jax.debug.visualize_sharding)(shape, sharding, *[, ...]) | Visualizes a `Sharding` using `rich`. |
#### `jax.dlpack` module[#](#module-jax.dlpack)
| | |
| --- | --- |
| [`from_dlpack`](index.html#jax.dlpack.from_dlpack)(external_array) | Returns a [`Array`](index.html#jax.Array) representation of a DLPack tensor. |
| [`to_dlpack`](index.html#jax.dlpack.to_dlpack)(x[, take_ownership, stream]) | Returns a DLPack tensor that encapsulates a [`Array`](index.html#jax.Array) `x`. |
#### `jax.distributed` module[#](#module-jax.distributed)
| | |
| --- | --- |
| [`initialize`](index.html#jax.distributed.initialize)([coordinator_address, ...]) | Initializes the JAX distributed system. |
| [`shutdown`](index.html#jax.distributed.shutdown)() | Shuts down the distributed system. |
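A minimal sketch of multi-process startup (the coordinator address, process count, and process id below are illustrative values, not defaults):

```
import jax

# Run once per process, before any other JAX calls:
jax.distributed.initialize(
    coordinator_address="192.168.0.1:1234",  # illustrative address
    num_processes=2,
    process_id=0,
)
```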
#### `jax.dtypes` module[#](#module-jax.dtypes)
| | |
| --- | --- |
| [`bfloat16`](index.html#jax.dtypes.bfloat16) | bfloat16 floating-point values |
| [`canonicalize_dtype`](index.html#jax.dtypes.canonicalize_dtype)(dtype[, ...]) | Convert from a dtype to a canonical dtype based on config.x64_enabled. |
| [`float0`](index.html#jax.dtypes.float0) | DType class corresponding to the scalar type and dtype of the same name. |
| [`issubdtype`](index.html#jax.dtypes.issubdtype)(a, b) | Returns True if first argument is a typecode lower/equal in type hierarchy. |
| [`prng_key`](index.html#jax.dtypes.prng_key)() | Scalar class for PRNG Key dtypes. |
| [`result_type`](index.html#jax.dtypes.result_type)(*args[, return_weak_type_flag]) | Convenience function to apply JAX argument dtype promotion. |
| [`scalar_type_of`](index.html#jax.dtypes.scalar_type_of)(x) | Return the scalar type associated with a JAX value. |
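For instance, with the default configuration (x64 disabled):

```
>>> import numpy as np
>>> import jax.dtypes
>>> jax.dtypes.canonicalize_dtype(np.float64)  # demoted unless config.x64_enabled
dtype('float32')
```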
#### `jax.flatten_util` module[#](#module-jax.flatten_util)
##### List of Functions[#](#list-of-functions)
| | |
| --- | --- |
| [`ravel_pytree`](index.html#jax.flatten_util.ravel_pytree)(pytree) | Ravel (flatten) a pytree of arrays down to a 1D array. |
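A short sketch of the ravel/unravel round trip:

```
>>> import jax.numpy as jnp
>>> from jax.flatten_util import ravel_pytree
>>> params = {'b': jnp.zeros(3), 'w': jnp.ones((2, 3))}
>>> flat, unravel = ravel_pytree(params)
>>> flat.shape                  # all leaves concatenated into one 1-D array
(9,)
>>> restored = unravel(flat)    # same pytree structure as `params`
```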
#### `jax.image` module[#](#module-jax.image)
Image manipulation functions.
More image manipulation functions can be found in libraries built on top of JAX, such as [PIX](https://github.com/deepmind/dm_pix).
##### Image manipulation functions[#](#image-manipulation-functions)
| | |
| --- | --- |
| [`resize`](index.html#jax.image.resize)(image, shape, method[, antialias, ...]) | Image resize. |
| [`scale_and_translate`](index.html#jax.image.scale_and_translate)(image, shape, ...[, ...]) | Apply a scale and translation to an image. |
##### Argument classes[#](#argument-classes)
*class* jax.image.ResizeMethod(*value*)[[source]](_modules/jax/_src/image/scale.html#ResizeMethod)[#](#jax.image.ResizeMethod)
Image resize method.
Possible values are:
NEAREST: Nearest-neighbor interpolation.
LINEAR: [Linear interpolation](https://en.wikipedia.org/wiki/Bilinear_interpolation).
LANCZOS3: [Lanczos resampling](https://en.wikipedia.org/wiki/Lanczos_resampling), using a kernel of radius 3.
LANCZOS5: [Lanczos resampling](https://en.wikipedia.org/wiki/Lanczos_resampling), using a kernel of radius 5.
CUBIC: [Cubic interpolation](https://en.wikipedia.org/wiki/Bicubic_interpolation), using the Keys cubic kernel.
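For example, a hedged sketch of downsampling with [`resize`](index.html#jax.image.resize) (the image shape is an arbitrary choice):

```
>>> import jax.numpy as jnp
>>> from jax import image
>>> img = jnp.zeros((128, 128, 3))
>>> small = image.resize(img, shape=(64, 64, 3), method='bilinear')
>>> small.shape
(64, 64, 3)
```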
#### `jax.nn` module[#](#jax-nn-module)
##### `jax.nn.initializers` module[#](#module-jax.nn.initializers)
Common neural network layer initializers, consistent with definitions used in Keras and Sonnet.
###### Initializers[#](#initializers)
This module provides common neural network layer initializers,
consistent with definitions used in Keras and Sonnet.
An initializer is a function that takes three arguments:
`(key, shape, dtype)` and returns an array with dimensions `shape` and data type `dtype`. Argument `key` is a [`jax.random.PRNGKey`](index.html#jax.random.PRNGKey) random key used when generating random numbers to initialize the array.
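For example (a minimal sketch; the shape and dtype are arbitrary choices):

```
>>> import jax
>>> import jax.numpy as jnp
>>> from jax.nn import initializers
>>> init = initializers.glorot_normal()   # returns an initializer function
>>> w = init(jax.random.PRNGKey(0), (3, 4), jnp.float32)
>>> w.shape
(3, 4)
```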
| | |
| --- | --- |
| [`constant`](index.html#jax.nn.initializers.constant)(value[, dtype]) | Builds an initializer that returns arrays full of a constant `value`. |
| [`delta_orthogonal`](index.html#jax.nn.initializers.delta_orthogonal)([scale, column_axis, dtype]) | Builds an initializer for delta orthogonal kernels. |
| [`glorot_normal`](index.html#jax.nn.initializers.glorot_normal)([in_axis, out_axis, ...]) | Builds a Glorot normal initializer (aka Xavier normal initializer). |
| [`glorot_uniform`](index.html#jax.nn.initializers.glorot_uniform)([in_axis, out_axis, ...]) | Builds a Glorot uniform initializer (aka Xavier uniform initializer). |
| [`he_normal`](index.html#jax.nn.initializers.he_normal)([in_axis, out_axis, batch_axis, dtype]) | Builds a He normal initializer (aka Kaiming normal initializer). |
| [`he_uniform`](index.html#jax.nn.initializers.he_uniform)([in_axis, out_axis, batch_axis, ...]) | Builds a He uniform initializer (aka Kaiming uniform initializer). |
| [`lecun_normal`](index.html#jax.nn.initializers.lecun_normal)([in_axis, out_axis, ...]) | Builds a Lecun normal initializer. |
| [`lecun_uniform`](index.html#jax.nn.initializers.lecun_uniform)([in_axis, out_axis, ...]) | Builds a Lecun uniform initializer. |
| [`normal`](index.html#jax.nn.initializers.normal)([stddev, dtype]) | Builds an initializer that returns real normally-distributed random arrays. |
| [`ones`](index.html#jax.nn.initializers.ones)(key, shape[, dtype]) | An initializer that returns a constant array full of ones. |
| [`orthogonal`](index.html#jax.nn.initializers.orthogonal)([scale, column_axis, dtype]) | Builds an initializer that returns uniformly distributed orthogonal matrices. |
| [`truncated_normal`](index.html#jax.nn.initializers.truncated_normal)([stddev, dtype, lower, upper]) | Builds an initializer that returns truncated-normal random arrays. |
| [`uniform`](index.html#jax.nn.initializers.uniform)([scale, dtype]) | Builds an initializer that returns real uniformly-distributed random arrays. |
| [`variance_scaling`](index.html#jax.nn.initializers.variance_scaling)(scale, mode, distribution) | Initializer that adapts its scale to the shape of the weights tensor. |
| [`zeros`](index.html#jax.nn.initializers.zeros)(key, shape[, dtype]) | An initializer that returns a constant array full of zeros. |
Common functions for neural network libraries.
##### Activation functions[#](#activation-functions)
| | |
| --- | --- |
| [`relu`](index.html#jax.nn.relu)(x) | Rectified linear unit activation function. |
| [`relu6`](index.html#jax.nn.relu6)(x) | Rectified Linear Unit 6 activation function. |
| [`sigmoid`](index.html#jax.nn.sigmoid)(x) | Sigmoid activation function. |
| [`softplus`](index.html#jax.nn.softplus)(x) | Softplus activation function. |
| [`soft_sign`](index.html#jax.nn.soft_sign)(x) | Soft-sign activation function. |
| [`silu`](index.html#jax.nn.silu)(x) | SiLU (a.k.a. |
| [`swish`](index.html#jax.nn.swish)(x) | SiLU (a.k.a. |
| [`log_sigmoid`](index.html#jax.nn.log_sigmoid)(x) | Log-sigmoid activation function. |
| [`leaky_relu`](index.html#jax.nn.leaky_relu)(x[, negative_slope]) | Leaky rectified linear unit activation function. |
| [`hard_sigmoid`](index.html#jax.nn.hard_sigmoid)(x) | Hard Sigmoid activation function. |
| [`hard_silu`](index.html#jax.nn.hard_silu)(x) | Hard SiLU (swish) activation function |
| [`hard_swish`](index.html#jax.nn.hard_swish)(x) | Hard SiLU (swish) activation function |
| [`hard_tanh`](index.html#jax.nn.hard_tanh)(x) | Hard \(\mathrm{tanh}\) activation function. |
| [`elu`](index.html#jax.nn.elu)(x[, alpha]) | Exponential linear unit activation function. |
| [`celu`](index.html#jax.nn.celu)(x[, alpha]) | Continuously-differentiable exponential linear unit activation. |
| [`selu`](index.html#jax.nn.selu)(x) | Scaled exponential linear unit activation. |
| [`gelu`](index.html#jax.nn.gelu)(x[, approximate]) | Gaussian error linear unit activation function. |
| [`glu`](index.html#jax.nn.glu)(x[, axis]) | Gated linear unit activation function. |
##### Other functions[#](#other-functions)
| | |
| --- | --- |
| [`softmax`](index.html#jax.nn.softmax)(x[, axis, where, initial]) | Softmax function. |
| [`log_softmax`](index.html#jax.nn.log_softmax)(x[, axis, where, initial]) | Log-Softmax function. |
| [`logsumexp`](index.html#jax.nn.logsumexp)(a[, axis, b, keepdims, return_sign]) | Compute the log of the sum of exponentials of input elements. |
| [`standardize`](index.html#jax.nn.standardize)(x[, axis, mean, variance, ...]) | Normalizes an array by subtracting `mean` and dividing by \(\sqrt{\mathrm{variance}}\). |
| [`one_hot`](index.html#jax.nn.one_hot)(x, num_classes, *[, dtype, axis]) | One-hot encodes the given indices. |
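For instance:

```
>>> import jax
>>> import jax.numpy as jnp
>>> jax.nn.one_hot(jnp.array([0, 2]), num_classes=3)
Array([[1., 0., 0.],
       [0., 0., 1.]], dtype=float32)
```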
#### `jax.ops` module[#](#module-jax.ops)
The functions `jax.ops.index_update`, `jax.ops.index_add`, etc., which were deprecated in JAX 0.2.22, have been removed. Please use the
[`jax.numpy.ndarray.at`](index.html#jax.numpy.ndarray.at) property on JAX arrays instead.
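For example, the `.at` property expresses the removed indexed-update operators functionally:

```
>>> import jax.numpy as jnp
>>> x = jnp.zeros(3)
>>> x.at[1].set(5.0)   # replaces jax.ops.index_update; returns a new array
Array([0., 5., 0.], dtype=float32)
>>> x.at[1].add(2.0)   # replaces jax.ops.index_add
Array([0., 2., 0.], dtype=float32)
```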
##### Segment reduction operators[#](#segment-reduction-operators)
| | |
| --- | --- |
| [`segment_max`](index.html#jax.ops.segment_max)(data, segment_ids[, ...]) | Computes the maximum within segments of an array. |
| [`segment_min`](index.html#jax.ops.segment_min)(data, segment_ids[, ...]) | Computes the minimum within segments of an array. |
| [`segment_prod`](index.html#jax.ops.segment_prod)(data, segment_ids[, ...]) | Computes the product within segments of an array. |
| [`segment_sum`](index.html#jax.ops.segment_sum)(data, segment_ids[, ...]) | Computes the sum within segments of an array. |
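For instance:

```
>>> import jax.numpy as jnp
>>> from jax import ops
>>> data = jnp.array([1, 2, 3, 4])
>>> segment_ids = jnp.array([0, 0, 1, 1])
>>> ops.segment_sum(data, segment_ids)
Array([3, 7], dtype=int32)
```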
#### `jax.profiler` module[#](#module-jax.profiler)
##### Tracing and time profiling[#](#tracing-and-time-profiling)
[Profiling JAX programs](index.html#document-profiling) describes how to make use of JAX’s tracing and time profiling features.
| | |
| --- | --- |
| [`start_server`](index.html#jax.profiler.start_server)(port) | Starts the profiler server on port port. |
| [`start_trace`](index.html#jax.profiler.start_trace)(log_dir[, create_perfetto_link, ...]) | Starts a profiler trace. |
| [`stop_trace`](index.html#jax.profiler.stop_trace)() | Stops the currently-running profiler trace. |
| [`trace`](index.html#jax.profiler.trace)(log_dir[, create_perfetto_link, ...]) | Context manager to take a profiler trace. |
| [`annotate_function`](index.html#jax.profiler.annotate_function)(func[, name]) | Decorator that generates a trace event for the execution of a function. |
| [`TraceAnnotation`](index.html#jax.profiler.TraceAnnotation) | Context manager that generates a trace event in the profiler. |
| [`StepTraceAnnotation`](index.html#jax.profiler.StepTraceAnnotation)(name, **kwargs) | Context manager that generates a step trace event in the profiler. |
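A minimal sketch of the [`trace`](index.html#jax.profiler.trace) context manager (the log directory is an illustrative path):

```
import jax
import jax.numpy as jnp

# Writes a trace viewable in TensorBoard's profiler plugin:
with jax.profiler.trace("/tmp/jax-trace"):
    x = jnp.ones((1000, 1000))
    (x @ x).block_until_ready()  # ensure the work happens inside the trace
```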
##### Device memory profiling[#](#device-memory-profiling)
See [Device Memory Profiling](index.html#document-device_memory_profiling) for an introduction to JAX’s device memory profiling features.
| | |
| --- | --- |
| [`device_memory_profile`](index.html#jax.profiler.device_memory_profile)([backend]) | Captures a JAX device memory profile as `pprof`-format protocol buffer. |
| [`save_device_memory_profile`](index.html#jax.profiler.save_device_memory_profile)(filename[, backend]) | Collects a device memory profile and writes it to a file. |
#### `jax.stages` module[#](#module-jax.stages)
Interfaces to stages of the compiled execution process.
JAX transformations that compile just in time for execution, such as
`jax.jit` and `jax.pmap`, also support a common means of explicit lowering and compilation *ahead of time*. This module defines types that represent the stages of this process.
For more, see the [AOT walkthrough](https://jax.readthedocs.io/en/latest/aot.html).
##### Classes[#](#classes)
*class* jax.stages.Wrapped(**args*, ***kwargs*)[[source]](_modules/jax/_src/stages.html#Wrapped)[#](#jax.stages.Wrapped)
A function ready to be specialized, lowered, and compiled.
This protocol reflects the output of functions such as
`jax.jit`. Calling it results in JIT (just-in-time) lowering,
compilation, and execution. It can also be explicitly lowered prior to compilation, and the result compiled prior to execution.
__call__(**args*, ***kwargs*)[[source]](_modules/jax/_src/stages.html#Wrapped.__call__)[#](#jax.stages.Wrapped.__call__)
Executes the wrapped function, lowering and compiling as needed.
lower(**args*, ***kwargs*)[[source]](_modules/jax/_src/stages.html#Wrapped.lower)[#](#jax.stages.Wrapped.lower)
Lower this function explicitly for the given arguments.
A lowered function is staged out of Python and translated to a compiler’s input language, possibly in a backend-dependent manner. It is ready for compilation but not yet compiled.
Return type:
[`Lowered`](#jax.stages.Lowered)
Returns:
A `Lowered` instance representing the lowering.
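A brief sketch of the explicit lower-then-compile path (the function and inputs are arbitrary choices):

```
>>> import jax
>>> import jax.numpy as jnp
>>> f = jax.jit(lambda x: x + 1)      # a Wrapped function
>>> lowered = f.lower(jnp.ones(4))    # Lowered: staged out, not yet compiled
>>> compiled = lowered.compile()      # Compiled: ready to execute
>>> compiled(jnp.ones(4))
Array([2., 2., 2., 2.], dtype=float32)
```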
*class* jax.stages.Lowered(*lowering*, *args_info*, *out_tree*, *no_kwargs=False*)[[source]](_modules/jax/_src/stages.html#Lowered)[#](#jax.stages.Lowered)
Lowering of a function specialized to argument types and values.
A lowering is a computation ready for compilation. This class carries a lowering together with the remaining information needed to later compile and execute it. It also provides a common API for querying properties of lowered computations across JAX’s various lowering paths ([`jit()`](index.html#jax.jit), [`pmap()`](index.html#jax.pmap), etc.).
Parameters:
* **lowering** (`XlaLowering`) –
* **out_tree** (`PyTreeDef`) –
* **no_kwargs** ([`bool`](https://docs.python.org/3/library/functions.html#bool)) –
as_text(*dialect=None*)[[source]](_modules/jax/_src/stages.html#Lowered.as_text)[#](#jax.stages.Lowered.as_text)
A human-readable text representation of this lowering.
Intended for visualization and debugging purposes. This need not be a valid nor reliable serialization. It is relayed directly to external callers.
Parameters:
**dialect** ([*str*](https://docs.python.org/3/library/stdtypes.html#str) *|* *None*) – Optional string specifying a lowering dialect (e.g. “stablehlo”)
Return type:
[str](https://docs.python.org/3/library/stdtypes.html#str)
compile(*compiler_options=None*)[[source]](_modules/jax/_src/stages.html#Lowered.compile)[#](#jax.stages.Lowered.compile)
Compile, returning a corresponding `Compiled` instance.
Parameters:
**compiler_options** (*CompilerOptions* *|* *None*) –
Return type:
[Compiled](index.html#jax.stages.Compiled)
compiler_ir(*dialect=None*)[[source]](_modules/jax/_src/stages.html#Lowered.compiler_ir)[#](#jax.stages.Lowered.compiler_ir)
An arbitrary object representation of this lowering.
Intended for debugging purposes. This is not a valid nor reliable serialization. The output has no guarantee of consistency across invocations.
Returns `None` if unavailable, e.g. based on backend, compiler, or runtime.
Parameters:
**dialect** ([*str*](https://docs.python.org/3/library/stdtypes.html#str) *|* *None*) – Optional string specifying a lowering dialect (e.g. “stablehlo”)
Return type:
Any | None
cost_analysis()[[source]](_modules/jax/_src/stages.html#Lowered.cost_analysis)[#](#jax.stages.Lowered.cost_analysis)
A summary of execution cost estimates.
Intended for visualization and debugging purposes. The object output by this is some simple data structure that can easily be printed or serialized
(e.g. nested dicts, lists, and tuples with numeric leaves). However, its structure can be arbitrary: it may be inconsistent across versions of JAX and jaxlib, or even across invocations.
Returns `None` if unavailable, e.g. based on backend, compiler, or runtime.
Return type:
Any | None
*property* in_tree*: PyTreeDef*[#](#jax.stages.Lowered.in_tree)
Tree structure of the pair (positional arguments, keyword arguments).
*class* jax.stages.Compiled(*executable*, *args_info*, *out_tree*, *no_kwargs=False*)[[source]](_modules/jax/_src/stages.html#Compiled)[#](#jax.stages.Compiled)
Compiled representation of a function specialized to types/values.
A compiled computation is associated with an executable and the remaining information needed to execute it. It also provides a common API for querying properties of compiled computations across JAX’s various compilation paths and backends.
__call__(**args*, ***kwargs*)[[source]](_modules/jax/_src/stages.html#Compiled.__call__)[#](#jax.stages.Compiled.__call__)
Call self as a function.
as_text()[[source]](_modules/jax/_src/stages.html#Compiled.as_text)[#](#jax.stages.Compiled.as_text)
A human-readable text representation of this executable.
Intended for visualization and debugging purposes. This is not a valid nor reliable serialization.
Returns `None` if unavailable, e.g. based on backend, compiler, or runtime.
Return type:
[str](https://docs.python.org/3/library/stdtypes.html#str) | None
cost_analysis()[[source]](_modules/jax/_src/stages.html#Compiled.cost_analysis)[#](#jax.stages.Compiled.cost_analysis)
A summary of execution cost estimates.
Intended for visualization and debugging purposes. The object output by this is some simple data structure that can easily be printed or serialized
(e.g. nested dicts, lists, and tuples with numeric leaves). However, its structure can be arbitrary: it may be inconsistent across versions of JAX and jaxlib, or even across invocations.
Returns `None` if unavailable, e.g. based on backend, compiler, or runtime.
Return type:
Any | None
*property* in_tree*: PyTreeDef*[#](#jax.stages.Compiled.in_tree)
Tree structure of the pair (positional arguments, keyword arguments).
memory_analysis()[[source]](_modules/jax/_src/stages.html#Compiled.memory_analysis)[#](#jax.stages.Compiled.memory_analysis)
A summary of estimated memory requirements.
Intended for visualization and debugging purposes. The object output by this is some simple data structure that can easily be printed or serialized
(e.g. nested dicts, lists, and tuples with numeric leaves). However, its structure can be arbitrary: it may be inconsistent across versions of JAX and jaxlib, or even across invocations.
Returns `None` if unavailable, e.g. based on backend, compiler, or runtime.
Return type:
Any | None
runtime_executable()[[source]](_modules/jax/_src/stages.html#Compiled.runtime_executable)[#](#jax.stages.Compiled.runtime_executable)
An arbitrary object representation of this executable.
Intended for debugging purposes. This is not valid nor reliable serialization. The output has no guarantee of consistency across invocations.
Returns `None` if unavailable, e.g. based on backend, compiler, or runtime.
Return type:
Any | None
#### `jax.tree_util` module[#](#module-jax.tree_util)
Utilities for working with tree-like container data structures.
This module provides a small set of utility functions for working with tree-like data structures, such as nested tuples, lists, and dicts. We call these structures pytrees. They are trees in that they are defined recursively (any non-pytree is a pytree, i.e. a leaf, and any pytree of pytrees is a pytree) and can be operated on recursively (object identity equivalence is not preserved by mapping operations, and the structures cannot contain reference cycles).
The set of Python types that are considered pytree nodes (e.g. that can be mapped over, rather than treated as leaves) is extensible. There is a single module-level registry of types, and class hierarchy is ignored. By registering a new pytree node type, that type in effect becomes transparent to the utility functions in this file.
The primary purpose of this module is to enable the interoperability between user defined data structures and JAX transformations (e.g. jit). This is not meant to be a general purpose tree-like data structure handling library.
See the [JAX pytrees note](pytrees.html)
for examples.
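For instance:

```
>>> import jax
>>> jax.tree_util.tree_map(lambda x: x * 2, {'a': [1, 2], 'b': 3})
{'a': [2, 4], 'b': 6}
>>> leaves, treedef = jax.tree_util.tree_flatten({'a': [1, 2], 'b': 3})
>>> leaves
[1, 2, 3]
>>> jax.tree_util.tree_unflatten(treedef, [x + 1 for x in leaves])
{'a': [2, 3], 'b': 4}
```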
##### List of Functions[#](#list-of-functions)
| | |
| --- | --- |
| [`Partial`](index.html#jax.tree_util.Partial)(func, *args, **kw) | A version of functools.partial that works in pytrees. |
| [`all_leaves`](index.html#jax.tree_util.all_leaves)(iterable[, is_leaf]) | Tests whether all elements in the given iterable are all leaves. |
| [`build_tree`](index.html#jax.tree_util.build_tree)(treedef, xs) | |
| [`register_pytree_node`](index.html#jax.tree_util.register_pytree_node)(nodetype, flatten_func, ...) | Extends the set of types that are considered internal nodes in pytrees. |
| [`register_pytree_node_class`](index.html#jax.tree_util.register_pytree_node_class)(cls) | Extends the set of types that are considered internal nodes in pytrees. |
| [`register_pytree_with_keys`](index.html#jax.tree_util.register_pytree_with_keys)(nodetype, ...[, ...]) | Extends the set of types that are considered internal nodes in pytrees. |
| [`register_pytree_with_keys_class`](index.html#jax.tree_util.register_pytree_with_keys_class)(cls) | Extends the set of types that are considered internal nodes in pytrees. |
| [`tree_all`](index.html#jax.tree_util.tree_all)(tree) | |
| [`tree_flatten`](index.html#jax.tree_util.tree_flatten)(tree[, is_leaf]) | Flattens a pytree. |
| [`tree_flatten_with_path`](index.html#jax.tree_util.tree_flatten_with_path)(tree[, is_leaf]) | Flattens a pytree like `tree_flatten`, but also returns each leaf's key path. |
| [`tree_leaves`](index.html#jax.tree_util.tree_leaves)(tree[, is_leaf]) | Gets the leaves of a pytree. |
| [`tree_leaves_with_path`](index.html#jax.tree_util.tree_leaves_with_path)(tree[, is_leaf]) | Gets the leaves of a pytree like `tree_leaves` and returns each leaf's key path. |
| [`tree_map`](index.html#jax.tree_util.tree_map)(f, tree, *rest[, is_leaf]) | Maps a multi-input function over pytree args to produce a new pytree. |
| [`tree_map_with_path`](index.html#jax.tree_util.tree_map_with_path)(f, tree, *rest[, is_leaf]) | Maps a multi-input function over pytree key path and args to produce a new pytree. |
| [`tree_reduce`](index.html#jax.tree_util.tree_reduce)(function, tree[, initializer, ...]) | |
| [`tree_structure`](index.html#jax.tree_util.tree_structure)(tree[, is_leaf]) | Gets the treedef for a pytree. |
| [`tree_transpose`](index.html#jax.tree_util.tree_transpose)(outer_treedef, inner_treedef, ...) | Transform a tree having tree structure (outer, inner) into one having structure |
| [`tree_unflatten`](index.html#jax.tree_util.tree_unflatten)(treedef, leaves) | Reconstructs a pytree from the treedef and the leaves. |
| [`treedef_children`](index.html#jax.tree_util.treedef_children)(treedef) | |
| [`treedef_is_leaf`](index.html#jax.tree_util.treedef_is_leaf)(treedef) | |
| [`treedef_tuple`](index.html#jax.tree_util.treedef_tuple)(treedefs) | Makes a tuple treedef from an iterable of child treedefs. |
| [`keystr`](index.html#jax.tree_util.keystr)(keys) | Helper to pretty-print a tuple of keys. |
#### `jax.typing` module[#](#module-jax.typing)
The JAX typing module is where JAX-specific static type annotations live.
This submodule is a work in progress; to see the proposal behind the types exported here, see <https://jax.readthedocs.io/en/latest/jep/12049-type-annotations.html>.
The currently-available types are:
* [`jax.Array`](index.html#jax.Array): annotation for any JAX array or tracer (i.e. representations of arrays within JAX transforms).
* [`jax.typing.ArrayLike`](index.html#jax.typing.ArrayLike): annotation for any value that is safe to implicitly cast to a JAX array; this includes [`jax.Array`](index.html#jax.Array), [`numpy.ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray), as well as Python builtin numeric values (e.g. [`int`](https://docs.python.org/3/library/functions.html#int), [`float`](https://docs.python.org/3/library/functions.html#float), etc.) and numpy scalar values
(e.g. `numpy.int32`, `numpy.float64`, etc.)
* [`jax.typing.DTypeLike`](index.html#jax.typing.DTypeLike): annotation for any value that can be cast to a JAX-compatible dtype; this includes strings (e.g. ‘float32’, ‘int32’), scalar types (e.g. float,
np.float32), dtypes (e.g. np.dtype(‘float32’)), or objects with a dtype attribute
(e.g. jnp.float32, jnp.int32).
We may add additional types here in future releases.
##### JAX Typing Best Practices[#](#jax-typing-best-practices)
When annotating JAX arrays in public API functions, we recommend using [`ArrayLike`](index.html#jax.typing.ArrayLike)
for array inputs, and [`Array`](index.html#jax.Array) for array outputs.
For example, your function might look like this:
```
import numpy as np
import jax.numpy as jnp
from jax import Array
from jax.typing import ArrayLike

def my_function(x: ArrayLike) -> Array:
  # Runtime type validation, Python 3.10 or newer:
  if not isinstance(x, ArrayLike):
    raise TypeError(f"Expected arraylike input; got {x}")
  # Runtime type validation, any Python version:
  if not (isinstance(x, (np.ndarray, Array)) or np.isscalar(x)):
    raise TypeError(f"Expected arraylike input; got {x}")

  # Convert input to jax.Array:
  x_arr = jnp.asarray(x)

  # ... do some computation; JAX functions will return Array types:
  result = x_arr.sum(0) / x_arr.shape[0]

  # return an Array
  return result
```
Most of JAX’s public APIs follow this pattern. Note in particular that we recommend JAX functions to not accept sequences such as [`list`](https://docs.python.org/3/library/stdtypes.html#list) or [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple) in place of arrays, as this can cause extra overhead in JAX transforms like [`jit()`](index.html#jax.jit) and can behave in unexpected ways with batch-wise transforms like [`vmap()`](index.html#jax.vmap) or [`jax.pmap()`](index.html#jax.pmap). For more information on this,
see [Non-array inputs NumPy vs JAX](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#non-array-inputs-numpy-vs-jax)
##### List of Members[#](#list-of-members)
| | |
| --- | --- |
| [`ArrayLike`](index.html#jax.typing.ArrayLike) | Type annotation for JAX array-like objects. |
| [`DTypeLike`](index.html#jax.typing.DTypeLike) | alias of [`Union`](https://docs.python.org/3/library/typing.html#typing.Union)[[`str`](https://docs.python.org/3/library/stdtypes.html#str), [`type`](https://docs.python.org/3/library/functions.html#type)[[`Any`](https://docs.python.org/3/library/typing.html#typing.Any)], [`dtype`](index.html#jax.numpy.dtype), `SupportsDType`] |
#### `jax.extend` module[#](#module-jax.extend)
Modules for JAX extensions.
The [`jax.extend`](#module-jax.extend) package provides modules for access to JAX internal machinery. See
[JEP #15856](https://jax.readthedocs.io/en/latest/jep/15856-jex.html).
##### API policy[#](#api-policy)
Unlike the
[public API](https://jax.readthedocs.io/en/latest/api_compatibility.html),
this package offers **no compatibility guarantee** across releases.
Breaking changes will be announced via the
[JAX project changelog](https://jax.readthedocs.io/en/latest/changelog.html).
##### `jax.extend.linear_util`[#](#module-jax.extend.linear_util)
| | |
| --- | --- |
| [`StoreException`](index.html#jax.extend.linear_util.StoreException) | |
| [`WrappedFun`](index.html#jax.extend.linear_util.WrappedFun)(f, transforms, stores, params, ...) | Represents a function f to which transforms are to be applied. |
| [`cache`](index.html#jax.extend.linear_util.cache)(call) | Memoization decorator for functions taking a WrappedFun as first argument. |
| [`merge_linear_aux`](index.html#jax.extend.linear_util.merge_linear_aux)(aux1, aux2) | |
| [`transformation`](index.html#jax.extend.linear_util.transformation)(gen, fun, *gen_static_args) | Adds one more transformation to a WrappedFun. |
| [`transformation_with_aux`](index.html#jax.extend.linear_util.transformation_with_aux)(gen, fun, ...[, ...]) | Adds one more transformation with auxiliary output to a WrappedFun. |
| [`wrap_init`](index.html#jax.extend.linear_util.wrap_init)(f[, params]) | Wraps function f as a WrappedFun, suitable for transformation. |
##### `jax.extend.random`[#](#module-jax.extend.random)
| | |
| --- | --- |
| [`PRNGImpl`](index.html#jax.extend.random.PRNGImpl)(key_shape, seed, split, ...[, ...]) | Specifies PRNG key shape and operations. |
| [`seed_with_impl`](index.html#jax.extend.random.seed_with_impl)(impl, seed) | |
| [`threefry2x32_p`](index.html#jax.extend.random.threefry2x32_p) | |
| [`threefry_2x32`](index.html#jax.extend.random.threefry_2x32)(keypair, count) | Apply the Threefry 2x32 hash. |
| [`threefry_prng_impl`](index.html#jax.extend.random.threefry_prng_impl) | Specifies PRNG key shape and operations. |
| [`rbg_prng_impl`](index.html#jax.extend.random.rbg_prng_impl) | Specifies PRNG key shape and operations. |
| [`unsafe_rbg_prng_impl`](index.html#jax.extend.random.unsafe_rbg_prng_impl) | Specifies PRNG key shape and operations. |
#### `jax.example_libraries` module[#](#jax-example-libraries-module)
JAX provides some small, experimental libraries for machine learning. These libraries are in part about providing tools and in part about serving as examples for how to build such libraries using JAX. Each one is only <300 source lines of code, so take a look inside and adapt them as you need!
Note
Each mini-library is meant to be an *inspiration*, but not a prescription.
To serve that purpose, it is best to keep their code samples minimal; so we generally **will not merge PRs** adding new features. Instead, please send your lovely pull requests and design ideas to more fully-featured libraries like
[Haiku](https://github.com/deepmind/dm-haiku) or [Flax](https://github.com/google/flax).
##### `jax.example_libraries.optimizers` module[#](#module-jax.example_libraries.optimizers)
Examples of how to write optimizers with JAX.
You likely do not mean to import this module! The optimizers in this library are intended as examples only. If you are looking for a fully featured optimizer library, two good options are [JAXopt](https://github.com/google/jaxopt) and [Optax](https://github.com/deepmind/optax).
This module contains some convenient optimizer definitions, specifically initialization and update functions, which can be used with ndarrays or arbitrarily-nested tuple/list/dicts of ndarrays.
An optimizer is modeled as an `(init_fun, update_fun, get_params)` triple of functions, where the component functions have these signatures:
```
init_fun(params)
Args:
params: pytree representing the initial parameters.
Returns:
A pytree representing the initial optimizer state, which includes the
initial parameters and may also include auxiliary values like initial
momentum. The optimizer state pytree structure generally differs from that
of `params`.
```
```
update_fun(step, grads, opt_state)
Args:
step: integer representing the step index.
grads: a pytree with the same structure as `get_params(opt_state)`
representing the gradients to be used in updating the optimizer state.
opt_state: a pytree representing the optimizer state to be updated.
Returns:
A pytree with the same structure as the `opt_state` argument representing
the updated optimizer state.
```
```
get_params(opt_state)
Args:
opt_state: pytree representing an optimizer state.
Returns:
A pytree representing the parameters extracted from `opt_state`, such that
the invariant `params == get_params(init_fun(params))` holds true.
```
Notice that an optimizer implementation has a lot of flexibility in the form of opt_state: it just has to be a pytree of JaxTypes (so that it can be passed to the JAX transforms defined in api.py) and it has to be consumable by update_fun and get_params.
Example Usage:
```
opt_init, opt_update, get_params = optimizers.sgd(learning_rate)
opt_state = opt_init(params)

def step(step, opt_state):
  value, grads = jax.value_and_grad(loss_fn)(get_params(opt_state))
  opt_state = opt_update(step, grads, opt_state)
  return value, opt_state

for i in range(num_steps):
  value, opt_state = step(i, opt_state)
```
*class* jax.example_libraries.optimizers.JoinPoint(*subtree*)[[source]](_modules/jax/example_libraries/optimizers.html#JoinPoint)[#](#jax.example_libraries.optimizers.JoinPoint)
Bases: [`object`](https://docs.python.org/3/library/functions.html#object)
Marks the boundary between two joined (nested) pytrees.
*class* jax.example_libraries.optimizers.Optimizer(*init_fn*, *update_fn*, *params_fn*)[[source]](_modules/jax/example_libraries/optimizers.html#Optimizer)[#](#jax.example_libraries.optimizers.Optimizer)
Bases: [`NamedTuple`](https://docs.python.org/3/library/typing.html#typing.NamedTuple)
init_fn*: [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`Any`](https://docs.python.org/3/library/typing.html#typing.Any)], [`OptimizerState`](#jax.example_libraries.optimizers.OptimizerState)]*[#](#jax.example_libraries.optimizers.Optimizer.init_fn)
Alias for field number 0
params_fn*: [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`OptimizerState`](#jax.example_libraries.optimizers.OptimizerState)], [`Any`](https://docs.python.org/3/library/typing.html#typing.Any)]*[#](#jax.example_libraries.optimizers.Optimizer.params_fn)
Alias for field number 2
update_fn*: [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`int`](https://docs.python.org/3/library/functions.html#int), [`Any`](https://docs.python.org/3/library/typing.html#typing.Any), [`OptimizerState`](#jax.example_libraries.optimizers.OptimizerState)], [`OptimizerState`](#jax.example_libraries.optimizers.OptimizerState)]*[#](#jax.example_libraries.optimizers.Optimizer.update_fn)
Alias for field number 1
*class* jax.example_libraries.optimizers.OptimizerState(*packed_state*, *tree_def*, *subtree_defs*)[#](#jax.example_libraries.optimizers.OptimizerState)
Bases: [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)
packed_state[#](#jax.example_libraries.optimizers.OptimizerState.packed_state)
Alias for field number 0
subtree_defs[#](#jax.example_libraries.optimizers.OptimizerState.subtree_defs)
Alias for field number 2
tree_def[#](#jax.example_libraries.optimizers.OptimizerState.tree_def)
Alias for field number 1
jax.example_libraries.optimizers.adagrad(*step_size*, *momentum=0.9*)[[source]](_modules/jax/example_libraries/optimizers.html#adagrad)[#](#jax.example_libraries.optimizers.adagrad)
Construct optimizer triple for Adagrad.
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization:
<http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf>
Parameters:
* **step_size** – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
* **momentum** – optional, a positive scalar value for momentum
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.adam(*step_size*, *b1=0.9*, *b2=0.999*, *eps=1e-08*)[[source]](_modules/jax/example_libraries/optimizers.html#adam)[#](#jax.example_libraries.optimizers.adam)
Construct optimizer triple for Adam.
Parameters:
* **step_size** – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
* **b1** – optional, a positive scalar value for beta_1, the exponential decay rate for the first moment estimates (default 0.9).
* **b2** – optional, a positive scalar value for beta_2, the exponential decay rate for the second moment estimates (default 0.999).
* **eps** – optional, a positive scalar value for epsilon, a small constant for numerical stability (default 1e-8).
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.adamax(*step_size*, *b1=0.9*, *b2=0.999*, *eps=1e-08*)[[source]](_modules/jax/example_libraries/optimizers.html#adamax)[#](#jax.example_libraries.optimizers.adamax)
Construct optimizer triple for AdaMax (a variant of Adam based on infinity norm).
Parameters:
* **step_size** – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
* **b1** – optional, a positive scalar value for beta_1, the exponential decay rate for the first moment estimates (default 0.9).
* **b2** – optional, a positive scalar value for beta_2, the exponential decay rate for the second moment estimates (default 0.999).
* **eps** – optional, a positive scalar value for epsilon, a small constant for numerical stability (default 1e-8).
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.clip_grads(*grad_tree*, *max_norm*)[[source]](_modules/jax/example_libraries/optimizers.html#clip_grads)[#](#jax.example_libraries.optimizers.clip_grads)
Clip gradients stored as a pytree of arrays to maximum norm max_norm.
jax.example_libraries.optimizers.constant(*step_size*)[[source]](_modules/jax/example_libraries/optimizers.html#constant)[#](#jax.example_libraries.optimizers.constant)
Return type:
[`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`int`](https://docs.python.org/3/library/functions.html#int)], [`float`](https://docs.python.org/3/library/functions.html#float)]
jax.example_libraries.optimizers.exponential_decay(*step_size*, *decay_steps*, *decay_rate*)[[source]](_modules/jax/example_libraries/optimizers.html#exponential_decay)[#](#jax.example_libraries.optimizers.exponential_decay)
jax.example_libraries.optimizers.inverse_time_decay(*step_size*, *decay_steps*, *decay_rate*, *staircase=False*)[[source]](_modules/jax/example_libraries/optimizers.html#inverse_time_decay)[#](#jax.example_libraries.optimizers.inverse_time_decay)
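For illustration, a minimal sketch of passing a schedule in place of a constant step size (the decay values are arbitrary choices):

```
from jax.example_libraries import optimizers

# The returned schedule computes step_size * decay_rate ** (i / decay_steps):
schedule = optimizers.exponential_decay(1e-2, decay_steps=1000, decay_rate=0.9)
opt_init, opt_update, get_params = optimizers.sgd(schedule)  # works anywhere a scalar does
```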
jax.example_libraries.optimizers.l2_norm(*tree*)[[source]](_modules/jax/example_libraries/optimizers.html#l2_norm)[#](#jax.example_libraries.optimizers.l2_norm)
Compute the l2 norm of a pytree of arrays. Useful for weight decay.
jax.example_libraries.optimizers.make_schedule(*scalar_or_schedule*)[[source]](_modules/jax/example_libraries/optimizers.html#make_schedule)[#](#jax.example_libraries.optimizers.make_schedule)
Parameters:
**scalar_or_schedule** ([`Union`](https://docs.python.org/3/library/typing.html#typing.Union)[[`float`](https://docs.python.org/3/library/functions.html#float), [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`int`](https://docs.python.org/3/library/functions.html#int)], [`float`](https://docs.python.org/3/library/functions.html#float)]]) –
Return type:
[`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`int`](https://docs.python.org/3/library/functions.html#int)], [`float`](https://docs.python.org/3/library/functions.html#float)]
jax.example_libraries.optimizers.momentum(*step_size*, *mass*)[[source]](_modules/jax/example_libraries/optimizers.html#momentum)[#](#jax.example_libraries.optimizers.momentum)
Construct optimizer triple for SGD with momentum.
Parameters:
* **step_size** ([`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`int`](https://docs.python.org/3/library/functions.html#int)], [`float`](https://docs.python.org/3/library/functions.html#float)]) – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
* **mass** ([`float`](https://docs.python.org/3/library/functions.html#float)) – positive scalar representing the momentum coefficient.
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.nesterov(*step_size*, *mass*)[[source]](_modules/jax/example_libraries/optimizers.html#nesterov)[#](#jax.example_libraries.optimizers.nesterov)
Construct optimizer triple for SGD with Nesterov momentum.
Parameters:
* **step_size** ([`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`int`](https://docs.python.org/3/library/functions.html#int)], [`float`](https://docs.python.org/3/library/functions.html#float)]) – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
* **mass** ([`float`](https://docs.python.org/3/library/functions.html#float)) – positive scalar representing the momentum coefficient.
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.optimizer(*opt_maker*)[[source]](_modules/jax/example_libraries/optimizers.html#optimizer)[#](#jax.example_libraries.optimizers.optimizer)
Decorator to make an optimizer defined for arrays generalize to containers.
With this decorator, you can write init, update, and get_params functions that each operate only on single arrays, and convert them to corresponding functions that operate on pytrees of parameters. See the optimizers defined in optimizers.py for examples.
Parameters:
**opt_maker** ([`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[`...`](https://docs.python.org/3/library/constants.html#Ellipsis), [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)[[`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`Any`](https://docs.python.org/3/library/typing.html#typing.Any)], [`Any`](https://docs.python.org/3/library/typing.html#typing.Any)], [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`int`](https://docs.python.org/3/library/functions.html#int), [`Any`](https://docs.python.org/3/library/typing.html#typing.Any), [`Any`](https://docs.python.org/3/library/typing.html#typing.Any)], [`Any`](https://docs.python.org/3/library/typing.html#typing.Any)], [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[[`Any`](https://docs.python.org/3/library/typing.html#typing.Any)], [`Any`](https://docs.python.org/3/library/typing.html#typing.Any)]]]) – a function that returns an `(init_fun, update_fun, get_params)`
triple of functions that might only work with ndarrays, as per
```
init_fun :: ndarray -> OptStatePytree ndarray
update_fun :: OptStatePytree ndarray -> OptStatePytree ndarray
get_params :: OptStatePytree ndarray -> ndarray
```
Return type:
[`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)[[`...`](https://docs.python.org/3/library/constants.html#Ellipsis), [`Optimizer`](#jax.example_libraries.optimizers.Optimizer)]
Returns:
An `(init_fun, update_fun, get_params)` triple of functions that work on arbitrary pytrees, as per
```
init_fun :: ParameterPytree ndarray -> OptimizerState
update_fun :: OptimizerState -> OptimizerState
get_params :: OptimizerState -> ParameterPytree ndarray
```
The OptimizerState pytree type used by the returned functions is isomorphic to `ParameterPytree (OptStatePytree ndarray)`, but may store the state instead as e.g. a partially-flattened data structure for performance.
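To make this concrete, here is a minimal sketch of a plain-SGD optimizer written per-array and generalized to pytrees by the decorator; `my_sgd` is a hypothetical name, and the body mirrors the built-in `sgd`:
```
from jax.example_libraries import optimizers

@optimizers.optimizer
def my_sgd(step_size):
  step_size = optimizers.make_schedule(step_size)
  def init(x0):
    return x0                      # per-array optimizer state is just the parameter
  def update(i, g, x):
    return x - step_size(i) * g    # one gradient step
  def get_params(x):
    return x
  return init, update, get_params
```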
jax.example_libraries.optimizers.pack_optimizer_state(*marked_pytree*)[[source]](_modules/jax/example_libraries/optimizers.html#pack_optimizer_state)[#](#jax.example_libraries.optimizers.pack_optimizer_state)
Converts a marked pytree to an OptimizerState.
The inverse of unpack_optimizer_state. Converts a marked pytree with the leaves of the outer pytree represented as JoinPoints back into an OptimizerState. This function is intended to be useful when deserializing optimizer states.
Parameters:
**marked_pytree** – A pytree containing JoinPoint leaves that hold more pytrees.
Returns:
An equivalent OptimizerState to the input argument.
jax.example_libraries.optimizers.piecewise_constant(*boundaries*, *values*)[[source]](_modules/jax/example_libraries/optimizers.html#piecewise_constant)[#](#jax.example_libraries.optimizers.piecewise_constant)
Parameters:
* **boundaries** ([`Any`](https://docs.python.org/3/library/typing.html#typing.Any)) –
* **values** ([`Any`](https://docs.python.org/3/library/typing.html#typing.Any)) –
jax.example_libraries.optimizers.polynomial_decay(*step_size*, *decay_steps*, *final_step_size*, *power=1.0*)[[source]](_modules/jax/example_libraries/optimizers.html#polynomial_decay)[#](#jax.example_libraries.optimizers.polynomial_decay)
jax.example_libraries.optimizers.rmsprop(*step_size*, *gamma=0.9*, *eps=1e-08*)[[source]](_modules/jax/example_libraries/optimizers.html#rmsprop)[#](#jax.example_libraries.optimizers.rmsprop)
Construct optimizer triple for RMSProp.
Parameters:
* **step_size** – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
* **gamma** – Decay parameter.
* **eps** – Epsilon parameter.
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.rmsprop_momentum(*step_size*, *gamma=0.9*, *eps=1e-08*, *momentum=0.9*)[[source]](_modules/jax/example_libraries/optimizers.html#rmsprop_momentum)[#](#jax.example_libraries.optimizers.rmsprop_momentum)
Construct optimizer triple for RMSProp with momentum.
This optimizer is separate from the rmsprop optimizer because it needs to keep track of additional parameters.
Parameters:
* **step_size** – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
* **gamma** – Decay parameter.
* **eps** – Epsilon parameter.
* **momentum** – Momentum parameter.
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.sgd(*step_size*)[[source]](_modules/jax/example_libraries/optimizers.html#sgd)[#](#jax.example_libraries.optimizers.sgd)
Construct optimizer triple for stochastic gradient descent.
Parameters:
**step_size** – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.sm3(*step_size*, *momentum=0.9*)[[source]](_modules/jax/example_libraries/optimizers.html#sm3)[#](#jax.example_libraries.optimizers.sm3)
Construct optimizer triple for SM3.
Memory-Efficient Adaptive Optimization for Large-Scale Learning.
<https://arxiv.org/abs/1901.11150>
Parameters:
* **step_size** – positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar.
* **momentum** – optional, a positive scalar value for momentum (default 0.9).
Returns:
An (init_fun, update_fun, get_params) triple.
jax.example_libraries.optimizers.unpack_optimizer_state(*opt_state*)[[source]](_modules/jax/example_libraries/optimizers.html#unpack_optimizer_state)[#](#jax.example_libraries.optimizers.unpack_optimizer_state)
Converts an OptimizerState to a marked pytree.
Converts an OptimizerState to a marked pytree with the leaves of the outer pytree represented as JoinPoints to avoid losing information. This function is intended to be useful when serializing optimizer states.
Parameters:
**opt_state** – An OptimizerState
Returns:
A pytree with JoinPoint leaves that contain a second level of pytrees.
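Putting the pieces together, a minimal end-to-end sketch of the `(init_fun, update_fun, get_params)` workflow, with a step-size schedule and a serialization round trip via `unpack_optimizer_state`/`pack_optimizer_state`; the loss, shapes, and schedule values are illustrative:
```
import pickle

import jax
import jax.numpy as jnp
from jax.example_libraries import optimizers

def loss(params):
  return jnp.sum(params ** 2)

schedule = optimizers.exponential_decay(step_size=0.1, decay_steps=100, decay_rate=0.9)
opt_init, opt_update, get_params = optimizers.adam(schedule)
opt_state = opt_init(jnp.ones(3))

for i in range(10):
  grads = jax.grad(loss)(get_params(opt_state))
  opt_state = opt_update(i, grads, opt_state)  # update_fun takes (step, grads, state)

# Serialization round trip, as described above (assuming the marked pytree pickles).
tree = optimizers.unpack_optimizer_state(opt_state)
opt_state = optimizers.pack_optimizer_state(pickle.loads(pickle.dumps(tree)))
```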
##### `jax.example_libraries.stax` module[#](#module-jax.example_libraries.stax)
Stax is a small but flexible neural net specification library from scratch.
You likely do not mean to import this module! Stax is intended as an example library only. There are a number of other much more fully-featured neural network libraries for JAX, including [Flax](https://github.com/google/flax) from Google, and [Haiku](https://github.com/deepmind/dm-haiku) from DeepMind.
jax.example_libraries.stax.AvgPool(*window_shape*, *strides=None*, *padding='VALID'*, *spec=None*)[#](#jax.example_libraries.stax.AvgPool)
Layer construction function for a pooling layer.
jax.example_libraries.stax.BatchNorm(*axis=(0*, *1*, *2)*, *epsilon=1e-05*, *center=True*, *scale=True*, *beta_init=<function zeros>*, *gamma_init=<function ones>*)[[source]](_modules/jax/example_libraries/stax.html#BatchNorm)[#](#jax.example_libraries.stax.BatchNorm)
Layer construction function for a batch normalization layer.
jax.example_libraries.stax.Conv(*out_chan*, *filter_shape*, *strides=None*, *padding='VALID'*, *W_init=None*, *b_init=<function normal.<locals>.init>*)[#](#jax.example_libraries.stax.Conv)
Layer construction function for a general convolution layer.
jax.example_libraries.stax.Conv1DTranspose(*out_chan*, *filter_shape*, *strides=None*, *padding='VALID'*, *W_init=None*, *b_init=<function normal.<locals>.init>*)[#](#jax.example_libraries.stax.Conv1DTranspose)
Layer construction function for a general transposed-convolution layer.
jax.example_libraries.stax.ConvTranspose(*out_chan*, *filter_shape*, *strides=None*, *padding='VALID'*, *W_init=None*, *b_init=<function normal.<locals>.init>*)[#](#jax.example_libraries.stax.ConvTranspose)
Layer construction function for a general transposed-convolution layer.
jax.example_libraries.stax.Dense(*out_dim*, *W_init=<function variance_scaling.<locals>.init>*, *b_init=<function normal.<locals>.init>*)[[source]](_modules/jax/example_libraries/stax.html#Dense)[#](#jax.example_libraries.stax.Dense)
Layer constructor function for a dense (fully-connected) layer.
jax.example_libraries.stax.Dropout(*rate*, *mode='train'*)[[source]](_modules/jax/example_libraries/stax.html#Dropout)[#](#jax.example_libraries.stax.Dropout)
Layer construction function for a dropout layer with given rate.
jax.example_libraries.stax.FanInConcat(*axis=-1*)[[source]](_modules/jax/example_libraries/stax.html#FanInConcat)[#](#jax.example_libraries.stax.FanInConcat)
Layer construction function for a fan-in concatenation layer.
jax.example_libraries.stax.FanOut(*num*)[[source]](_modules/jax/example_libraries/stax.html#FanOut)[#](#jax.example_libraries.stax.FanOut)
Layer construction function for a fan-out layer.
jax.example_libraries.stax.GeneralConv(*dimension_numbers*, *out_chan*, *filter_shape*, *strides=None*, *padding='VALID'*, *W_init=None*, *b_init=<function normal.<locals>.init>*)[[source]](_modules/jax/example_libraries/stax.html#GeneralConv)[#](#jax.example_libraries.stax.GeneralConv)
Layer construction function for a general convolution layer.
jax.example_libraries.stax.GeneralConvTranspose(*dimension_numbers*, *out_chan*, *filter_shape*, *strides=None*, *padding='VALID'*, *W_init=None*, *b_init=<function normal.<locals>.init>*)[[source]](_modules/jax/example_libraries/stax.html#GeneralConvTranspose)[#](#jax.example_libraries.stax.GeneralConvTranspose)
Layer construction function for a general transposed-convolution layer.
jax.example_libraries.stax.MaxPool(*window_shape*, *strides=None*, *padding='VALID'*, *spec=None*)[#](#jax.example_libraries.stax.MaxPool)
Layer construction function for a pooling layer.
jax.example_libraries.stax.SumPool(*window_shape*, *strides=None*, *padding='VALID'*, *spec=None*)[#](#jax.example_libraries.stax.SumPool)
Layer construction function for a pooling layer.
jax.example_libraries.stax.elementwise(*fun*, ***fun_kwargs*)[[source]](_modules/jax/example_libraries/stax.html#elementwise)[#](#jax.example_libraries.stax.elementwise)
Layer that applies a scalar function elementwise on its inputs.
jax.example_libraries.stax.parallel(**layers*)[[source]](_modules/jax/example_libraries/stax.html#parallel)[#](#jax.example_libraries.stax.parallel)
Combinator for composing layers in parallel.
The layer resulting from this combinator is often used with the FanOut and FanInSum layers.
Parameters:
***layers** – a sequence of layers, each an (init_fun, apply_fun) pair.
Returns:
A new layer, meaning an (init_fun, apply_fun) pair, representing the parallel composition of the given sequence of layers. In particular, the returned layer takes a sequence of inputs and returns a sequence of outputs with the same length as the argument layers.
jax.example_libraries.stax.serial(**layers*)[[source]](_modules/jax/example_libraries/stax.html#serial)[#](#jax.example_libraries.stax.serial)
Combinator for composing layers in serial.
Parameters:
***layers** – a sequence of layers, each an (init_fun, apply_fun) pair.
Returns:
A new layer, meaning an (init_fun, apply_fun) pair, representing the serial composition of the given sequence of layers.
jax.example_libraries.stax.shape_dependent(*make_layer*)[[source]](_modules/jax/example_libraries/stax.html#shape_dependent)[#](#jax.example_libraries.stax.shape_dependent)
Combinator to delay layer constructor pair until input shapes are known.
Parameters:
**make_layer** – a one-argument function that takes an input shape as an argument
(a tuple of positive integers) and returns an (init_fun, apply_fun) pair.
Returns:
A new layer, meaning an (init_fun, apply_fun) pair, representing the same layer as returned by make_layer but with its construction delayed until input shapes are known.
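As a brief illustration of the `(init_fun, apply_fun)` pattern (a sketch; the layer sizes are arbitrary, and `Relu`/`LogSoftmax` are stax's elementwise activation layers):
```
import jax
import jax.numpy as jnp
from jax.example_libraries import stax
from jax.example_libraries.stax import Dense, LogSoftmax, Relu

# Compose layers in series; the result is itself an (init_fun, apply_fun) pair.
init_fun, apply_fun = stax.serial(
    Dense(128), Relu,
    Dense(10), LogSoftmax)

rng = jax.random.PRNGKey(0)
out_shape, params = init_fun(rng, (-1, 784))      # -1 stands in for the batch dimension
logits = apply_fun(params, jnp.ones((32, 784)))   # shape (32, 10)
```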
#### `jax.experimental` module[#](#jax-experimental-module)
`jax.experimental.optix` has been moved into its own Python package
([deepmind/optax](https://github.com/deepmind/optax)).
`jax.experimental.ann` has been moved into `jax.lax`.
##### Experimental Modules[#](#experimental-modules)
###### `jax.experimental.checkify` module[#](#module-jax.experimental.checkify)
###### API[#](#api)
| | |
| --- | --- |
| [`checkify`](index.html#jax.experimental.checkify.checkify)(f[, errors]) | Functionalize check calls in fun, and optionally add run-time error checks. |
| [`check`](index.html#jax.experimental.checkify.check)(pred, msg, *fmt_args, **fmt_kwargs) | Check a predicate, add an error with msg if predicate is False. |
| [`check_error`](index.html#jax.experimental.checkify.check_error)(error) | Raise an Exception if `error` represents a failure. |
| [`Error`](index.html#jax.experimental.checkify.Error)(_pred, _code, _metadata, _payload) | |
| [`JaxRuntimeError`](index.html#jax.experimental.checkify.JaxRuntimeError) | |
| [`user_checks`](index.html#jax.experimental.checkify.user_checks) | |
| [`nan_checks`](index.html#jax.experimental.checkify.nan_checks) | |
| [`index_checks`](index.html#jax.experimental.checkify.index_checks) | |
| [`div_checks`](index.html#jax.experimental.checkify.div_checks) | |
| [`float_checks`](index.html#jax.experimental.checkify.float_checks) | |
| [`automatic_checks`](index.html#jax.experimental.checkify.automatic_checks) | |
| [`all_checks`](index.html#jax.experimental.checkify.all_checks) | |
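A minimal sketch of how the pieces above fit together: `check` records an error inside a `checkify`-transformed function, and `Error.throw()` raises it as a `JaxRuntimeError` if the predicate failed; the function name and message are illustrative:
```
import jax.numpy as jnp
from jax.experimental import checkify

def safe_log(x):
  # Record an error (with a formatted message) if the predicate is False.
  checkify.check(x > 0, "x must be positive, got {x}", x=x)
  return jnp.log(x)

checked_log = checkify.checkify(safe_log, errors=checkify.user_checks)
err, out = checked_log(jnp.float32(-1.))
err.throw()  # raises because the check failed; a no-op on success
```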
###### `jax.experimental.host_callback` module[#](#module-jax.experimental.host_callback)
Primitives for calling Python functions on the host from JAX accelerator code.
**Experimental: please give feedback, and expect changes.**
This module introduces the host callback functions [`call()`](index.html#jax.experimental.host_callback.call),
[`id_tap()`](index.html#jax.experimental.host_callback.id_tap), and [`id_print()`](index.html#jax.experimental.host_callback.id_print), that send their arguments from the device to the host and invoke user-defined Python functions on the host, optionally returning results back to the device computation.
We show below how these functions can be used. We start with [`call()`](index.html#jax.experimental.host_callback.call),
and we discuss examples of calling from JAX to arbitrary Python functions on the CPU, e.g., to use NumPy CPU custom kernels. Then we show uses of [`id_tap()`](index.html#jax.experimental.host_callback.id_tap) and [`id_print()`](index.html#jax.experimental.host_callback.id_print), which have the restriction that they cannot return values from the host to the device.
These primitives are generally faster because they are executed asynchronously with the device code.
In particular, they can be used to tap into and to debug JAX code.
###### Using [`call()`](index.html#jax.experimental.host_callback.call) to call a host function and return results to device[#](#using-call-to-call-a-host-function-and-return-results-to-device)
Use [`call()`](index.html#jax.experimental.host_callback.call) to invoke a computation on the host and return NumPy arrays to the device computation.
Host computation is useful, e.g., when a device computation needs some data that requires I/O on the host, or it needs a library that is available on the host and you do not want to code it in JAX.
For example, eigen decomposition for general matrices in JAX does not work on TPU.
We can call the Numpy implementation from any JAX accelerator computation,
using a host computation:
```
# This function runs on the host
def host_eig(m: np.ndarray) -> np.ndarray:
  return np.linalg.eigvals(m)

# This function is used in JAX
def device_fun(m):
  # We send "m" to the host, asking it to call "host_eig" and return the result.
  # We have to specify the result shape and dtype, either in the form of an
  # example return value or any object that has `shape` and `dtype` attributes,
  # e.g., a NumPy array or a `jax.ShapeDtypeStruct`.
  return hcb.call(host_eig, m,
                  # Given an input of shape (..., d, d), eig output has shape (..., d)
                  result_shape=jax.ShapeDtypeStruct(m.shape[:-1], m.dtype))
```
The [`call()`](index.html#jax.experimental.host_callback.call) function and the Python host function both take a single argument and return a single result, but those can be pytrees. Note that we must tell the [`call()`](index.html#jax.experimental.host_callback.call) what shape and dtype to expect from the host invocation, using the `result_shape` keyword argument.
This is important because the device code is compiled with that expectation.
There will be an error raised at runtime if the actual invocation produces a different result shape. In general, **such errors and also exceptions raised by the host computation may be difficult to debug**. See the Debugging section below.
This is a problem for [`call()`](index.html#jax.experimental.host_callback.call) but not for [`id_tap()`](index.html#jax.experimental.host_callback.id_tap) because for the latter the device code does not expect a returned value.
The [`call()`](index.html#jax.experimental.host_callback.call) API can be used inside a jit or pmap computation or inside cond/scan/while control flow. When used inside [`jax.pmap()`](index.html#jax.pmap), there will be separate calls to the host from each of the participating devices:
```
def host_sin(x, *, device):
# The ``device`` argument is passed due to ``call_with_device=True`` below.
print(f"Invoking host_sin with {x.shape} on {device}")
return np.sin(x)
# Use pmap to run the computation on two devices
jax.pmap(lambda x: hcb.call(host_sin, x,
                            result_shape=x,
                            # Ask that the `host_sin` function be passed `device=dev`
                            call_with_device=True))(
    np.ones((2, 4), dtype=np.float32))
# prints (in arbitrary order)
# Invoking host_sin with (4,) on cpu:0
# Invoking host_sin with (4,) on cpu:1
```
Note that [`call()`](index.html#jax.experimental.host_callback.call) does not support any JAX transformations, but as we show below one can make use of the existing support for [Custom differentiation in JAX](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html).
###### Using [`id_tap()`](index.html#jax.experimental.host_callback.id_tap) to call a Python function on the host, with no returned values[#](#using-id-tap-to-call-a-python-function-on-the-host-with-no-returned-values)
The [`id_tap()`](index.html#jax.experimental.host_callback.id_tap) and [`id_print()`](index.html#jax.experimental.host_callback.id_print) are special cases of [`call()`](index.html#jax.experimental.host_callback.call), when you just want the side effects of your Python callback. These functions have the advantage that once the arguments have been sent to the host, the device computation can proceed without waiting for the Python callback to return.
For [`id_tap()`](index.html#jax.experimental.host_callback.id_tap) you can specify your Python callback to be called, while
[`id_print()`](index.html#jax.experimental.host_callback.id_print) uses a built-in callback that prints the arguments to stdout on the host.
The Python function passed to [`id_tap()`](index.html#jax.experimental.host_callback.id_tap) takes two positional arguments (the value tapped from the device computation along with a `transforms` tuple,
described below). Optionally, the function may be passed a keyword argument
`device` with the Device from which the value was tapped.
A few examples:
```
def host_func(arg, transforms):
  ...do something with arg...

# calls host_func(2x, []) on host
id_tap(host_func, 2 * x)

# calls host_func((2x, 3x), [])
id_tap(host_func, (2 * x, 3 * x))  # The argument can be a pytree

# calls host_func(2x, [], device=jax.devices()[0])
id_tap(host_func, 2 * x, tap_with_device=True)  # Pass the device to the tap

# calls host_func(2x, [], what='activation')
id_tap(functools.partial(host_func, what='activation'), 2 * x)

# calls host_func(dict(x=x, y=y), what='data')
id_tap(lambda tap, transforms: host_func(tap, what='data'), dict(x=x, y=y))
```
The above examples can all be adapted to use [`id_print()`](index.html#jax.experimental.host_callback.id_print) instead, with the difference that [`id_print()`](index.html#jax.experimental.host_callback.id_print) prints on the host the positional argument,
along with any additional kwargs and the automatic kwarg `transforms`.
###### Using [`barrier_wait()`](index.html#jax.experimental.host_callback.barrier_wait) to wait until all callbacks have executed[#](#using-barrier-wait-to-wait-until-all-callbacks-have-executed)
If your Python callbacks have side-effects you may need to wait until the computation has finished to ensure that the side-effects have been observed.
You can use the [`barrier_wait()`](index.html#jax.experimental.host_callback.barrier_wait) function for that purpose:
```
accumulator = []
def host_log(arg, transforms):
# We just record the arguments in a list
accumulator.append(arg)
def device_fun(x):
id_tap(host_log, x)
id_tap(host_log, 2. * x)
jax.jit(device_fun)(1.)
jax.jit(device_fun)(1.)
# At this point, we have started two computations, each with two
# taps, but they may not have yet executed.
barrier_wait()
# Now we know that all the computations started before `barrier_wait`
# on all devices, have finished, and all the callbacks have finished
# executing.
```
Note that [`barrier_wait()`](index.html#jax.experimental.host_callback.barrier_wait) will start one tiny computation with one tap on each of the jax.local_devices() and will wait for all these taps to be received.
An alternative to using [`barrier_wait()`](index.html#jax.experimental.host_callback.barrier_wait) is to just wait for the end of the computation, if all the callbacks are [`call()`](index.html#jax.experimental.host_callback.call):
```
accumulator = []
def host_log(arg):
# We just record the arguments in a list
accumulator.append(arg)
return 0. # return something
def device_fun(x):
y = call(host_log, x, result_shape=jax.ShapeDtypeStruct((), np.float32))
z = call(host_log, 2. * x, result_shape=jax.ShapeDtypeStruct((), np.float32))
return y + z # return something that uses both results
res1 = jax.jit(device_fun)(1.)
res2 = jax.jit(device_fun)(1.)
res1.block_until_ready()
res2.block_until_ready()
```
###### Behavior under parallelization transformations[#](#behavior-under-parallelization-transformations)
In presence of [`jax.pmap()`](index.html#jax.pmap) the code will run on multiple devices and each device will tap its values independently.
It may be helpful to use the `tap_with_device` option for [`id_print()`](index.html#jax.experimental.host_callback.id_print)
or [`id_tap()`](index.html#jax.experimental.host_callback.id_tap), so that you see which device is sending which data:
```
jax.pmap(power3, devices=jax.local_devices()[:2])(np.array([3., 4.]))
# device=cpu:0 what=x,x^2: (3., 9.) # from the first device
# device=cpu:1 what=x,x^2: (4., 16.) # from the second device
```
When using [`jax.pmap()`](index.html#jax.pmap) with multiple devices on multiple hosts, every host will receive callbacks from all of its local devices, with an operand that corresponds to each device slice. For a
[`call()`](index.html#jax.experimental.host_callback.call), the callback must return to each device only the slice of the result that pertains to the corresponding device.
When using the experimental `pjit.pjit()` the code will run on multiple devices on different shards of the input. The current implementation of host callbacks will ensure that a single device will collect and outfeed the entire operand, in a single callback. The callback function is supposed to return the entire array, which will then be sent in a single infeed to the same device that issued the outfeed. This device is then responsible for sending the required shards to the other devices:
```
with jax.sharding.Mesh(jax.local_devices()[:2], ["d"]):
pjit.pjit(power3, in_shardings=(P("d"),),
out_shardings=(P("d"),))(np.array([3., 4.]))
# device=TPU:0 what=x,x^2: ( [3., 4.],
# [9., 16.] )
```
Note that the collection of the operand on one device may result in OOM if the operand was sharded across devices.
When using `pjit.pjit()` with multiple devices on multiple hosts, only the host for the device 0 (w.r.t. the mesh) will receive the callback, with the operand collected from all participating devices on all hosts. For a [`call()`](index.html#jax.experimental.host_callback.call), the callback must return the entire array for all devices on all hosts.
###### Behavior under JAX autodiff transformations[#](#behavior-under-jax-autodiff-transformations)
When used under a JAX autodiff transformation, the host callback functions operate on the primal values only. Consider the following example:
```
def power3(x):
y = x * x
# Print both 'x' and 'x^2'. Must pack as a tuple.
hcb.id_print((x, y), what="x,x^2")
return y * x
power3(3.)
# what: x,x^2 : (3., 9.)
```
(You can see these examples tested in host_callback_test.HostCallbackTapTest.test_tap_transforms.)
When used under [`jax.jvp()`](index.html#jax.jvp) there will be one callback with the primal values only:
```
jax.jvp(power3, (3.,), (0.1,))
# what: x,x^2 : (3., 9.)
```
Similarly for [`jax.grad()`](index.html#jax.grad), we get a callback from the forward computation only:
```
jax.grad(power3)(3.)
# what: x,x^2 : (3., 9.)
```
If you want to invoke the callback on the tangents during a [`jax.jvp()`](index.html#jax.jvp),
you can use a custom_jvp. For example, you can define a function that does nothing interesting except that its custom_jvp will print the tangents:
```
@jax.custom_jvp
def print_tangents(arg):
  return None

@print_tangents.defjvp
def print_tangents_jvp(primals, tangents):
  arg_dot, = tangents
  hcb.id_print(arg_dot, what="tangents")
  return primals, tangents
```
Then you use this function in the places where you want to tap the tangents:
```
def power3_with_tangents(x):
y = x * x
# Print both 'x' and 'x^2'. Must pack as a tuple.
hcb.id_print((x, y), what="x,x^2")
print_tangents((x, y))
return y * x
jax.jvp(power3_with_tangents, (3.,), (0.1,))
# what: x,x^2 : (3., 9.)
# what: tangents : (0.1, 0.6)
```
You can do a similar thing for the cotangents during [`jax.grad()`](index.html#jax.grad). This time you must be careful to use in the rest of the computation the values whose cotangents you want to tap. Hence we make the `print_cotangents` return its argument:
```
@jax.custom_vjp
def print_cotangents(arg):
  # Must return the argument for which we want the cotangent.
  return arg

# f_fwd: a -> (b, residual)
def print_cotangents_fwd(arg):
  return print_cotangents(arg), None

# f_bwd: (residual, CT b) -> [CT a]
def print_cotangents_bwd(residual, ct_b):
  hcb.id_print(ct_b, what="cotangents", output_stream=testing_stream)
  return ct_b,

print_cotangents.defvjp(print_cotangents_fwd, print_cotangents_bwd)

def power3_with_cotangents(x):
  y = x * x
  # Print both 'x' and 'x^2'. Must pack as a tuple.
  hcb.id_print((x, y), what="x,x^2", output_stream=testing_stream)
  (x1, y1) = print_cotangents((x, y))
  # Must use the output of print_cotangents
  return y1 * x1
jax.grad(power3_with_cotangents)(3.)
# what: x,x^2 : (3., 9.)
# what: cotangents : (9., 3.)
```
If you use `ad_checkpoint.checkpoint()` to rematerialize the residuals for the backward pass, then the callbacks from the primal computation will be called twice:
```
jax.grad(lambda x: power3(ad_checkpoint.checkpoint(power3)(x)))(3.)
# what: x,x^2 : (3., 9.)
# what: x,x^2 : (27., 729.)
# what: x,x^2 : (3., 9.)
```
The callbacks come, in order, from: the primal computation of the inner `power3`, the primal computation of the outer `power3`, and the rematerialization of the residuals for the inner `power3`.
###### Behavior under jax.vmap[#](#behavior-under-jax-vmap)
The host callback functions [`id_print()`](index.html#jax.experimental.host_callback.id_print) and [`id_tap()`](index.html#jax.experimental.host_callback.id_tap) support the vectorization transformation [`jax.vmap()`](index.html#jax.vmap).
For [`jax.vmap()`](index.html#jax.vmap) the arguments to the callback are batched, and the callback function is passed an additional special `transforms` argument containing a list of transformation descriptors in the form `("batch", {"batch_dims": ...})`, where `...` denotes the batched dimensions for the tapped values (one entry per argument; `None` denotes an argument that was broadcast).
```
jax.vmap(power3)(np.array([2., 3.]))
# transforms: [('batch', {'batch_dims': (0, 0)})] what: x,x^2 : ([2., 3.], [4., 9.])
```
See documentation for [`id_tap()`](index.html#jax.experimental.host_callback.id_tap), [`id_print()`](index.html#jax.experimental.host_callback.id_print), and [`call()`](index.html#jax.experimental.host_callback.call).
For more usage examples, see tests/host_callback_test.py.
###### Using [`call()`](index.html#jax.experimental.host_callback.call) to call a TensorFlow function, with reverse-mode autodiff support[#](#using-call-to-call-a-tensorflow-function-with-reverse-mode-autodiff-support)
Another possible use for host computation is to invoke a library written for another framework, such as TensorFlow.
In this case it becomes interesting to support JAX autodiff for host callbacks by deferring to the autodiff mechanism in TensorFlow,
using the [`jax.custom_vjp()`](index.html#jax.custom_vjp) mechanism.
This is relatively easy to do, once one understands both the JAX custom VJP and the TensorFlow autodiff mechanisms.
The code for how this can be done is shown in the `call_tf_full_ad`
function in [host_callback_to_tf_test.py](https://github.com/google/jax/blob/main/tests/host_callback_to_tf_test.py).
This example supports arbitrary higher-order differentiation as well.
Note that if you just want to call TensorFlow functions from JAX, you can also use the [jax2tf.call_tf function](https://github.com/google/jax/blob/main/jax/experimental/jax2tf/call_tf.py).
###### Using [`call()`](index.html#jax.experimental.host_callback.call) to call a JAX function on another device, with reverse-mode autodiff support[#](#using-call-to-call-a-jax-function-on-another-device-with-reverse-mode-autodiff-support)
It should not be surprising that we can use host computation to invoke a JAX computation on another device. The arguments are sent from the accelerator to the host, and then to the outside device on which the JAX host computation will run, and then the results are sent back to the original accelerator.
The code for how this can be done is shown in the `call_jax_other_device` function
in [host_callback_test.py](https://github.com/google/jax/blob/main/tests/host_callback_test.py).
###### Low-level details and debugging[#](#low-level-details-and-debugging)
The host callback functions will be executed for each device in the order in which the send operations were performed on the device.
The host callback functions for multiple devices may be interleaved.
The data from the devices is received by separate threads managed by the JAX runtime (one thread per device). The runtime maintains a buffer of configurable size (see the flag `--jax_host_callback_max_queue_byte_size`).
When the buffer is full, all the receiving threads are paused which eventually pauses the computation on devices. The runtime has one additional thread for each device to invoke the Python user functions with the received data. If the processing of the callbacks is slow, it may actually lead to the runtime buffer filling up, and eventually pausing the computation on the devices when they need to send something.
For more details on the outfeed receiver runtime mechanism see
[runtime code](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/python/outfeed_receiver.cc).
In order to pause the execution until all data from computations already started on devices has arrived and has been processed, use [`barrier_wait()`](index.html#jax.experimental.host_callback.barrier_wait).
Exceptions from the user-defined callback functions are logged along with their stack traces, but the receiving threads are not stopped. Instead the last exception is recorded and the subsequent [`barrier_wait()`](index.html#jax.experimental.host_callback.barrier_wait) will raise [`CallbackException`](index.html#jax.experimental.host_callback.CallbackException) if any exception had occurred in one of the tap functions. This exception will include the text and the stack trace of the last exception encountered.
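A short sketch of the pattern just described, catching the deferred exception at a barrier; the logging name "end_of_step" is illustrative:
```
from jax.experimental import host_callback as hcb

try:
  hcb.barrier_wait("end_of_step")
except hcb.CallbackException as e:
  # Carries the text and stack trace of the last exception raised in a tap function.
  print("a callback failed:", e)
```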
One further complication arises for callback functions that must return results to the call origin device, such as [`call()`](index.html#jax.experimental.host_callback.call). This is handled differently on CPU/GPU devices compared to TPU devices.
On CPU/GPU devices, in order to avoid the device computation being stuck waiting for a result that will never arrive, in case of any error during the processing of the callback (whether raised by the user-code itself or due to a mismatch of the returned value and the expected return_shape)
we send the device a “fake” result of shape `int8[12345]`.
This will make the device computation abort because the received data is different than the one that it expects. On CPU the runtime will crash with a distinctive error message:
```
Check failed: buffer->length() == buffer_length (12345 vs. ...)
```
On GPU, the failure is more user-friendly and will be surfaced to the Python program as:
```
RET_CHECK failure ... Mismatch between infeed source buffer shape s8[12345] ...
```
To debug the underlying cause for these messages, see the Debugging section.
On TPU devices, there is currently no shape check for infeed, so we take the safer route of not sending this fake result in case of errors. This means that the computation will hang, and no exception will be raised (but any exceptions in the callback functions will still appear in the logs).
The current implementation uses the outfeed mechanism provided by XLA. The mechanism itself is quite primitive in the sense that a receiver must know exactly the shape of each incoming packet, and how many packets are expected.
This makes it hard to use for multiple kinds of data in the same computation,
and it is practically impossible to use it under conditionals or in loops of non-constant iteration count. Furthermore, code that uses the outfeed mechanism directly cannot be transformed by JAX. All these limitations are addressed by the host callback functions. The tapping API introduced here makes it easy to share the outfeed mechanism for multiple purposes, while supporting all transformations.
**Note that after you have used the host callback functions, you cannot use lax.outfeed directly**. You may want to `stop_outfeed_receiver()`
if you later need to use lax.outfeed.
Since the actual calls to your callback functions are made from the C++
receiver, it may be hard to debug the calls. In particular, the stack trace will not include the calling code. You can use the flag
`jax_host_callback_inline` (or the environment variable
`JAX_HOST_CALLBACK_INLINE`) to ensure that the calls to the callbacks are inlined. This works only if the calls are outside a staging context
([`jit()`](index.html#jax.jit) or a control-flow primitive).
The C++ [receiver](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/python/outfeed_receiver.cc)
is started automatically on the first call to [`id_tap()`](index.html#jax.experimental.host_callback.id_tap). In order to stop it properly, upon start an `atexit` handler is registered to call
[`barrier_wait()`](index.html#jax.experimental.host_callback.barrier_wait) with the logging name “at_exit”.
There are a few environment variables that you can use to turn on logging for the C++ outfeed [receiver backend](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/python/outfeed_receiver.cc).
> * `TF_CPP_MIN_LOG_LEVEL=0`: will turn on INFO logging, needed for all below.
> * `TF_CPP_MIN_VLOG_LEVEL=3`: will make all VLOG logging up to level 3 behave
> like INFO logs. This may be too much, but you will see which modules are
> logging relevant info, and then you can select which modules to log from.
> * `TF_CPP_VMODULE=<module_name>=3` (the module name can be either C++ or
> Python, without the extension).
You should also use the `--verbosity=2` flag so that you see the logs from Python.
For example, you can try to enable logging in the `host_callback` module:
`TF_CPP_MIN_LOG_LEVEL=0 TF_CPP_VMODULE=host_callback=3 python tests/host_callback_test.py --verbosity=2 HostCallbackIdTapTest.test_tap_jit_simple`
If you want to enable logging in lower-level implementation modules try:
`TF_CPP_MIN_LOG_LEVEL=0 TF_CPP_VMODULE=outfeed_receiver=3,host_callback=3,outfeed_receiver_py=3,outfeed_thunk=3,infeed_thunk=3,cpu_transfer_manager=3,cpu_runtime=3,xfeed_manager=3,pjrt_client=3 python tests/host_callback_test.py --verbosity=2 HostCallbackIdTapTest.test_tap_jit_simple`
(For bazel tests use `--test_arg=--vmodule=...`.)
Still to do:
* More performance tests.
* Explore implementation with outside compilation for TPU.
* Explore implementation with XLA CustomCall for CPU and GPU.
###### API[#](#api)
| | |
| --- | --- |
| [`id_tap`](index.html#jax.experimental.host_callback.id_tap)(tap_func, arg, *[, result, ...]) | Host-callback tap primitive, like identity function with a call to `tap_func`. |
| [`id_print`](index.html#jax.experimental.host_callback.id_print)(arg, *[, result, tap_with_device, ...]) | Like [`id_tap()`](index.html#jax.experimental.host_callback.id_tap) with a printing tap function. |
| [`call`](index.html#jax.experimental.host_callback.call)(callback_func, arg, *[, result_shape, ...]) | Make a call to the host, and expect a result. |
| [`barrier_wait`](index.html#jax.experimental.host_callback.barrier_wait)([logging_name]) | Blocks the calling thread until all current outfeed is processed. |
| [`CallbackException`](index.html#jax.experimental.host_callback.CallbackException) | Signals that some callback function had exceptions. |
###### `jax.experimental.maps` module[#](#module-jax.experimental.maps)
###### API[#](#api)
| | |
| --- | --- |
| [`xmap`](index.html#jax.experimental.maps.xmap)(fun, in_axes, out_axes, *[, ...]) | Assign a positional signature to a program that uses named array axes. |
###### `jax.experimental.pjit` module[#](#module-jax.experimental.pjit)
###### API[#](#api)
jax.experimental.pjit.pjit(*fun*, *in_shardings=UnspecifiedValue*, *out_shardings=UnspecifiedValue*, *static_argnums=None*, *static_argnames=None*, *donate_argnums=None*, *donate_argnames=None*, *keep_unused=False*, *device=None*, *backend=None*, *inline=False*, *abstracted_axes=None*)[[source]](_modules/jax/_src/pjit.html#pjit)[#](#jax.experimental.pjit.pjit)
Makes `fun` compiled and automatically partitioned across multiple devices.
NOTE: This function is now equivalent to `jax.jit`; please use that instead.
The returned function has semantics equivalent to those of `fun`, but is compiled to an XLA computation that runs across multiple devices
(e.g. multiple GPUs or multiple TPU cores). This can be useful if the jitted version of `fun` would not fit in a single device’s memory, or to speed up
`fun` by running each operation in parallel across multiple devices.
The partitioning over devices happens automatically based on the propagation of the input partitioning specified in `in_shardings` and the output partitioning specified in `out_shardings`. The resources specified in those two arguments must refer to mesh axes, as defined by the [`jax.sharding.Mesh()`](index.html#jax.sharding.Mesh) context manager. Note that the mesh definition at [`pjit()`](#jax.experimental.pjit.pjit) application time is ignored, and the returned function will use the mesh definition available at each call site.
Inputs to a [`pjit()`](#jax.experimental.pjit.pjit)’d function will be automatically partitioned across devices if they’re not already correctly partitioned based on `in_shardings`.
In some scenarios, ensuring that the inputs are already correctly pre-partitioned can increase performance. For example, if passing the output of one
[`pjit()`](#jax.experimental.pjit.pjit)’d function to another [`pjit()`](#jax.experimental.pjit.pjit)’d function (or the same
[`pjit()`](#jax.experimental.pjit.pjit)’d function in a loop), make sure the relevant
`out_shardings` match the corresponding `in_shardings`.
Note
**Multi-process platforms:** On multi-process platforms such as TPU pods,
[`pjit()`](#jax.experimental.pjit.pjit) can be used to run computations across all available devices across processes. To achieve this, [`pjit()`](#jax.experimental.pjit.pjit) is designed to be used in SPMD Python programs, where every process is running the same Python code such that all processes run the same [`pjit()`](#jax.experimental.pjit.pjit)’d function in the same order.
When running in this configuration, the mesh should contain devices across all processes. However, any input argument dimensions partitioned over multi-process mesh axes should be of size equal to the corresponding *local*
mesh axis size, and outputs will be similarly sized according to the local mesh. `fun` will still be executed across *all* devices in the mesh,
including those from other processes, and will be given a global view of the data spread across multiple processes as a single array. However, outside of [`pjit()`](#jax.experimental.pjit.pjit) every process only “sees” its local piece of the input and output,
corresponding to its local sub-mesh.
This means that each process’s participating local devices must form a
_contiguous_ local sub-mesh within the full global mesh. A contiguous sub-mesh is one where all of its devices are adjacent within the global mesh, and form a rectangular prism.
The SPMD model also requires that the same multi-process [`pjit()`](#jax.experimental.pjit.pjit)’d functions must be run in the same order on all processes, but they can be interspersed with arbitrary operations running in a single process.
Parameters:
* **fun** ([`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable)) – Function to be compiled. Should be a pure function, as side-effects may only be executed once. Its arguments and return value should be arrays,
scalars, or (nested) standard Python containers (tuple/list/dict) thereof.
Positional arguments indicated by `static_argnums` can be anything at all, provided they are hashable and have an equality operation defined.
Static arguments are included as part of a compilation cache key, which is why hash and equality operators must be defined.
* **in_shardings** – Pytree of structure matching that of arguments to `fun`,
with all actual arguments replaced by resource assignment specifications.
It is also valid to specify a pytree prefix (e.g. one value in place of a whole subtree), in which case the leaves get broadcast to all values in that subtree.
The `in_shardings` argument is optional. JAX will infer the shardings from the input [`jax.Array`](index.html#jax.Array)’s, and defaults to replicating the input if the sharding cannot be inferred.
The valid resource assignment specifications are:
+ `XLACompatibleSharding`, which will decide how the value
will be partitioned. With this, using a mesh context manager is not
required.
+ [`None`](https://docs.python.org/3/library/constants.html#None) is a special case whose semantics are:
- if the mesh context manager is *not* provided, JAX has the freedom to
choose whatever sharding it wants.
For in_shardings, JAX will mark it as replicated, but this behavior
can change in the future.
For out_shardings, we will rely on the XLA GSPMD partitioner to
determine the output shardings.
- If the mesh context manager is provided, None will imply that the
value will be replicated on all devices of the mesh.
+ For backwards compatibility, in_shardings still supports ingesting
`PartitionSpec`. This option can *only* be used with the
mesh context manager.
- `PartitionSpec`, a tuple of length at most equal to the rank
of the partitioned value. Each element can be a [`None`](https://docs.python.org/3/library/constants.html#None), a mesh
axis or a tuple of mesh axes, and specifies the set of resources assigned
to partition the value’s dimension matching its position in the spec.
The size of every dimension has to be a multiple of the total number of resources assigned to it.
* **out_shardings** – Like `in_shardings`, but specifies resource assignment for function outputs.
The `out_shardings` argument is optional. If not specified, [`jax.jit()`](index.html#jax.jit)
will use GSPMD’s sharding propagation to determine how to shard the outputs.
* **static_argnums** ([`Union`](https://docs.python.org/3/library/typing.html#typing.Union)[[`None`](https://docs.python.org/3/library/constants.html#None), [`int`](https://docs.python.org/3/library/functions.html#int), [`Sequence`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[`int`](https://docs.python.org/3/library/functions.html#int)]]) – An optional int or collection of ints that specify which positional arguments to treat as static (compile-time constant).
Operations that only depend on static arguments will be constant-folded in Python (during tracing), and so the corresponding argument values can be any Python object.
Static arguments should be hashable, meaning both `__hash__` and
`__eq__` are implemented, and immutable. Calling the jitted function with different values for these constants will trigger recompilation.
Arguments that are not arrays or containers thereof must be marked as static.
If `static_argnums` is not provided, no arguments are treated as static.
* **static_argnames** ([`Union`](https://docs.python.org/3/library/typing.html#typing.Union)[[`str`](https://docs.python.org/3/library/stdtypes.html#str), [`Iterable`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable)[[`str`](https://docs.python.org/3/library/stdtypes.html#str)], [`None`](https://docs.python.org/3/library/constants.html#None)]) – An optional string or collection of strings specifying which named arguments to treat as static (compile-time constant). See the comment on `static_argnums` for details. If not provided but `static_argnums` is set, the default is based on calling
`inspect.signature(fun)` to find corresponding named arguments.
* **donate_argnums** ([`Union`](https://docs.python.org/3/library/typing.html#typing.Union)[[`None`](https://docs.python.org/3/library/constants.html#None), [`int`](https://docs.python.org/3/library/functions.html#int), [`Sequence`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[`int`](https://docs.python.org/3/library/functions.html#int)]]) – Specify which positional argument buffers are “donated” to the computation. It is safe to donate argument buffers if you no longer need them once the computation has finished. In some cases XLA can make use of donated buffers to reduce the amount of memory needed to perform a computation, for example recycling one of your input buffers to store a result. You should not reuse buffers that you donate to a computation, JAX will raise an error if you try to. By default, no argument buffers are donated.
If neither `donate_argnums` nor `donate_argnames` is provided, no arguments are donated. If `donate_argnums` is not provided but
`donate_argnames` is, or vice versa, JAX uses
`inspect.signature(fun)` to find any positional arguments that correspond to `donate_argnames`
(or vice versa). If both `donate_argnums` and `donate_argnames` are provided, `inspect.signature` is not used, and only actual parameters listed in either `donate_argnums` or `donate_argnames` will be donated.
For more details on buffer donation see the
[FAQ](https://jax.readthedocs.io/en/latest/faq.html#buffer-donation).
* **donate_argnames** ([`Union`](https://docs.python.org/3/library/typing.html#typing.Union)[[`str`](https://docs.python.org/3/library/stdtypes.html#str), [`Iterable`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable)[[`str`](https://docs.python.org/3/library/stdtypes.html#str)], [`None`](https://docs.python.org/3/library/constants.html#None)]) – An optional string or collection of strings specifying which named arguments are donated to the computation. See the comment on `donate_argnums` for details. If not provided but `donate_argnums` is set, the default is based on calling
`inspect.signature(fun)` to find corresponding named arguments.
* **keep_unused** ([`bool`](https://docs.python.org/3/library/functions.html#bool)) – If False (the default), arguments that JAX determines to be unused by fun *may* be dropped from resulting compiled XLA executables.
Such arguments will not be transferred to the device nor provided to the underlying executable. If True, unused arguments will not be pruned.
* **device** ([`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional)[[`Device`](index.html#jax.Device)]) – This argument is deprecated. Please put your arguments on the device you want before passing them to jit.
Optional, the Device the jitted function will run on. (Available devices can be retrieved via [`jax.devices()`](index.html#jax.devices).) The default is inherited from XLA’s DeviceAssignment logic and is usually to use
`jax.devices()[0]`.
* **backend** ([`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional)[[`str`](https://docs.python.org/3/library/stdtypes.html#str)]) – This argument is deprecated. Please put your arguments on the backend you want before passing them to jit.
Optional, a string representing the XLA backend: `'cpu'`, `'gpu'`, or
`'tpu'`.
Return type:
[`Wrapped`](index.html#jax.stages.Wrapped)
Returns:
A wrapped version of `fun`, set up for just-in-time compilation and automatically partitioned by the mesh available at each call site.
For example, a convolution operator can be automatically partitioned over an arbitrary set of devices by a single [`pjit()`](#jax.experimental.pjit.pjit) application:
```
>>> import jax
>>> import jax.numpy as jnp
>>> import numpy as np
>>> from jax.sharding import Mesh, PartitionSpec
>>> from jax.experimental.pjit import pjit
>>>
>>> x = jnp.arange(8, dtype=jnp.float32)
>>> f = pjit(lambda x: jax.numpy.convolve(x, jnp.asarray([0.5, 1.0, 0.5]), 'same'),
... in_shardings=None, out_shardings=PartitionSpec('devices'))
>>> with Mesh(np.array(jax.devices()), ('devices',)):
... print(f(x))
[ 0.5 2. 4. 6. 8. 10. 12. 10. ]
```
Parameters:
* **inline** ([`bool`](https://docs.python.org/3/library/functions.html#bool)) –
* **abstracted_axes** ([`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional)[[`Any`](https://docs.python.org/3/library/typing.html#typing.Any)]) –
###### `jax.experimental.sparse` module[#](#module-jax.experimental.sparse)
The [`jax.experimental.sparse`](#module-jax.experimental.sparse) module includes experimental support for sparse matrix operations in JAX. It is under active development, and the API is subject to change. The primary interfaces made available are the [`BCOO`](index.html#jax.experimental.sparse.BCOO) sparse array type, and the
[`sparsify()`](index.html#jax.experimental.sparse.sparsify) transform.
###### Batched-coordinate (BCOO) sparse matrices[#](#batched-coordinate-bcoo-sparse-matrices)
The main high-level sparse object currently available in JAX is the [`BCOO`](index.html#jax.experimental.sparse.BCOO),
or *batched coordinate* sparse array, which offers a compressed storage format compatible with JAX transformations, in particular JIT (e.g. [`jax.jit()`](index.html#jax.jit)), batching
(e.g. [`jax.vmap()`](index.html#jax.vmap)) and autodiff (e.g. [`jax.grad()`](index.html#jax.grad)).
Here is an example of creating a sparse array from a dense array:
```
>>> from jax.experimental import sparse
>>> import jax.numpy as jnp
>>> import numpy as np
```
```
>>> M = jnp.array([[0., 1., 0., 2.],
... [3., 0., 0., 0.],
... [0., 0., 4., 0.]])
```
```
>>> M_sp = sparse.BCOO.fromdense(M)
```
```
>>> M_sp
BCOO(float32[3, 4], nse=4)
```
Convert back to a dense array with the `todense()` method:
```
>>> M_sp.todense()
Array([[0., 1., 0., 2.],
[3., 0., 0., 0.],
[0., 0., 4., 0.]], dtype=float32)
```
The BCOO format is a somewhat modified version of the standard COO format, and the dense representation can be seen in the `data` and `indices` attributes:
```
>>> M_sp.data  # Explicitly stored data
Array([1., 2., 3., 4.], dtype=float32)
```
```
>>> M_sp.indices  # Indices of the stored data
Array([[0, 1],
[0, 3],
[1, 0],
[2, 2]], dtype=int32)
```
BCOO objects have familiar array-like attributes, as well as sparse-specific attributes:
```
>>> M_sp.ndim
2
```
```
>>> M_sp.shape
(3, 4)
```
```
>>> M_sp.dtype
dtype('float32')
```
```
>>> M_sp.nse # "number of specified elements"
4
```
BCOO objects also implement a number of array-like methods, to allow you to use them directly within jax programs. For example, here we compute the transposed matrix-vector product:
```
>>> y = jnp.array([3., 6., 5.])
```
```
>>> M_sp.T @ y
Array([18., 3., 20., 6.], dtype=float32)
```
```
>>> M.T @ y  # Compare to dense version
Array([18., 3., 20., 6.], dtype=float32)
```
BCOO objects are designed to be compatible with JAX transforms, including [`jax.jit()`](index.html#jax.jit),
[`jax.vmap()`](index.html#jax.vmap), [`jax.grad()`](index.html#jax.grad), and others. For example:
```
>>> from jax import grad, jit
```
```
>>> def f(y):
... return (M_sp.T @ y).sum()
...
>>> jit(grad(f))(y)
Array([3., 3., 4.], dtype=float32)
```
Note, however, that under normal circumstances [`jax.numpy`](index.html#module-jax.numpy) and [`jax.lax`](index.html#module-jax.lax) functions do not know how to handle sparse matrices, so attempting to compute things like
`jnp.dot(M_sp.T, y)` will result in an error (however, see the next section).
###### Sparsify transform[#](#sparsify-transform)
An overarching goal of the JAX sparse implementation is to provide a means to switch from dense to sparse computation seamlessly, without having to modify the dense implementation.
This sparse experiment accomplishes this through the [`sparsify()`](index.html#jax.experimental.sparse.sparsify) transform.
Consider this function, which computes a more complicated result from a matrix and a vector input:
```
>>> def f(M, v):
... return 2 * jnp.dot(jnp.log1p(M.T), v) + 1
...
>>> f(M, y)
Array([17.635532, 5.158883, 17.09438 , 7.591674], dtype=float32)
```
Were we to pass a sparse matrix to this directly, it would result in an error, because `jnp`
functions do not recognize sparse inputs. However, with [`sparsify()`](index.html#jax.experimental.sparse.sparsify), we get a version of this function that does accept sparse matrices:
```
>>> f_sp = sparse.sparsify(f)
```
```
>>> f_sp(M_sp, y)
Array([17.635532, 5.158883, 17.09438 , 7.591674], dtype=float32)
```
Support for [`sparsify()`](index.html#jax.experimental.sparse.sparsify) includes a large number of the most common primitives, including:
* generalized (batched) matrix products & Einstein summations (`dot_general_p`)
* zero-preserving elementwise binary operations (e.g. `add_p`, `mul_p`, etc.)
* zero-preserving elementwise unary operations (e.g. `abs_p`, `jax.lax.neg_p`, etc.)
* summation reductions (`reduce_sum_p`)
* general indexing operations (`slice_p`, `lax.dynamic_slice_p`, `lax.gather_p`)
* concatenation and stacking (`concatenate_p`)
* transposition & reshaping (`transpose_p`, `reshape_p`,
`squeeze_p`, `broadcast_in_dim_p`)
* some higher-order functions (`cond_p`, `while_p`, `scan_p`)
* some simple 1D convolutions (`conv_general_dilated_p`)
Nearly any [`jax.numpy`](index.html#module-jax.numpy) function that lowers to these supported primitives can be used within a sparsify transform to operate on sparse arrays. This set of primitives is enough to enable relatively sophisticated sparse workflows, as the next section will show.
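To make this concrete, here is a minimal sketch (the function and values are illustrative, not from the original docs) that combines several of the primitives listed above under a single [`sparsify()`](index.html#jax.experimental.sparse.sparsify)-transformed function:
```
>>> from jax.experimental import sparse
>>> import jax.numpy as jnp
>>> M = jnp.array([[0., 1.], [2., 0.]])
>>> M_sp = sparse.BCOO.fromdense(M)
>>> @sparse.sparsify
... def g(A, x):
...   # abs_p is zero-preserving, jnp.dot lowers to dot_general_p,
...   # and jnp.sum lowers to reduce_sum_p; sparsify supports all three.
...   return jnp.sum(jnp.abs(A)) + jnp.sum(jnp.dot(A, x))
...
>>> g(M_sp, jnp.ones(2))  # same value as g(M, jnp.ones(2))
Array(6., dtype=float32)
```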
###### Example: sparse logistic regression[#](#example-sparse-logistic-regression)
As an example of a more complicated sparse workflow, let’s consider a simple logistic regression implemented in JAX. Notice that the following implementation has no reference to sparsity:
```
>>> import functools
>>> from sklearn.datasets import make_classification
>>> from jax.scipy import optimize
```
```
>>> def sigmoid(x):
... return 0.5 * (jnp.tanh(x / 2) + 1)
...
>>> def y_model(params, X):
... return sigmoid(jnp.dot(X, params[1:]) + params[0])
...
>>> def loss(params, X, y):
... y_hat = y_model(params, X)
... return -jnp.mean(y * jnp.log(y_hat) + (1 - y) * jnp.log(1 - y_hat))
...
>>> def fit_logreg(X, y):
... params = jnp.zeros(X.shape[1] + 1)
... result = optimize.minimize(functools.partial(loss, X=X, y=y),
... x0=params, method='BFGS')
... return result.x
```
```
>>> X, y = make_classification(n_classes=2, random_state=1701)
>>> params_dense = fit_logreg(X, y)
>>> print(params_dense)
[-0.7298445 0.29893667 1.0248291 -0.44436368 0.8785025 -0.7724008
-0.62893456 0.2934014 0.82974285 0.16838408 -0.39774987 -0.5071844
0.2028872 0.5227761 -0.3739224 -0.7104083 2.4212713 0.6310087
-0.67060554 0.03139788 -0.05359547]
```
This returns the best-fit parameters of a dense logistic regression problem.
To fit the same model on sparse data, we can apply the [`sparsify()`](index.html#jax.experimental.sparse.sparsify) transform:
```
>>> Xsp = sparse.BCOO.fromdense(X) # Sparse version of the input
>>> fit_logreg_sp = sparse.sparsify(fit_logreg) # Sparse-transformed fit function
>>> params_sparse = fit_logreg_sp(Xsp, y)
>>> print(params_sparse)
[-0.72971725 0.29878938 1.0246326 -0.44430563 0.8784217 -0.77225566
-0.6288222 0.29335397 0.8293481 0.16820715 -0.39764675 -0.5069753
0.202579 0.522672 -0.3740134 -0.7102678 2.4209507 0.6310593
-0.670236 0.03132951 -0.05356663]
```
###### Sparse API Reference[#](#sparse-api-reference)
| | |
| --- | --- |
| [`sparsify`](index.html#jax.experimental.sparse.sparsify)(f[, use_tracer]) | Experimental sparsification transform. |
| [`grad`](index.html#jax.experimental.sparse.grad)(fun[, argnums, has_aux]) | Sparse-aware version of [`jax.grad()`](index.html#jax.grad) |
| [`value_and_grad`](index.html#jax.experimental.sparse.value_and_grad)(fun[, argnums, has_aux]) | Sparse-aware version of [`jax.value_and_grad()`](index.html#jax.value_and_grad) |
| [`empty`](index.html#jax.experimental.sparse.empty)(shape[, dtype, index_dtype, sparse_format]) | Create an empty sparse array. |
| [`eye`](index.html#jax.experimental.sparse.eye)(N[, M, k, dtype, index_dtype, sparse_format]) | Create 2D sparse identity matrix. |
| [`todense`](index.html#jax.experimental.sparse.todense)(arr) | Convert input to a dense matrix. |
| [`random_bcoo`](index.html#jax.experimental.sparse.random_bcoo)(key, shape, *[, dtype, ...]) | Generate a random BCOO matrix. |
| [`JAXSparse`](index.html#jax.experimental.sparse.JAXSparse)(args, *, shape) | Base class for high-level JAX sparse objects. |
###### BCOO Data Structure[#](#bcoo-data-structure)
[`BCOO`](index.html#jax.experimental.sparse.BCOO) is the *Batched COO format*, and is the main sparse data structure implemented in [`jax.experimental.sparse`](#module-jax.experimental.sparse).
Its operations are compatible with JAX’s core transformations, including batching
(e.g. [`jax.vmap()`](index.html#jax.vmap)) and autodiff (e.g. [`jax.grad()`](index.html#jax.grad)).
| | |
| --- | --- |
| [`BCOO`](index.html#jax.experimental.sparse.BCOO)(args, *, shape[, indices_sorted, ...]) | Experimental batched COO matrix implemented in JAX |
| [`bcoo_broadcast_in_dim`](index.html#jax.experimental.sparse.bcoo_broadcast_in_dim)(mat, *, shape, ...) | Expand the size and rank of a BCOO array by duplicating the data. |
| [`bcoo_concatenate`](index.html#jax.experimental.sparse.bcoo_concatenate)(operands, *, dimension) | Sparse implementation of [`jax.lax.concatenate()`](index.html#jax.lax.concatenate) |
| [`bcoo_dot_general`](index.html#jax.experimental.sparse.bcoo_dot_general)(lhs, rhs, *, dimension_numbers) | A general contraction operation. |
| [`bcoo_dot_general_sampled`](index.html#jax.experimental.sparse.bcoo_dot_general_sampled)(A, B, indices, *, ...) | A contraction operation with output computed at given sparse indices. |
| [`bcoo_dynamic_slice`](index.html#jax.experimental.sparse.bcoo_dynamic_slice)(mat, start_indices, ...) | Sparse implementation of [`jax.lax.dynamic_slice()`](index.html#jax.lax.dynamic_slice). |
| [`bcoo_extract`](index.html#jax.experimental.sparse.bcoo_extract)(sparr, arr, *[, assume_unique]) | Extract values from a dense array according to the sparse array's indices. |
| [`bcoo_fromdense`](index.html#jax.experimental.sparse.bcoo_fromdense)(mat, *[, nse, n_batch, ...]) | Create BCOO-format sparse matrix from a dense matrix. |
| [`bcoo_gather`](index.html#jax.experimental.sparse.bcoo_gather)(operand, start_indices, ...[, ...]) | BCOO version of lax.gather. |
| [`bcoo_multiply_dense`](index.html#jax.experimental.sparse.bcoo_multiply_dense)(sp_mat, v) | An element-wise multiplication between a sparse and a dense array. |
| [`bcoo_multiply_sparse`](index.html#jax.experimental.sparse.bcoo_multiply_sparse)(lhs, rhs) | An element-wise multiplication of two sparse arrays. |
| [`bcoo_update_layout`](index.html#jax.experimental.sparse.bcoo_update_layout)(mat, *[, n_batch, ...]) | Update the storage layout (i.e. `n_batch` and `n_dense`) of a BCOO matrix. |
| [`bcoo_reduce_sum`](index.html#jax.experimental.sparse.bcoo_reduce_sum)(mat, *, axes) | Sum array element over given axes. |
| [`bcoo_reshape`](index.html#jax.experimental.sparse.bcoo_reshape)(mat, *, new_sizes[, dimensions]) | Sparse implementation of [`jax.lax.reshape()`](index.html#jax.lax.reshape). |
| [`bcoo_slice`](index.html#jax.experimental.sparse.bcoo_slice)(mat, *, start_indices, limit_indices) | Sparse implementation of [`jax.lax.slice()`](index.html#jax.lax.slice). |
| [`bcoo_sort_indices`](index.html#jax.experimental.sparse.bcoo_sort_indices)(mat) | Sort indices of a BCOO array. |
| [`bcoo_squeeze`](index.html#jax.experimental.sparse.bcoo_squeeze)(arr, *, dimensions) | Sparse implementation of [`jax.lax.squeeze()`](index.html#jax.lax.squeeze). |
| [`bcoo_sum_duplicates`](index.html#jax.experimental.sparse.bcoo_sum_duplicates)(mat[, nse]) | Sums duplicate indices within a BCOO array, returning an array with sorted indices. |
| [`bcoo_todense`](index.html#jax.experimental.sparse.bcoo_todense)(mat) | Convert batched sparse matrix to a dense matrix. |
| [`bcoo_transpose`](index.html#jax.experimental.sparse.bcoo_transpose)(mat, *, permutation) | Transpose a BCOO-format array. |
###### BCSR Data Structure[#](#bcsr-data-structure)
[`BCSR`](index.html#jax.experimental.sparse.BCSR) is the *Batched Compressed Sparse Row* format, and is under development.
Its operations are compatible with JAX’s core transformations, including batching
(e.g. [`jax.vmap()`](index.html#jax.vmap)) and autodiff (e.g. [`jax.grad()`](index.html#jax.grad)).
| | |
| --- | --- |
| [`BCSR`](index.html#jax.experimental.sparse.BCSR)(args, *, shape[, indices_sorted, ...]) | Experimental batched CSR matrix implemented in JAX. |
| [`bcsr_dot_general`](index.html#jax.experimental.sparse.bcsr_dot_general)(lhs, rhs, *, dimension_numbers) | A general contraction operation. |
| [`bcsr_extract`](index.html#jax.experimental.sparse.bcsr_extract)(indices, indptr, mat) | Extract values from a dense matrix at given BCSR (indices, indptr). |
| [`bcsr_fromdense`](index.html#jax.experimental.sparse.bcsr_fromdense)(mat, *[, nse, n_batch, ...]) | Create BCSR-format sparse matrix from a dense matrix. |
| [`bcsr_todense`](index.html#jax.experimental.sparse.bcsr_todense)(mat) | Convert batched sparse matrix to a dense matrix. |
###### Other Sparse Data Structures[#](#other-sparse-data-structures)
Other sparse data structures include [`COO`](index.html#jax.experimental.sparse.COO), [`CSR`](index.html#jax.experimental.sparse.CSR), and [`CSC`](index.html#jax.experimental.sparse.CSC). These are reference implementations of simple sparse structures with a few core operations implemented.
Their operations are generally compatible with autodiff transformations such as [`jax.grad()`](index.html#jax.grad),
but not with batching transforms like [`jax.vmap()`](index.html#jax.vmap).
| | |
| --- | --- |
| [`COO`](index.html#jax.experimental.sparse.COO)(args, *, shape[, rows_sorted, cols_sorted]) | Experimental COO matrix implemented in JAX. |
| [`CSC`](index.html#jax.experimental.sparse.CSC)(args, *, shape) | Experimental CSC matrix implemented in JAX; API subject to change. |
| [`CSR`](index.html#jax.experimental.sparse.CSR)(args, *, shape) | Experimental CSR matrix implemented in JAX. |
| [`coo_fromdense`](index.html#jax.experimental.sparse.coo_fromdense)(mat, *[, nse, index_dtype]) | Create a COO-format sparse matrix from a dense matrix. |
| [`coo_matmat`](index.html#jax.experimental.sparse.coo_matmat)(mat, B, *[, transpose]) | Product of COO sparse matrix and a dense matrix. |
| [`coo_matvec`](index.html#jax.experimental.sparse.coo_matvec)(mat, v[, transpose]) | Product of COO sparse matrix and a dense vector. |
| [`coo_todense`](index.html#jax.experimental.sparse.coo_todense)(mat) | Convert a COO-format sparse matrix to a dense matrix. |
| [`csr_fromdense`](index.html#jax.experimental.sparse.csr_fromdense)(mat, *[, nse, index_dtype]) | Create a CSR-format sparse matrix from a dense matrix. |
| [`csr_matmat`](index.html#jax.experimental.sparse.csr_matmat)(mat, B, *[, transpose]) | Product of CSR sparse matrix and a dense matrix. |
| [`csr_matvec`](index.html#jax.experimental.sparse.csr_matvec)(mat, v[, transpose]) | Product of CSR sparse matrix and a dense vector. |
| [`csr_todense`](index.html#jax.experimental.sparse.csr_todense)(mat) | Convert a CSR-format sparse matrix to a dense matrix. |
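As a small usage sketch of these reference implementations (values are illustrative), a COO matrix can be built from a dense array and applied to a dense vector via `coo_matvec()`:
```
>>> from jax.experimental import sparse
>>> import jax.numpy as jnp
>>> M_coo = sparse.COO.fromdense(jnp.array([[1., 0.], [0., 2.]]))
>>> sparse.coo_matvec(M_coo, jnp.ones(2))
Array([1., 2.], dtype=float32)
```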
###### `jax.experimental.sparse.linalg`[#](#module-jax.experimental.sparse.linalg)
Sparse linear algebra routines.
| | |
| --- | --- |
| [`spsolve`](index.html#jax.experimental.sparse.linalg.spsolve)(data, indices, indptr, b[, tol, reorder]) | A sparse direct solver using QR factorization. |
| [`lobpcg_standard`](index.html#jax.experimental.sparse.linalg.lobpcg_standard)(A, X[, m, tol]) | Compute the top-k standard eigenvalues using the LOBPCG routine. |
###### `jax.experimental.jet` module[#](#module-jax.experimental.jet)
Jet is an experimental module for higher-order automatic differentiation that does not rely on repeated first-order automatic differentiation.
How? Through the propagation of truncated Taylor polynomials.
Consider a function \(f = g \circ h\), some point \(x\)
and some offset \(v\).
First-order automatic differentiation (such as [`jax.jvp()`](index.html#jax.jvp))
computes the pair \((f(x), \partial f(x)[v])\) from the pair
\((h(x), \partial h(x)[v])\).
[`jet()`](#jax.experimental.jet.jet) implements the higher-order analogue:
Given the tuple
\[(h_0, \ldots, h_K) :=
(h(x), \partial h(x)[v], \partial^2 h(x)[v, v], ..., \partial^K h(x)[v,...,v]),\]
which represents a \(K\)-th order Taylor approximation of \(h\) at \(x\), [`jet()`](#jax.experimental.jet.jet) returns a \(K\)-th order Taylor approximation of \(f\) at \(x\),
\[(f_0, ..., f_K) :=
(f(x), \partial f(x)[v], \partial^2 f(x)[v, v], ..., \partial^K f(x)[v,...,v]).\]
More specifically, [`jet()`](#jax.experimental.jet.jet) computes
\[f_0, (f_1, \ldots, f_K) = \texttt{jet}(f, h_0, (h_1, \ldots, h_K))\]
and can thus be used for high-order automatic differentiation of \(f\).
Details are explained in
[these notes](https://github.com/google/jax/files/6717197/jet.pdf).
Note
Help improve [`jet()`](#jax.experimental.jet.jet) by contributing
[outstanding primitive rules](https://github.com/google/jax/issues/2431).
###### API[#](#api)
jax.experimental.jet.jet(*fun*, *primals*, *series*)[[source]](_modules/jax/experimental/jet.html#jet)[#](#jax.experimental.jet.jet)
Taylor-mode higher-order automatic differentiation.
Parameters:
* **fun** – Function to be differentiated. Its arguments should be arrays, scalars,
or standard Python containers of arrays or scalars. It should return an array, scalar, or standard Python container of arrays or scalars.
* **primals** – The primal values at which the Taylor approximation of `fun` should be evaluated. Should be either a tuple or a list of arguments,
and its length should be equal to the number of positional parameters of
`fun`.
* **series** – Higher order Taylor-series-coefficients.
Together, primals and series make up a truncated Taylor polynomial.
Should be either a tuple or a list of tuples or lists,
and its length dictates the degree of the truncated Taylor polynomial.
Returns:
A `(primals_out, series_out)` pair, where `primals_out` is `fun(*primals)`,
and together, `primals_out` and `series_out` are a truncated Taylor polynomial of \(f(h(\cdot))\).
The `primals_out` value has the same Python tree structure as `primals`,
and the `series_out` value the same Python tree structure as `series`.
For example:
```
>>> import jax
>>> import jax.numpy as np
>>> from jax.experimental.jet import jet
```
Consider the function \(h(z) = z^3\), \(x = 0.5\),
and the first few Taylor coefficients
\(h_0=x^3\), \(h_1=3x^2\), and \(h_2=6x\).
Let \(f(y) = \sin(y)\).
```
>>> h0, h1, h2 = 0.5**3., 3.*0.5**2., 6.*0.5
>>> f, df, ddf = np.sin, np.cos, lambda *args: -np.sin(*args)
```
[`jet()`](#jax.experimental.jet.jet) returns the Taylor coefficients of \(f(h(z)) = \sin(z^3)\)
according to Faà di Bruno’s formula:
```
>>> f0, (f1, f2) = jet(f, (h0,), ((h1, h2),))
>>> print(f0, f(h0))
0.12467473 0.12467473
```
```
>>> print(f1, df(h0) * h1)
0.7441479 0.74414825
```
```
>>> print(f2, ddf(h0) * h1 ** 2 + df(h0) * h2)
2.9064622 2.9064634
```
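One convenient consequence of this convention (the following sketch is illustrative and not part of the original example) is that seeding the series with \((1, 0, \ldots, 0)\), i.e. the identity path, makes [`jet()`](#jax.experimental.jet.jet) return the plain higher-order derivatives of \(f\) at \(x\):
```
>>> f0, (f1, f2, f3) = jet(np.exp, (1.0,), ((1.0, 0.0, 0.0),))
>>> # For exp, every derivative at x equals exp(x), so all four values
>>> # are approximately 2.7182817.
```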
###### `jax.experimental.custom_partitioning` module[#](#module-jax.experimental.custom_partitioning)
###### API[#](#api)
jax.experimental.custom_partitioning.custom_partitioning(*fun*, *static_argnums=()*)[[source]](_modules/jax/experimental/custom_partitioning.html#custom_partitioning)[#](#jax.experimental.custom_partitioning.custom_partitioning)
Inserts a CustomCallOp into the XLA graph with custom SPMD lowering rules.
```
@custom_partitioning
def f(*args):
    return ...

def propagate_user_sharding(mesh, user_shape):
    '''Update the sharding of the op from a user's shape.sharding.'''
    user_sharding = jax.tree_map(lambda x: x.sharding, user_shape)

def partition(mesh, arg_shapes, result_shape):
    def lower_fn(*args):
        ...  # builds computation on per-device shapes
    result_shardings = jax.tree_map(lambda x: x.sharding, result_shape)
    arg_shardings = jax.tree_map(lambda x: x.sharding, arg_shapes)
    # result_shardings and arg_shardings may optionally be modified and the
    # partitioner will insert collectives to reshape.
    return mesh, lower_fn, result_shardings, arg_shardings

def infer_sharding_from_operands(mesh, arg_shapes, shape):
    '''Compute the result sharding from the sharding of the operands.'''
    arg_shardings = jax.tree_map(lambda x: x.sharding, arg_shapes)

f.def_partition(partition, propagate_user_sharding, infer_sharding_from_operands)
```
The args to `def_partition` are as follows:
* `propagate_user_sharding`: Callable which takes the sharding of a user (in the dag)
and returns a suggestion for a new NamedSharding. The default implementation is just to return the suggested sharding.
* `partition`: Callable which takes the SPMD suggested partition shapes and partition specs and returns the mesh, a per-shard lowering function, and the final input and output sharding specs (the SPMD partitioner will repartition the inputs to match). The mesh is returned to allow configuring axis_names for collectives when no mesh is provided.
* `infer_sharding_from_operands`: Callable which computes an output `NamedSharding`
from the `NamedSharding` chosen for each argument.
* `decode_shardings`: When set to True, convert input `GSPMDSharding`s to
`NamedSharding` if possible. This may not be possible if the user does not provide a contextual mesh.
Positional arguments can be specified as static using static_argnums. JAX uses
`inspect.signature(fun)` to resolve these positional arguments.
Example
As an example, assume we want to enhance the existing `jax.numpy.fft.fft`. This function computes the discrete Fourier transform of an N-dimensional input along the last dimension, and is batched along the first N-1 dimensions.
By default, however, it will ignore the sharding of the input and gather the input on all devices.
However, since `jax.numpy.fft.fft` is batched along the first N-1 dimensions,
this is unnecessary. We will create a new `my_fft` op that, instead, does not alter the sharding along the first N-1 dimensions, and only gathers the input along the last dimension if needed.
```
import jax
from jax.sharding import NamedSharding
from jax.experimental.custom_partitioning import custom_partitioning
from jax.experimental.pjit import pjit
from jax.sharding import PartitionSpec as P
from jax.sharding import Mesh
from jax.numpy.fft import fft
import regex as re
import numpy as np

# Pattern to detect all-gather or dynamic-slice in the generated HLO
_PATTERN = '(dynamic-slice|all-gather)'

# For an N-D input, keeps sharding along the first N-1 dimensions
# but replicate along the last dimension
def supported_sharding(sharding, shape):
    rank = len(shape.shape)
    max_shared_dims = min(len(sharding.spec), rank-1)
    names = tuple(sharding.spec[:max_shared_dims]) + tuple(None for _ in range(rank - max_shared_dims))
    return NamedSharding(sharding.mesh, P(*names))

def partition(mesh, arg_shapes, result_shape):
    result_shardings = jax.tree_map(lambda x: x.sharding, result_shape)
    arg_shardings = jax.tree_map(lambda x: x.sharding, arg_shapes)
    return mesh, fft, supported_sharding(arg_shardings[0], arg_shapes[0]), (supported_sharding(arg_shardings[0], arg_shapes[0]),)

def infer_sharding_from_operands(mesh, arg_shapes, result_shape):
    arg_shardings = jax.tree_map(lambda x: x.sharding, arg_shapes)
    return supported_sharding(arg_shardings[0], arg_shapes[0])

@custom_partitioning
def my_fft(x):
    return fft(x)

my_fft.def_partition(
    infer_sharding_from_operands=infer_sharding_from_operands,
    partition=partition)
```
Now create a 2D array sharded along the first axis, pass it through `my_fft`
and notice how it is still sharded as expected, and identical to the output of `fft`. However, inspecting the HLO
(using `lower(x).compile().runtime_executable().hlo_modules()`) reveals that
`my_fft` does not create any all-gather or dynamic-slice, while `fft` does.
```
with Mesh(np.array(jax.devices()), ('x',)):
    x = np.asarray(np.random.randn(32*1024, 1024), dtype=np.complex64)
    y = pjit(lambda x: x, in_shardings=None, out_shardings=P('x'))(x)
    pjit_my_fft = pjit(my_fft, in_shardings=P('x'), out_shardings=P('x'))
    pjit_fft = pjit(fft, in_shardings=P('x'), out_shardings=P('x'))
    print(pjit_my_fft(y))
    print(pjit_fft(y))
    # dynamic-slice or all-gather are not present in the HLO for my_fft, because x is a 2D array
    assert(re.search(_PATTERN, pjit_my_fft.lower(x).compile().runtime_executable().hlo_modules()[0].to_string()) is None)
    # dynamic-slice or all-gather are present in the HLO for fft
    assert(re.search(_PATTERN, pjit_fft.lower(x).compile().runtime_executable().hlo_modules()[0].to_string()) is not None)
```
```
# my_fft
[[-38.840824 +0.j -40.649452 +11.845365j
...
-1.6937828 +0.8402481j 15.999859 -4.0156755j]]
# jax.numpy.fft.fft
[[-38.840824 +0.j -40.649452 +11.845365j
...
-1.6937828 +0.8402481j 15.999859 -4.0156755j]]
```
Because of the logic in `supported_sharding`, `my_fft` also works on 1-dimensional arrays.
However, in this case, the HLO of `my_fft` does show a dynamic-slice, since the last dimension is the dimension along which FFTs are calculated and needs to be replicated on all devices before the computation can be done.
```
with Mesh(np.array(jax.devices()), ('x',)):
    x = np.asarray(np.random.randn(32*1024*1024), dtype=np.complex64)
    y = pjit(lambda x: x, in_shardings=None, out_shardings=P('x'))(x)
    pjit_my_fft = pjit(my_fft, in_shardings=P('x'), out_shardings=P('x'))
    pjit_fft = pjit(fft, in_shardings=P('x'), out_shardings=P('x'))
    print(pjit_my_fft(y))
    print(pjit_fft(y))
    # dynamic-slice or all-gather are present in the HLO for my_fft, because x is a 1D array
    assert(re.search(_PATTERN, pjit_my_fft.lower(x).compile().runtime_executable().hlo_modules()[0].to_string()) is not None)
    # dynamic-slice or all-gather are present in the HLO for fft
    assert(re.search(_PATTERN, pjit_fft.lower(x).compile().runtime_executable().hlo_modules()[0].to_string()) is not None)
```
```
# my_fft
[ 7.217285 +0.j -3012.4937 +4287.635j -405.83594 +3042.984j
... 1422.4502 +7271.4297j -405.84033 -3042.983j
-3012.4963 -4287.6343j]
# jax.numpy.fft.fft
[ 7.217285 +0.j -3012.4937 +4287.635j -405.83594 +3042.984j
... 1422.4502 +7271.4297j -405.84033 -3042.983j
-3012.4963 -4287.6343j]
```
###### `jax.experimental.multihost_utils` module[#](#module-jax.experimental.multihost_utils)
Utilities for synchronizing and communication across multiple hosts.
###### Multihost Utils API Reference[#](#multihost-utils-api-reference)
| | |
| --- | --- |
| [`broadcast_one_to_all`](index.html#jax.experimental.multihost_utils.broadcast_one_to_all)(in_tree[, is_source]) | Broadcast data from a source host (host 0 by default) to all other hosts. |
| [`sync_global_devices`](index.html#jax.experimental.multihost_utils.sync_global_devices)(name) | Creates a barrier across all hosts/devices. |
| [`process_allgather`](index.html#jax.experimental.multihost_utils.process_allgather)(in_tree[, tiled]) | Gather data from across processes. |
| [`assert_equal`](index.html#jax.experimental.multihost_utils.assert_equal)(in_tree[, fail_message]) | Verifies that all the hosts have the same tree of values. |
| [`host_local_array_to_global_array`](index.html#jax.experimental.multihost_utils.host_local_array_to_global_array)(...) | Converts a host local value to a globally sharded jax.Array. |
| [`global_array_to_host_local_array`](index.html#jax.experimental.multihost_utils.global_array_to_host_local_array)(...) | Converts a global jax.Array to a host local jax.Array. |
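As a minimal sketch (it assumes a multi-process JAX runtime has already been initialized, e.g. via `jax.distributed.initialize()`; names and values are illustrative), host 0 can broadcast an array to all other hosts, after which every host can verify agreement:
```
import jax
import numpy as np
from jax.experimental import multihost_utils

# Host 0 holds the real data; the other hosts start with placeholders.
data = (np.arange(4, dtype=np.int64) if jax.process_index() == 0
        else np.zeros(4, dtype=np.int64))
data = multihost_utils.broadcast_one_to_all(data)

# Every host now holds [0, 1, 2, 3]; assert_equal raises otherwise.
multihost_utils.assert_equal(data, fail_message="hosts disagree after broadcast")
```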
##### Experimental APIs[#](#experimental-apis)
| | |
| --- | --- |
| [`enable_x64`](index.html#jax.experimental.enable_x64)([new_val]) | Experimental context manager to temporarily enable X64 mode. |
| [`disable_x64`](index.html#jax.experimental.disable_x64)() | Experimental context manager to temporarily disable X64 mode. |
| [`jax.experimental.checkify.checkify`](index.html#jax.experimental.checkify.checkify)(f[, errors]) | Functionalize check calls in fun, and optionally add run-time error checks. |
| [`jax.experimental.checkify.check`](index.html#jax.experimental.checkify.check)(pred, msg, ...) | Check a predicate, add an error with msg if predicate is False. |
| [`jax.experimental.checkify.check_error`](index.html#jax.experimental.checkify.check_error)(error) | Raise an Exception if `error` represents a failure. |
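For instance, the `checkify` APIs above functionalize runtime checks so they can live inside transformed code; a minimal sketch (the function and message are illustrative):
```
import jax.numpy as jnp
from jax.experimental import checkify

def f(x):
    checkify.check(x > 0, "x must be positive, got {x}", x=x)
    return jnp.log(x)

checked_f = checkify.checkify(f)

err, out = checked_f(2.0)
err.throw()              # no-op: the check passed
err, out = checked_f(-1.0)
# err now carries the failure; err.throw() would raise it with the message.
```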
#### `jax.lib` module[#](#jax-lib-module)
The jax.lib package is a set of internal tools and types for bridging between JAX’s Python frontend and its XLA backend.
##### jax.lib.xla_bridge[#](#jax-lib-xla-bridge)
| | |
| --- | --- |
| [`default_backend`](index.html#jax.lib.xla_bridge.default_backend)() | Returns the platform name of the default XLA backend. |
| [`get_backend`](index.html#jax.lib.xla_bridge.get_backend)([platform]) | Returns the XLA backend for the given platform. |
| [`get_compile_options`](index.html#jax.lib.xla_bridge.get_compile_options)(num_replicas, num_partitions) | Returns the compile options to use, as derived from flag values. |
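For example, a quick way to confirm which backend JAX selected (a sketch; the printed values depend on the installation):
```
from jax.lib import xla_bridge

print(xla_bridge.default_backend())       # e.g. 'cpu', 'gpu', or 'tpu'
print(xla_bridge.get_backend().platform)  # platform of the default backend object
```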
##### jax.lib.xla_client[#](#jax-lib-xla-client)
### Configuration[#](#configuration)
| | |
| --- | --- |
| [`config`](index.html#jax.config) | |
| [`check_tracer_leaks`](index.html#jax.check_tracer_leaks) | Context manager for jax_check_tracer_leaks config option. |
| [`checking_leaks`](index.html#jax.checking_leaks) | Context manager for jax_check_tracer_leaks config option. |
| [`debug_nans`](index.html#jax.debug_nans) | Context manager for jax_debug_nans config option. |
| [`debug_infs`](index.html#jax.debug_infs) | Context manager for jax_debug_infs config option. |
| [`default_device`](index.html#jax.default_device) | Context manager for jax_default_device config option. |
| [`default_matmul_precision`](index.html#jax.default_matmul_precision) | Context manager for jax_default_matmul_precision config option. |
| [`default_prng_impl`](index.html#jax.default_prng_impl) | Context manager for jax_default_prng_impl config option. |
| [`enable_checks`](index.html#jax.enable_checks) | Context manager for jax_enable_checks config option. |
| [`enable_custom_prng`](index.html#jax.enable_custom_prng) | Context manager for jax_enable_custom_prng config option (transient). |
| [`enable_custom_vjp_by_custom_transpose`](index.html#jax.enable_custom_vjp_by_custom_transpose) | Context manager for jax_enable_custom_vjp_by_custom_transpose config option (transient). |
| [`log_compiles`](index.html#jax.log_compiles) | Context manager for jax_log_compiles config option. |
| [`numpy_rank_promotion`](index.html#jax.numpy_rank_promotion) | Context manager for jax_numpy_rank_promotion config option. |
| [`transfer_guard`](index.html#jax.transfer_guard)(new_val) | A contextmanager to control the transfer guard level for all transfers. |
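These context managers mirror the corresponding `jax_*` configuration options and restore the previous value on exit; a small sketch of temporary use (the computations are illustrative):
```
import jax
import jax.numpy as jnp

# Temporarily raise matmul precision (most relevant on TPU-class backends).
with jax.default_matmul_precision("float32"):
    (jnp.ones((4, 4)) @ jnp.ones((4, 4))).block_until_ready()

# Temporarily error out when NaNs appear in computed values.
with jax.debug_nans(True):
    jnp.sqrt(jnp.array(4.0))
```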
### Just-in-time compilation (`jit`)[#](#just-in-time-compilation-jit)
| | |
| --- | --- |
| [`jit`](index.html#jax.jit)(fun[, in_shardings, out_shardings, ...]) | Sets up `fun` for just-in-time compilation with XLA. |
| [`disable_jit`](index.html#jax.disable_jit)([disable]) | Context manager that disables [`jit()`](index.html#jax.jit) behavior under its dynamic context. |
| [`ensure_compile_time_eval`](index.html#jax.ensure_compile_time_eval)() | Context manager to ensure evaluation at trace/compile time (or error). |
| [`xla_computation`](index.html#jax.xla_computation)(fun[, static_argnums, ...]) | Creates a function that produces its XLA computation given example args. |
| [`make_jaxpr`](index.html#jax.make_jaxpr)(fun[, static_argnums, axis_env, ...]) | Creates a function that produces its jaxpr given example args. |
| [`eval_shape`](index.html#jax.eval_shape)(fun, *args, **kwargs) | Compute the shape/dtype of `fun` without any FLOPs. |
| [`ShapeDtypeStruct`](index.html#jax.ShapeDtypeStruct)(shape, dtype[, ...]) | A container for the shape, dtype, and other static attributes of an array. |
| [`device_put`](index.html#jax.device_put)(x[, device, src]) | Transfers `x` to `device`. |
| [`device_put_replicated`](index.html#jax.device_put_replicated)(x, devices) | Transfer array(s) to each specified device and form Array(s). |
| [`device_put_sharded`](index.html#jax.device_put_sharded)(shards, devices) | Transfer array shards to specified devices and form Array(s). |
| [`device_get`](index.html#jax.device_get)(x) | Transfer `x` to host. |
| [`default_backend`](index.html#jax.default_backend)() | Returns the platform name of the default XLA backend. |
| [`named_call`](index.html#jax.named_call)(fun, *[, name]) | Adds a user specified name to a function when staging out JAX computations. |
| [`named_scope`](index.html#jax.named_scope)(name) | A context manager that adds a user specified name to the JAX name stack. |
| [`block_until_ready`](index.html#jax.block_until_ready)(x) | Tries to call a `block_until_ready` method on pytree leaves. |
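A short sketch (the function is illustrative) combining a few of these utilities: stage a function with [`jit()`](index.html#jax.jit), inspect its jaxpr, and compute output shapes without running any FLOPs:
```
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    return jnp.sin(x) ** 2

print(jax.make_jaxpr(f)(1.0))             # the staged jaxpr
print(jax.eval_shape(f, jnp.ones((3,))))  # ShapeDtypeStruct, no real compute
print(jax.default_backend())              # platform the computation runs on
```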
### Automatic differentiation[#](#automatic-differentiation)
| | |
| --- | --- |
| [`grad`](index.html#jax.grad)(fun[, argnums, has_aux, holomorphic, ...]) | Creates a function that evaluates the gradient of `fun`. |
| [`value_and_grad`](index.html#jax.value_and_grad)(fun[, argnums, has_aux, ...]) | Create a function that evaluates both `fun` and the gradient of `fun`. |
| [`jacfwd`](index.html#jax.jacfwd)(fun[, argnums, has_aux, holomorphic]) | Jacobian of `fun` evaluated column-by-column using forward-mode AD. |
| [`jacrev`](index.html#jax.jacrev)(fun[, argnums, has_aux, holomorphic, ...]) | Jacobian of `fun` evaluated row-by-row using reverse-mode AD. |
| [`hessian`](index.html#jax.hessian)(fun[, argnums, has_aux, holomorphic]) | Hessian of `fun` as a dense array. |
| [`jvp`](index.html#jax.jvp)(fun, primals, tangents[, has_aux]) | Computes a (forward-mode) Jacobian-vector product of `fun`. |
| [`linearize`](index.html#jax.linearize)(fun, *primals[, has_aux]) | Produces a linear approximation to `fun` using [`jvp()`](index.html#jax.jvp) and partial eval. |
| [`linear_transpose`](index.html#jax.linear_transpose)(fun, *primals[, reduce_axes]) | Transpose a function that is promised to be linear. |
| [`vjp`](index.html#jax.vjp)(fun, *primals[, has_aux, reduce_axes]) | Compute a (reverse-mode) vector-Jacobian product of `fun`. |
| [`custom_jvp`](index.html#jax.custom_jvp)(fun[, nondiff_argnums]) | Set up a JAX-transformable function for a custom JVP rule definition. |
| [`custom_vjp`](index.html#jax.custom_vjp)(fun[, nondiff_argnums]) | Set up a JAX-transformable function for a custom VJP rule definition. |
| [`closure_convert`](index.html#jax.closure_convert)(fun, *example_args) | Closure conversion utility, for use with higher-order custom derivatives. |
| [`checkpoint`](index.html#jax.checkpoint)(fun, *[, prevent_cse, policy, ...]) | Make `fun` recompute internal linearization points when differentiated. |
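A compact sketch (values are illustrative) relating the forward- and reverse-mode entry points for a scalar-valued function:
```
import jax
import jax.numpy as jnp

f = lambda x: jnp.sum(jnp.tanh(x) ** 2)
x = jnp.arange(3.0)

g = jax.grad(f)(x)                                  # reverse mode
_, jvp_out = jax.jvp(f, (x,), (jnp.ones_like(x),))  # forward mode
_, vjp_fn = jax.vjp(f, x)
(vjp_out,) = vjp_fn(1.0)

# The JVP against a ones tangent equals the sum of the gradient entries,
# and the VJP against a unit cotangent recovers the gradient itself.
assert jnp.allclose(jvp_out, g.sum()) and jnp.allclose(vjp_out, g)
```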
### jax.Array (`jax.Array`)[#](#jax-array-jax-array)
| | |
| --- | --- |
| [`Array`](index.html#jax.Array)() | Array base class for JAX |
| [`make_array_from_callback`](index.html#jax.make_array_from_callback)(shape, sharding, ...) | Returns a `jax.Array` via data fetched from `data_callback`. |
| [`make_array_from_single_device_arrays`](index.html#jax.make_array_from_single_device_arrays)(shape, ...) | Returns a `jax.Array` from a sequence of `jax.Array`s on a single device. |
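As a single-device sketch of `make_array_from_callback()` (the sharding and data are illustrative):
```
import jax
import numpy as np
from jax.sharding import SingleDeviceSharding

data = np.arange(4)
sharding = SingleDeviceSharding(jax.devices()[0])

# The callback receives the index (a tuple of slices) of each addressable
# shard and returns the corresponding data; here there is a single shard.
arr = jax.make_array_from_callback(data.shape, sharding, lambda idx: data[idx])
print(arr)  # [0 1 2 3]
```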
### Vectorization (`vmap`)[#](#vectorization-vmap)
| | |
| --- | --- |
| [`vmap`](index.html#jax.vmap)(fun[, in_axes, out_axes, axis_name, ...]) | Vectorizing map. |
| [`numpy.vectorize`](index.html#jax.numpy.vectorize)(pyfunc, *[, excluded, signature]) | Define a vectorized function with broadcasting. |
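A minimal sketch of [`vmap()`](index.html#jax.vmap): write the computation for a single example, then map it over a leading batch axis:
```
import jax
import jax.numpy as jnp

def dot(a, b):            # written for single vectors
    return jnp.dot(a, b)

batched_dot = jax.vmap(dot, in_axes=(0, 0))  # map over axis 0 of both args
a = jnp.ones((8, 3))
b = jnp.ones((8, 3))
print(batched_dot(a, b).shape)  # (8,)
```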
### Parallelization (`pmap`)[#](#parallelization-pmap)
| | |
| --- | --- |
| [`pmap`](index.html#jax.pmap)(fun[, axis_name, in_axes, out_axes, ...]) | Parallel map with support for collective operations. |
| [`devices`](index.html#jax.devices)([backend]) | Returns a list of all devices for a given backend. |
| [`local_devices`](index.html#jax.local_devices)([process_index, backend, host_id]) | Like [`jax.devices()`](index.html#jax.devices), but only returns devices local to a given process. |
| [`process_index`](index.html#jax.process_index)([backend]) | Returns the integer process index of this process. |
| [`device_count`](index.html#jax.device_count)([backend]) | Returns the total number of devices. |
| [`local_device_count`](index.html#jax.local_device_count)([backend]) | Returns the number of devices addressable by this process. |
| [`process_count`](index.html#jax.process_count)([backend]) | Returns the number of JAX processes associated with the backend. |
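A device-count-agnostic sketch of [`pmap()`](index.html#jax.pmap) with a collective; it runs on however many local devices are available:
```
import jax
import jax.numpy as jnp

n = jax.local_device_count()
# Each device receives one element; psum sums across the mapped axis 'i'.
out = jax.pmap(lambda x: jax.lax.psum(x, 'i'), axis_name='i')(jnp.arange(n))
print(out)  # every entry equals 0 + 1 + ... + (n - 1)
```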
### Callbacks[#](#callbacks)
| | |
| --- | --- |
| [`pure_callback`](index.html#jax.pure_callback)(callback, result_shape_dtypes, ...) | Applies a functionally pure Python callable. |
| [`experimental.io_callback`](index.html#jax.experimental.io_callback)(callback, ...[, ...]) | Calls an impure Python callback. |
| [`debug.callback`](index.html#jax.debug.callback)(callback, *args[, ordered]) | Calls a stageable Python callback. |
| [`debug.print`](index.html#jax.debug.print)(fmt, *args[, ordered]) | Prints values and works in staged out JAX functions. |
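A small sketch of the debugging callbacks, which remain usable inside staged-out (jit-compiled) code where Python's `print` would only see tracers:
```
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    # jax.debug.print works under jit, printing runtime values.
    jax.debug.print("intermediate value: {v}", v=x + 1)
    return x * 2

f(jnp.float32(3.0))  # prints "intermediate value: 4.0"
```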
### Miscellaneous[#](#miscellaneous)
| | |
| --- | --- |
| [`Device`](index.html#jax.Device) | A descriptor of an available device. |
| [`print_environment_info`](index.html#jax.print_environment_info)([return_string]) | Returns a string containing local environment & JAX installation information. |
| [`live_arrays`](index.html#jax.live_arrays)([platform]) | Return all live arrays in the backend for platform. |
| [`clear_caches`](index.html#jax.clear_caches)() | Clear all compilation and staging caches. |
Change log[#](#change-log)
---
Best viewed [here](https://jax.readthedocs.io/en/latest/changelog.html).
### jax 0.4.20[#](#jax-0-4-20)
### jaxlib 0.4.20[#](#jaxlib-0-4-20)
### jax 0.4.19 (Oct 19, 2023)[#](#jax-0-4-19-oct-19-2023)
* New Features
+ Added [`jax.typing.DTypeLike`](index.html#jax.typing.DTypeLike), which can be used to annotate objects that
are convertible to JAX dtypes.
* Changes
+ JAX now requires SciPy 1.9 or newer.
* Bug fixes
+ Only process 0 in a multicontroller distributed JAX program will write
persistent compilation cache entries. This fixes write contention if the
cache is placed on a network filesystem such as GCS.
+ The version check for cusolver and cufft no longer considers the patch
versions when determining if the installed version of these libraries is at
least as new as the versions against which JAX was built.
### jaxlib 0.4.19 (Oct 19, 2023)[#](#jaxlib-0-4-19-oct-19-2023)
* Changes
+ jaxlib will now always prefer pip-installed NVIDIA CUDA libraries
(nvidia-… packages) over any other CUDA installation if they are
installed, including installations named in `LD_LIBRARY_PATH`. If this
causes problems and the intent is to use a system-installed CUDA, the fix is
to remove the pip installed CUDA library packages.
### jax 0.4.18 (Oct 6, 2023)[#](#jax-0-4-18-oct-6-2023)
### jaxlib 0.4.18 (Oct 6, 2023)[#](#jaxlib-0-4-18-oct-6-2023)
* Changes
+ CUDA jaxlibs now depend on the user to install a compatible NCCL version.
If using the recommended `cuda12_pip` installation, NCCL should be installed
automatically. Currently, NCCL 2.16 or newer is required.
+ We now provide Linux aarch64 wheels, both with and without NVIDIA GPU
support.
* Deprecations
+ A number of internal utilities and inadvertent exports in [`jax.lax`](index.html#module-jax.lax) have
been deprecated, and will be removed in a future release.
- `jax.lax.dtypes`: use `jax.dtypes` instead.
- `jax.lax.itertools`: use `itertools` instead.
- `naryop`, `naryop_dtype_rule`, `standard_abstract_eval`, `standard_naryop`,
`standard_primitive`, `standard_unop`, `unop`, and `unop_dtype_rule` are
internal utilities, now deprecated without replacement.
* Bug fixes
+ Fixed Cloud TPU regression where compilation would OOM due to smem.
### jax 0.4.17 (Oct 3, 2023)[#](#jax-0-4-17-oct-3-2023)
* New features
+ Added new [`jax.numpy.bitwise_count()`](index.html#jax.numpy.bitwise_count) function, matching the API of the similar
function recently added to NumPy.
* Deprecations
+ Removed the deprecated module `jax.abstract_arrays` and all its contents.
+ Named key constructors in [`jax.random`](index.html#module-jax.random) are deprecated. Pass the `impl` argument
to [`jax.random.PRNGKey()`](index.html#jax.random.PRNGKey) or [`jax.random.key()`](index.html#jax.random.key) instead:
- `random.threefry2x32_key(seed)` becomes `random.PRNGKey(seed, impl='threefry2x32')`
- `random.rbg_key(seed)` becomes `random.PRNGKey(seed, impl='rbg')`
- `random.unsafe_rbg_key(seed)` becomes `random.PRNGKey(seed, impl='unsafe_rbg')`
* Changes:
+ CUDA: JAX now verifies that the CUDA libraries it finds are at least as new
as the CUDA libraries that JAX was built against. If older libraries are
found, JAX raises an exception since that is preferable to mysterious
failures and crashes.
+ Removed the “No GPU/TPU” found warning. Instead warn if, on Linux, an
NVIDIA GPU or a Google TPU are found but not used and `--jax_platforms` was
not specified.
+ [`jax.scipy.stats.mode()`](index.html#jax.scipy.stats.mode) now returns a 0 count if the mode is taken
across a size-0 axis, matching the behavior of `scipy.stats.mode` in SciPy
1.11.
+ Most `jax.numpy` functions and attributes now have fully-defined type stubs.
Previously many of these were treated as `Any` by static type checkers like
`mypy` and `pytype`.
### jaxlib 0.4.17 (Oct 3, 2023)[#](#jaxlib-0-4-17-oct-3-2023)
* Changes:
+ Python 3.12 wheels were added in this release.
+ The CUDA 12 wheels now require CUDA 12.2 or newer and cuDNN 8.9.4 or newer.
* Bug fixes:
+ Fixed log spam from ABSL when the JAX CPU backend was initialized.
### jax 0.4.16 (Sept 18, 2023)[#](#jax-0-4-16-sept-18-2023)
* Changes
+ Added [`jax.numpy.ufunc`](index.html#jax.numpy.ufunc), as well as [`jax.numpy.frompyfunc()`](index.html#jax.numpy.frompyfunc), which can convert
any scalar-valued function into a `numpy.ufunc()`-like object, with methods such as
`outer()`, `reduce()`,
`accumulate()`, `at()`, and
`reduceat()` ([#17054](https://github.com/google/jax/issues/17054)).
+ Added [`jax.scipy.integrate.trapezoid()`](index.html#jax.scipy.integrate.trapezoid).
+ When not running under IPython: when an exception is raised, JAX now filters out the
entirety of its internal frames from tracebacks. (Without the “unfiltered stack trace”
that previously appeared.) This should produce much friendlier-looking tracebacks. See
[here](https://github.com/google/jax/pull/16949) for an example.
This behavior can be changed by setting `JAX_TRACEBACK_FILTERING=remove_frames` (for two
separate unfiltered/filtered tracebacks, which was the old behavior) or
`JAX_TRACEBACK_FILTERING=off` (for one unfiltered traceback).
+ jax2tf default serialization version is now 7, which introduces new shape
[safety assertions](https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md#errors-in-presence-of-shape-polymorphism).
+ Devices passed to `jax.sharding.Mesh` should be hashable. This specifically
applies to mock devices or user created devices. `jax.devices()` are
already hashable.
* Breaking changes:
+ jax2tf now uses native serialization by default. See
the [jax2tf documentation](https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md)
for details and for mechanisms to override the default.
+ The option `--jax_coordination_service` has been removed. It is now always
`True`.
+ `jax.jaxpr_util` has been removed from the public JAX namespace.
+ `JAX_USE_PJRT_C_API_ON_TPU` no longer has an effect (i.e. it always defaults to true).
+ The backwards compatibility flag `--jax_host_callback_ad_transforms`
introduced in December 2021, has been removed.
* Deprecations:
+ Several `jax.numpy` APIs have been deprecated following
[NumPy NEP-52](https://numpy.org/neps/nep-0052-python-api-cleanup.html):
- `jax.numpy.NINF` has been deprecated. Use `-jax.numpy.inf` instead.
- `jax.numpy.PZERO` has been deprecated. Use `0.0` instead.
- `jax.numpy.NZERO` has been deprecated. Use `-0.0` instead.
- `jax.numpy.issubsctype(x, t)` has been deprecated. Use `jax.numpy.issubdtype(x.dtype, t)`.
- `jax.numpy.row_stack` has been deprecated. Use `jax.numpy.vstack` instead.
- `jax.numpy.in1d` has been deprecated. Use `jax.numpy.isin` instead.
- `jax.numpy.trapz` has been deprecated. Use `jax.scipy.integrate.trapezoid` instead.
+ `jax.scipy.linalg.tril` and `jax.scipy.linalg.triu` have been deprecated,
following SciPy. Use `jax.numpy.tril` and `jax.numpy.triu` instead.
+ `jax.lax.prod` has been removed after being deprecated in JAX v0.4.11.
Use the built-in `math.prod` instead.
+ A number of exports from `jax.interpreters.xla` related to defining
HLO lowering rules for custom JAX primitives have been deprecated. Custom
primitives should be defined using the StableHLO lowering utilities in
`jax.interpreters.mlir` instead.
+ The following previously-deprecated functions have been removed after a
three-month deprecation period:
- `jax.abstract_arrays.ShapedArray`: use `jax.core.ShapedArray`.
- `jax.abstract_arrays.raise_to_shaped`: use `jax.core.raise_to_shaped`.
- `jax.numpy.alltrue`: use `jax.numpy.all`.
- `jax.numpy.sometrue`: use `jax.numpy.any`.
- `jax.numpy.product`: use `jax.numpy.prod`.
- `jax.numpy.cumproduct`: use `jax.numpy.cumprod`.
* Deprecations/removals:
+ The internal submodule `jax.prng` is now deprecated. Its contents are available at
[`jax.extend.random`](index.html#module-jax.extend.random).
+ The internal submodule path `jax.linear_util` has been deprecated. Use
[`jax.extend.linear_util`](index.html#module-jax.extend.linear_util) instead (Part of [jax.extend: a module for extensions](index.html#jax-extend-jep))
+ `jax.random.PRNGKeyArray` and `jax.random.KeyArray` are deprecated. Use [`jax.Array`](index.html#jax.Array)
for type annotations, and `jax.dtypes.issubdtype(arr.dtype, jax.dtypes.prng_key)` for
runtime detection of typed prng keys.
+ The method `PRNGKeyArray.unsafe_raw_array` is deprecated. Use
[`jax.random.key_data()`](index.html#jax.random.key_data) instead.
+ `jax.experimental.pjit.with_sharding_constraint` is deprecated. Use
`jax.lax.with_sharding_constraint` instead.
+ The internal utilities `jax.core.is_opaque_dtype` and `jax.core.has_opaque_dtype`
have been removed. Opaque dtypes have been renamed to Extended dtypes; use
`jnp.issubdtype(dtype, jax.dtypes.extended)` instead (available since jax v0.4.14).
+ The utility `jax.interpreters.xla.register_collective_primitive` has been
removed. This utility did nothing useful in recent JAX releases and calls
to it can be safely removed.
+ The internal submodule path `jax.linear_util` has been deprecated. Use
[`jax.extend.linear_util`](index.html#module-jax.extend.linear_util) instead (Part of [jax.extend: a module for extensions](index.html#jax-extend-jep))
### jaxlib 0.4.16 (Sept 18, 2023)[#](#jaxlib-0-4-16-sept-18-2023)
* Changes:
+ Sparse CSR matrix multiplications via the experimental jax sparse APIs
no longer uses a deterministic algorithm on NVIDIA GPUs. This change was
made to improve compatibility with CUDA 12.2.1.
* Bug fixes:
+ Fixed a crash on Windows due to a fatal LLVM error related to out-of-order
sections and IMAGE_REL_AMD64_ADDR32NB relocations
(https://github.com/openxla/xla/commit/cb732a921f0c4184995cbed82394931011d12bd4).
### jax 0.4.14 (July 27, 2023)[#](#jax-0-4-14-july-27-2023)
* Changes
+ `jax.jit` takes `donate_argnames` as an argument. Its semantics are similar
to `static_argnames`.
If neither `donate_argnums` nor `donate_argnames` is provided, no
arguments are donated. If `donate_argnums` is not provided but
`donate_argnames` is, or vice versa, JAX uses
`inspect.signature(fun)` to find any positional arguments that
correspond to `donate_argnames` (or vice versa). If both `donate_argnums` and
`donate_argnames` are provided, `inspect.signature` is not used, and only actual
parameters listed in either `donate_argnums` or `donate_argnames` will
be donated.
+ [`jax.random.gamma()`](index.html#jax.random.gamma) has been re-factored to a more efficient algorithm
with more robust endpoint behavior ([#16779](https://github.com/google/jax/issues/16779)). This means that the
sequence of values returned for a given `key` will change between JAX v0.4.13
and v0.4.14 for `gamma` and related samplers (including [`jax.random.ball()`](index.html#jax.random.ball),
[`jax.random.beta()`](index.html#jax.random.beta), [`jax.random.chisquare()`](index.html#jax.random.chisquare), [`jax.random.dirichlet()`](index.html#jax.random.dirichlet),
[`jax.random.generalized_normal()`](index.html#jax.random.generalized_normal), [`jax.random.loggamma()`](index.html#jax.random.loggamma), [`jax.random.t()`](index.html#jax.random.t)).
* Deletions
+ `in_axis_resources` and `out_axis_resources` have been deleted from pjit since
it has been more than 3 months since their deprecation. Please use
`in_shardings` and `out_shardings` as the replacement.
This is a safe and trivial name replacement. It does not change any of the
current pjit semantics and doesn’t break any code.
You can still pass in `PartitionSpecs` to in_shardings and out_shardings.
* Deprecations
+ Python 3.8 support has been dropped as per
https://jax.readthedocs.io/en/latest/deprecation.html
+ JAX now requires NumPy 1.22 or newer as per
https://jax.readthedocs.io/en/latest/deprecation.html
+ Passing optional arguments to [`jax.numpy.ndarray.at()`](index.html#jax.numpy.ndarray.at) by position is
no longer supported, after being deprecated in JAX version 0.4.7.
For example, instead of `x.at[i].get(True)`, use `x.at[i].get(indices_are_sorted=True)`
+ The following `jax.Array` methods have been removed, after being deprecated
in JAX v0.4.5:
- `jax.Array.broadcast`: use [`jax.lax.broadcast()`](index.html#jax.lax.broadcast) instead.
- `jax.Array.broadcast_in_dim`: use [`jax.lax.broadcast_in_dim()`](index.html#jax.lax.broadcast_in_dim) instead.
- `jax.Array.split`: use [`jax.numpy.split()`](index.html#jax.numpy.split) instead.
+ The following APIs have been removed after previous deprecation:
- `jax.ad`: use `jax.interpreters.ad`.
- `jax.curry`: use `curry = lambda f: partial(partial, f)`.
- `jax.partial_eval`: use `jax.interpreters.partial_eval`.
- `jax.pxla`: use `jax.interpreters.pxla`.
- `jax.xla`: use `jax.interpreters.xla`.
- `jax.ShapedArray`: use `jax.core.ShapedArray`.
- `jax.interpreters.pxla.device_put`: use [`jax.device_put()`](index.html#jax.device_put).
- `jax.interpreters.pxla.make_sharded_device_array`: use [`jax.make_array_from_single_device_arrays()`](index.html#jax.make_array_from_single_device_arrays).
- `jax.interpreters.pxla.ShardedDeviceArray`: use [`jax.Array`](index.html#jax.Array).
- `jax.numpy.DeviceArray`: use [`jax.Array`](index.html#jax.Array).
- `jax.stages.Compiled.compiler_ir`: use [`jax.stages.Compiled.as_text()`](index.html#jax.stages.Compiled.as_text).
* Breaking changes
+ JAX now requires ml_dtypes version 0.2.0 or newer.
+ To fix a corner case, calls to [`jax.lax.cond()`](index.html#jax.lax.cond) with five
arguments will always resolve to the “common operands” `cond`
behavior (as documented) if the second and third arguments are
callable, even if other operands are callable as well. See
[#16413](https://github.com/google/jax/issues/16413).
+ The deprecated config options `jax_array` and `jax_jit_pjit_api_merge`,
which did nothing, have been removed. These options have been true by
default for many releases.
* New features
+ JAX now supports a configuration flag `--jax_serialization_version`
and a `JAX_SERIALIZATION_VERSION` environment variable to control the
serialization version ([#16746](https://github.com/google/jax/issues/16746)).
+ jax2tf in presence of shape polymorphism now generates code that checks
certain shape constraints, if the serialization version is at least 7.
See https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md#errors-in-presence-of-shape-polymorphism.
### jaxlib 0.4.14 (July 27, 2023)[#](#jaxlib-0-4-14-july-27-2023)
* Deprecations
+ Python 3.8 support has been dropped as per
https://jax.readthedocs.io/en/latest/deprecation.html
### jax 0.4.13 (June 22, 2023)[#](#jax-0-4-13-june-22-2023)
* Changes
+ `jax.jit` now allows `None` to be passed to `in_shardings` and
`out_shardings`. The semantics are as follows:
- For in_shardings, JAX will mark it as replicated but this behavior
can change in the future.
- For out_shardings, we will rely on the XLA GSPMD partitioner to
determine the output shardings.
+ `jax.experimental.pjit.pjit` also allows `None` to be passed to
`in_shardings` and `out_shardings`. The semantics are as follows:
- If the mesh context manager is *not* provided, JAX has the freedom to
choose whatever sharding it wants.
* For in_shardings, JAX will mark it as replicated but this behavior
can change in the future.
* For out_shardings, we will rely on the XLA GSPMD partitioner to
determine the output shardings.
- If the mesh context manager is provided, None will imply that the value
will be replicated on all devices of the mesh.
+ `Executable.cost_analysis()` works on Cloud TPU
+ Added a warning if a non-allowlisted `jaxlib` plugin is in use.
+ Added `jax.tree_util.tree_leaves_with_path`.
+ `None` is not a valid input to
`jax.experimental.multihost_utils.host_local_array_to_global_array` or
`jax.experimental.multihost_utils.global_array_to_host_local_array`.
Please use `jax.sharding.PartitionSpec()` if you wanted to replicate your
input.
* Bug fixes
+ Fixed incorrect wheel name in CUDA 12 releases (#16362); the correct wheel
is named `cudnn89` instead of `cudnn88`.
* Deprecations
+ The `native_serialization_strict_checks` parameter to
`jax.experimental.jax2tf.convert()` is deprecated in favor of the
new `native_serialization_disabled_checks` ([#16347](https://github.com/google/jax/issues/16347)).
### jaxlib 0.4.13 (June 22, 2023)[#](#jaxlib-0-4-13-june-22-2023)
* Changes
+ Added Windows CPU-only wheels to the `jaxlib` Pypi release.
* Bug fixes
+ `__cuda_array_interface__` was broken in previous jaxlib versions and is now
fixed ([#16440](https://github.com/google/jax/issues/16440)).
+ Concurrent CUDA kernel tracing is now enabled by default on NVIDIA GPUs.
### jax 0.4.12 (June 8, 2023)[#](#jax-0-4-12-june-8-2023)
* Changes
+ Added [`scipy.spatial.transform.Rotation`](https://docs.scipy.org/doc/scipy-1.8.1/reference/generated/scipy.spatial.transform.Rotation.html#scipy.spatial.transform.Rotation) and [`scipy.spatial.transform.Slerp`](https://docs.scipy.org/doc/scipy-1.8.1/reference/generated/scipy.spatial.transform.Slerp.html#scipy.spatial.transform.Slerp)
* Deprecations
+ `jax.abstract_arrays` and its contents are now deprecated. See related
functionality in `jax.core`.
+ `jax.numpy.alltrue`: use `jax.numpy.all`. This follows the deprecation
of `numpy.alltrue` in NumPy version 1.25.0.
+ `jax.numpy.sometrue`: use `jax.numpy.any`. This follows the deprecation
of `numpy.sometrue` in NumPy version 1.25.0.
+ `jax.numpy.product`: use `jax.numpy.prod`. This follows the deprecation
of `numpy.product` in NumPy version 1.25.0.
+ `jax.numpy.cumproduct`: use `jax.numpy.cumprod`. This follows the deprecation
of `numpy.cumproduct` in NumPy version 1.25.0.
+ `jax.sharding.OpShardingSharding` has been removed since it has been 3
months since it was deprecated.
### jaxlib 0.4.12 (June 8, 2023)[#](#jaxlib-0-4-12-june-8-2023)
* Changes
+ Includes PTX/SASS for Hopper (SM version 9.0+) GPUs. Previous
versions of jaxlib should work on Hopper but would have a long
JIT-compilation delay the first time a JAX operation was executed.
* Bug fixes
+ Fixes incorrect source line information in JAX-generated Python tracebacks
under Python 3.11.
+ Fixes crash when printing local variables of frames in JAX-generated Python
tracebacks (#16027).
### jax 0.4.11 (May 31, 2023)[#](#jax-0-4-11-may-31-2023)
* Deprecations
+ The following APIs have been removed after a 3 month deprecation period, in
accordance with the [API compatibility](index.html#api-compatibility) policy:
- `jax.experimental.PartitionSpec`: use `jax.sharding.PartitionSpec`.
- `jax.experimental.maps.Mesh`: use `jax.sharding.Mesh`
- `jax.experimental.pjit.NamedSharding`: use `jax.sharding.NamedSharding`.
- `jax.experimental.pjit.PartitionSpec`: use `jax.sharding.PartitionSpec`.
- `jax.experimental.pjit.FROM_GDA`. Instead pass sharded `jax.Array` objects
as input and remove the optional `in_shardings` argument to `pjit`.
- `jax.interpreters.pxla.PartitionSpec`: use `jax.sharding.PartitionSpec`.
- `jax.interpreters.pxla.Mesh`: use `jax.sharding.Mesh`
- `jax.interpreters.xla.Buffer`: use `jax.Array`.
- `jax.interpreters.xla.Device`: use `jax.Device`.
- `jax.interpreters.xla.DeviceArray`: use `jax.Array`.
- `jax.interpreters.xla.device_put`: use `jax.device_put`.
- `jax.interpreters.xla.xla_call_p`: use `jax.experimental.pjit.pjit_p`.
- `axis_resources` argument of `with_sharding_constraint` is removed. Please
use `shardings` instead.
### jaxlib 0.4.11 (May 31, 2023)[#](#jaxlib-0-4-11-may-31-2023)
* Changes
+ Added `memory_stats()` method to `Device`s. If supported, this returns a
dict of string stat names with int values, e.g. `"bytes_in_use"`, or None if
the platform doesn’t support memory statistics. The exact stats returned may
vary across platforms. Currently only implemented on Cloud TPU.
+ Readded support for the Python buffer protocol (`memoryview`) on CPU
devices.
### jax 0.4.10 (May 11, 2023)[#](#jax-0-4-10-may-11-2023)
### jaxlib 0.4.10 (May 11, 2023)[#](#jaxlib-0-4-10-may-11-2023)
* Changes
+ Fixed `'apple-m1' is not a recognized processor for this target (ignoring processor)` issue that prevented previous release from running on Mac M1.
### jax 0.4.9 (May 9, 2023)[#](#jax-0-4-9-may-9-2023)
* Changes
+ The flags experimental_cpp_jit, experimental_cpp_pjit and
experimental_cpp_pmap have been removed.
They are now always on.
+ Accuracy of singular value decomposition (SVD) on TPU has been improved
(requires jaxlib 0.4.9).
* Deprecations
+ `jax.experimental.gda_serialization` is deprecated and has been renamed to
`jax.experimental.array_serialization`.
Please change your imports to use `jax.experimental.array_serialization`.
+ The `in_axis_resources` and `out_axis_resources` arguments of pjit have been
deprecated. Please use `in_shardings` and `out_shardings` respectively.
+ The function `jax.numpy.msort` has been removed. It has been deprecated since
JAX v0.4.1. Use `jnp.sort(a, axis=0)` instead.
+ `in_parts` and `out_parts` arguments have been removed from `jax.xla_computation`
since they were only used with sharded_jit and sharded_jit is long gone.
+ `instantiate_const_outputs` argument has been removed from `jax.xla_computation`
since it has been unused for a very long time.
### jaxlib 0.4.9 (May 9, 2023)[#](#jaxlib-0-4-9-may-9-2023)
### jax 0.4.8 (March 29, 2023)[#](#jax-0-4-8-march-29-2023)
* Breaking changes
+ A major component of the Cloud TPU runtime has been upgraded. This enables
the following new features on Cloud TPU:
- [`jax.debug.print()`](index.html#jax.debug.print), [`jax.debug.callback()`](index.html#jax.debug.callback), and
[`jax.debug.breakpoint()`](index.html#jax.debug.breakpoint) now work on Cloud TPU
- Automatic TPU memory defragmentation
+ [`jax.experimental.host_callback()`](index.html#module-jax.experimental.host_callback) is no longer supported on Cloud TPU
with the new runtime component. Please file an issue on the [JAX issue
tracker](https://github.com/google/jax/issues) if the new `jax.debug` APIs
are insufficient for your use case.
The old runtime component will be available for at least the next three
months by setting the environment variable
`JAX_USE_PJRT_C_API_ON_TPU=false`. If you find you need to disable the new
runtime for any reason, please let us know on the [JAX issue
tracker](https://github.com/google/jax/issues).
* Changes
+ The minimum jaxlib version has been bumped from 0.4.6 to 0.4.7.
* Deprecations
+ CUDA 11.4 support has been dropped. JAX GPU wheels only support
CUDA 11.8 and CUDA 12. Older CUDA versions may work if jaxlib is built
from source.
+ `global_arg_shapes` argument of pmap only worked with sharded_jit and has
been removed from pmap. Please migrate to pjit and remove global_arg_shapes
from pmap.
### jax 0.4.7 (March 27, 2023)[#](#jax-0-4-7-march-27-2023)
* Changes
+ As per https://jax.readthedocs.io/en/latest/jax_array_migration.html#jax-array-migration
`jax.config.jax_array` cannot be disabled anymore.
+ `jax.config.jax_jit_pjit_api_merge` cannot be disabled anymore.
+ `jax.experimental.jax2tf.convert()` now supports the `native_serialization`
parameter to use JAX’s native lowering to StableHLO to obtain a
StableHLO module for the entire JAX function instead of lowering each JAX
primitive to a TensorFlow op. This simplifies the internals and increases
the confidence that what you serialize matches the JAX native semantics.
See [documentation](https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md).
As part of this change the config flag `--jax2tf_default_experimental_native_lowering`
has been renamed to `--jax2tf_native_serialization`.
+ JAX now depends on `ml_dtypes`, which contains definitions of NumPy types
like bfloat16. These definitions were previously internal to JAX, but have
been split into a separate package to facilitate sharing them with other
projects.
+ JAX now requires NumPy 1.21 or newer and SciPy 1.7 or newer.
* Deprecations
+ The type `jax.numpy.DeviceArray` is deprecated. Use `jax.Array` instead,
for which it is an alias.
+ The type `jax.interpreters.pxla.ShardedDeviceArray` is deprecated. Use
`jax.Array` instead.
+ Passing additional arguments to [`jax.numpy.ndarray.at()`](index.html#jax.numpy.ndarray.at) by position is deprecated.
For example, instead of `x.at[i].get(True)`, use `x.at[i].get(indices_are_sorted=True)`
(see the sketch after this list).
+ `jax.interpreters.xla.device_put` is deprecated. Please use `jax.device_put`.
+ `jax.interpreters.pxla.device_put` is deprecated. Please use `jax.device_put`.
+ `jax.experimental.pjit.FROM_GDA` is deprecated. Please pass in sharded
jax.Arrays as input and remove the `in_shardings` argument to pjit since
it is optional.
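A minimal sketch of the keyword-only `.at[...].get` form mentioned above, assuming jax 0.4.7 or newer; the index array and flag values are illustrative:
```
import jax.numpy as jnp

x = jnp.arange(5.0)
i = jnp.array([0, 2, 4])

# Deprecated positional form: x.at[i].get(True)
v = x.at[i].get(indices_are_sorted=True, unique_indices=True)
```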
### jaxlib 0.4.7 (March 27, 2023)[#](#jaxlib-0-4-7-march-27-2023)
Changes:
* jaxlib now depends on `ml_dtypes`, which contains definitions of NumPy types like bfloat16. These definitions were previously internal to JAX, but have been split into a separate package to facilitate sharing them with other projects.
### jax 0.4.6 (Mar 9, 2023)[#](#jax-0-4-6-mar-9-2023)
* Changes
+ `jax.tree_util` now contains a set of APIs that allow users to define keys for
their custom pytree nodes (see the sketch at the end of this section). This includes:
- `tree_flatten_with_path` that flattens a tree and returns not only each leaf but
also its key path.
- `tree_map_with_path` that maps a function that takes the key path as an argument.
- `register_pytree_with_keys` to register how the key path and leaves should look
for a custom pytree node.
- `keystr` that pretty-prints a key path.
+ `jax2tf.call_tf()` has a new parameter `output_shape_dtype` (default `None`)
that can be used to declare the output shape and type of the result. This enables
`jax2tf.call_tf()` to work in the presence of shape polymorphism. ([#14734](https://github.com/google/jax/issues/14734)).
* Deprecations
+ The old key-path APIs in `jax.tree_util` are deprecated and will be removed 3 months
from Mar 10 2023:
- `register_keypaths`: use [`jax.tree_util.register_pytree_with_keys()`](index.html#jax.tree_util.register_pytree_with_keys) instead.
- `AttributeKeyPathEntry` : use `GetAttrKey` instead.
- `GetitemKeyPathEntry` : use `SequenceKey` or `DictKey` instead.
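A minimal usage sketch of the key-path APIs above, assuming jax 0.4.6 or newer; the example tree is illustrative:
```
import jax

tree = {"a": [1, 2], "b": {"c": 3}}

# Flatten to (key_path, leaf) pairs plus the treedef:
leaves_with_paths, treedef = jax.tree_util.tree_flatten_with_path(tree)
for path, leaf in leaves_with_paths:
    print(jax.tree_util.keystr(path), "->", leaf)

# Map a function that also receives each leaf's key path:
labeled = jax.tree_util.tree_map_with_path(
    lambda path, leaf: (jax.tree_util.keystr(path), leaf), tree)
```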
### jaxlib 0.4.6 (Mar 9, 2023)[#](#jaxlib-0-4-6-mar-9-2023)
### jax 0.4.5 (Mar 2, 2023)[#](#jax-0-4-5-mar-2-2023)
* Deprecations
+ `jax.sharding.OpShardingSharding` has been renamed to `jax.sharding.GSPMDSharding`.
`jax.sharding.OpShardingSharding` will be removed in 3 months from Feb 17, 2023.
+ The following `jax.Array` methods are deprecated and will be removed 3 months from
Feb 23 2023:
- `jax.Array.broadcast`: use [`jax.lax.broadcast()`](index.html#jax.lax.broadcast) instead.
- `jax.Array.broadcast_in_dim`: use [`jax.lax.broadcast_in_dim()`](index.html#jax.lax.broadcast_in_dim) instead.
- `jax.Array.split`: use [`jax.numpy.split()`](index.html#jax.numpy.split) instead.
### jax 0.4.4 (Feb 16, 2023)[#](#jax-0-4-4-feb-16-2023)
* Changes
+ The implementation of `jit` and `pjit` has been merged. Merging jit and pjit
changes the internals of JAX without affecting the public API of JAX.
Before, `jit` was a final style primitive. Final style means that the creation
of jaxpr was delayed as much as possible and transformations were stacked
on top of each other. With the `jit`-`pjit` implementation merge, `jit`
becomes an initial style primitive which means that we trace to jaxpr
as early as possible. For more information see
[this section in autodidax](https://jax.readthedocs.io/en/latest/autodidax.html#on-the-fly-final-style-and-staged-initial-style-processing).
Moving to initial style should simplify JAX’s internals and make
development of features like dynamic shapes, etc easier.
You can disable it only via the environment variable, i.e.
`os.environ['JAX_JIT_PJIT_API_MERGE'] = '0'`. An environment variable is
required because the merge affects JAX at import time, so it must be
disabled before jax is imported.
+ `axis_resources` argument of `with_sharding_constraint` is deprecated.
Please use `shardings` instead. There is no change needed if you were using
`axis_resources` as an arg. If you were using it as a kwarg, then please
use `shardings` instead. `axis_resources` will be removed after 3 months
from Feb 13, 2023.
+ Added the [`jax.typing`](index.html#module-jax.typing) module, with tools for type annotations of JAX
functions (see the sketch at the end of this section).
+ The following names have been deprecated:
- `jax.xla.Device` and `jax.interpreters.xla.Device`: use `jax.Device`.
- `jax.experimental.maps.Mesh`. Use `jax.sharding.Mesh`
instead.
- `jax.experimental.pjit.NamedSharding`: use `jax.sharding.NamedSharding`.
- `jax.experimental.pjit.PartitionSpec`: use `jax.sharding.PartitionSpec`.
- `jax.interpreters.pxla.Mesh`: use `jax.sharding.Mesh`.
- `jax.interpreters.pxla.PartitionSpec`: use `jax.sharding.PartitionSpec`.
* Breaking Changes
+ The `initial` argument to reduction functions like [`jax.numpy.sum()`](index.html#jax.numpy.sum)
is now required to be a scalar, consistent with the corresponding NumPy API.
The previous behavior of broadcasting the output against non-scalar `initial`
values was an unintentional implementation detail ([#14446](https://github.com/google/jax/issues/14446)).
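A minimal sketch of annotating a function with the new `jax.typing` module mentioned above, assuming jax 0.4.4 or newer; the function itself is illustrative:
```
import jax
import jax.numpy as jnp
from jax.typing import ArrayLike

def cube(x: ArrayLike) -> jax.Array:
    # ArrayLike covers jax.Array, NumPy arrays, and Python scalars.
    return jnp.asarray(x) ** 3
```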
### jaxlib 0.4.4 (Feb 16, 2023)[#](#jaxlib-0-4-4-feb-16-2023)
* Breaking changes
+ Support for NVIDIA Kepler series GPUs has been removed from the default
`jaxlib` builds. If Kepler support is needed, it is still possible to
build `jaxlib` from source with Kepler support (via the
`--cuda_compute_capabilities=sm_35` option to `build.py`), however note
that CUDA 12 has completely dropped support for Kepler GPUs.
### jax 0.4.3 (Feb 8, 2023)[#](#jax-0-4-3-feb-8-2023)
* Breaking changes
+ Deleted `jax.scipy.linalg.polar_unitary()`, which was a deprecated JAX
extension to the scipy API. Use [`jax.scipy.linalg.polar()`](index.html#jax.scipy.linalg.polar) instead.
* Changes
+ Added [`jax.scipy.stats.rankdata()`](index.html#jax.scipy.stats.rankdata).
### jaxlib 0.4.3 (Feb 8, 2023)[#](#jaxlib-0-4-3-feb-8-2023)
* `jax.Array` now has the non-blocking `is_ready()` method, which returns `True`
if the array is ready (see also [`jax.block_until_ready()`](index.html#jax.block_until_ready)).
### jax 0.4.2 (Jan 24, 2023)[#](#jax-0-4-2-jan-24-2023)
* Breaking changes
+ Deleted `jax.experimental.callback`
+ Operations with dimensions in the presence of jax2tf shape polymorphism have
been generalized to work in more scenarios, by converting the symbolic
dimension to JAX arrays. Operations involving symbolic dimensions and
`np.ndarray` can now raise errors when the result is used as a shape value
([#14106](https://github.com/google/jax/issues/14106)).
+ jaxpr objects now raise an error on attribute setting in order to avoid
problematic mutations ([#14102](https://github.com/google/jax/issues/14102))
* Changes
+ `jax2tf.call_tf()` has a new parameter `has_side_effects` (default `True`)
that can be used to declare whether an instance can be removed or replicated
by JAX optimizations such as dead-code elimination ([#13980](https://github.com/google/jax/issues/13980)).
+ Added more support for floordiv and mod for jax2tf shape polymorphism. Previously,
certain division operations resulted in errors in the presence of symbolic dimensions
([#14108](https://github.com/google/jax/issues/14108)).
### jaxlib 0.4.2 (Jan 24, 2023)[#](#jaxlib-0-4-2-jan-24-2023)
* Changes
+ Set JAX_USE_PJRT_C_API_ON_TPU=1 to enable new Cloud TPU runtime, featuring
automatic device memory defragmentation.
### jax 0.4.1 (Dec 13, 2022)[#](#jax-0-4-1-dec-13-2022)
* Changes
+ Support for Python 3.7 has been dropped, in accordance with JAX’s
[Python and NumPy version support policy](index.html#version-support-policy).
+ We introduce `jax.Array` which is a unified array type that subsumes
`DeviceArray`, `ShardedDeviceArray`, and `GlobalDeviceArray` types in JAX.
The `jax.Array` type helps make parallelism a core feature of JAX,
simplifies and unifies JAX internals, and allows us to unify `jit` and
`pjit`. `jax.Array` has been enabled by default in JAX 0.4 and makes some
breaking change to the `pjit` API. The [jax.Array migration
guide](https://jax.readthedocs.io/en/latest/jax_array_migration.html) can
help you migrate your codebase to `jax.Array`. You can also look at the
[Distributed arrays and automatic parallelization](https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html)
tutorial to understand the new concepts.
+ `PartitionSpec` and `Mesh` are now out of experimental. The new API endpoints
are `jax.sharding.PartitionSpec` and `jax.sharding.Mesh` (see the sketch at the
end of this section). `jax.experimental.maps.Mesh` and `jax.experimental.PartitionSpec`
are deprecated and will be removed in 3 months.
+ `with_sharding_constraint`s new public endpoint is
`jax.lax.with_sharding_constraint`.
+ If using ABSL flags together with `jax.config`, the ABSL flag values are no
longer read or written after the JAX configuration options are initially
populated from the ABSL flags. This change improves performance of reading
`jax.config` options, which are used pervasively in JAX.
+ The `jax2tf.call_tf` function now performs TF lowering on the first TF device
of the same platform as the enclosing JAX computation. Previously, it used
the 0th device for the JAX-default backend.
+ A number of `jax.numpy` functions now have their arguments marked as
positional-only, matching NumPy.
+ `jnp.msort` is now deprecated, following the deprecation of `np.msort` in numpy 1.24.
It will be removed in a future release, in accordance with the [API compatibility](index.html#api-compatibility)
policy. It can be replaced with `jnp.sort(a, axis=0)`.
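A minimal sketch of the new non-experimental sharding endpoints above, assuming jax 0.4.1 or newer; on a single-device machine the mesh simply has one entry:
```
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

mesh = Mesh(np.array(jax.devices()), axis_names=("data",))
spec = PartitionSpec("data")

# Place an array according to the sharding; with one device this is a no-op.
x = jax.device_put(jnp.arange(8.0), NamedSharding(mesh, spec))
```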
### jaxlib 0.4.1 (Dec 13, 2022)[#](#jaxlib-0-4-1-dec-13-2022)
* Changes
+ Support for Python 3.7 has been dropped, in accordance with JAX’s
[Python and NumPy version support policy](index.html#version-support-policy).
+ The behavior of `XLA_PYTHON_CLIENT_MEM_FRACTION=.XX` has been changed to allocate XX% of
the total GPU memory instead of the previous behavior of using currently available GPU memory
to calculate preallocation. Please refer to
[GPU memory allocation](https://jax.readthedocs.io/en/latest/gpu_memory_allocation.html) for
more details.
+ The deprecated method `.block_host_until_ready()` has been removed. Use
`.block_until_ready()` instead.
### jax 0.4.0 (Dec 12, 2022)[#](#jax-0-4-0-dec-12-2022)
* The release was yanked.
### jaxlib 0.4.0 (Dec 12, 2022)[#](#jaxlib-0-4-0-dec-12-2022)
* The release was yanked.
### jax 0.3.25 (Nov 15, 2022)[#](#jax-0-3-25-nov-15-2022)
* Changes
+ [`jax.numpy.linalg.pinv()`](index.html#jax.numpy.linalg.pinv) now supports the `hermitian` option.
+ [`jax.scipy.linalg.hessenberg()`](index.html#jax.scipy.linalg.hessenberg) is now supported on CPU only. Requires
jaxlib > 0.3.24.
+ New functions [`jax.lax.linalg.hessenberg()`](index.html#jax.lax.linalg.hessenberg),
[`jax.lax.linalg.tridiagonal()`](index.html#jax.lax.linalg.tridiagonal), and
[`jax.lax.linalg.householder_product()`](index.html#jax.lax.linalg.householder_product) were added. Householder reduction
is currently CPU-only and tridiagonal reductions are supported on CPU and
GPU only.
+ The gradients of `svd` and `jax.numpy.linalg.pinv` are now computed more
economically for non-square matrices.
* Breaking Changes
+ Deleted the `jax_experimental_name_stack` config option.
+ A string `axis_names` argument to the `jax.experimental.maps.Mesh`
constructor is now converted into a singleton tuple instead of being
unpacked into a sequence of character axis names.
### jaxlib 0.3.25 (Nov 15, 2022)[#](#jaxlib-0-3-25-nov-15-2022)
* Changes
+ Added support for tridiagonal reductions on CPU and GPU.
+ Added support for upper Hessenberg reductions on CPU.
* Bugs
+ Fixed a bug that meant that frames in tracebacks captured by JAX were
incorrectly mapped to source lines under Python 3.10+
### jax 0.3.24 (Nov 4, 2022)[#](#jax-0-3-24-nov-4-2022)
* Changes
+ JAX should be faster to import. We now import scipy lazily, which accounted
for a significant fraction of JAX’s import time.
+ The env var `JAX_PERSISTENT_CACHE_MIN_COMPILE_TIME_SECS=$N` can be set to
limit which entries are written to the persistent cache: by default, only
computations that take 1 second or more to compile are cached.
+ Added [`jax.scipy.stats.mode()`](index.html#jax.scipy.stats.mode).
+ The default device order used by `pmap` on TPU if no order is specified now
matches `jax.devices()` for single-process jobs. Previously the
two orderings differed, which could lead to unnecessary copies or
out-of-memory errors. Requiring the orderings to agree simplifies matters.
* Breaking Changes
+ [`jax.numpy.gradient()`](index.html#jax.numpy.gradient) now behaves like most other functions in [`jax.numpy`](index.html#module-jax.numpy),
and forbids passing lists or tuples in place of arrays ([#12958](https://github.com/google/jax/issues/12958))
+ Functions in [`jax.numpy.linalg`](index.html#module-jax.numpy.linalg) and [`jax.numpy.fft`](index.html#module-jax.numpy.fft) now uniformly
require inputs to be array-like: i.e. lists and tuples cannot be used in place
of arrays. Part of [#7737](https://github.com/google/jax/issues/7737).
* Deprecations
+ `jax.sharding.MeshPspecSharding` has been renamed to `jax.sharding.NamedSharding`.
`jax.sharding.MeshPspecSharding` name will be removed in 3 months.
### jaxlib 0.3.24 (Nov 4, 2022)[#](#jaxlib-0-3-24-nov-4-2022)
* Changes
+ Buffer donation now works on CPU. This may break code that marked buffers
for donation on CPU but relied on donation not being implemented.
### jax 0.3.23 (Oct 12, 2022)[#](#jax-0-3-23-oct-12-2022)
* Changes
+ Update Colab TPU driver version for new jaxlib release.
### jax 0.3.22 (Oct 11, 2022)[#](#jax-0-3-22-oct-11-2022)
* Changes
+ Add `JAX_PLATFORMS=tpu,cpu` as default setting in TPU initialization,
so JAX will raise an error if TPU cannot be initialized instead of falling
back to CPU. Set `JAX_PLATFORMS=''` to override this behavior and automatically
choose an available backend (the original default), or set `JAX_PLATFORMS=cpu`
to always use CPU regardless of whether a TPU is available (see the sketch at
the end of this section).
* Deprecations
+ Several test utilities deprecated in JAX v0.3.8 are now removed from
`jax.test_util`.
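A minimal sketch of overriding the new default from Python, assuming the variable is set before `jax` is first imported:
```
import os

# Force CPU regardless of whether a TPU is available; use "" to restore
# the old automatic fallback behavior.
os.environ["JAX_PLATFORMS"] = "cpu"

import jax
print(jax.devices())   # e.g. [CpuDevice(id=0)]
```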
### jaxlib 0.3.22 (Oct 11, 2022)[#](#jaxlib-0-3-22-oct-11-2022)
### jax 0.3.21 (Sep 30, 2022)[#](#jax-0-3-21-sep-30-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.20...jax-v0.3.21).
* Changes
+ The persistent compilation cache will now warn instead of raising an
exception on error ([#12582](https://github.com/google/jax/issues/12582)), so program execution can continue
if something goes wrong with the cache. Set
`JAX_RAISE_PERSISTENT_CACHE_ERRORS=true` to revert this behavior.
### jax 0.3.20 (Sep 28, 2022)[#](#jax-0-3-20-sep-28-2022)
* Bug fixes:
+ Adds missing `.pyi` files that were missing from the previous release ([#12536](https://github.com/google/jax/issues/12536)).
+ Fixes an incompatibility between `jax` 0.3.19 and the libtpu version it pinned ([#12550](https://github.com/google/jax/issues/12550)). Requires jaxlib 0.3.20.
+ Fix incorrect `pip` url in `setup.py` comment ([#12528](https://github.com/google/jax/issues/12528)).
### jaxlib 0.3.20 (Sep 28, 2022)[#](#jaxlib-0-3-20-sep-28-2022)
* [GitHub commits](https://github.com/google/jax/compare/jaxlib-v0.3.15...jaxlib-v0.3.20).
* Bug fixes
+ Fixes support for limiting the visible CUDA devices via
`jax_cuda_visible_devices` in distributed jobs. This functionality is needed for
the JAX/SLURM integration on GPU ([#12533](https://github.com/google/jax/issues/12533)).
### jax 0.3.19 (Sep 27, 2022)[#](#jax-0-3-19-sep-27-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.18...jax-v0.3.19).
* Fixes required jaxlib version.
### jax 0.3.18 (Sep 26, 2022)[#](#jax-0-3-18-sep-26-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.17...jax-v0.3.18).
* Changes
+ Ahead-of-time lowering and compilation functionality (tracked in
[#7733](https://github.com/google/jax/issues/7733)) is stable and public. See [the
overview](https://jax.readthedocs.io/en/latest/aot.html) and the API docs
for [`jax.stages`](index.html#module-jax.stages).
+ Introduced [`jax.Array`](index.html#jax.Array), intended to be used for both `isinstance` checks
and type annotations for array types in JAX. Notice that this included some subtle
changes to how `isinstance` works for [`jax.numpy.ndarray`](index.html#jax.numpy.ndarray) for jax-internal
objects, as [`jax.numpy.ndarray`](index.html#jax.numpy.ndarray) is now a simple alias of [`jax.Array`](index.html#jax.Array).
* Breaking changes
+ `jax._src` is no longer imported into the public `jax` namespace.
This may break users that were using JAX internals.
+ `jax.soft_pmap` has been deleted. Please use `pjit` or `xmap` instead.
`jax.soft_pmap` is undocumented. If it were documented, a deprecation period
would have been provided.
### jax 0.3.17 (Aug 31, 2022)[#](#jax-0-3-17-aug-31-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.16...jax-v0.3.17).
* Bugs
+ Fix corner case issue in gradient of `lax.pow` with an exponent of zero
([#12041](https://github.com/google/jax/issues/12041))
* Breaking changes
+ [`jax.checkpoint()`](index.html#jax.checkpoint), also known as `jax.remat()`, no longer supports
the `concrete` option, following the previous version’s deprecation; see
[JEP 11830](https://jax.readthedocs.io/en/latest/jep/11830-new-remat-checkpoint.html).
* Changes
+ Added [`jax.pure_callback()`](index.html#jax.pure_callback) that enables calling back to pure Python functions from compiled functions (e.g. functions decorated with `jax.jit` or `jax.pmap`); see the sketch at the end of this section.
* Deprecations:
+ The deprecated `DeviceArray.tile()` method has been removed. Use [`jax.numpy.tile()`](index.html#jax.numpy.tile)
([#11944](https://github.com/google/jax/issues/11944)).
+ `DeviceArray.to_py()` has been deprecated. Use `np.asarray(x)` instead.
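A minimal sketch of `jax.pure_callback()` above, assuming jax 0.3.17 or newer; the host-side function is illustrative:
```
import numpy as np
import jax
import jax.numpy as jnp

def host_log(x):
    # Runs as ordinary Python/NumPy on the host, outside the traced program.
    return np.log(np.asarray(x)).astype(np.float32)

@jax.jit
def f(x):
    out_spec = jax.ShapeDtypeStruct(x.shape, jnp.float32)
    return jax.pure_callback(host_log, out_spec, x)

print(f(jnp.array([1.0, 2.0, 4.0], dtype=jnp.float32)))
```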
### jax 0.3.16[#](#jax-0-3-16)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.15...main).
* Breaking changes
+ Support for NumPy 1.19 has been dropped, per the
[deprecation policy](https://jax.readthedocs.io/en/latest/deprecation.html).
Please upgrade to NumPy 1.20 or newer.
* Changes
+ Added [`jax.debug`](index.html#module-jax.debug) that includes utilities for runtime value debugging such as [`jax.debug.print()`](index.html#jax.debug.print) and [`jax.debug.breakpoint()`](index.html#jax.debug.breakpoint) (see the sketch at the end of this section).
+ Added new documentation for [runtime value debugging](index.html#document-debugging/index)
* Deprecations
+ The `jax.mask()` and `jax.shapecheck()` APIs have been removed.
See [#11557](https://github.com/google/jax/issues/11557).
+ `jax.experimental.loops` has been removed. See [#10278](https://github.com/google/jax/issues/10278)
for an alternative API.
+ `jax.tree_util.tree_multimap()` has been removed. It has been deprecated since
JAX release 0.3.5, and [`jax.tree_util.tree_map()`](index.html#jax.tree_util.tree_map) is a direct replacement.
+ Removed `jax.experimental.stax`; it has long been a deprecated alias of
[`jax.example_libraries.stax`](index.html#module-jax.example_libraries.stax).
+ Removed `jax.experimental.optimizers`; it has long been a deprecated alias of
[`jax.example_libraries.optimizers`](index.html#module-jax.example_libraries.optimizers).
+ [`jax.checkpoint()`](index.html#jax.checkpoint), also known as `jax.remat()`, has a new
implementation switched on by default, meaning the old implementation is
deprecated; see [JEP 11830](https://jax.readthedocs.io/en/latest/jep/11830-new-remat-checkpoint.html).
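A minimal sketch of the new `jax.debug` utilities above, assuming jax 0.3.16 or newer:
```
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    y = x * 2
    # Prints at runtime, even under jit, using format-string style templates.
    jax.debug.print("y = {y}", y=y)
    return y

f(jnp.arange(3))
```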
### jax 0.3.15 (July 22, 2022)[#](#jax-0-3-15-july-22-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.14...jax-v0.3.15).
* Changes
+ `JaxTestCase` and `JaxTestLoader` have been removed from `jax.test_util`. These
classes have been deprecated since v0.3.1 ([#11248](https://github.com/google/jax/issues/11248)).
+ Added `jax.scipy.gaussian_kde` ([#11237](https://github.com/google/jax/issues/11237)).
+ Binary operations between JAX arrays and built-in collections (`dict`, `list`, `set`, `tuple`)
now raise a `TypeError` in all cases. Previously some cases (particularly equality and inequality)
would return boolean scalars inconsistent with similar operations in NumPy ([#11234](https://github.com/google/jax/issues/11234)).
+ Several [`jax.tree_util`](index.html#module-jax.tree_util) routines accessed as top-level JAX package imports are now
deprecated, and will be removed in a future JAX release in accordance with the
[API compatibility](index.html#api-compatibility) policy:
- `jax.treedef_is_leaf()` is deprecated in favor of [`jax.tree_util.treedef_is_leaf()`](index.html#jax.tree_util.treedef_is_leaf)
- `jax.tree_flatten()` is deprecated in favor of [`jax.tree_util.tree_flatten()`](index.html#jax.tree_util.tree_flatten)
- `jax.tree_leaves()` is deprecated in favor of [`jax.tree_util.tree_leaves()`](index.html#jax.tree_util.tree_leaves)
- `jax.tree_structure()` is deprecated in favor of [`jax.tree_util.tree_structure()`](index.html#jax.tree_util.tree_structure)
- `jax.tree_transpose()` is deprecated in favor of [`jax.tree_util.tree_transpose()`](index.html#jax.tree_util.tree_transpose)
- `jax.tree_unflatten()` is deprecated in favor of [`jax.tree_util.tree_unflatten()`](index.html#jax.tree_util.tree_unflatten)
+ The `sym_pos` argument of [`jax.scipy.linalg.solve()`](index.html#jax.scipy.linalg.solve) is deprecated in favor of `assume_a='pos'`,
following a similar deprecation in [`scipy.linalg.solve()`](https://docs.scipy.org/doc/scipy-1.8.1/reference/generated/scipy.linalg.solve.html#scipy.linalg.solve)
(see the sketch after this list).
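A minimal sketch of the `sym_pos` migration above; the matrix is an illustrative symmetric positive-definite example:
```
import jax.numpy as jnp
from jax.scipy import linalg

a = jnp.array([[4.0, 1.0],
               [1.0, 3.0]])   # symmetric positive definite
b = jnp.array([1.0, 2.0])

# Previously: linalg.solve(a, b, sym_pos=True)
x = linalg.solve(a, b, assume_a="pos")
```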
### jaxlib 0.3.15 (July 22, 2022)[#](#jaxlib-0-3-15-july-22-2022)
* [GitHub commits](https://github.com/google/jax/compare/jaxlib-v0.3.14...jaxlib-v0.3.15).
### jax 0.3.14 (June 27, 2022)[#](#jax-0-3-14-june-27-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.13...jax-v0.3.14).
* Breaking changes
+ `jax.experimental.compilation_cache.initialize_cache()` no longer supports
the `max_cache_size_bytes` argument and will not accept it as an input.
+ `JAX_PLATFORMS` now raises an exception when platform initialization fails.
* Changes
+ Fixed compatibility problems with NumPy 1.23.
+ [`jax.numpy.linalg.slogdet()`](index.html#jax.numpy.linalg.slogdet) now accepts an optional `method` argument
that allows selection between an LU-decomposition based implementation and
an implementation based on QR decomposition.
+ [`jax.numpy.linalg.qr()`](index.html#jax.numpy.linalg.qr) now supports `mode="raw"`.
+ `pickle`, `copy.copy`, and `copy.deepcopy` now have more complete support when
used on jax arrays ([#10659](https://github.com/google/jax/issues/10659)). In particular:
- `pickle` and `deepcopy` previously returned `np.ndarray` objects when used
on a `DeviceArray`; now `DeviceArray` objects are returned. For `deepcopy`,
the copied array is on the same device as the original. For `pickle` the
deserialized array will be on the default device.
- Within function transformations (i.e. traced code), `deepcopy` and `copy`
previously were no-ops. Now they use the same mechanism as `DeviceArray.copy()`.
- Calling `pickle` on a traced array now results in an explicit
`ConcretizationTypeError`.
+ The implementation of singular value decomposition (SVD) and
symmetric/Hermitian eigendecomposition should be significantly faster on
TPU, especially for matrices above 1000x1000 or so. Both now use a spectral
divide-and-conquer algorithm for eigendecomposition (QDWH-eig).
+ [`jax.numpy.ldexp()`](index.html#jax.numpy.ldexp) no longer silently promotes all inputs to float64,
instead it promotes to float32 for integer inputs of size int32 or smaller
([#10921](https://github.com/google/jax/issues/10921)).
+ Add a `create_perfetto_link` option to [`jax.profiler.start_trace()`](index.html#jax.profiler.start_trace) and
`jax.profiler.trace()`. When used, the profiler will generate a
link to the Perfetto UI to view the trace.
+ Changed the semantics of `jax.profiler.start_server(...)` to store the
keepalive globally, rather than requiring the user to keep a reference to
it.
+ Added [`jax.random.generalized_normal()`](index.html#jax.random.generalized_normal).
+ Added [`jax.random.ball()`](index.html#jax.random.ball).
+ Added [`jax.default_device()`](index.html#jax.default_device).
+ Added a `python -m jax.collect_profile` script to manually capture program
traces as an alternative to the Tensorboard UI.
+ Added a `jax.named_scope` context manager that adds profiler metadata to
Python programs (similar to `jax.named_call`).
+ In scatter-update operations (i.e. [`jax.numpy.ndarray.at`](index.html#jax.numpy.ndarray.at)), unsafe implicit
dtype casts are deprecated, and now result in a `FutureWarning`.
In a future release, this will become an error. An example of an unsafe implicit
cast is `jnp.zeros(4, dtype=int).at[0].set(1.5)`, in which `1.5` previously was
silently truncated to `1` (see the sketch at the end of this section).
+ `jax.experimental.compilation_cache.initialize_cache()` now supports gcs
bucket path as input.
+ Added [`jax.scipy.stats.gennorm()`](index.html#module-jax.scipy.stats.gennorm).
+ [`jax.numpy.roots()`](index.html#jax.numpy.roots) is now better behaved when `strip_zeros=False` when
coefficients have leading zeros ([#11215](https://github.com/google/jax/issues/11215)).
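A minimal sketch of the scatter-update dtype change above, assuming jax 0.3.14 or newer:
```
import jax.numpy as jnp

x = jnp.zeros(4, dtype=jnp.int32)

# x.at[0].set(1.5)  # unsafe implicit cast: warns now, will error later

# Make the intent explicit instead:
y = x.at[0].set(1)                          # stay integer
z = x.astype(jnp.float32).at[0].set(1.5)    # or cast the array first
```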
### jaxlib 0.3.14 (June 27, 2022)[#](#jaxlib-0-3-14-june-27-2022)
* [GitHub commits](https://github.com/google/jax/compare/jaxlib-v0.3.10...jaxlib-v0.3.14).
+ x86-64 Mac wheels now require Mac OS 10.14 (Mojave) or newer. Mac OS 10.14
was released in 2018, so this should not be a very onerous requirement.
+ The bundled version of NCCL was updated to 2.12.12, fixing some deadlocks.
+ The Python flatbuffers package is no longer a dependency of jaxlib.
### jax 0.3.13 (May 16, 2022)[#](#jax-0-3-13-may-16-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.12...jax-v0.3.13).
### jax 0.3.12 (May 15, 2022)[#](#jax-0-3-12-may-15-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.11...jax-v0.3.12).
* Changes
+ Fixes [#10717](https://github.com/google/jax/issues/10717).
### jax 0.3.11 (May 15, 2022)[#](#jax-0-3-11-may-15-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.10...jax-v0.3.11).
* Changes
+ `jax.lax.eigh()` now accepts an optional `sort_eigenvalues` argument
that allows users to opt out of eigenvalue sorting on TPU.
* Deprecations
+ Non-array arguments to functions in [`jax.lax.linalg`](index.html#module-jax.lax.linalg) are now marked
keyword-only. As a backward-compatibility step passing keyword-only
arguments positionally yields a warning, but in a future JAX release passing
keyword-only arguments positionally will fail.
However, most users should prefer to use [`jax.numpy.linalg`](index.html#module-jax.numpy.linalg) instead.
+ `jax.scipy.linalg.polar_unitary()`, which was a JAX extension to the
scipy API, is deprecated. Use [`jax.scipy.linalg.polar()`](index.html#jax.scipy.linalg.polar) instead.
### jax 0.3.10 (May 3, 2022)[#](#jax-0-3-10-may-3-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.9...jax-v0.3.10).
### jaxlib 0.3.10 (May 3, 2022)[#](#jaxlib-0-3-10-may-3-2022)
* [GitHub commits](https://github.com/google/jax/compare/jaxlib-v0.3.7...jaxlib-v0.3.10).
* Changes
+ [TF commit](https://github.com/tensorflow/tensorflow/commit/207d50d253e11c3a3430a700af478a1d524a779a)
fixes an issue in the MHLO canonicalizer that caused constant folding to
take a long time or crash for certain programs.
### jax 0.3.9 (May 2, 2022)[#](#jax-0-3-9-may-2-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.8...jax-v0.3.9).
* Changes
+ Added support for fully asynchronous checkpointing for GlobalDeviceArray.
### jax 0.3.8 (April 29 2022)[#](#jax-0-3-8-april-29-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.7...jax-v0.3.8).
* Changes
+ [`jax.numpy.linalg.svd()`](index.html#jax.numpy.linalg.svd) on TPUs uses a qdwh-svd solver.
+ [`jax.numpy.linalg.cond()`](index.html#jax.numpy.linalg.cond) on TPUs now accepts complex input.
+ [`jax.numpy.linalg.pinv()`](index.html#jax.numpy.linalg.pinv) on TPUs now accepts complex input.
+ [`jax.numpy.linalg.matrix_rank()`](index.html#jax.numpy.linalg.matrix_rank) on TPUs now accepts complex input.
+ `jax.scipy.cluster.vq.vq()` has been added.
+ `jax.experimental.maps.mesh` has been deleted.
Please use `jax.experimental.maps.Mesh`; see https://jax.readthedocs.io/en/latest/_autosummary/jax.experimental.maps.Mesh.html#jax.experimental.maps.Mesh
for more information.
+ [`jax.scipy.linalg.qr()`](index.html#jax.scipy.linalg.qr) now returns a length-1 tuple rather than the raw array when
`mode='r'`, in order to match the behavior of `scipy.linalg.qr` ([#10452](https://github.com/google/jax/issues/10452))
+ [`jax.numpy.take_along_axis()`](index.html#jax.numpy.take_along_axis) now takes an optional `mode` parameter
that specifies the behavior of out-of-bounds indexing. By default,
invalid values (e.g., NaN) will be returned for out-of-bounds indices. In
previous versions of JAX, invalid indices were clamped into range. The
previous behavior can be restored by passing `mode="clip"`.
+ [`jax.numpy.take()`](index.html#jax.numpy.take) now defaults to `mode="fill"`, which returns
invalid values (e.g., NaN) for out-of-bounds indices (see the sketch at the
end of this section).
+ Scatter operations, such as `x.at[...].set(...)`, now have `"drop"` semantics.
This has no effect on the scatter operation itself, but it means that when
differentiated the gradient of a scatter will yield zero cotangents for
out-of-bounds indices. Previously out-of-bounds indices were clamped into
range for the gradient, which was not mathematically correct.
+ [`jax.numpy.take_along_axis()`](index.html#jax.numpy.take_along_axis) now raises a `TypeError` if its indices
are not of an integer type, matching the behavior of
[`numpy.take_along_axis()`](https://numpy.org/doc/stable/reference/generated/numpy.take_along_axis.html#numpy.take_along_axis). Previously non-integer indices were silently
cast to integers.
+ [`jax.numpy.ravel_multi_index()`](index.html#jax.numpy.ravel_multi_index) now raises a `TypeError` if its `dims` argument
is not of an integer type, matching the behavior of
[`numpy.ravel_multi_index()`](https://numpy.org/doc/stable/reference/generated/numpy.ravel_multi_index.html#numpy.ravel_multi_index). Previously non-integer `dims` was silently
cast to integers.
+ [`jax.numpy.split()`](index.html#jax.numpy.split) now raises a `TypeError` if its `axis` argument
is not of an integer type, matching the behavior of
[`numpy.split()`](https://numpy.org/doc/stable/reference/generated/numpy.split.html#numpy.split). Previously non-integer `axis` was silently
cast to integers.
+ [`jax.numpy.indices()`](index.html#jax.numpy.indices) now raises a `TypeError` if its dimensions
are not of an integer type, matching the behavior of
[`numpy.indices()`](https://numpy.org/doc/stable/reference/generated/numpy.indices.html#numpy.indices). Previously non-integer dimensions were silently
cast to integers.
+ [`jax.numpy.diag()`](index.html#jax.numpy.diag) now raises a `TypeError` if its `k` argument
is not of an integer type, matching the behavior of
[`numpy.diag()`](https://numpy.org/doc/stable/reference/generated/numpy.diag.html#numpy.diag). Previously non-integer `k` was silently
cast to integers.
+ Added [`jax.random.orthogonal()`](index.html#jax.random.orthogonal).
* Deprecations
+ Many functions and objects available in `jax.test_util` are now deprecated and will raise a
warning on import. This includes `cases_from_list`, `check_close`, `check_eq`, `device_under_test`,
`format_shape_dtype_string`, `rand_uniform`, `skip_on_devices`, `with_config`, `xla_bridge`, and
`_default_tolerance` ([#10389](https://github.com/google/jax/issues/10389)). These, along with previously-deprecated `JaxTestCase`,
`JaxTestLoader`, and `BufferDonationTestCase`, will be removed in a future JAX release.
Most of these utilities can be replaced by calls to standard Python & NumPy testing utilities found
in e.g. [`unittest`](https://docs.python.org/3/library/unittest.html#module-unittest), `absl.testing`, [`numpy.testing`](https://numpy.org/doc/stable/reference/routines.testing.html#module-numpy.testing), etc. JAX-specific functionality
such as device checking can be replaced through the use of public APIs such as [`jax.devices()`](index.html#jax.devices).
Many of the deprecated utilities will still exist in `jax._src.test_util`, but these are not
public APIs and as such may be changed or removed without notice in future releases.
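A minimal sketch of the new out-of-bounds indexing behavior above, assuming jax 0.3.8 or newer; the arrays are illustrative:
```
import jax.numpy as jnp

a = jnp.array([10.0, 20.0, 30.0])
idx = jnp.array([0, 5])            # index 5 is out of bounds

jnp.take(a, idx)                   # default mode="fill" -> [10., nan]
jnp.take(a, idx, mode="clip")      # previous behavior   -> [10., 30.]
```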
### jax 0.3.7 (April 15, 2022)[#](#jax-0-3-7-april-15-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.6...jax-v0.3.7).
* Changes:
+ Fixed a performance problem if the indices passed to
[`jax.numpy.take_along_axis()`](index.html#jax.numpy.take_along_axis) were broadcasted ([#10281](https://github.com/google/jax/issues/10281)).
+ [`jax.scipy.special.expit()`](index.html#jax.scipy.special.expit) and [`jax.scipy.special.logit()`](index.html#jax.scipy.special.logit) now
require their arguments to be scalars or JAX arrays. They also now promote
integer arguments to floating point.
+ The `DeviceArray.tile()` method is deprecated, because numpy arrays do not have a
`tile()` method. As a replacement for this, use [`jax.numpy.tile()`](index.html#jax.numpy.tile)
([#10266](https://github.com/google/jax/issues/10266)).
### jaxlib 0.3.7 (April 15, 2022)[#](#jaxlib-0-3-7-april-15-2022)
* Changes:
+ Linux wheels are now built conforming to the `manylinux2014` standard, instead
of `manylinux2010`.
### jax 0.3.6 (April 12, 2022)[#](#jax-0-3-6-april-12-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.5...jax-v0.3.6).
* Changes:
+ Upgraded libtpu wheel to a version that fixes a hang when initializing a TPU
pod. Fixes [#10218](https://github.com/google/jax/issues/10218).
* Deprecations:
+ `jax.experimental.loops` is being deprecated. See [#10278](https://github.com/google/jax/issues/10278)
for an alternative API.
### jax 0.3.5 (April 7, 2022)[#](#jax-0-3-5-april-7-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.4...jax-v0.3.5).
* Changes:
+ added [`jax.random.loggamma()`](index.html#jax.random.loggamma) & improved behavior of [`jax.random.beta()`](index.html#jax.random.beta)
and [`jax.random.dirichlet()`](index.html#jax.random.dirichlet) for small parameter values ([#9906](https://github.com/google/jax/issues/9906)).
+ the private `lax_numpy` submodule is no longer exposed in the `jax.numpy` namespace ([#10029](https://github.com/google/jax/issues/10029)).
+ added array creation routines [`jax.numpy.frombuffer()`](index.html#jax.numpy.frombuffer), [`jax.numpy.fromfunction()`](index.html#jax.numpy.fromfunction),
and [`jax.numpy.fromstring()`](index.html#jax.numpy.fromstring) ([#10049](https://github.com/google/jax/issues/10049)).
+ `DeviceArray.copy()` now returns a `DeviceArray` rather than a `np.ndarray` ([#10069](https://github.com/google/jax/issues/10069))
+ added [`jax.scipy.linalg.rsf2csf()`](index.html#jax.scipy.linalg.rsf2csf)
+ `jax.experimental.sharded_jit` has been deprecated and will be removed soon.
* Deprecations:
+ `jax.nn.normalize()` is being deprecated. Use [`jax.nn.standardize()`](index.html#jax.nn.standardize) instead ([#9899](https://github.com/google/jax/issues/9899)).
+ `jax.tree_util.tree_multimap()` is deprecated. Use [`jax.tree_util.tree_map()`](index.html#jax.tree_util.tree_map) instead ([#5746](https://github.com/google/jax/issues/5746)).
+ `jax.experimental.sharded_jit` is deprecated. Use `pjit` instead.
### jaxlib 0.3.5 (April 7, 2022)[#](#jaxlib-0-3-5-april-7-2022)
* Bug fixes
+ Fixed a bug where double-precision complex-to-real IRFFTs would mutate their
input buffers on GPU ([#9946](https://github.com/google/jax/issues/9946)).
+ Fixed incorrect constant-folding of complex scatters ([#10159](https://github.com/google/jax/issues/10159))
### jax 0.3.4 (March 18, 2022)[#](#jax-0-3-4-march-18-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.3...jax-v0.3.4).
### jax 0.3.3 (March 17, 2022)[#](#jax-0-3-3-march-17-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.2...jax-v0.3.3).
### jax 0.3.2 (March 16, 2022)[#](#jax-0-3-2-march-16-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.1...jax-v0.3.2).
* Changes:
+ The functions `jax.ops.index_update` and `jax.ops.index_add`, which were
deprecated in 0.2.22, have been removed. Please use
[the `.at` property on JAX arrays](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html)
instead, e.g., `x.at[idx].set(y)` (see the sketch after this list).
+ Moved `jax.experimental.ann.approx_*_k` into `jax.lax`. These functions are
optimized alternatives to `jax.lax.top_k`.
+ [`jax.numpy.broadcast_arrays()`](index.html#jax.numpy.broadcast_arrays) and [`jax.numpy.broadcast_to()`](index.html#jax.numpy.broadcast_to) now require scalar
or array-like inputs, and will fail if they are passed lists (part of [#7737](https://github.com/google/jax/issues/7737)).
+ The standard jax[tpu] install can now be used with Cloud TPU v4 VMs.
+ `pjit` now works on CPU (in addition to previous TPU and GPU support).
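A minimal sketch of the `.at` replacements for the removed `jax.ops` functions above:
```
import jax.numpy as jnp

x = jnp.zeros(5)

y = x.at[2].set(1.0)   # replaces jax.ops.index_update(x, jax.ops.index[2], 1.0)
z = x.at[2].add(1.0)   # replaces jax.ops.index_add(x, jax.ops.index[2], 1.0)
```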
### jaxlib 0.3.2 (March 16, 2022)[#](#jaxlib-0-3-2-march-16-2022)
* Changes
+ `XlaComputation.as_hlo_text()` now supports printing large constants by
passing boolean flag `print_large_constants=True`.
* Deprecations:
+ The `.block_host_until_ready()` method on JAX arrays has been deprecated.
Use `.block_until_ready()` instead.
### jax 0.3.1 (Feb 18, 2022)[#](#jax-0-3-1-feb-18-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.3.0...jax-v0.3.1).
* Changes:
+ `jax.test_util.JaxTestCase` and `jax.test_util.JaxTestLoader` are now deprecated.
The suggested replacement is to use `parametrized.TestCase` directly. For tests that
rely on custom asserts such as `JaxTestCase.assertAllClose()`, the suggested replacement
is to use standard numpy testing utilities such as [`numpy.testing.assert_allclose()`](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html#numpy.testing.assert_allclose),
which work directly with JAX arrays ([#9620](https://github.com/google/jax/issues/9620)).
+ `jax.test_util.JaxTestCase` now sets `jax_numpy_rank_promotion='raise'` by default
([#9562](https://github.com/google/jax/issues/9562)). To recover the previous behavior, use the new
`jax.test_util.with_config` decorator:
```
@jtu.with_config(jax_numpy_rank_promotion='allow')
class MyTestCase(jtu.JaxTestCase):
...
```
+ Added [`jax.scipy.linalg.schur()`](index.html#jax.scipy.linalg.schur), [`jax.scipy.linalg.sqrtm()`](index.html#jax.scipy.linalg.sqrtm),
[`jax.scipy.signal.csd()`](index.html#jax.scipy.signal.csd), [`jax.scipy.signal.stft()`](index.html#jax.scipy.signal.stft),
[`jax.scipy.signal.welch()`](index.html#jax.scipy.signal.welch).
### jax 0.3.0 (Feb 10, 2022)[#](#jax-0-3-0-feb-10-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.28...jax-v0.3.0).
* Changes
+ jax version has been bumped to 0.3.0. Please see the [design doc](https://jax.readthedocs.io/en/latest/design_notes/jax_versioning.html)
for the explanation.
### jaxlib 0.3.0 (Feb 10, 2022)[#](#jaxlib-0-3-0-feb-10-2022)
* Changes
+ Bazel 5.0.0 is now required to build jaxlib.
+ jaxlib version has been bumped to 0.3.0. Please see the [design doc](https://jax.readthedocs.io/en/latest/design_notes/jax_versioning.html)
for the explanation.
### jax 0.2.28 (Feb 1, 2022)[#](#jax-0-2-28-feb-1-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.27...jax-v0.2.28).
+ `jax.jit(f).lower(...).compiler_ir()` now defaults to the MHLO dialect if no
`dialect=` is passed.
+ The `jax.jit(f).lower(...).compiler_ir(dialect='mhlo')` now returns an MLIR
`ir.Module` object instead of its string representation.
### jaxlib 0.1.76 (Jan 27, 2022)[#](#jaxlib-0-1-76-jan-27-2022)
* New features
+ Includes precompiled SASS for NVidia compute capability 8.0 GPUs
(e.g. A100). Removes precompiled SASS for compute capability 6.1 so as not
to increase the number of compute capabilities: GPUs with compute capability
6.1 can use the 6.0 SASS.
+ With jaxlib 0.1.76, JAX uses the MHLO MLIR dialect as its primary target compiler IR
by default.
* Breaking changes
+ Support for NumPy 1.18 has been dropped, per the
[deprecation policy](https://jax.readthedocs.io/en/latest/deprecation.html).
Please upgrade to a supported NumPy version.
* Bug fixes
+ Fixed a bug where apparently identical pytreedef objects constructed by different routes
did not compare as equal ([#9066](https://github.com/google/jax/issues/9066)).
+ The JAX jit cache now requires two static arguments to have identical types for a cache hit ([#9311](https://github.com/google/jax/issues/9311)).
### jax 0.2.27 (Jan 18 2022)[#](#jax-0-2-27-jan-18-2022)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.26...jax-v0.2.27).
* Breaking changes:
+ Support for NumPy 1.18 has been dropped, per the
[deprecation policy](https://jax.readthedocs.io/en/latest/deprecation.html).
Please upgrade to a supported NumPy version.
+ The host_callback primitives have been simplified to drop the
special autodiff handling for hcb.id_tap and id_print.
From now on, only the primals are tapped. The old behavior can be
obtained (for a limited time) by setting the `JAX_HOST_CALLBACK_AD_TRANSFORMS`
environment variable, or the `--jax_host_callback_ad_transforms` flag.
Additionally, added documentation for how to implement the old behavior
using JAX custom AD APIs ([#8678](https://github.com/google/jax/issues/8678)).
+ Sorting now matches the behavior of NumPy for `0.0` and `NaN` regardless of the
bit representation. In particular, `0.0` and `-0.0` are now treated as equivalent,
where previously `-0.0` was treated as less than `0.0`. Additionally all `NaN`
representations are now treated as equivalent and sorted to the end of the array.
Previously negative `NaN` values were sorted to the front of the array, and `NaN`
values with different internal bit representations were not treated as equivalent, and
were sorted according to those bit patterns ([#9178](https://github.com/google/jax/issues/9178)); see the sketch at the end of this section.
+ [`jax.numpy.unique()`](index.html#jax.numpy.unique) now treats `NaN` values in the same way as `np.unique` in
NumPy versions 1.21 and newer: at most one `NaN` value will appear in the uniquified
output ([#9184](https://github.com/google/jax/issues/9184)).
* Bug fixes:
+ host_callback now supports ad_checkpoint.checkpoint ([#8907](https://github.com/google/jax/issues/8907)).
* New features:
+ Added `jax.block_until_ready` ([#8941](https://github.com/google/jax/issues/8941))
+ Added a new debugging flag/environment variable `JAX_DUMP_IR_TO=/path`.
If set, JAX dumps the MHLO/HLO IR it generates for each computation to a
file under the given path.
+ Added `jax.ensure_compile_time_eval` to the public api ([#7987](https://github.com/google/jax/issues/7987)).
+ jax2tf now supports a flag jax2tf_associative_scan_reductions to change
the lowering for associative reductions, e.g., jnp.cumsum, to behave
like JAX on CPU and GPU (to use an associative scan). See the jax2tf README
for more details ([#9189](https://github.com/google/jax/issues/9189)).
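A minimal sketch of the new sorting and `unique` semantics above, assuming jax 0.2.27 or newer:
```
import jax.numpy as jnp

x = jnp.array([jnp.nan, 0.0, -0.0, 1.0])

jnp.sort(x)     # -0.0 and 0.0 now compare as equivalent; NaNs go to the end
jnp.unique(x)   # at most one NaN in the output, matching NumPy >= 1.21
```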
### jaxlib 0.1.75 (Dec 8, 2021)[#](#jaxlib-0-1-75-dec-8-2021)
* New features:
+ Support for python 3.10.
### jax 0.2.26 (Dec 8, 2021)[#](#jax-0-2-26-dec-8-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.25...jax-v0.2.26).
* Bug fixes:
+ Out-of-bounds indices to `jax.ops.segment_sum` will now be handled with
`FILL_OR_DROP` semantics, as documented. This primarily affects the
reverse-mode derivative, where gradients corresponding to out-of-bounds
indices will now be returned as 0 ([#8634](https://github.com/google/jax/issues/8634)).
+ jax2tf will force the converted code to use XLA for the code fragments
under jax.jit, e.g., most jax.numpy functions ([#7839](https://github.com/google/jax/issues/7839)).
### jaxlib 0.1.74 (Nov 17, 2021)[#](#jaxlib-0-1-74-nov-17-2021)
* Enabled peer-to-peer copies between GPUs. Previously, GPU copies were bounced via the host, which is usually slower.
* Added experimental MLIR Python bindings for use by JAX.
### jax 0.2.25 (Nov 10, 2021)[#](#jax-0-2-25-nov-10-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.24...jax-v0.2.25).
* New features:
+ (Experimental) `jax.distributed.initialize` exposes multi-host GPU backend.
+ `jax.random.permutation` supports a new `independent` keyword argument
([#8430](https://github.com/google/jax/issues/8430)); see the sketch at the end of this section.
+ Added `jax.lax.linalg.qdwh`.
* Breaking changes
+ Moved `jax.experimental.stax` to `jax.example_libraries.stax`
+ Moved `jax.experimental.optimizers` to `jax.example_libraries.optimizers`
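A minimal sketch of the new `independent` argument above, assuming jax 0.2.25 or newer; the array is illustrative:
```
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
x = jnp.arange(12).reshape(3, 4)

jax.random.permutation(key, x, axis=1)                    # one shared column permutation
jax.random.permutation(key, x, axis=1, independent=True)  # each row permuted independently
```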
### jax 0.2.24 (Oct 19, 2021)[#](#jax-0-2-24-oct-19-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.22...jax-v0.2.24).
* New features:
+ `jax.random.choice` and `jax.random.permutation` now support
multidimensional arrays and an optional `axis` argument ([#8158](https://github.com/google/jax/issues/8158))
* Breaking changes:
+ `jax.numpy.take` and `jax.numpy.take_along_axis` now require array-like inputs
(see [#7737](https://github.com/google/jax/issues/7737))
### jaxlib 0.1.73 (Oct 18, 2021)[#](#jaxlib-0-1-73-oct-18-2021)
* Multiple cuDNN versions are now supported for jaxlib GPU `cuda11` wheels.
+ cuDNN 8.2 or newer. We recommend using the cuDNN 8.2 wheel if your cuDNN
installation is new enough, since it supports additional functionality.
+ cuDNN 8.0.5 or newer.
* Breaking changes:
+ The install commands for GPU jaxlib are as follows:
```
pip install --upgrade pip
# Installs the wheel compatible with CUDA 11 and cuDNN 8.2 or newer.
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
# Installs the wheel compatible with CUDA 11 and cuDNN 8.2 or newer.
pip install jax[cuda11_cudnn82] -f https://storage.googleapis.com/jax-releases/jax_releases.html
# Installs the wheel compatible with CUDA 11 and cuDNN 8.0.5 or newer.
pip install jax[cuda11_cudnn805] -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
### jax 0.2.22 (Oct 12, 2021)[#](#jax-0-2-22-oct-12-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.21...jax-v0.2.22).
* Breaking Changes
+ Static arguments to `jax.pmap` must now be hashable.
Unhashable static arguments have long been disallowed on `jax.jit`, but they
were still permitted on `jax.pmap`; `jax.pmap` compared unhashable static
arguments using object identity.
This behavior is a footgun, since comparing arguments using
object identity leads to recompilation each time the object identity
changes. Instead, we now ban unhashable arguments: if a user of `jax.pmap`
wants to compare static arguments by object identity, they can define
`__hash__` and `__eq__` methods on their objects that do that, or wrap their
objects in an object that has those operations with object identity
semantics. Another option is to use `functools.partial` to encapsulate the
unhashable static arguments into the function object (see the sketch at the end of this section).
+ `jax.util.partial` was an accidental export that has now been removed. Use
`functools.partial` from the Python standard library instead.
* Deprecations
+ The functions `jax.ops.index_update`, `jax.ops.index_add` etc. are
deprecated and will be removed in a future JAX release. Please use
[the `.at` property on JAX arrays](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html)
instead, e.g., `x.at[idx].set(y)`. For now, these functions produce a
`DeprecationWarning`.
* New features:
+ An optimized C++ code-path improving the dispatch time for `pmap` is now the
default when using jaxlib 0.1.72 or newer. The feature can be disabled using
the `--experimental_cpp_pmap` flag (or `JAX_CPP_PMAP` environment variable).
+ `jax.numpy.unique` now supports an optional `fill_value` argument ([#8121](https://github.com/google/jax/issues/8121))
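A minimal sketch of the `functools.partial` workaround above; the function and its unhashable list argument are illustrative:
```
import functools
import jax
import jax.numpy as jnp

def scale(x, factors):                  # `factors` is an unhashable Python list
    return x * jnp.asarray(factors)

# Binding the unhashable argument up front avoids passing it as a static
# argument to pmap:
f = jax.pmap(functools.partial(scale, factors=[1.0, 2.0, 3.0]))

n = jax.local_device_count()
print(f(jnp.ones((n, 3))))              # leading axis must equal the device count
```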
### jaxlib 0.1.72 (Oct 12, 2021)[#](#jaxlib-0-1-72-oct-12-2021)
* Breaking changes:
+ Support for CUDA 10.2 and CUDA 10.1 has been dropped. Jaxlib now supports
CUDA 11.1+.
* Bug fixes:
+ Fixes https://github.com/google/jax/issues/7461, which caused wrong
outputs on all platforms due to incorrect buffer aliasing inside the XLA
compiler.
### jax 0.2.21 (Sept 23, 2021)[#](#jax-0-2-21-sept-23-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.20...jax-v0.2.21).
* Breaking Changes
+ `jax.api` has been removed. Functions that were available as `jax.api.*`
were aliases for functions in `jax.*`; please use the functions in
`jax.*` instead.
+ `jax.partial`, and `jax.lax.partial` were accidental exports that have now
been removed. Use `functools.partial` from the Python standard library
instead.
+ Boolean scalar indices now raise a `TypeError`; previously this silently
returned wrong results ([#7925](https://github.com/google/jax/issues/7925)).
+ Many more `jax.numpy` functions now require array-like inputs, and will error
if passed a list ([#7747](https://github.com/google/jax/issues/7747) [#7802](https://github.com/google/jax/issues/7802) [#7907](https://github.com/google/jax/issues/7907)).
See [#7737](https://github.com/google/jax/issues/7737) for a discussion of the rationale behind this change.
+ When inside a transformation such as `jax.jit`, `jax.numpy.array` always
stages the array it produces into the traced computation. Previously
`jax.numpy.array` would sometimes produce an on-device array, even under
a `jax.jit` decorator. This change may break code that used JAX arrays to
perform shape or index computations that must be known statically; the
workaround is to perform such computations using classic NumPy arrays
instead.
+ `jnp.ndarray` is now a true base-class for JAX arrays. In particular, this
means that for a standard numpy array `x`, `isinstance(x, jnp.ndarray)` will
now return `False` ([#7927](https://github.com/google/jax/issues/7927)); see the sketch at the end of this section.
* New features:
+ Added [`jax.numpy.insert()`](index.html#jax.numpy.insert) implementation ([#7936](https://github.com/google/jax/issues/7936)).
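A minimal sketch of the `jnp.ndarray` base-class change above, assuming jax 0.2.21 or newer:
```
import numpy as np
import jax.numpy as jnp

print(isinstance(np.zeros(3), jnp.ndarray))    # False: plain NumPy arrays no longer match
print(isinstance(jnp.zeros(3), jnp.ndarray))   # True: JAX arrays do
```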
### jax 0.2.20 (Sept 2, 2021)[#](#jax-0-2-20-sept-2-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.19...jax-v0.2.20).
* Breaking Changes
+ `jnp.poly*` functions now require array-like inputs ([#7732](https://github.com/google/jax/issues/7732))
+ `jnp.unique` and other set-like operations now require array-like inputs
([#7662](https://github.com/google/jax/issues/7662))
### jaxlib 0.1.71 (Sep 1, 2021)[#](#jaxlib-0-1-71-sep-1-2021)
* Breaking changes:
+ Support for CUDA 11.0 and CUDA 10.1 has been dropped. Jaxlib now supports
CUDA 10.2 and CUDA 11.1+.
### jax 0.2.19 (Aug 12, 2021)[#](#jax-0-2-19-aug-12-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.18...jax-v0.2.19).
* Breaking changes:
+ Support for NumPy 1.17 has been dropped, per the
[deprecation policy](https://jax.readthedocs.io/en/latest/deprecation.html).
Please upgrade to a supported NumPy version.
+ The `jit` decorator has been added around the implementation of a number of
operators on JAX arrays. This speeds up dispatch times for common
operators such as `+`.
This change should largely be transparent to most users. However, there is
one known behavioral change, which is that large integer constants may now
produce an error when passed directly to a JAX operator
(e.g., `x + 2**40`). The workaround is to cast the constant to an
explicit type (e.g., `np.float64(2**40)`).
* New features:
+ Improved the support for shape polymorphism in jax2tf for operations that
need to use a dimension size in array computation, e.g., `jnp.mean`.
([#7317](https://github.com/google/jax/issues/7317))
* Bug fixes:
+ Some leaked trace errors from the previous release ([#7613](https://github.com/google/jax/issues/7613))
### jaxlib 0.1.70 (Aug 9, 2021)[#](#jaxlib-0-1-70-aug-9-2021)
* Breaking changes:
+ Support for Python 3.6 has been dropped, per the
[deprecation policy](https://jax.readthedocs.io/en/latest/deprecation.html).
Please upgrade to a supported Python version.
+ Support for NumPy 1.17 has been dropped, per the
[deprecation policy](https://jax.readthedocs.io/en/latest/deprecation.html).
Please upgrade to a supported NumPy version.
+ The host_callback mechanism now uses one thread per local device for
making the calls to the Python callbacks. Previously there was a single
thread for all devices. This means that the callbacks may now be called
interleaved. The callbacks corresponding to one device will still be
called in sequence.
### jax 0.2.18 (July 21 2021)[#](#jax-0-2-18-july-21-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.17...jax-v0.2.18).
* Breaking changes:
+ Support for Python 3.6 has been dropped, per the
[deprecation policy](https://jax.readthedocs.io/en/latest/deprecation.html).
Please upgrade to a supported Python version.
+ The minimum jaxlib version is now 0.1.69.
+ The `backend` argument to [`jax.dlpack.from_dlpack()`](index.html#jax.dlpack.from_dlpack) has been
removed.
* New features:
+ Added a polar decomposition ([`jax.scipy.linalg.polar()`](index.html#jax.scipy.linalg.polar)).
* Bug fixes:
+ Tightened the checks for lax.argmin and lax.argmax to ensure they are
not used with an invalid `axis` value, or with an empty reduction dimension.
([#7196](https://github.com/google/jax/issues/7196))
### jaxlib 0.1.69 (July 9 2021)[#](#jaxlib-0-1-69-july-9-2021)
* Fixed bugs in the TFRT CPU backend that resulted in incorrect results.
### jax 0.2.17 (July 9 2021)[#](#jax-0-2-17-july-9-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.16...jax-v0.2.17).
* Bug fixes:
+ Default to the older “stream_executor” CPU runtime for jaxlib <= 0.1.68
to work around [#7229](https://github.com/google/jax/issues/7229), which caused wrong outputs on CPU due to a concurrency
problem.
* New features:
+ New SciPy function [`jax.scipy.special.sph_harm()`](index.html#jax.scipy.special.sph_harm).
+ Reverse-mode autodiff functions ([`jax.grad()`](index.html#jax.grad),
[`jax.value_and_grad()`](index.html#jax.value_and_grad), [`jax.vjp()`](index.html#jax.vjp), and
[`jax.linear_transpose()`](index.html#jax.linear_transpose)) support a parameter that indicates which named
axes should be summed over in the backward pass if they were broadcasted
over in the forward pass. This enables use of these APIs in a
non-per-example way inside maps (initially only
[`jax.experimental.maps.xmap()`](index.html#jax.experimental.maps.xmap)) ([#6950](https://github.com/google/jax/issues/6950)).
### jax 0.2.16 (June 23 2021)[#](#jax-0-2-16-june-23-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.15...jax-v0.2.16).
### jax 0.2.15 (June 23 2021)[#](#jax-0-2-15-june-23-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.14...jax-v0.2.15).
* New features:
+ [#7042](https://github.com/google/jax/pull/7042) Turned on TFRT CPU backend
with significant dispatch performance improvements on CPU.
+ The `jax2tf.convert()` supports inequalities and min/max for booleans
([#6956](https://github.com/google/jax/issues/6956)).
+ New SciPy function [`jax.scipy.special.lpmn_values()`](index.html#jax.scipy.special.lpmn_values).
* Breaking changes:
+ Support for NumPy 1.16 has been dropped, per the
[deprecation policy](https://jax.readthedocs.io/en/latest/deprecation.html).
* Bug fixes:
+ Fixed bug that prevented round-tripping from JAX to TF and back:
`jax2tf.call_tf(jax2tf.convert)` ([#6947](https://github.com/google/jax/issues/6947)).
### jaxlib 0.1.68 (June 23 2021)[#](#jaxlib-0-1-68-june-23-2021)
* Bug fixes:
+ Fixed a bug in the TFRT CPU backend that produced NaNs when transferring a
TPU buffer to CPU.
### jax 0.2.14 (June 10 2021)[#](#jax-0-2-14-june-10-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.13...jax-v0.2.14).
* New features:
+ The `jax2tf.convert()` now has support for `pjit` and `sharded_jit`.
+ A new configuration option JAX_TRACEBACK_FILTERING controls how JAX filters
tracebacks.
+ A new traceback filtering mode using `__tracebackhide__` is now enabled by
default in sufficiently recent versions of IPython.
+ The `jax2tf.convert()` supports shape polymorphism even when the
unknown dimensions are used in arithmetic operations, e.g., `jnp.reshape(-1)`
([#6827](https://github.com/google/jax/issues/6827)).
+ The `jax2tf.convert()` generates custom attributes with location information
in TF ops. The code that XLA generates after jax2tf
has the same location information as JAX/XLA.
+ New SciPy function [`jax.scipy.special.lpmn()`](index.html#jax.scipy.special.lpmn).
* Bug fixes:
+ The `jax2tf.convert()` now ensures that it uses the same typing rules
for Python scalars and for choosing 32-bit vs. 64-bit computations
as JAX ([#6883](https://github.com/google/jax/issues/6883)).
+ The `jax2tf.convert()` now scopes the `enable_xla` conversion parameter
properly to apply only during the just-in-time conversion
([#6720](https://github.com/google/jax/issues/6720)).
+ The `jax2tf.convert()` now converts `lax.dot_general` using the
`XlaDot` TensorFlow op, for better fidelity w.r.t. JAX numerical precision
([#6717](https://github.com/google/jax/issues/6717)).
+ The `jax2tf.convert()` now has support for inequality comparisons and
min/max for complex numbers ([#6892](https://github.com/google/jax/issues/6892)).
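To make the shape-polymorphism item above concrete, here is a minimal sketch, assuming a TensorFlow installation alongside JAX; the `(b, 28, 28)` shape spec and the `flatten` function are illustrative, not from the release notes:

```
import jax.numpy as jnp
import tensorflow as tf
from jax.experimental import jax2tf

def flatten(x):
    # The -1 is resolved even though the batch dimension `b` is symbolic.
    return jnp.reshape(x, (x.shape[0], -1))

# `b` names a dimension left unknown until the TF function is called.
flatten_tf = jax2tf.convert(flatten, polymorphic_shapes=["(b, 28, 28)"])
print(flatten_tf(tf.zeros([3, 28, 28])).shape)  # (3, 784)
```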
### jaxlib 0.1.67 (May 17 2021)[#](#jaxlib-0-1-67-may-17-2021)
### jaxlib 0.1.66 (May 11 2021)[#](#jaxlib-0-1-66-may-11-2021)
* New features:
+ CUDA 11.1 wheels are now supported on all CUDA 11 versions 11.1 or higher.
NVIDIA now promises compatibility between CUDA minor releases starting with
CUDA 11.1. This means that JAX can release a single CUDA 11.1 wheel that
is compatible with CUDA 11.2 and 11.3.
There is no longer a separate jaxlib release for CUDA 11.2 (or higher); use
the CUDA 11.1 wheel for those versions (cuda111).
+ Jaxlib now bundles `libdevice.10.bc` in CUDA wheels. There should be no need
to point JAX to a CUDA installation to find this file.
+ Added automatic support for static keyword arguments to the `jit()`
implementation.
+ Added support for pretransformation exception traces.
+ Initial support for pruning unused arguments from `jit()` -transformed
computations.
Pruning is still a work in progress.
+ Improved the string representation of `PyTreeDef` objects.
+ Added support for XLA’s variadic ReduceWindow.
* Bug fixes:
+ Fixed a bug in the remote cloud TPU support when large numbers of arguments
are passed to a computation.
+ Fix a bug that meant that JAX garbage collection was not triggered by
`jit()` transformed functions.
### jax 0.2.13 (May 3 2021)[#](#jax-0-2-13-may-3-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.12...jax-v0.2.13).
* New features:
+ When combined with jaxlib 0.1.66, [`jax.jit()`](index.html#jax.jit) now supports static
keyword arguments. A new `static_argnames` option has been added to specify
keyword arguments as static (a minimal sketch appears at the end of this section).
+ `jax.numpy.nonzero()` has a new optional `size` argument that allows it to
be used within `jit` ([#6501](https://github.com/google/jax/issues/6501)).
+ [`jax.numpy.unique()`](index.html#jax.numpy.unique) now supports the `axis` argument ([#6532](https://github.com/google/jax/issues/6532)).
+ [`jax.experimental.host_callback.call()`](index.html#jax.experimental.host_callback.call) now supports `pjit.pjit` ([#6569](https://github.com/google/jax/issues/6569)).
+ Added [`jax.scipy.linalg.eigh_tridiagonal()`](index.html#jax.scipy.linalg.eigh_tridiagonal) that computes the
eigenvalues of a tridiagonal matrix. Only eigenvalues are supported at
present.
+ The order of the filtered and unfiltered stack traces in exceptions has been
changed. The traceback attached to an exception thrown from JAX-transformed
code is now filtered, with an `UnfilteredStackTrace` exception
containing the original trace as the `__cause__` of the filtered exception.
Filtered stack traces now also work with Python 3.6.
+ If an exception is thrown by code that has been transformed by reverse-mode
automatic differentiation, JAX now attempts to attach as a `__cause__` of
the exception a `JaxStackTraceBeforeTransformation` object that contains the
stack trace that created the original operation in the forward pass.
Requires jaxlib 0.1.66.
* Breaking changes:
+ The following function names have changed. There are still aliases, so this
should not break existing code, but the aliases will eventually be removed
so please change your code.
- `host_id` –> [`process_index()`](index.html#jax.process_index)
- `host_count` –> [`process_count()`](index.html#jax.process_count)
- `host_ids` –> `range(jax.process_count())`
+ Similarly, the argument to [`local_devices()`](index.html#jax.local_devices) has been renamed from
`host_id` to `process_index`.
+ Arguments to [`jax.jit()`](index.html#jax.jit) other than the function are now marked as
keyword-only. This change is to prevent accidental breakage when arguments
are added to `jit`.
* Bug fixes:
+ The `jax2tf.convert()` now works in presence of gradients for functions
with integer inputs ([#6360](https://github.com/google/jax/issues/6360)).
+ Fixed assertion failure in `jax2tf.call_tf()` when used with captured
`tf.Variable` ([#6572](https://github.com/google/jax/issues/6572)).
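A minimal sketch of the `static_argnames` feature above, assuming jax 0.2.13 with jaxlib 0.1.66; the function and the name `n` are illustrative:

```
from functools import partial
import jax
import jax.numpy as jnp

@partial(jax.jit, static_argnames=("n",))
def repeat(x, n):
    # `n` is a compile-time constant, so Python-level use of it is fine.
    return jnp.concatenate([x] * n)

print(repeat(jnp.arange(3), n=2))  # [0 1 2 0 1 2]
```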
### jaxlib 0.1.65 (April 7 2021)[#](#jaxlib-0-1-65-april-7-2021)
### jax 0.2.12 (April 1 2021)[#](#jax-0-2-12-april-1-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.11...v0.2.12).
* New features
+ New profiling APIs: [`jax.profiler.start_trace()`](index.html#jax.profiler.start_trace),
[`jax.profiler.stop_trace()`](index.html#jax.profiler.stop_trace), and [`jax.profiler.trace()`](index.html#jax.profiler.trace) (see the sketch at the end of this section).
+ [`jax.lax.reduce()`](index.html#jax.lax.reduce) is now differentiable.
* Breaking changes:
+ The minimum jaxlib version is now 0.1.64.
+ Some profiler APIs names have been changed. There are still aliases, so this
should not break existing code, but the aliases will eventually be removed
so please change your code.
- `TraceContext` –> [`TraceAnnotation()`](index.html#jax.profiler.TraceAnnotation)
- `StepTraceContext` –> [`StepTraceAnnotation()`](index.html#jax.profiler.StepTraceAnnotation)
- `trace_function` –> [`annotate_function()`](index.html#jax.profiler.annotate_function)
+ Omnistaging can no longer be disabled. See [omnistaging](https://github.com/google/jax/blob/main/docs/design_notes/omnistaging.md)
for more information.
+ Python integers larger than the maximum `int64` value will now lead to an overflow
in all cases, rather than being silently converted to `uint64` in some cases ([#6047](https://github.com/google/jax/issues/6047)).
+ Outside X64 mode, Python integers outside the range representable by `int32` will now lead to an
`OverflowError` rather than having their value silently truncated.
* Bug fixes:
+ `host_callback` now supports empty arrays in arguments and results ([#6262](https://github.com/google/jax/issues/6262)).
+ [`jax.random.randint()`](index.html#jax.random.randint) clips rather than wraps out-of-bounds limits, and can now generate
integers in the full range of the specified dtype ([#5868](https://github.com/google/jax/issues/5868)).
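A minimal sketch of the new profiling API referenced above; the trace directory is an arbitrary path, and the output can be inspected with TensorBoard's profiler plugin:

```
import jax
import jax.numpy as jnp

with jax.profiler.trace("/tmp/jax-trace"):
    y = jnp.dot(jnp.ones((1000, 1000)), jnp.ones((1000, 1000)))
    y.block_until_ready()  # make sure the async work finishes inside the trace
```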
### jax 0.2.11 (March 23 2021)[#](#jax-0-2-11-march-23-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.10...jax-v0.2.11).
* New features:
+ [#6112](https://github.com/google/jax/pull/6112) added context managers:
`jax.enable_checks`, `jax.check_tracer_leaks`, `jax.debug_nans`,
`jax.debug_infs`, `jax.log_compiles` (see the sketch at the end of this section).
+ [#6085](https://github.com/google/jax/pull/6085) added `jnp.delete`
* Bug fixes:
+ [#6136](https://github.com/google/jax/pull/6136) generalized
`jax.flatten_util.ravel_pytree` to handle integer dtypes.
+ [#6129](https://github.com/google/jax/issues/6129) fixed a bug with handling
some constants like `enum.IntEnums`
+ [#6145](https://github.com/google/jax/pull/6145) fixed batching issues with
incomplete beta functions
+ [#6014](https://github.com/google/jax/pull/6014) fixed H2D transfers during
tracing
+ [#6165](https://github.com/google/jax/pull/6165) avoids OverflowErrors when
converting some large Python integers to floats
* Breaking changes:
+ The minimum jaxlib version is now 0.1.62.
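A minimal sketch of one of the context managers added in #6112; the `jnp.log(-1.0)` example is illustrative:

```
import jax
import jax.numpy as jnp

try:
    with jax.debug_nans(True):  # raise as soon as a computation produces a NaN
        jnp.log(-1.0)
except FloatingPointError as err:
    print("caught:", err)
```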
### jaxlib 0.1.64 (March 18 2021)[#](#jaxlib-0-1-64-march-18-2021)
### jaxlib 0.1.63 (March 17 2021)[#](#jaxlib-0-1-63-march-17-2021)
### jax 0.2.10 (March 5 2021)[#](#jax-0-2-10-march-5-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.9...jax-v0.2.10).
* New features:
+ [`jax.scipy.stats.chi2()`](index.html#module-jax.scipy.stats.chi2) is now available as a distribution with logpdf and pdf methods.
+ [`jax.scipy.stats.betabinom()`](index.html#module-jax.scipy.stats.betabinom) is now available as a distribution with logpmf and pmf methods.
+ Added `jax.experimental.jax2tf.call_tf()` to call TensorFlow functions
from JAX ([#5627](https://github.com/google/jax/issues/5627))
and [README](https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md#calling-tensorflow-functions-from-jax)).
+ Extended the batching rule for `lax.pad` to support batching of the padding values.
* Bug fixes:
+ [`jax.numpy.take()`](index.html#jax.numpy.take) properly handles negative indices ([#5768](https://github.com/google/jax/issues/5768))
* Breaking changes:
+ JAX’s promotion rules were adjusted to make promotion more consistent and
invariant to JIT. In particular, binary operations can now result in weakly-typed
values when appropriate. The main user-visible effect of the change is that
some operations result in outputs of different precision than before; for
example the expression `jnp.bfloat16(1) + 0.1 * jnp.arange(10)`
previously returned a `float64` array, and now returns a `bfloat16` array.
JAX’s type promotion behavior is described at [Type promotion semantics](index.html#type-promotion); a runnable version of the example above appears after this section.
+ [`jax.numpy.linspace()`](index.html#jax.numpy.linspace) now computes the floor of integer values, i.e.,
rounding towards -inf rather than 0. This change was made to match NumPy
1.20.0.
+ [`jax.numpy.i0()`](index.html#jax.numpy.i0) no longer accepts complex numbers. Previously the
function computed the absolute value of complex arguments. This change was
made to match the semantics of NumPy 1.20.0.
+ Several [`jax.numpy`](index.html#module-jax.numpy) functions no longer accept tuples or lists in place
of array arguments: [`jax.numpy.pad()`](index.html#jax.numpy.pad), `jax.numpy.ravel()`,
[`jax.numpy.repeat()`](index.html#jax.numpy.repeat), [`jax.numpy.reshape()`](index.html#jax.numpy.reshape).
In general, [`jax.numpy`](index.html#module-jax.numpy) functions should be used with scalars or array arguments.
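The promotion example from the breaking-changes list above, as runnable code (the output assumes the default 32-bit mode):

```
import jax.numpy as jnp

x = jnp.bfloat16(1) + 0.1 * jnp.arange(10)
print(x.dtype)  # bfloat16 under the new weakly-typed promotion rules
```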
### jaxlib 0.1.62 (March 9 2021)[#](#jaxlib-0-1-62-march-9-2021)
* New features:
+ jaxlib wheels are now built to require AVX instructions on x86-64 machines
by default. If you want to use JAX on a machine that doesn’t support AVX,
you can build a jaxlib from source using the `--target_cpu_features` flag
to `build.py`. `--target_cpu_features` also replaces
`--enable_march_native`.
### jaxlib 0.1.61 (February 12 2021)[#](#jaxlib-0-1-61-february-12-2021)
### jaxlib 0.1.60 (February 3 2021)[#](#jaxlib-0-1-60-february-3-2021)
* Bug fixes:
+ Fixed a memory leak when converting CPU DeviceArrays to NumPy arrays. The
memory leak was present in jaxlib releases 0.1.58 and 0.1.59.
+ `bool`, `int8`, and `uint8` are now considered safe to cast to
`bfloat16` NumPy extension type.
### jax 0.2.9 (January 26 2021)[#](#jax-0-2-9-january-26-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.8...jax-v0.2.9).
* New features:
+ Extend the `jax.experimental.loops` module with support for pytrees. Improved
error checking and error messages.
+ Add [`jax.experimental.enable_x64()`](index.html#jax.experimental.enable_x64) and [`jax.experimental.disable_x64()`](index.html#jax.experimental.disable_x64).
These are context managers which allow X64 mode to be temporarily enabled/disabled
within a session (sketched at the end of this section).
* Breaking changes:
+ [`jax.ops.segment_sum()`](index.html#jax.ops.segment_sum) now drops segment IDs that are out of range rather
than wrapping them into the segment ID space. This was done for performance
reasons.
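A minimal sketch of the experimental X64 context manager mentioned above:

```
import jax.numpy as jnp
from jax.experimental import enable_x64

with enable_x64():
    print(jnp.arange(3).dtype)  # int64 inside the context
print(jnp.arange(3).dtype)      # int32 again outside
```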
### jaxlib 0.1.59 (January 15 2021)[#](#jaxlib-0-1-59-january-15-2021)
### jax 0.2.8 (January 12 2021)[#](#jax-0-2-8-january-12-2021)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.7...jax-v0.2.8).
* New features:
+ Add [`jax.closure_convert()`](index.html#jax.closure_convert) for use with higher-order custom
derivative functions. ([#5244](https://github.com/google/jax/issues/5244))
+ Add [`jax.experimental.host_callback.call()`](index.html#jax.experimental.host_callback.call) to call a custom Python
function on the host and return a result to the device computation
([#5243](https://github.com/google/jax/issues/5243)); a sketch appears at the end of this section.
* Bug fixes:
+ `jax.numpy.arccosh` now returns the same branch as `numpy.arccosh` for
complex inputs ([#5156](https://github.com/google/jax/issues/5156))
+ `host_callback.id_tap` now works for `jax.pmap` also. There is an
optional parameter for `id_tap` and `id_print` to request that the
device from which the value is tapped be passed as a keyword argument
to the tap function ([#5182](https://github.com/google/jax/issues/5182)).
* Breaking changes:
+ `jax.numpy.pad` now takes keyword arguments. Positional argument `constant_values`
has been removed. In addition, passing unsupported keyword arguments raises an error.
+ Changes for [`jax.experimental.host_callback.id_tap()`](index.html#jax.experimental.host_callback.id_tap) ([#5243](https://github.com/google/jax/issues/5243)):
- Removed support for `kwargs` for [`jax.experimental.host_callback.id_tap()`](index.html#jax.experimental.host_callback.id_tap).
(This support has been deprecated for a few months.)
- Changed the printing of tuples for [`jax.experimental.host_callback.id_print()`](index.html#jax.experimental.host_callback.id_print)
to use ‘(’ instead of ‘[‘.
- Changed the [`jax.experimental.host_callback.id_print()`](index.html#jax.experimental.host_callback.id_print) in presence of JVP
to print a pair of primal and tangent. Previously, there were two separate
print operations for the primals and the tangent.
- `host_callback.outfeed_receiver` has been removed (it is not necessary,
and was deprecated a few months ago).
* New features:
+ New flag for debugging `inf`, analogous to that for `NaN` ([#5224](https://github.com/google/jax/issues/5224)).
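A minimal sketch of `host_callback.call` as referenced above; `host_fn` is an illustrative name, and `result_shape` tells JAX the shape/dtype of the host result:

```
import jax
import jax.numpy as jnp
import numpy as np
from jax.experimental import host_callback as hcb

def host_fn(x):
    return np.sin(x)  # plain NumPy, executed on the host

@jax.jit
def f(x):
    return hcb.call(host_fn, x,
                    result_shape=jax.ShapeDtypeStruct(x.shape, x.dtype))

print(f(jnp.arange(3.0)))
```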
### jax 0.2.7 (Dec 4 2020)[#](#jax-0-2-7-dec-4-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.6...jax-v0.2.7).
* New features:
+ Add `jax.device_put_replicated`
+ Add multi-host support to `jax.experimental.sharded_jit`
+ Add support for differentiating eigenvalues computed by `jax.numpy.linalg.eig`
+ Add support for building on Windows platforms
+ Add support for general in_axes and out_axes in `jax.pmap`
+ Add complex support for `jax.numpy.linalg.slogdet`
* Bug fixes:
+ Fix higher-than-second order derivatives of `jax.numpy.sinc` at zero
+ Fix some hard-to-hit bugs around symbolic zeros in transpose rules
* Breaking changes:
+ `jax.experimental.optix` has been deleted, in favor of the standalone
`optax` Python package.
+ indexing of JAX arrays with non-tuple sequences now raises a `TypeError`. This type of indexing
has been deprecated in Numpy since v1.16, and in JAX since v0.2.4.
See [#4564](https://github.com/google/jax/issues/4564).
### jax 0.2.6 (Nov 18 2020)[#](#jax-0-2-6-nov-18-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.5...jax-v0.2.6).
* New Features:
+ Add support for shape-polymorphic tracing for the jax.experimental.jax2tf converter.
See [README.md](https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md).
* Breaking change cleanup
+ Raise an error on non-hashable static arguments for jax.jit and
xla_computation. See [cb48f42](https://github.com/google/jax/commit/cb48f42).
+ Improve consistency of type promotion behavior ([#4744](https://github.com/google/jax/issues/4744)):
- Adding a complex Python scalar to a JAX floating point number respects the precision of
the JAX float. For example, `jnp.float32(1) + 1j` now returns `complex64`, where previously
it returned `complex128` (shown in the sketch after this section).
- Results of type promotion with 3 or more terms involving uint64, a signed int, and a third type
are now independent of the order of arguments. For example:
`jnp.result_type(jnp.uint64, jnp.int64, jnp.float16)` and
`jnp.result_type(jnp.float16, jnp.uint64, jnp.int64)` both return `float16`, where previously
the first returned `float64` and the second returned `float16`.
+ The contents of the (undocumented) `jax.lax_linalg` linear algebra module
are now exposed publicly as `jax.lax.linalg`.
+ `jax.random.PRNGKey` now produces the same results in and out of JIT compilation
([#4877](https://github.com/google/jax/issues/4877)).
This required changing the result for a given seed in a few particular cases:
- With `jax_enable_x64=False`, negative seeds passed as Python integers now return a different result
outside JIT mode. For example, `jax.random.PRNGKey(-1)` previously returned
`[4294967295, 4294967295]`, and now returns `[0, 4294967295]`. This matches the behavior in JIT.
- Seeds outside the range representable by `int64` outside JIT now result in an `OverflowError`
rather than a `TypeError`. This matches the behavior in JIT.
To recover the keys returned previously for negative integers with `jax_enable_x64=False`
outside JIT, you can use:
```
key = random.PRNGKey(-1).at[0].set(0xFFFFFFFF)
```
+ DeviceArray now raises `RuntimeError` instead of `ValueError` when trying
to access its value while it has been deleted.
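The complex-scalar promotion change above, as runnable code:

```
import jax.numpy as jnp

z = jnp.float32(1) + 1j
print(z.dtype)  # complex64 (previously complex128)
```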
### jaxlib 0.1.58 (January 12ish 2021)[#](#jaxlib-0-1-58-january-12ish-2021)
* Fixed a bug that meant JAX sometimes return platform-specific types (e.g.,
`np.cint`) instead of standard types (e.g., `np.int32`). (#4903)
* Fixed a crash when constant-folding certain int16 operations. (#4971)
* Added an `is_leaf` predicate to `pytree.flatten()`.
### jaxlib 0.1.57 (November 12 2020)[#](#jaxlib-0-1-57-november-12-2020)
* Fixed manylinux2010 compliance issues in GPU wheels.
* Switched the CPU FFT implementation from Eigen to PocketFFT.
* Fixed a bug where the hash of bfloat16 values was not correctly initialized and could change (#4651).
* Add support for retaining ownership when passing arrays to DLPack (#4636).
* Fixed a bug for batched triangular solves with sizes greater than 128 but not a multiple of 128.
* Fixed a bug when performing concurrent FFTs on multiple GPUs (#3518).
* Fixed a bug in profiler where tools are missing (#4427).
* Dropped support for CUDA 10.0.
### jax 0.2.5 (October 27 2020)[#](#jax-0-2-5-october-27-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.4...jax-v0.2.5).
* Improvements:
+ Ensure that `check_jaxpr` does not perform FLOPS. See [#4650](https://github.com/google/jax/issues/4650).
+ Expanded the set of JAX primitives converted by jax2tf.
See [primitives_with_limited_support.md](https://github.com/google/jax/blob/main/jax/experimental/jax2tf/primitives_with_limited_support.md).
### jax 0.2.4 (October 19 2020)[#](#jax-0-2-4-october-19-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.3...jax-v0.2.4).
* Improvements:
+ Add support for `remat` to jax.experimental.host_callback. See [#4608](https://github.com/google/jax/issues/4608).
* Deprecations
+ Indexing with non-tuple sequences is now deprecated, following a similar deprecation in Numpy.
In a future release, this will result in a TypeError. See [#4564](https://github.com/google/jax/issues/4564).
### jaxlib 0.1.56 (October 14, 2020)[#](#jaxlib-0-1-56-october-14-2020)
### jax 0.2.3 (October 14 2020)[#](#jax-0-2-3-october-14-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.2...jax-v0.2.3).
* The reason for another release so soon is that we need to temporarily roll back a new jit fastpath while we look into a performance degradation.
### jax 0.2.2 (October 13 2020)[#](#jax-0-2-2-october-13-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.1...jax-v0.2.2).
### jax 0.2.1 (October 6 2020)[#](#jax-0-2-1-october-6-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.2.0...jax-v0.2.1).
* Improvements:
+ As a benefit of omnistaging, the host_callback functions are executed (in program
order) even if the result of the [`jax.experimental.host_callback.id_print()`](index.html#jax.experimental.host_callback.id_print)/
[`jax.experimental.host_callback.id_tap()`](index.html#jax.experimental.host_callback.id_tap) is not used in the computation.
### jax (0.2.0) (September 23 2020)[#](#jax-0-2-0-september-23-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.77...jax-v0.2.0).
* Improvements:
+ Omnistaging on by default. See [#3370](https://github.com/google/jax/issues/3370) and
[omnistaging](https://github.com/google/jax/blob/main/docs/design_notes/omnistaging.md)
### jax (0.1.77) (September 15 2020)[#](#jax-0-1-77-september-15-2020)
* Breaking changes:
+ New simplified interface for [`jax.experimental.host_callback.id_tap()`](index.html#jax.experimental.host_callback.id_tap) (#4101)
### jaxlib 0.1.55 (September 8, 2020)[#](#jaxlib-0-1-55-september-8-2020)
* Update XLA:
+ Fix bug in DLPackManagedTensorToBuffer (#4196)
### jax 0.1.76 (September 8, 2020)[#](#jax-0-1-76-september-8-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.75...jax-v0.1.76).
### jax 0.1.75 (July 30, 2020)[#](#jax-0-1-75-july-30-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.74...jax-v0.1.75).
* Bug Fixes:
+ make jnp.abs() work for unsigned inputs (#3914)
* Improvements:
+ “Omnistaging” behavior added behind a flag, disabled by default (#3370)
### jax 0.1.74 (July 29, 2020)[#](#jax-0-1-74-july-29-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.73...jax-v0.1.74).
* New Features:
+ BFGS (#3101)
+ TPU support for half-precision arithmetic (#3878)
* Bug Fixes:
+ Prevent some accidental dtype warnings (#3874)
+ Fix a multi-threading bug in custom derivatives (#3845, #3869)
* Improvements:
+ Faster searchsorted implementation (#3873)
+ Better test coverage for jax.numpy sorting algorithms (#3836)
### jaxlib 0.1.52 (July 22, 2020)[#](#jaxlib-0-1-52-july-22-2020)
* Update XLA.
### jax 0.1.73 (July 22, 2020)[#](#jax-0-1-73-july-22-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.72...jax-v0.1.73).
* The minimum jaxlib version is now 0.1.51.
* New Features:
+ jax.image.resize. (#3703)
+ hfft and ihfft (#3664)
+ jax.numpy.intersect1d (#3726)
+ jax.numpy.lexsort (#3812)
+ `lax.scan` and the `scan` primitive support an `unroll`
parameter for loop unrolling when lowering to XLA
([#3738](https://github.com/google/jax/issues/3738)).
* Bug Fixes:
+ Fix reduction repeated axis error (#3618)
+ Fix shape rule for lax.pad for input dimensions of size 0. (#3608)
+ make psum transpose handle zero cotangents (#3653)
+ Fix shape error when taking JVP of reduce-prod over size 0 axis. (#3729)
+ Support differentiation through jax.lax.all_to_all (#3733)
+ address nan issue in jax.scipy.special.zeta (#3777)
* Improvements:
+ Many improvements to jax2tf
+ Reimplement argmin/argmax using a single pass variadic reduction. (#3611)
+ Enable XLA SPMD partitioning by default. (#3151)
+ Add support for 0d transpose convolution (#3643)
+ Make LU gradient work for low-rank matrices (#3610)
+ support multiple_results and custom JVPs in jet (#3657)
+ Generalize reduce-window padding to support (lo, hi) pairs. (#3728)
+ Implement complex convolutions on CPU and GPU. (#3735)
+ Make jnp.take work for empty slices of empty arrays. (#3751)
+ Relax dimension ordering rules for dot_general. (#3778)
+ Enable buffer donation for GPU. (#3800)
+ Add support for base dilation and window dilation to reduce window op… (#3803)
### jaxlib 0.1.51 (July 2, 2020)[#](#jaxlib-0-1-51-july-2-2020)
* Update XLA.
* Add new runtime support for host_callback.
### jax 0.1.72 (June 28, 2020)[#](#jax-0-1-72-june-28-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.71...jax-v0.1.72).
* Bug fixes:
+ Fix an odeint bug introduced in the previous release, see
[#3587](https://github.com/google/jax/issues/3587).
### jax 0.1.71 (June 25, 2020)[#](#jax-0-1-71-june-25-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.70...jax-v0.1.71).
* The minimum jaxlib version is now 0.1.48.
* Bug fixes:
+ Allow `jax.experimental.ode.odeint` dynamics functions to close over
values with respect to which we’re differentiating
[#3562](https://github.com/google/jax/issues/3562).
### jaxlib 0.1.50 (June 25, 2020)[#](#jaxlib-0-1-50-june-25-2020)
* Add support for CUDA 11.0.
* Drop support for CUDA 9.2 (we only maintain support for the last four CUDA versions.)
* Update XLA.
### jaxlib 0.1.49 (June 19, 2020)[#](#jaxlib-0-1-49-june-19-2020)
* Bug fixes:
+ Fix build issue that could result in slow compiles
([tensorflow/tensorflow](https://github.com/tensorflow/tensorflow/commit/f805153a25b00d12072bd728e91bb1621bfcf1b1))
### jaxlib 0.1.48 (June 12, 2020)[#](#jaxlib-0-1-48-june-12-2020)
* New features:
+ Adds support for fast traceback collection.
+ Adds preliminary support for on-device heap profiling.
+ Implements `np.nextafter` for `bfloat16` types.
+ Complex128 support for FFTs on CPU and GPU.
* Bugfixes:
+ Improved float64 `tanh` accuracy on GPU.
+ float64 scatters on GPU are much faster.
+ Complex matrix multiplication on CPU should be much faster.
+ Stable sorts on CPU should actually be stable now.
+ Concurrency bug fix in CPU backend.
### jax 0.1.70 (June 8, 2020)[#](#jax-0-1-70-june-8-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.69...jax-v0.1.70).
* New features:
+ `lax.switch` introduces indexed conditionals with multiple
branches, together with a generalization of the `cond`
primitive
[#3318](https://github.com/google/jax/issues/3318).
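A minimal sketch of `lax.switch` (the three branches are illustrative):

```
import jax.numpy as jnp
from jax import lax

def apply(i, x):
    branches = [lambda x: x + 1, lambda x: x * 2, lambda x: -x]
    return lax.switch(i, branches, x)

print(apply(1, jnp.float32(3.0)))  # 6.0
```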
### jax 0.1.69 (June 3, 2020)[#](#jax-0-1-69-june-3-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.68...jax-v0.1.69).
### jax 0.1.68 (May 21, 2020)[#](#jax-0-1-68-may-21-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.67...jax-v0.1.68).
* New features:
+ `lax.cond()` supports a single-operand form, taken as the argument
to both branches
[#2993](https://github.com/google/jax/issues/2993).
* Notable changes:
+ The format of the `transforms` keyword for the [`jax.experimental.host_callback.id_tap()`](index.html#jax.experimental.host_callback.id_tap)
primitive has changed [#3132](https://github.com/google/jax/issues/3132).
### jax 0.1.67 (May 12, 2020)[#](#jax-0-1-67-may-12-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.66...jax-v0.1.67).
* New features:
+ Support for reduction over subsets of a pmapped axis using `axis_index_groups`
[#2382](https://github.com/google/jax/issues/2382).
+ Experimental support for printing and calling host-side Python function from
compiled code. See [id_print and id_tap](https://jax.readthedocs.io/en/latest/jax.experimental.host_callback.html)
([#3006](https://github.com/google/jax/issues/3006)).
* Notable changes:
+ The visibility of names exported from [`jax.numpy`](index.html#module-jax.numpy) has been
tightened. This may break code that was making use of names that were
previously exported accidentally.
### jaxlib 0.1.47 (May 8, 2020)[#](#jaxlib-0-1-47-may-8-2020)
* Fixes crash for outfeed.
### jax 0.1.66 (May 5, 2020)[#](#jax-0-1-66-may-5-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.65...jax-v0.1.66).
* New features:
+ Support for `in_axes=None` on `pmap()`
[#2896](https://github.com/google/jax/issues/2896).
### jaxlib 0.1.46 (May 5, 2020)[#](#jaxlib-0-1-46-may-5-2020)
* Fixes crash for linear algebra functions on Mac OS X (#432).
* Fixes an illegal instruction crash caused by using AVX512 instructions when an operating system or hypervisor disabled them (#2906).
### jax 0.1.65 (April 30, 2020)[#](#jax-0-1-65-april-30-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.64...jax-v0.1.65).
* New features:
+ Differentiation of determinants of singular matrices
[#2809](https://github.com/google/jax/issues/2809).
* Bug fixes:
+ Fix `odeint()` differentiation with respect to time of ODEs with
time-dependent dynamics [#2817](https://github.com/google/jax/issues/2817),
also add ODE CI testing.
+ Fix `lax_linalg.qr()` differentiation
[#2867](https://github.com/google/jax/issues/2867).
### jaxlib 0.1.45 (April 21, 2020)[#](#jaxlib-0-1-45-april-21-2020)
* Fixes segfault: [#2755](https://github.com/google/jax/issues/2755)
* Plumb is_stable option on Sort HLO through to Python.
### jax 0.1.64 (April 21, 2020)[#](#jax-0-1-64-april-21-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.63...jax-v0.1.64).
* New features:
+ Add syntactic sugar for functional indexed updates
[#2684](https://github.com/google/jax/issues/2684).
+ Add [`jax.numpy.linalg.multi_dot()`](index.html#jax.numpy.linalg.multi_dot) [#2726](https://github.com/google/jax/issues/2726).
+ Add [`jax.numpy.unique()`](index.html#jax.numpy.unique) [#2760](https://github.com/google/jax/issues/2760).
+ Add [`jax.numpy.rint()`](index.html#jax.numpy.rint) [#2724](https://github.com/google/jax/issues/2724).
+ Add more primitive rules for [`jax.experimental.jet()`](index.html#module-jax.experimental.jet).
* Bug fixes:
+ Fix `logaddexp()` and `logaddexp2()` differentiation at zero [#2107](https://github.com/google/jax/issues/2107).
+ Improve memory usage in reverse-mode autodiff without `jit()`
[#2719](https://github.com/google/jax/issues/2719).
* Better errors:
+ Improves error message for reverse-mode differentiation of `lax.while_loop()`
[#2129](https://github.com/google/jax/issues/2129).
### jaxlib 0.1.44 (April 16, 2020)[#](#jaxlib-0-1-44-april-16-2020)
* Fixes a bug where if multiple GPUs of different models were present, JAX would only compile programs suitable for the first GPU.
* Bugfix for `batch_group_count` convolutions.
* Added precompiled SASS for more GPU versions to avoid startup PTX compilation hang.
### jax 0.1.63 (April 12, 2020)[#](#jax-0-1-63-april-12-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.62...jax-v0.1.63).
* Added `jax.custom_jvp` and `jax.custom_vjp` from [#2026](https://github.com/google/jax/issues/2026), see the [tutorial notebook](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html). Deprecated `jax.custom_transforms` and removed it from the docs (though it still works).
* Add `scipy.sparse.linalg.cg` [#2566](https://github.com/google/jax/issues/2566).
* Changed how Tracers are printed to show more useful information for debugging [#2591](https://github.com/google/jax/issues/2591).
* Made `jax.numpy.isclose` handle `nan` and `inf` correctly [#2501](https://github.com/google/jax/issues/2501).
* Added several new rules for `jax.experimental.jet` [#2537](https://github.com/google/jax/issues/2537).
* Fixed `jax.experimental.stax.BatchNorm` when `scale`/`center` isn’t provided.
* Fix some missing cases of broadcasting in `jax.numpy.einsum` [#2512](https://github.com/google/jax/issues/2512).
* Implement `jax.numpy.cumsum` and `jax.numpy.cumprod` in terms of a parallel prefix scan [#2596](https://github.com/google/jax/issues/2596) and make `reduce_prod` differentiable to arbitrary order [#2597](https://github.com/google/jax/issues/2597).
* Add `batch_group_count` to `conv_general_dilated` [#2635](https://github.com/google/jax/issues/2635).
* Add docstring for `test_util.check_grads` [#2656](https://github.com/google/jax/issues/2656).
* Add `callback_transform` [#2665](https://github.com/google/jax/issues/2665).
* Implement `rollaxis`, `convolve`/`correlate` 1d & 2d, `copysign`,
`trunc`, `roots`, and `quantile`/`percentile` interpolation options.
### jaxlib 0.1.43 (March 31, 2020)[#](#jaxlib-0-1-43-march-31-2020)
* Fixed a performance regression for Resnet-50 on GPU.
### jax 0.1.62 (March 21, 2020)[#](#jax-0-1-62-march-21-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.61...jax-v0.1.62).
* JAX has dropped support for Python 3.5. Please upgrade to Python 3.6 or newer.
* Removed the internal function `lax._safe_mul`, which implemented the convention `0. * nan == 0.`. This change means some programs when differentiated will produce nans when they previously produced correct values, though it ensures nans rather than silently incorrect results are produced for other programs. See #2447 and #1052 for details.
* Added an `all_gather` parallel convenience function.
* More type annotations in core code.
### jaxlib 0.1.42 (March 19, 2020)[#](#jaxlib-0-1-42-march-19-2020)
* jaxlib 0.1.41 broke cloud TPU support due to an API incompatibility. This release fixes it again.
* JAX has dropped support for Python 3.5. Please upgrade to Python 3.6 or newer.
### jax 0.1.61 (March 17, 2020)[#](#jax-0-1-61-march-17-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.60...jax-v0.1.61).
* Fixes Python 3.5 support. This will be the last JAX or jaxlib release that supports Python 3.5.
### jax 0.1.60 (March 17, 2020)[#](#jax-0-1-60-march-17-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.59...jax-v0.1.60).
* New features:
+ [`jax.pmap()`](index.html#jax.pmap) has a `static_broadcasted_argnums` argument which allows
the user to specify arguments that should be treated as compile-time
constants and should be broadcasted to all devices. It works analogously to
`static_argnums` in [`jax.jit()`](index.html#jax.jit).
+ Improved error messages for when tracers are mistakenly saved in global state.
+ Added [`jax.nn.one_hot()`](index.html#jax.nn.one_hot) utility function.
+ Added [`jax.experimental.jet`](index.html#module-jax.experimental.jet) for exponentially faster
higher-order automatic differentiation.
+ Added more correctness checking to arguments of [`jax.lax.broadcast_in_dim()`](index.html#jax.lax.broadcast_in_dim).
* The minimum jaxlib version is now 0.1.41.
### jaxlib 0.1.40 (March 4, 2020)[#](#jaxlib-0-1-40-march-4-2020)
* Adds experimental support in Jaxlib for TensorFlow profiler, which allows tracing of CPU and GPU computations from TensorBoard.
* Includes prototype support for multihost GPU computations that communicate via NCCL.
* Improves performance of NCCL collectives on GPU.
* Adds TopK, CustomCallWithoutLayout, CustomCallWithLayout, IGammaGradA and RandomGamma implementations.
* Supports device assignments known at XLA compilation time.
### jax 0.1.59 (February 11, 2020)[#](#jax-0-1-59-february-11-2020)
* [GitHub commits](https://github.com/google/jax/compare/jax-v0.1.58...jax-v0.1.59).
* Breaking changes
+ The minimum jaxlib version is now 0.1.38.
+ Simplified `Jaxpr` by removing the `Jaxpr.freevars` and
`Jaxpr.bound_subjaxprs`. The call primitives (`xla_call`, `xla_pmap`,
`sharded_call`, and `remat_call`) get a new parameter `call_jaxpr` with a
fully-closed (no `constvars`) jaxpr. Also, added a new field `call_primitive`
to primitives.
* New features:
+ Reverse-mode automatic differentiation (e.g. `grad`) of `lax.cond`, making it
now differentiable in both modes ([#2091](https://github.com/google/jax/issues/2091))
+ JAX now supports DLPack, which allows sharing CPU and GPU arrays in a
zero-copy way with other libraries, such as PyTorch (a sketch appears at the end of this section).
+ JAX GPU DeviceArrays now support `__cuda_array_interface__`, which is another
zero-copy protocol for sharing GPU arrays with other libraries such as CuPy
and Numba.
+ JAX CPU device buffers now implement the Python buffer protocol, which allows
zero-copy buffer sharing between JAX and NumPy.
+ Added JAX_SKIP_SLOW_TESTS environment variable to skip tests known as slow.
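A minimal sketch of the zero-copy DLPack interchange mentioned above, assuming PyTorch is installed; the `jax.dlpack` module names follow later JAX releases and are an assumption for this era:

```
import jax
import jax.numpy as jnp
import torch.utils.dlpack

x = jnp.arange(3.0)
capsule = jax.dlpack.to_dlpack(x)            # assumed API; consumes the JAX buffer
t = torch.utils.dlpack.from_dlpack(capsule)  # PyTorch tensor sharing the memory
print(t)
```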
### jaxlib 0.1.39 (February 11, 2020)[#](#jaxlib-0-1-39-february-11-2020)
* Updates XLA.
### jaxlib 0.1.38 (January 29, 2020)[#](#jaxlib-0-1-38-january-29-2020)
* CUDA 9.0 is no longer supported.
* CUDA 10.2 wheels are now built by default.
### jax 0.1.58 (January 28, 2020)[#](#jax-0-1-58-january-28-2020)
* [GitHub commits](https://github.com/google/jax/compare/46014da21...jax-v0.1.58).
* Breaking changes
+ JAX has dropped Python 2 support, because Python 2 reached its end of life on
January 1, 2020. Please update to Python 3.5 or newer.
* New features
+ Forward-mode automatic differentiation (`jvp`) of while loop
([#1980](https://github.com/google/jax/issues/1980))
+ New NumPy and SciPy functions:
- [`jax.numpy.fft.fft2()`](index.html#jax.numpy.fft.fft2)
- [`jax.numpy.fft.ifft2()`](index.html#jax.numpy.fft.ifft2)
- [`jax.numpy.fft.rfft()`](index.html#jax.numpy.fft.rfft)
- [`jax.numpy.fft.irfft()`](index.html#jax.numpy.fft.irfft)
- [`jax.numpy.fft.rfft2()`](index.html#jax.numpy.fft.rfft2)
- [`jax.numpy.fft.irfft2()`](index.html#jax.numpy.fft.irfft2)
- [`jax.numpy.fft.rfftn()`](index.html#jax.numpy.fft.rfftn)
- [`jax.numpy.fft.irfftn()`](index.html#jax.numpy.fft.irfftn)
- [`jax.numpy.fft.fftfreq()`](index.html#jax.numpy.fft.fftfreq)
- [`jax.numpy.fft.rfftfreq()`](index.html#jax.numpy.fft.rfftfreq)
- [`jax.numpy.linalg.matrix_rank()`](index.html#jax.numpy.linalg.matrix_rank)
- [`jax.numpy.linalg.matrix_power()`](index.html#jax.numpy.linalg.matrix_power)
- [`jax.scipy.special.betainc()`](index.html#jax.scipy.special.betainc)
+ Batched Cholesky decomposition on GPU now uses a more efficient batched
kernel.
#### Notable bug fixes[#](#notable-bug-fixes)
* With the Python 3 upgrade, JAX no longer depends on `fastcache`, which should help with installation.
JAX Glossary of Terms[#](#jax-glossary-of-terms)
---
CPU[#](#term-CPU)Short for *Central Processing Unit*, CPUs are the standard computational architecture available in most computers. JAX can run computations on CPUs, but often can achieve much better performance on [GPU](#term-GPU) and [TPU](#term-TPU).
Device[#](#term-Device)The generic name used to refer to the [CPU](#term-CPU), [GPU](#term-GPU), or [TPU](#term-TPU) used by JAX to perform computations.
DeviceArray[#](#term-DeviceArray)JAX’s analog of the [`numpy.ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray). See `jaxlib.xla_extension.DeviceArray`.
forward-mode autodiff[#](#term-forward-mode-autodiff)See [JVP](#term-JVP)
functional programming[#](#term-functional-programming)A programming paradigm in which programs are defined by applying and composing
[pure functions](#term-pure-function). JAX is designed for use with functional programs.
GPU[#](#term-GPU)Short for *Graphical Processing Unit*, GPUs were originally specialized for operations related to rendering of images on screen, but now are much more general-purpose. JAX is able to target GPUs for fast operations on arrays (see also [CPU](#term-CPU) and [TPU](#term-TPU)).
jaxpr[#](#term-jaxpr)Short for *JAX Expression*, a jaxpr is an intermediate representation of a computation that is generated by JAX, and is forwarded to [XLA](#term-XLA) for compilation and execution.
See [Understanding Jaxprs](index.html#understanding-jaxprs) for more information.
JIT[#](#term-JIT)Short for *Just In Time* compilation, JIT in JAX generally refers to the compilation of array operations to [XLA](#term-XLA), most often accomplished using [`jax.jit()`](index.html#jax.jit).
JVP[#](#term-JVP)Short for *Jacobian Vector Product*, also sometimes known as *forward-mode* automatic differentiation. For more details, see [Jacobian-Vector products (JVPs, aka forward-mode autodiff)](index.html#jacobian-vector-product). In JAX, JVP is a [transformation](#term-transformation) that is implemented via [`jax.jvp()`](index.html#jax.jvp). See also [VJP](#term-VJP).
pure function[#](#term-pure-function)A pure function is a function whose outputs are based only on its inputs, and which has no side-effects. JAX’s [transformation](#term-transformation) model is designed to work with pure functions.
See also [functional programming](#term-functional-programming).
reverse-mode autodiff[#](#term-reverse-mode-autodiff)See [VJP](#term-VJP).
SPMD[#](#term-SPMD)Short for *Single-Program Multiple-Data*, it refers to a parallel computation technique in which the same computation (e.g., the forward pass of a neural net) is run on different input data
(e.g., different inputs in a batch) in parallel on different devices (e.g., several TPUs).
[`jax.pmap()`](index.html#jax.pmap) is a JAX [transformation](#term-transformation) that implements SPMD parallelism.
static[#](#term-static)In a [JIT](#term-JIT) compilation, a value that is not traced (see [Tracer](#term-Tracer)). Also sometimes refers to compile-time computations on static values.
TPU[#](#term-TPU)Short for *Tensor Processing Unit*, TPUs are chips specifically engineered for fast operations on N-dimensional tensors used in deep learning applications. JAX is able to target TPUs for fast operations on arrays (see also [CPU](#term-CPU) and [GPU](#term-GPU)).
Tracer[#](#term-Tracer)An object used as a stand-in for a JAX [DeviceArray](#term-DeviceArray) in order to determine the sequence of operations performed by a Python function. Internally, JAX implements this via the `jax.core.Tracer` class.
transformation[#](#term-transformation)A higher-order function: that is, a function that takes a function as input and outputs a transformed function. Examples in JAX include [`jax.jit()`](index.html#jax.jit), [`jax.vmap()`](index.html#jax.vmap), and
[`jax.grad()`](index.html#jax.grad). A minimal example follows this glossary.
VJP[#](#term-VJP)Short for *Vector Jacobian Product*, also sometimes known as *reverse-mode* automatic differentiation. For more details, see [Vector-Jacobian products (VJPs, aka reverse-mode autodiff)](index.html#vector-jacobian-product). In JAX, VJP is a [transformation](#term-transformation) that is implemented via [`jax.vjp()`](index.html#jax.vjp). See also [JVP](#term-JVP).
XLA[#](#term-XLA)Short for *Accelerated Linear Algebra*, XLA is a domain-specific compiler for linear algebra operations that is the primary backend for [JIT](#term-JIT)-compiled JAX code.
See <https://www.tensorflow.org/xla/>.
weak type[#](#term-weak-type)A JAX data type that has the same type promotion semantics as Python scalars;
see [Weakly-typed values in JAX](index.html#weak-types).
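To make the transformation, JVP/VJP, and JIT entries above concrete, a minimal sketch composing three transformations:

```
import jax
import jax.numpy as jnp

f = lambda x: jnp.sum(x ** 2)
df = jax.jit(jax.vmap(jax.grad(f)))  # grad (reverse mode), batched, compiled
print(df(jnp.ones((4, 3))))          # four gradients, each of shape (3,)
```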
|
nuxt-set-cl | npm | JavaScript | > Vue.js Meta Framework to create complex, fast & universal web application *quickly*.
Links
---
* 📘 Documentation: <https://nuxtjs.org>
* 🎬 Video: [1 minute demo](https://www.youtube.com/watch?v=kmf-p-pTi40)
* 🐦 Twitter: [@nuxt_js](https://twitter.com/nuxt_js)
* 👥 [Nuxt.js Community](https://github.com/nuxt-community)
* 📦 [Nuxt.js Modules](https://github.com/nuxt-community/modules)
* 👉 [Play with Nuxt.js online](https://glitch.com/edit/#!/nuxt-hello-world)
Features
---
* Automatic transpilation and bundling (with webpack and babel)
* Hot code reloading
* Server-side rendering OR Single Page App OR Static Generated, you choose 🔥
* Static file serving. `./static/` is mapped to `/`
* Configurable with a `nuxt.config.js` file
* Custom layouts with the `layouts/` directory
* Middleware
* Code splitting for every `pages/`
Learn more at [nuxtjs.org](https://nuxtjs.org).
Sponsors
---
Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/nuxtjs#sponsor)]
Backers
---
Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/nuxtjs#backer)]
Getting started
---
```
$ npm install nuxt --save
```
Add a script to your package.json like this:
```
{ "scripts": { "start": "nuxt" }}
```
After that, the file-system is the main API. Every .vue file becomes a route that gets automatically processed and rendered.
Populate `./pages/index.vue` inside your project:
```
<template>
  <h1>Hello {{ name }}!</h1>
</template>

<script>
export default {
  data: () => {
    return { name: 'world' }
  }
}
</script>
```
And then run:
```
npm start
```
Go to <http://localhost:3000>

Templates
---
👉 We recommend starting directly with our CLI [create-nuxt-app](https://github.com/nuxt-community/create-nuxt-app) for the latest updates.
Or you can start by using one of our starter templates:
* [starter](https://github.com/nuxt-community/starter-template): Basic Nuxt.js project template
* [express](https://github.com/nuxt-community/express-template): Nuxt.js + Express
* [koa](https://github.com/nuxt-community/koa-template): Nuxt.js + Koa
* [adonuxt](https://github.com/nuxt-community/adonuxt-template): Nuxt.js + AdonisJS
* [micro](https://github.com/nuxt-community/micro-template): Nuxt.js + Micro
* [nuxtent](https://github.com/nuxt-community/nuxtent-template): Nuxt.js + Nuxtent module for content heavy sites
Using nuxt.js programmatically
---
```
const { Nuxt, Builder } = require('nuxt')

// Import and set nuxt.js options
let config = require('./nuxt.config.js')
config.dev = (process.env.NODE_ENV !== 'production')

let nuxt = new Nuxt(config)

// Start build process (only in development)
if (config.dev) {
  new Builder(nuxt).build()
}

// You can use nuxt.render(req, res) or nuxt.renderRoute(route, context)
```
Learn more: <https://nuxtjs.org/api/nuxt>

Using nuxt.js as a middleware
---
You might want to use your own server with your configurations, your API and everything awesome you've created. That's why you can use nuxt.js as a middleware. It's recommended to use it at the end of your middleware chain since it will handle the rendering of your web application and won't call next().
```
app.use(nuxt.render)
```
Learn more: <https://nuxtjs.org/api/nuxt-render>

Render a specific route
---
This is mostly used for `nuxt generate` and test purposes but you might find another utility!
```
nuxt.renderRoute('/about', context)
  .then(function ({ html, error }) {
    // You can check `error` to know if your app displayed the error page for this route
    // Useful to set the correct status code if an error occurred:
    if (error) {
      return res.status(error.statusCode || 500).send(html)
    }
    res.send(html)
  })
  .catch(function (error) {
    // An error occurred while rendering the route
  })
```
Learn more: <https://nuxtjs.org/api/nuxt-render-route>

Examples
---
Please take a look at <https://nuxtjs.org/examples> or directly in <https://github.com/nuxt/nuxt.js/tree/dev/examples>.
Production deployment
---
To deploy, instead of running nuxt, you probably want to build ahead of time. Therefore, building and starting are separate commands:
```
nuxt build
nuxt start
```
For example, to deploy with [`now`](https://zeit.co/now) a `package.json` like follows is recommended:
```
{ "name": "my-app", "dependencies": { "nuxt": "latest" }, "scripts": { "dev": "nuxt", "build": "nuxt build", "start": "nuxt start" }}
```
Then run `now` and enjoy!
Note: we recommend putting `.nuxt` in `.npmignore` or `.gitignore`.
Core team
---
| [<NAME>](https://github.com/Atinux) | [<NAME>](https://github.com/alexchopin) | [<NAME>](https://github.com/pi0) | [<NAME>](https://github.com/clarkdo) |
| --- | --- | --- | --- |
| | | | |
Contributors
---
Thank you to all our [contributors](https://github.com/nuxt/nuxt.js/graphs/contributors)!
Contributing
---
Please see our [CONTRIBUTING.md](https://github.com/nuxt/nuxt.js/blob/HEAD/CONTRIBUTING.md)
Roadmap
---
<https://trello.com/b/lgy93IOl/nuxtjs-10>
Readme
---
### Keywords
* nuxt
* nuxt.js
* nuxtjs
* vue
* vue.js
* vuejs
* vue universal
* vue ssr
* vue isomorphic
* vue versatile |
sp-beefy | rust | Rust | Crate sp_beefy
===
Primitives for BEEFY protocol.
The crate contains shared data types used by BEEFY protocol and documentation (in a form of code) for building a BEEFY light client.
BEEFY is a gadget that runs alongside another finality gadget (for instance GRANDPA).
For simplicity (and the initially intended use case) the documentation says GRANDPA in places where a more abstract “Finality Gadget” term could be used, but there is no reason why BEEFY wouldn’t run with some other finality scheme.
BEEFY validator set is supposed to be tracking the Finality Gadget validator set, but note that it will use a different set of keys. For Polkadot use case we plan to use `secp256k1` for BEEFY,
while GRANDPA uses `ed25519`.
Modules
---
* `crypto`: BEEFY cryptographic types.
* `known_payloads`: Registry of all known `BeefyPayloadId`.
* `mmr`: BEEFY + MMR utilities.
* `witness`: Primitives for light, 2-phase interactive verification protocol.

Structs
---
* `Commitment`: A commitment signed by GRANDPA validators as part of BEEFY protocol.
* `EquivocationProof`: Proof of voter misbehavior on a given set id. Misbehavior/equivocation in BEEFY happens when a voter votes on the same round/block for different payloads. Proving is achieved by collecting the signed commitments of conflicting votes.
* `KeyringIter`: An iterator over the variants of Self.
* `OpaqueKeyOwnershipProof`: An opaque type used to represent the key ownership proof at the runtime API boundary. The inner value is an encoded representation of the actual key ownership proof which will be parameterized when defining the runtime. At the runtime API boundary this type is unknown and as such we keep this opaque representation; implementors of the runtime API will have to make sure that all usages of `OpaqueKeyOwnershipProof` refer to the same type.
* `Payload`: A BEEFY payload type allowing for future extensibility of adding additional kinds of payloads.
* `SignedCommitment`: A commitment with matching GRANDPA validators’ signatures.
* `ValidatorSet`: A set of BEEFY authorities, a.k.a. validators.
* `VoteMessage`: BEEFY vote message.

Enums
---
* `ConsensusLog`: A consensus log item for BEEFY.
* `Keyring`: Set of test accounts using `crate::crypto` types.
* `VersionedFinalityProof`: A SignedCommitment with a version number.

Constants
---
* `BEEFY_ENGINE_ID`: The `ConsensusEngineId` of BEEFY.
* `GENESIS_AUTHORITY_SET_ID`: Authority set id starts with zero at BEEFY pallet genesis.
* `KEY_TYPE`: Key type for BEEFY module.

Traits
---
* `BeefyApi`: API necessary for BEEFY voters.
* `BeefyAuthorityId`: Trait representing BEEFY authority id, including custom signature verification.
* `OnNewValidatorSet`: New BEEFY validator set notification hook.
* `PayloadProvider`: Trait for custom BEEFY payload providers.

Functions
---
* `check_commitment_signature`: Check a commitment signature by encoding the commitment and verifying the provided signature using the expected authority id.
* `check_equivocation_proof`: Verifies the equivocation proof by making sure that both votes target different blocks and that its signatures are valid.
* `generate_equivocation_proof`: Create a new `EquivocationProof` based on given arguments.

Type Definitions
---
* `AuthorityIndex`: The index of an authority.
* `BeefyPayloadId`: Id of different payloads in the `crate::Commitment` data.
* `MmrRootHash`: The type used to represent an MMR root hash.
* `ValidatorSetId`: A typedef for validator set id.
Module sp_beefy::crypto
===
BEEFY cryptographic types
This module basically introduces three crypto types:
* `crypto::Pair`
* `crypto::Public`
* `crypto::Signature`
Your code should use the above types as concrete types for all crypto related functionality.
The current underlying crypto scheme used is ECDSA. This can be changed without affecting code restricted against the above listed crypto types.
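For orientation, a minimal sketch of how these concrete types are typically used, assuming the `sp_core::Pair` trait is in scope for `from_string`/`sign`/`verify` and the crate is built with full crypto support; the `//Alice` dev seed is purely illustrative:
```
use sp_beefy::crypto::{Pair, Public, Signature};
use sp_core::Pair as PairT; // provides `from_string`, `sign`, `verify`

fn sign_and_verify() {
    // Derive a key pair from a development seed URI.
    let pair: Pair = PairT::from_string("//Alice", None).expect("valid dev seed");
    let public: Public = pair.public();
    let message = b"finalized payload bytes";
    // Sign with the secret key; verify against the public key.
    let signature: Signature = pair.sign(message);
    assert!(<Pair as PairT>::verify(&signature, message, &public));
}
```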
Structs
---
* `Pair`: A generic `AppPublic` wrapper type over $pair crypto; this has no specific App.
* `Public`: A generic `AppPublic` wrapper type over $public crypto; this has no specific App.
* `Signature`: A generic `AppPublic` wrapper type over $public crypto; this has no specific App.
Type Definitions
---
* `AuthorityId`: Identity of a BEEFY authority using ECDSA as its crypto.
* `AuthoritySignature`: Signature for a BEEFY authority using ECDSA as its crypto.
Module sp_beefy::known_payloads
===
Registry of all known `BeefyPayloadId`.
Constants
---
* `MMR_ROOT_ID`: A `Payload` identifier for the Merkle Mountain Range root hash.
Type Definition sp_beefy::BeefyPayloadId
===
```
pub type BeefyPayloadId = [u8; 2];
```
Id of different payloads in the `crate::Commitment` data.
Module sp_beefy::mmr
===
BEEFY + MMR utilities.
While BEEFY can be used completely independently as an additional consensus gadget,
it is designed around a main use case of bridging standalone networks together.
For that use case it’s common to use some aggregated data structure (like an MMR) in conjunction with BEEFY, to be able to efficiently prove any past blockchain data.
This module contains primitives used by Polkadot implementation of the BEEFY+MMR bridge,
but we imagine they will be useful for other chains that either want to bridge with Polkadot or are completely standalone, but heavily inspired by Polkadot.
Structs
---
* `BeefyAuthoritySet`: Details of a BEEFY authority set.
* `MmrLeaf`: A standard leaf that gets added every block to the MMR constructed by Substrate’s `pallet_mmr`.
* `MmrLeafVersion`: An MMR leaf versioning scheme.
* `MmrRootProvider`: A `crate::Payload` provider where the payload is the Merkle Mountain Range root hash.
Traits
---
* `BeefyDataProvider`: A provider for extra data that gets added to the MMR leaf.
Functions
---
* `find_mmr_root_digest`: Extract the MMR root hash from a digest in the given header, if it exists.
Type Definitions
---
* `BeefyNextAuthoritySet`: Details of the next BEEFY authority set.
Module sp_beefy::witness
===
Primitives for light, 2-phase interactive verification protocol.
Instead of submitting the full list of signatures, it’s possible to first submit a witness form of `SignedCommitment`.
This can later be verified by the client requesting only some (out of all) signatures for verification. This allows lowering the data and computation cost of verifying the signed commitment.
Structs
---
* `SignedCommitmentWitness`: A light form of `SignedCommitment`.
Struct sp_beefy::Commitment
===
```
pub struct Commitment<TBlockNumber> {
pub payload: Payload,
pub block_number: TBlockNumber,
pub validator_set_id: ValidatorSetId,
}
```
A commitment signed by GRANDPA validators as part of BEEFY protocol.
The commitment contains a payload extracted from the finalized block at height `block_number`.
GRANDPA validators collect signatures on commitments and a stream of such signed commitments
(see `SignedCommitment`) forms the BEEFY protocol.
Fields
---
`payload: Payload`
A collection of payloads to be signed, see `Payload` for details.
One of the payloads should be some form of cumulative representation of the chain (think MMR root hash). Additionally one of the payloads should also contain some details that allow the light client to verify next validator set. The protocol does not enforce any particular format of this data, nor how often it should be present in commitments, however the light client has to be provided with full validator set whenever it performs the transition (i.e. importing first block with validator_set_id incremented).
`block_number: TBlockNumber`
Finalized block number this commitment is for.
GRANDPA validators agree on a block they create a commitment for and start collecting signatures. This process is called a round.
There might be multiple rounds in progress (depending on the block choice rule), however since the payload is supposed to be cumulative, it is not required to import all commitments.
BEEFY light client is expected to import at least one commitment per epoch,
but is free to import as many as it requires.
`validator_set_id: ValidatorSetId`
BEEFY validator set supposed to sign this commitment.
The validator set changes once per epoch. The light client must be provided with details about the validator set whenever it imports the first commitment with a new
`validator_set_id`. Validator set data MUST be verifiable, for instance using payload information.
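As a sketch (not taken from the docs above), assembling and encoding a commitment with the well-known MMR root payload might look as follows; the `u32` block number, the zeroed root hash, and the literal values are stand-ins:
```
use codec::Encode;
use sp_beefy::{known_payloads::MMR_ROOT_ID, Commitment, Payload};

fn build_commitment() -> Vec<u8> {
    // Stand-in MMR root; a real node takes this from the finalized block.
    let mmr_root = [0u8; 32];
    let payload = Payload::from_single_entry(MMR_ROOT_ID, mmr_root.encode());
    let commitment = Commitment::<u32> {
        payload,
        block_number: 5,
        validator_set_id: 0,
    };
    // Validators sign the SCALE encoding of the commitment.
    commitment.encode()
}
```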
Trait Implementations
---
### impl<TBlockNumber: Clone> Clone for Commitment<TBlockNumber>
### impl<TBlockNumber: Debug> Debug for Commitment<TBlockNumber>
### impl<TBlockNumber: Decode> Decode for Commitment<TBlockNumber>
### impl<TBlockNumber: Encode> Encode for Commitment<TBlockNumber>
### impl<TBlockNumber: Ord> Ord for Commitment<TBlockNumber>
### impl<TBlockNumber: PartialEq> PartialEq<Commitment<TBlockNumber>> for Commitment<TBlockNumber>
### impl<TBlockNumber: Ord> PartialOrd<Commitment<TBlockNumber>> for Commitment<TBlockNumber>
### impl<TBlockNumber: TypeInfo + 'static> TypeInfo for Commitment<TBlockNumber>
### impl<TBlockNumber: Encode> EncodeLike<Commitment<TBlockNumber>> for Commitment<TBlockNumber>
### impl<TBlockNumber: Eq> Eq for Commitment<TBlockNumber>
### impl<TBlockNumber> StructuralEq for Commitment<TBlockNumber>
### impl<TBlockNumber> StructuralPartialEq for Commitment<TBlockNumber>
The method bodies (`clone`, `fmt`, `decode`, `encode_to`, `cmp`, `partial_cmp`, `eq`, and friends) are the standard derived/codec implementations with no BEEFY-specific behavior.
Auto Trait Implementations
---
### impl<TBlockNumber> RefUnwindSafe for Commitment<TBlockNumber>where
TBlockNumber: RefUnwindSafe,
### impl<TBlockNumber> Send for Commitment<TBlockNumber>where
TBlockNumber: Send,
### impl<TBlockNumber> Sync for Commitment<TBlockNumber>where
TBlockNumber: Sync,
### impl<TBlockNumber> Unpin for Commitment<TBlockNumber>where
TBlockNumber: Unpin,
### impl<TBlockNumber> UnwindSafe for Commitment<TBlockNumber>where
TBlockNumber: UnwindSafe,
Blanket Implementations
---
The usual generic blanket implementations apply to `Commitment<TBlockNumber>` as to any type satisfying their bounds, including: `Any`, `Borrow`/`BorrowMut`, `CheckedConversion`, `DecodeAll`/`DecodeLimit`, `Downcast`/`DowncastSync`, `DynClone`, `Equivalent`, `From`/`Into`, `FullLeaf` (encode the leaf either in its full or compact form), `Instrument`/`WithSubscriber`, `IsWrappedBy`, `KeyedVec`, `Same`, `SaturatedConversion`/`UniqueSaturatedInto`, `ToOwned`, `TryFrom`/`TryInto`, `UncheckedInto`, `VZip`, the reference and smart-pointer `EncodeLike` impls, `FullCodec`/`FullEncode`, `MaybeDebug`, `MaybeRefUnwindSafe`, `Member`, and `StaticTypeInfo`.
{"Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Struct sp_beefy::EquivocationProof
===
```
pub struct EquivocationProof<Number, Id, Signature> {
pub first: VoteMessage<Number, Id, Signature>,
pub second: VoteMessage<Number, Id, Signature>,
}
```
Proof of voter misbehavior on a given set id. Misbehavior/equivocation in BEEFY happens when a voter votes on the same round/block for different payloads.
Proving is achieved by collecting the signed commitments of conflicting votes.
Fields
---
`first: VoteMessage<Number, Id, Signature>`
The first vote in the equivocation.
`second: VoteMessage<Number, Id, Signature>`
The second vote in the equivocation.
Implementations
---
### impl<Number, Id, Signature> EquivocationProof<Number, Id, Signature>
#### pub fn offender_id(&self) -> &Id
Returns the authority id of the equivocator.
#### pub fn round_number(&self) -> &Number
Returns the round number at which the equivocation occurred.
#### pub fn set_id(&self) -> ValidatorSetId
Returns the set id at which the equivocation occurred.
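A small sketch of using these accessors to extract report metadata without destructuring the two votes; the function name is illustrative:
```
use sp_beefy::{EquivocationProof, ValidatorSetId};

// Gather the offender, round, and set id from a proof in one place.
fn report_metadata<Number, Id, Signature>(
    proof: &EquivocationProof<Number, Id, Signature>,
) -> (&Id, &Number, ValidatorSetId) {
    (proof.offender_id(), proof.round_number(), proof.set_id())
}
```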
Trait Implementations
---
### impl<Number: Clone, Id: Clone, Signature: Clone> Clone for EquivocationProof<Number, Id, Signature>
### impl<Number, Id, Signature> Debug for EquivocationProof<Number, Id, Signature>
### impl<Number, Id, Signature> Decode for EquivocationProof<Number, Id, Signature> where VoteMessage<Number, Id, Signature>: Decode
### impl<Number, Id, Signature> Encode for EquivocationProof<Number, Id, Signature> where VoteMessage<Number, Id, Signature>: Encode
### impl<Number, Id, Signature> PartialEq<EquivocationProof<Number, Id, Signature>> for EquivocationProof<Number, Id, Signature>
### impl<Number, Id, Signature> TypeInfo for EquivocationProof<Number, Id, Signature> where VoteMessage<Number, Id, Signature>: TypeInfo + 'static, Number: TypeInfo + 'static, Id: TypeInfo + 'static, Signature: TypeInfo + 'static
### impl<Number, Id, Signature> EncodeLike<EquivocationProof<Number, Id, Signature>> for EquivocationProof<Number, Id, Signature> where VoteMessage<Number, Id, Signature>: Encode
### impl<Number, Id, Signature> StructuralPartialEq for EquivocationProof<Number, Id, Signature>
The method bodies are the standard derived/codec implementations.
Auto Trait Implementations
---
### impl<Number, Id, Signature> RefUnwindSafe for EquivocationProof<Number, Id, Signature>where
Id: RefUnwindSafe,
Number: RefUnwindSafe,
Signature: RefUnwindSafe,
### impl<Number, Id, Signature> Send for EquivocationProof<Number, Id, Signature>where
Id: Send,
Number: Send,
Signature: Send,
### impl<Number, Id, Signature> Sync for EquivocationProof<Number, Id, Signature>where
Id: Sync,
Number: Sync,
Signature: Sync,
### impl<Number, Id, Signature> Unpin for EquivocationProof<Number, Id, Signature>where
Id: Unpin,
Number: Unpin,
Signature: Unpin,
### impl<Number, Id, Signature> UnwindSafe for EquivocationProof<Number, Id, Signature>where
Id: UnwindSafe,
Number: UnwindSafe,
Signature: UnwindSafe,
Blanket Implementations
---
The same generic blanket implementations listed for `Commitment` above (`Any`, `Borrow`/`BorrowMut`, `CheckedConversion`, `DecodeAll`/`DecodeLimit`, `Downcast`/`DowncastSync`, `DynClone`, `From`/`Into`, `FullLeaf`, `Instrument`/`WithSubscriber`, `IsWrappedBy`, `KeyedVec`, `Same`, `SaturatedConversion`/`UniqueSaturatedInto`, `ToOwned`, `TryFrom`/`TryInto`, `UncheckedInto`, `VZip`, the reference and smart-pointer `EncodeLike` impls, `FullCodec`/`FullEncode`, `MaybeDebug`, `MaybeRefUnwindSafe`, and `StaticTypeInfo`) apply to `EquivocationProof<Number, Id, Signature>` as well.
{"Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Struct sp_beefy::KeyringIter
===
```
pub struct KeyringIter { /* private fields */ }
```
An iterator over the variants of `Keyring`.
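A sketch of typical usage, assuming the iterator is obtained through a strum-style `Keyring::iter()` constructor and that each test account exposes a `public()` helper (both assumptions, not confirmed by the excerpt above):
```
use sp_beefy::Keyring;

fn list_test_accounts() {
    // Walk every test account variant in declaration order.
    for keyring in Keyring::iter() {
        let _public = keyring.public();
    }
}
```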
Trait Implementations
---
### impl Clone for KeyringIter
### impl DoubleEndedIterator for KeyringIter
### impl ExactSizeIterator for KeyringIter
### impl Iterator for KeyringIter
#### type Item = Keyring
`next` advances the iterator and returns the next `Keyring` variant; `size_hint` and `len` report the exact number of remaining variants. All the standard provided methods of `Iterator`, `DoubleEndedIterator`, and `ExactSizeIterator` (`map`, `filter`, `fold`, `collect`, `rev`, `next_back`, and so on) are available with their usual library semantics.
Auto Trait Implementations
---
### impl RefUnwindSafe for KeyringIter
### impl Send for KeyringIter
### impl Sync for KeyringIter
### impl Unpin for KeyringIter
### impl UnwindSafe for KeyringIter
Blanket Implementations
---
Beyond the generic blanket implementations shared with the other types above (`Any`, `Borrow`/`BorrowMut`, `CheckedConversion`, `Downcast`/`DowncastSync`, `DynClone`, `From`/`Into`, `Instrument`/`WithSubscriber`, `IsWrappedBy`, `Same`, `SaturatedConversion`/`UniqueSaturatedInto`, `ToOwned`, `TryFrom`/`TryInto`, `UncheckedInto`, `VZip`, `MaybeRefUnwindSafe`), `KeyringIter` also picks up the iterator-specific blankets: `IntoIterator` (with `Item = Keyring`), `IteratorRandom` (random element selection and sampling via an `Rng`), and `TryCollect` into a `BoundedVec`.
{"KeyringIter":"<h3>Notable traits for <code><a class=\"struct\" href=\"struct.KeyringIter.html\" title=\"struct sp_beefy::KeyringIter\">KeyringIter</a></code></h3><pre><code><span class=\"where fmt-newline\">impl <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/iter/traits/iterator/trait.Iterator.html\" title=\"trait core::iter::traits::iterator::Iterator\">Iterator</a> for <a class=\"struct\" href=\"struct.KeyringIter.html\" title=\"struct sp_beefy::KeyringIter\">KeyringIter</a></span><span class=\"where fmt-newline\"> type <a href=\"https://doc.rust-lang.org/nightly/core/iter/traits/iterator/trait.Iterator.html#associatedtype.Item\" class=\"associatedtype\">Item</a> = <a class=\"enum\" href=\"enum.Keyring.html\" title=\"enum sp_beefy::Keyring\">Keyring</a>;</span>","Vec<Self::Item, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Struct sp_beefy::OpaqueKeyOwnershipProof
===
```
pub struct OpaqueKeyOwnershipProof(_);
```
An opaque type used to represent the key ownership proof at the runtime API boundary. The inner value is an encoded representation of the actual key ownership proof which will be parameterized when defining the runtime. At the runtime API boundary this type is unknown and as such we keep this opaque representation, implementors of the runtime API will have to make sure that all usages of `OpaqueKeyOwnershipProof` refer to the same type.
Implementations
---
### impl OpaqueKeyOwnershipProof
#### pub fn new(inner: Vec<u8>) -> OpaqueKeyOwnershipProof
Create a new `OpaqueKeyOwnershipProof` using the given encoded representation.
#### pub fn decode<T: Decode>(self) -> Option<T>
Try to decode this `OpaqueKeyOwnershipProof` into the given concrete key ownership proof type.
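A sketch of the encode/decode roundtrip through the opaque wrapper; the `Vec<u8>` here merely stands in for whatever concrete key ownership proof type the runtime defines:
```
use codec::Encode;
use sp_beefy::OpaqueKeyOwnershipProof;

fn roundtrip() {
    let concrete: Vec<u8> = vec![1, 2, 3];
    // Wrap the SCALE encoding of the concrete proof.
    let opaque = OpaqueKeyOwnershipProof::new(concrete.encode());
    // Decoding requires naming the same concrete type again.
    let back: Option<Vec<u8>> = opaque.decode();
    assert_eq!(back, Some(vec![1, 2, 3]));
}
```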
Trait Implementations
---
### impl Decode for OpaqueKeyOwnershipProof
### impl Encode for OpaqueKeyOwnershipProof
### impl PartialEq<OpaqueKeyOwnershipProof> for OpaqueKeyOwnershipProof
### impl EncodeLike<OpaqueKeyOwnershipProof> for OpaqueKeyOwnershipProof
### impl StructuralPartialEq for OpaqueKeyOwnershipProof
The codec methods (`decode`, `skip`, `encode_to`, `encode`, `using_encoded`, `size_hint`, `encoded_size`) and `eq` are the standard derived implementations.
Auto Trait Implementations
---
### impl RefUnwindSafe for OpaqueKeyOwnershipProof
### impl Send for OpaqueKeyOwnershipProof
### impl Sync for OpaqueKeyOwnershipProof
### impl Unpin for OpaqueKeyOwnershipProof
### impl UnwindSafe for OpaqueKeyOwnershipProof
Blanket Implementations
---
The same generic blanket implementations described above (`Any`, `Borrow`/`BorrowMut`, `CheckedConversion`, `DecodeAll`/`DecodeLimit`, `Downcast`/`DowncastSync`, `From`/`Into`, `Instrument`/`WithSubscriber`, `IsWrappedBy`, `KeyedVec`, `Same`, `SaturatedConversion`/`UniqueSaturatedInto`, `TryFrom`/`TryInto`, `UncheckedInto`, `VZip`, the reference and smart-pointer `EncodeLike` impls, `FullCodec`/`FullEncode`, and `MaybeRefUnwindSafe`) apply to `OpaqueKeyOwnershipProof` as well.
{"Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>","Vec<u8>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Struct sp_beefy::Payload
===
```
pub struct Payload(_);
```
A BEEFY payload type allowing for future extensibility of adding additional kinds of payloads.
The idea is to store a vector of SCALE-encoded values with an extra identifier.
Identifiers MUST be sorted by the `BeefyPayloadId` to allow efficient lookup of expected value. Duplicated identifiers are disallowed. It’s okay for different implementations to only support a subset of possible values.
Implementations
---
### impl Payload
#### pub fn from_single_entry(id: BeefyPayloadId, value: Vec<u8>) -> Self
Construct a new payload given an initial value.
#### pub fn get_raw(&self, id: &BeefyPayloadId) -> Option<&Vec<u8>>
Returns a raw payload under given `id`.
If the `BeefyPayloadId` is not found in the payload, `None` is returned.
#### pub fn get_decoded<T: Decode>(&self, id: &BeefyPayloadId) -> Option<T>
Returns a decoded payload value under given `id`.
If the value is not there or cannot be decoded, `None` is returned.
#### pub fn push_raw(self, id: BeefyPayloadId, value: Vec<u8>) -> Self
Push a `Vec<u8>` with a given id into the payload vec.
This method will internally sort the payload vec after every push.
Returns self to allow for daisy chaining.
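For orientation, a minimal usage sketch of the methods above, assuming `sp_beefy` and `parity-scale-codec` (as `codec`) are dependencies and that `BeefyPayloadId` is the crate's two-byte identifier alias; the concrete ids `ID_A`/`ID_B` are made up for illustration:
```
use codec::Encode;
use sp_beefy::Payload;

// Hypothetical payload identifiers; only their sorted order matters for lookup.
const ID_A: [u8; 2] = *b"ab";
const ID_B: [u8; 2] = *b"mh";

fn main() {
    // Start from one entry, then daisy-chain further entries; `push_raw`
    // re-sorts the vec by id after every push, so insertion order is irrelevant.
    let payload = Payload::from_single_entry(ID_B, 42u64.encode())
        .push_raw(ID_A, b"raw bytes".to_vec());

    // Raw access returns the stored SCALE-encoded bytes.
    assert!(payload.get_raw(&ID_A).is_some());

    // Typed access decodes on the fly; an unknown id or a failed decode yields `None`.
    assert_eq!(payload.get_decoded::<u64>(&ID_B), Some(42));
    assert_eq!(payload.get_decoded::<u64>(&[0u8; 2]), None);
}
```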
Trait Implementations
---
### impl Clone for Payload
#### fn clone(&self) -> Payload
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Payload
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Decode for Payload
#### fn decode<__CodecInputEdqy: Input>(__codec_input_edqy: &mut __CodecInputEdqy) -> Result<Self, Error>
Attempt to deserialise the value from input.
#### fn skip<I>(input: &mut I) -> Result<(), Error> where
I: Input,
Attempt to skip the encoded value from input.
### impl Encode for Payload
#### fn encode_to<__CodecOutputEdqy: Output + ?Sized>(&self, __codec_dest_edqy: &mut __CodecOutputEdqy)
Convert self to a slice and append it to the destination.
#### fn encode(&self) -> Vec<u8>
Convert self to an owned vector.
#### fn using_encoded<R, F: FnOnce(&[u8]) -> R>(&self, f: F) -> R
Convert self to a slice and then invoke the given closure with it.
#### fn size_hint(&self) -> usize
If possible give a hint of expected size of the encoding.
#### fn encoded_size(&self) -> usize
Calculates the encoded size.
### impl Hash for Payload
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
### impl Ord for Payload
#### fn cmp(&self, other: &Payload) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self where
Self: Sized,
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where
Self: Sized,
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
### impl PartialEq<Payload> for Payload
#### fn eq(&self, other: &Payload) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<Payload> for Payload
#### fn partial_cmp(&self, other: &Payload) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl TypeInfo for Payload
#### type Identity = Payload
The type identifying for which type info is provided.
#### fn type_info() -> Type
Returns the static type identifier for `Self`.
### impl EncodeLike<Payload> for Payload
### impl Eq for Payload
### impl StructuralEq for Payload
### impl StructuralPartialEq for Payload
Auto Trait Implementations
---
### impl RefUnwindSafe for Payload
### impl Send for Payload
### impl Sync for Payload
### impl Unpin for Payload
### impl UnwindSafe for Payload
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
T: Hash + ?Sized,
#### default fn get_hash<H, B>(value: &H, build_hasher: &B) -> u64where
H: Hash + ?Sized,
B: BuildHasher,
### impl<T> CheckedConversion for T
#### fn checked_from<T>(t: T) -> Option<Self>where
Self: TryFrom<T>,
Convert from a value of `T` into an equivalent instance of `Option<Self>`.
#### fn checked_into<T>(self) -> Option<T> where
Self: TryInto<T>,
Consume self to return `Some` equivalent value of `Option<T>`.
### impl<T> DecodeAll for T where
T: Decode,
#### fn decode_all(input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
### impl<T> DecodeLimit for T where
T: Decode,
#### fn decode_all_with_depth_limit(limit: u32, input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
#### fn decode_with_depth_limit<I>(limit: u32, input: &mut I) -> Result<T, Error> where
I: Input,
Decode `Self` with the given maximum recursion depth and advance `input` by the number of bytes consumed.
### impl<T> Downcast for T where
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any + 'static, Global>
Convert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn into_any_rc(self: Rc<T>) -> Rc<dyn Any + 'static>
Convert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.
#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&mut Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.
### impl<T> DowncastSync for T where
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync + 'static>
Convert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.
### impl<T> DynClone for T where
T: Clone,
#### fn __clone_box(&self, _: Private) -> *mut ()
### impl<Q, K> Equivalent<K> for Q where
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<Q, K> Equivalent<K> for Q where
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> FullLeaf for T where
T: Encode + Decode + Clone + PartialEq<T> + Debug,
#### fn using_encoded<R, F>(&self, f: F, _compact: bool) -> R where
F: FnOnce(&[u8]) -> R,
Encode the leaf either in its full or compact form.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, Outer> IsWrappedBy<Outer> for Twhere
Outer: AsRef<T> + AsMut<T> + From<T>,
T: From<Outer>,
#### fn from_ref(outer: &Outer) -> &T
Get a reference to the inner from the outer.
#### fn from_mut(outer: &mut Outer) -> &mut T
Get a mutable reference to the inner from the outer.
### impl<T> KeyedVec for T where
T: Codec,
#### fn to_keyed_vec(&self, prepend_key: &[u8]) -> Vec<u8, Global>
Return an encoding of `Self` prepended by given slice.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> SaturatedConversion for T
#### fn saturated_from<T>(t: T) -> Self where
Self: UniqueSaturatedFrom<T>,
Convert from a value of `T` into an equivalent instance of `Self`.
#### fn saturated_into<T>(self) -> T where
Self: UniqueSaturatedInto<T>,
Consume self to return an equivalent value of `T`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<S, T> UncheckedInto<T> for S where
T: UncheckedFrom<S>,
#### fn unchecked_into(self) -> T
The counterpart to `unchecked_from`.
### impl<T, S> UniqueSaturatedInto<T> for S where
T: Bounded,
S: TryInto<T>,
#### fn unique_saturated_into(self) -> T
Consume self to return an equivalent value of `T`.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<S> Codec for S where
S: Decode + Encode,
### impl<T> EncodeLike<&&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&mut T> for Twhere
T: Encode,
### impl<T> EncodeLike<Arc<T>> for Twhere
T: Encode,
### impl<T> EncodeLike<Box<T, Global>> for Twhere
T: Encode,
### impl<'a, T> EncodeLike<Cow<'a, T>> for Twhere
T: ToOwned + Encode,
### impl<T> EncodeLike<Rc<T>> for Twhere
T: Encode,
### impl<S> FullCodec for Swhere
S: Decode + FullEncode,
### impl<S> FullEncode for Swhere
S: Encode + EncodeLike<S>,
### impl<T> MaybeDebug for Twhere
T: Debug,
### impl<T> MaybeDebug for Twhere
T: Debug,
### impl<T> MaybeHash for Twhere
T: Hash,
### impl<T> MaybeHash for Twhere
T: Hash,
### impl<T> MaybeRefUnwindSafe for Twhere
T: RefUnwindSafe,
### impl<T> Member for Twhere
T: Send + Sync + Debug + Eq + PartialEq<T> + Clone + 'static,
### impl<T> StaticTypeInfo for Twhere
T: TypeInfo + 'static,
{"Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>","Vec<u8>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Struct sp_beefy::SignedCommitment
===
```
pub struct SignedCommitment<TBlockNumber, TSignature> {
pub commitment: Commitment<TBlockNumber>,
pub signatures: Vec<Option<TSignature>>,
}
```
A commitment with matching GRANDPA validators’ signatures.
Note that SCALE-encoding of the structure is optimized for size efficiency over the wire,
please take a look at custom `Encode` and `Decode` implementations and
`CompactSignedCommitment` struct.
Fields
---
`commitment: Commitment<TBlockNumber>`
The commitment signatures are collected for.
`signatures: Vec<Option<TSignature>>`
GRANDPA validators’ signatures for the commitment.
The length of this `Vec` must match the number of validators in the current set (see Commitment::validator_set_id).
Implementations
---
### impl<TBlockNumber, TSignature> SignedCommitment<TBlockNumber, TSignature>
#### pub fn no_of_signatures(&self) -> usize
Return the number of collected signatures.
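A minimal sketch of counting signatures, assuming (as documented elsewhere in this crate) that `Commitment` exposes public `payload`, `block_number` and `validator_set_id` fields, and using `()` as a stand-in signature type so the sketch stays independent of real crypto:
```
use codec::Encode;
use sp_beefy::{Commitment, Payload, SignedCommitment};

fn main() {
    // Assumed: `Commitment`'s fields are public, per its own docs.
    let commitment = Commitment {
        payload: Payload::from_single_entry(*b"mh", 7u32.encode()),
        block_number: 5u64,
        validator_set_id: 0,
    };

    // One slot per validator in the set; `None` marks a validator that did not sign.
    let signed = SignedCommitment::<u64, ()> {
        commitment,
        signatures: vec![None, Some(()), Some(())],
    };

    // Only the `Some` entries are counted.
    assert_eq!(signed.no_of_signatures(), 2);
}
```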
Trait Implementations
---
### impl<TBlockNumber: Clone, TSignature: Clone> Clone for SignedCommitment<TBlockNumber, TSignature>
#### fn clone(&self) -> SignedCommitment<TBlockNumber, TSignature>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl<TBlockNumber: Debug, TSignature: Debug> Debug for SignedCommitment<TBlockNumber, TSignature>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<TBlockNumber, TSignature> Decode for SignedCommitment<TBlockNumber, TSignature> where
TBlockNumber: Decode + Clone,
TSignature: Decode,
#### fn decode<I: Input>(input: &mut I) -> Result<Self, Error>
Attempt to deserialise the value from input.
#### fn skip<I>(input: &mut I) -> Result<(), Error> where
I: Input,
Attempt to skip the encoded value from input.
### impl<TBlockNumber, TSignature> Encode for SignedCommitment<TBlockNumber, TSignature> where
TBlockNumber: Encode + Clone,
TSignature: Encode,
#### fn using_encoded<R, F: FnOnce(&[u8]) -> R>(&self, f: F) -> R
Convert self to a slice and then invoke the given closure with it.
#### fn size_hint(&self) -> usize
If possible give a hint of expected size of the encoding.
#### fn encode_to<T>(&self, dest: &mut T) where
T: Output + ?Sized,
Convert self to a slice and append it to the destination.
#### fn encode(&self) -> Vec<u8, Global>
Convert self to an owned vector.
#### fn encoded_size(&self) -> usize
Calculates the encoded size.
Converts to this type from the input type.
### impl<TBlockNumber: PartialEq, TSignature: PartialEq> PartialEq<SignedCommitment<TBlockNumber, TSignature>> for SignedCommitment<TBlockNumber, TSignature>
#### fn eq(&self, other: &SignedCommitment<TBlockNumber, TSignature>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<TBlockNumber, TSignature> TypeInfo for SignedCommitment<TBlockNumber, TSignature> where
Commitment<TBlockNumber>: TypeInfo + 'static,
Vec<Option<TSignature>>: TypeInfo + 'static,
TBlockNumber: TypeInfo + 'static,
TSignature: TypeInfo + 'static,
#### type Identity = SignedCommitment<TBlockNumber, TSignature>
The type identifying for which type info is provided.
#### fn type_info() -> Type
Returns the static type identifier for `Self`.
### impl<TBlockNumber: Eq, TSignature: Eq> Eq for SignedCommitment<TBlockNumber, TSignature>
### impl<TBlockNumber, TSignature> StructuralEq for SignedCommitment<TBlockNumber, TSignature>
### impl<TBlockNumber, TSignature> StructuralPartialEq for SignedCommitment<TBlockNumber, TSignature>
Auto Trait Implementations
---
### impl<TBlockNumber, TSignature> RefUnwindSafe for SignedCommitment<TBlockNumber, TSignature>where
TBlockNumber: RefUnwindSafe,
TSignature: RefUnwindSafe,
### impl<TBlockNumber, TSignature> Send for SignedCommitment<TBlockNumber, TSignature>where
TBlockNumber: Send,
TSignature: Send,
### impl<TBlockNumber, TSignature> Sync for SignedCommitment<TBlockNumber, TSignature>where
TBlockNumber: Sync,
TSignature: Sync,
### impl<TBlockNumber, TSignature> Unpin for SignedCommitment<TBlockNumber, TSignature>where
TBlockNumber: Unpin,
TSignature: Unpin,
### impl<TBlockNumber, TSignature> UnwindSafe for SignedCommitment<TBlockNumber, TSignature>where
TBlockNumber: UnwindSafe,
TSignature: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> CheckedConversion for T
#### fn checked_from<T>(t: T) -> Option<Self> where
Self: TryFrom<T>,
Convert from a value of `T` into an equivalent instance of `Option<Self>`.
#### fn checked_into<T>(self) -> Option<T> where
Self: TryInto<T>,
Consume self to return `Some` equivalent value of `Option<T>`.
### impl<T> DecodeAll for T where
T: Decode,
#### fn decode_all(input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
### impl<T> DecodeLimit for T where
T: Decode,
#### fn decode_all_with_depth_limit(limit: u32, input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
#### fn decode_with_depth_limit<I>(limit: u32, input: &mut I) -> Result<T, Error> where
I: Input,
Decode `Self` with the given maximum recursion depth and advance `input` by the number of bytes consumed.
### impl<T> Downcast for T where
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any + 'static, Global>
Convert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn into_any_rc(self: Rc<T>) -> Rc<dyn Any + 'static>
Convert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.
#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&mut Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.
### impl<T> DowncastSync for T where
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync + 'static>
Convert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.
### impl<T> DynClone for T where
T: Clone,
#### fn __clone_box(&self, _: Private) -> *mut ()
### impl<Q, K> Equivalent<K> for Q where
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<Q, K> Equivalent<K> for Q where
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> FullLeaf for T where
T: Encode + Decode + Clone + PartialEq<T> + Debug,
#### fn using_encoded<R, F>(&self, f: F, _compact: bool) -> R where
F: FnOnce(&[u8]) -> R,
Encode the leaf either in its full or compact form.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, Outer> IsWrappedBy<Outer> for Twhere
Outer: AsRef<T> + AsMut<T> + From<T>,
T: From<Outer>,
#### fn from_ref(outer: &Outer) -> &T
Get a reference to the inner from the outer.
#### fn from_mut(outer: &mut Outer) -> &mut T
Get a mutable reference to the inner from the outer.
### impl<T> KeyedVec for T where
T: Codec,
#### fn to_keyed_vec(&self, prepend_key: &[u8]) -> Vec<u8, Global>
Return an encoding of `Self` prepended by given slice.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> SaturatedConversion for T
#### fn saturated_from<T>(t: T) -> Self where
Self: UniqueSaturatedFrom<T>,
Convert from a value of `T` into an equivalent instance of `Self`.
#### fn saturated_into<T>(self) -> T where
Self: UniqueSaturatedInto<T>,
Consume self to return an equivalent value of `T`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<S, T> UncheckedInto<T> for S where
T: UncheckedFrom<S>,
#### fn unchecked_into(self) -> T
The counterpart to `unchecked_from`.
### impl<T, S> UniqueSaturatedInto<T> for S where
T: Bounded,
S: TryInto<T>,
#### fn unique_saturated_into(self) -> T
Consume self to return an equivalent value of `T`.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<S> Codec for S where
S: Decode + Encode,
### impl<T> EncodeLike<&&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&mut T> for Twhere
T: Encode,
### impl<T> EncodeLike<Arc<T>> for Twhere
T: Encode,
### impl<T> EncodeLike<Box<T, Global>> for Twhere
T: Encode,
### impl<'a, T> EncodeLike<Cow<'a, T>> for Twhere
T: ToOwned + Encode,
### impl<T> EncodeLike<Rc<T>> for Twhere
T: Encode,
### impl<T> MaybeDebug for Twhere
T: Debug,
### impl<T> MaybeDebug for Twhere
T: Debug,
### impl<T> MaybeRefUnwindSafe for Twhere
T: RefUnwindSafe,
### impl<T> Member for Twhere
T: Send + Sync + Debug + Eq + PartialEq<T> + Clone + 'static,
### impl<T> StaticTypeInfo for Twhere
T: TypeInfo + 'static,
{"Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Struct sp_beefy::ValidatorSet
===
```
pub struct ValidatorSet<AuthorityId> { /* private fields */ }
```
A set of BEEFY authorities, a.k.a. validators.
Implementations
---
### impl<AuthorityId> ValidatorSet<AuthorityId>
#### pub fn new<I>(validators: I, id: ValidatorSetId) -> Option<Self> where
I: IntoIterator<Item = AuthorityId>,
Return a validator set with the given validators and set id.
#### pub fn validators(&self) -> &[AuthorityId]
Return a reference to the vec of validators.
#### pub fn id(&self) -> ValidatorSetId
Return the validator set id.
#### pub fn len(&self) -> usize
Return the number of validators in the set.
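A minimal sketch of the accessors above, using `u64` as a stand-in `AuthorityId`; the empty-set behaviour shown in the last assertion is an assumption consistent with `new` returning `Option`:
```
use sp_beefy::ValidatorSet;

fn main() {
    // `new` accepts any iterator of authority ids.
    let set = ValidatorSet::<u64>::new([10, 20, 30], 1).expect("non-empty set");
    assert_eq!(set.len(), 3);
    assert_eq!(set.id(), 1);
    assert_eq!(set.validators(), &[10, 20, 30]);

    // Assumed: an empty validator list is rejected, which is why `new` returns `Option`.
    assert!(ValidatorSet::new(Vec::<u64>::new(), 1).is_none());
}
```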
Trait Implementations
---
### impl<AuthorityId: Clone> Clone for ValidatorSet<AuthorityId>
#### fn clone(&self) -> ValidatorSet<AuthorityId>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl<AuthorityId: Debug> Debug for ValidatorSet<AuthorityId>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<AuthorityId> Decode for ValidatorSet<AuthorityId> where
Vec<AuthorityId>: Decode,
#### fn decode<__CodecInputEdqy: Input>(__codec_input_edqy: &mut __CodecInputEdqy) -> Result<Self, Error>
Attempt to deserialise the value from input.
#### fn skip<I>(input: &mut I) -> Result<(), Error> where
I: Input,
Attempt to skip the encoded value from input.
### impl<AuthorityId> Encode for ValidatorSet<AuthorityId> where
Vec<AuthorityId>: Encode,
#### fn encode_to<__CodecOutputEdqy: Output + ?Sized>(
&self,
__codec_dest_edqy: &mut __CodecOutputEdqy
)
Convert self to a slice and append it to the destination.
#### fn size_hint(&self) -> usize
If possible give a hint of expected size of the encoding.
#### fn using_encoded<R, F>(&self, f: F) -> R where
F: FnOnce(&[u8]) -> R,
Convert self to a slice and then invoke the given closure with it.
#### fn encoded_size(&self) -> usize
Calculates the encoded size.
### impl<AuthorityId: PartialEq> PartialEq<ValidatorSet<AuthorityId>> for ValidatorSet<AuthorityId>
#### fn eq(&self, other: &ValidatorSet<AuthorityId>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<AuthorityId> TypeInfo for ValidatorSet<AuthorityId> where
Vec<AuthorityId>: TypeInfo + 'static,
AuthorityId: TypeInfo + 'static,
#### type Identity = ValidatorSet<AuthorityId>
The type identifying for which type info is provided.
#### fn type_info() -> Type
Returns the static type identifier for `Self`.
### impl<AuthorityId> EncodeLike<ValidatorSet<AuthorityId>> for ValidatorSet<AuthorityId> where
Vec<AuthorityId>: Encode,
### impl<AuthorityId> StructuralPartialEq for ValidatorSet<AuthorityId>
Auto Trait Implementations
---
### impl<AuthorityId> RefUnwindSafe for ValidatorSet<AuthorityId>where
AuthorityId: RefUnwindSafe,
### impl<AuthorityId> Send for ValidatorSet<AuthorityId>where
AuthorityId: Send,
### impl<AuthorityId> Sync for ValidatorSet<AuthorityId>where
AuthorityId: Sync,
### impl<AuthorityId> Unpin for ValidatorSet<AuthorityId>where
AuthorityId: Unpin,
### impl<AuthorityId> UnwindSafe for ValidatorSet<AuthorityId>where
AuthorityId: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> CheckedConversion for T
#### fn checked_from<T>(t: T) -> Option<Self> where
Self: TryFrom<T>,
Convert from a value of `T` into an equivalent instance of `Option<Self>`.
#### fn checked_into<T>(self) -> Option<T> where
Self: TryInto<T>,
Consume self to return `Some` equivalent value of `Option<T>`.
### impl<T> DecodeAll for T where
T: Decode,
#### fn decode_all(input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
### impl<T> DecodeLimit for T where
T: Decode,
#### fn decode_all_with_depth_limit(limit: u32, input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
#### fn decode_with_depth_limit<I>(limit: u32, input: &mut I) -> Result<T, Error> where
I: Input,
Decode `Self` with the given maximum recursion depth and advance `input` by the number of bytes consumed.
### impl<T> Downcast for T where
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any + 'static, Global>
Convert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn into_any_rc(self: Rc<T>) -> Rc<dyn Any + 'static>
Convert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.
#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&mut Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.
### impl<T> DowncastSync for T where
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync + 'static>
Convert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.
### impl<T> DynClone for T where
T: Clone,
#### fn __clone_box(&self, _: Private) -> *mut ()
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> FullLeaf for T where
T: Encode + Decode + Clone + PartialEq<T> + Debug,
#### fn using_encoded<R, F>(&self, f: F, _compact: bool) -> R where
F: FnOnce(&[u8]) -> R,
Encode the leaf either in its full or compact form.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, Outer> IsWrappedBy<Outer> for Twhere
Outer: AsRef<T> + AsMut<T> + From<T>,
T: From<Outer>,
#### fn from_ref(outer: &Outer) -> &T
Get a reference to the inner from the outer.
#### fn from_mut(outer: &mut Outer) -> &mut T
Get a mutable reference to the inner from the outer.
### impl<T> KeyedVec for T where
T: Codec,
#### fn to_keyed_vec(&self, prepend_key: &[u8]) -> Vec<u8, Global>
Return an encoding of `Self` prepended by given slice.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> SaturatedConversion for T
#### fn saturated_from<T>(t: T) -> Self where
Self: UniqueSaturatedFrom<T>,
Convert from a value of `T` into an equivalent instance of `Self`.
#### fn saturated_into<T>(self) -> T where
Self: UniqueSaturatedInto<T>,
Consume self to return an equivalent value of `T`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<S, T> UncheckedInto<T> for S where
T: UncheckedFrom<S>,
#### fn unchecked_into(self) -> T
The counterpart to `unchecked_from`.
### impl<T, S> UniqueSaturatedInto<T> for S where
T: Bounded,
S: TryInto<T>,
#### fn unique_saturated_into(self) -> T
Consume self to return an equivalent value of `T`.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<S> Codec for S where
S: Decode + Encode,
### impl<T> EncodeLike<&&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&mut T> for Twhere
T: Encode,
### impl<T> EncodeLike<Arc<T>> for Twhere
T: Encode,
### impl<T> EncodeLike<Box<T, Global>> for Twhere
T: Encode,
### impl<'a, T> EncodeLike<Cow<'a, T>> for Twhere
T: ToOwned + Encode,
### impl<T> EncodeLike<Rc<T>> for Twhere
T: Encode,
### impl<S> FullCodec for Swhere
S: Decode + FullEncode,
### impl<S> FullEncode for Swhere
S: Encode + EncodeLike<S>,
### impl<T> MaybeDebug for Twhere
T: Debug,
### impl<T> MaybeDebug for Twhere
T: Debug,
### impl<T> MaybeRefUnwindSafe for Twhere
T: RefUnwindSafe,
### impl<T> StaticTypeInfo for Twhere
T: TypeInfo + 'static,
{"&[AuthorityId]":"<h3>Notable traits for <code>&[<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>]</code></h3><pre><code><span class=\"where fmt-newline\">impl <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Read.html\" title=\"trait std::io::Read\">Read</a> for &[<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>]</span><span class=\"where fmt-newline\">impl <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for &mut [<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>]</span>","Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Struct sp_beefy::VoteMessage
===
```
pub struct VoteMessage<Number, Id, Signature> {
pub commitment: Commitment<Number>,
pub id: Id,
pub signature: Signature,
}
```
BEEFY vote message.
A vote message is a direct vote created by a BEEFY node on every voting round and is gossiped to its peers.
Fields
---
`commitment: Commitment<Number>`
Commit to information extracted from a finalized block.
`id: Id`
Node authority id.
`signature: Signature`
Node signature.
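Since all three fields are public, constructing and encoding a vote is plain struct syntax. A sketch with stand-in types (`u32` for the authority id, a 64-byte array for the signature), again assuming `Commitment`'s public fields as documented elsewhere in this crate:
```
use codec::Encode;
use sp_beefy::{Commitment, Payload, VoteMessage};

fn main() {
    let vote = VoteMessage {
        commitment: Commitment {
            payload: Payload::from_single_entry(*b"mh", 9u32.encode()),
            block_number: 42u64,
            validator_set_id: 0,
        },
        id: 7u32,             // stand-in authority id
        signature: [0u8; 64], // stand-in signature bytes
    };

    // The whole message is SCALE-encodable for gossiping to peers.
    let wire = vote.encode();
    assert!(!wire.is_empty());
}
```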
Trait Implementations
---
### impl<Number: Clone, Id: Clone, Signature: Clone> Clone for VoteMessage<Number, Id, Signature>
#### fn clone(&self) -> VoteMessage<Number, Id, Signature>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl<Number: Debug, Id: Debug, Signature: Debug> Debug for VoteMessage<Number, Id, Signature>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<Number, Id, Signature> Decode for VoteMessage<Number, Id, Signature> where
Commitment<Number>: Decode,
Id: Decode,
Signature: Decode,
#### fn decode<__CodecInputEdqy: Input>(__codec_input_edqy: &mut __CodecInputEdqy) -> Result<Self, Error>
Attempt to deserialise the value from input.
#### fn skip<I>(input: &mut I) -> Result<(), Error> where
I: Input,
Attempt to skip the encoded value from input.
### impl<Number, Id, Signature> Encode for VoteMessage<Number, Id, Signature> where
Commitment<Number>: Encode,
Id: Encode,
Signature: Encode,
#### fn encode_to<__CodecOutputEdqy: Output + ?Sized>(
&self,
__codec_dest_edqy: &mut __CodecOutputEdqy
)
Convert self to a slice and append it to the destination.
#### fn size_hint(&self) -> usize
If possible give a hint of expected size of the encoding.
#### fn using_encoded<R, F>(&self, f: F) -> R where
F: FnOnce(&[u8]) -> R,
Convert self to a slice and then invoke the given closure with it.
#### fn encoded_size(&self) -> usize
Calculates the encoded size.
### impl<Number: PartialEq, Id: PartialEq, Signature: PartialEq> PartialEq<VoteMessage<Number, Id, Signature>> for VoteMessage<Number, Id, Signature>
#### fn eq(&self, other: &VoteMessage<Number, Id, Signature>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<Number, Id, Signature> TypeInfo for VoteMessage<Number, Id, Signature> where
Commitment<Number>: TypeInfo + 'static,
Id: TypeInfo + 'static,
Signature: TypeInfo + 'static,
Number: TypeInfo + 'static,
#### type Identity = VoteMessage<Number, Id, Signature>
The type identifying for which type info is provided.
#### fn type_info() -> Type
Returns the static type identifier for `Self`.
### impl<Number, Id, Signature> EncodeLike<VoteMessage<Number, Id, Signature>> for VoteMessage<Number, Id, Signature> where
Commitment<Number>: Encode,
Id: Encode,
Signature: Encode,
### impl<Number, Id, Signature> StructuralPartialEq for VoteMessage<Number, Id, Signature>
Auto Trait Implementations
---
### impl<Number, Id, Signature> RefUnwindSafe for VoteMessage<Number, Id, Signature>where
Id: RefUnwindSafe,
Number: RefUnwindSafe,
Signature: RefUnwindSafe,
### impl<Number, Id, Signature> Send for VoteMessage<Number, Id, Signature>where
Id: Send,
Number: Send,
Signature: Send,
### impl<Number, Id, Signature> Sync for VoteMessage<Number, Id, Signature>where
Id: Sync,
Number: Sync,
Signature: Sync,
### impl<Number, Id, Signature> Unpin for VoteMessage<Number, Id, Signature>where
Id: Unpin,
Number: Unpin,
Signature: Unpin,
### impl<Number, Id, Signature> UnwindSafe for VoteMessage<Number, Id, Signature>where
Id: UnwindSafe,
Number: UnwindSafe,
Signature: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> CheckedConversion for T
#### fn checked_from<T>(t: T) -> Option<Self> where
Self: TryFrom<T>,
Convert from a value of `T` into an equivalent instance of `Option<Self>`.
#### fn checked_into<T>(self) -> Option<T> where
Self: TryInto<T>,
Consume self to return `Some` equivalent value of `Option<T>`.
### impl<T> DecodeAll for T where
T: Decode,
#### fn decode_all(input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
### impl<T> DecodeLimit for T where
T: Decode,
#### fn decode_all_with_depth_limit(limit: u32, input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
#### fn decode_with_depth_limit<I>(limit: u32, input: &mut I) -> Result<T, Error> where
I: Input,
Decode `Self` with the given maximum recursion depth and advance `input` by the number of bytes consumed.
### impl<T> Downcast for T where
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any + 'static, Global>
Convert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn into_any_rc(self: Rc<T>) -> Rc<dyn Any + 'static>
Convert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.
#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&mut Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.
### impl<T> DowncastSync for T where
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync + 'static>
Convert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.
### impl<T> DynClone for T where
T: Clone,
#### fn __clone_box(&self, _: Private) -> *mut ()
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> FullLeaf for T where
T: Encode + Decode + Clone + PartialEq<T> + Debug,
#### fn using_encoded<R, F>(&self, f: F, _compact: bool) -> R where
F: FnOnce(&[u8]) -> R,
Encode the leaf either in its full or compact form.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, Outer> IsWrappedBy<Outer> for Twhere
Outer: AsRef<T> + AsMut<T> + From<T>,
T: From<Outer>,
#### fn from_ref(outer: &Outer) -> &T
Get a reference to the inner from the outer.
#### fn from_mut(outer: &mut Outer) -> &mut T
Get a mutable reference to the inner from the outer.
### impl<T> KeyedVec for T where
T: Codec,
#### fn to_keyed_vec(&self, prepend_key: &[u8]) -> Vec<u8, Global>
Return an encoding of `Self` prepended by given slice.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> SaturatedConversion for T
#### fn saturated_from<T>(t: T) -> Self where
Self: UniqueSaturatedFrom<T>,
Convert from a value of `T` into an equivalent instance of `Self`.
#### fn saturated_into<T>(self) -> T where
Self: UniqueSaturatedInto<T>,
Consume self to return an equivalent value of `T`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<S, T> UncheckedInto<T> for S where
T: UncheckedFrom<S>,
#### fn unchecked_into(self) -> T
The counterpart to `unchecked_from`.
### impl<T, S> UniqueSaturatedInto<T> for S where
T: Bounded,
S: TryInto<T>,
#### fn unique_saturated_into(self) -> T
Consume self to return an equivalent value of `T`.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<S> Codec for S where
S: Decode + Encode,
### impl<T> EncodeLike<&&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&mut T> for Twhere
T: Encode,
### impl<T> EncodeLike<Arc<T>> for Twhere
T: Encode,
### impl<T> EncodeLike<Box<T, Global>> for Twhere
T: Encode,
### impl<'a, T> EncodeLike<Cow<'a, T>> for Twhere
T: ToOwned + Encode,
### impl<T> EncodeLike<Rc<T>> for Twhere
T: Encode,
### impl<S> FullCodec for Swhere
S: Decode + FullEncode,
### impl<S> FullEncode for Swhere
S: Encode + EncodeLike<S>,
### impl<T> MaybeDebug for Twhere
T: Debug,
### impl<T> MaybeDebug for Twhere
T: Debug,
### impl<T> MaybeRefUnwindSafe for Twhere
T: RefUnwindSafe,
### impl<T> StaticTypeInfo for Twhere
T: TypeInfo + 'static,
{"Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Enum sp_beefy::ConsensusLog
===
```
pub enum ConsensusLog<AuthorityId: Codec> {
AuthoritiesChange(ValidatorSet<AuthorityId>),
OnDisabled(AuthorityIndex),
MmrRoot(MmrRootHash),
}
```
A consensus log item for BEEFY.
Variants
---
### AuthoritiesChange(ValidatorSet<AuthorityId>)
The authorities have changed.
### OnDisabled(AuthorityIndex)
Disable the authority with given index.
### MmrRoot(MmrRootHash)
MMR root hash.
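A sketch of how a consumer might dispatch on these variants, with `u64` standing in for a real `Codec` authority id; the handler bodies are placeholders:
```
use sp_beefy::{ConsensusLog, ValidatorSet};

fn handle_log(log: ConsensusLog<u64>) {
    match log {
        ConsensusLog::AuthoritiesChange(new_set) => {
            // Switch over to the announced validator set.
            println!("new validator set id: {}", new_set.id());
        }
        ConsensusLog::OnDisabled(index) => {
            // Ignore further votes from the disabled authority.
            println!("disabled authority index: {index}");
        }
        ConsensusLog::MmrRoot(root) => {
            // Track the latest MMR root hash.
            println!("mmr root: {root:?}");
        }
    }
}

fn main() {
    let set = ValidatorSet::new(vec![1u64, 2, 3], 0).expect("non-empty set");
    handle_log(ConsensusLog::AuthoritiesChange(set));
}
```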
Trait Implementations
---
### impl<AuthorityId: Codec> Decode for ConsensusLog<AuthorityId>where
ValidatorSet<AuthorityId>: Decode,
#### fn decode<__CodecInputEdqy: Input>(__codec_input_edqy: &mut __CodecInputEdqy) -> Result<Self, Error>
Attempt to deserialise the value from input.
#### fn skip<I>(input: &mut I) -> Result<(), Error> where
I: Input,
Attempt to skip the encoded value from input.
### impl<AuthorityId: Codec> Encode for ConsensusLog<AuthorityId> where
ValidatorSet<AuthorityId>: Encode,
#### fn encode_to<__CodecOutputEdqy: Output + ?Sized>(
&self,
__codec_dest_edqy: &mut __CodecOutputEdqy
)
Convert self to a slice and append it to the destination.
#### fn size_hint(&self) -> usize
If possible give a hint of expected size of the encoding.
#### fn using_encoded<R, F>(&self, f: F) -> R where
F: FnOnce(&[u8]) -> R,
Convert self to a slice and then invoke the given closure with it.
#### fn encoded_size(&self) -> usize
Calculates the encoded size.
### impl<AuthorityId> TypeInfo for ConsensusLog<AuthorityId> where
ValidatorSet<AuthorityId>: TypeInfo + 'static,
AuthorityId: Codec + TypeInfo + 'static,
#### type Identity = ConsensusLog<AuthorityId>
The type identifying for which type info is provided.
#### fn type_info() -> Type
Returns the static type identifier for `Self`.
### impl<AuthorityId: Codec> EncodeLike<ConsensusLog<AuthorityId>> for ConsensusLog<AuthorityId> where
ValidatorSet<AuthorityId>: Encode,
Auto Trait Implementations
---
### impl<AuthorityId> RefUnwindSafe for ConsensusLog<AuthorityId>where
AuthorityId: RefUnwindSafe,
### impl<AuthorityId> Send for ConsensusLog<AuthorityId>where
AuthorityId: Send,
### impl<AuthorityId> Sync for ConsensusLog<AuthorityId>where
AuthorityId: Sync,
### impl<AuthorityId> Unpin for ConsensusLog<AuthorityId>where
AuthorityId: Unpin,
### impl<AuthorityId> UnwindSafe for ConsensusLog<AuthorityId>where
AuthorityId: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> CheckedConversion for T
#### fn checked_from<T>(t: T) -> Option<Self> where
Self: TryFrom<T>,
Convert from a value of `T` into an equivalent instance of `Option<Self>`.
#### fn checked_into<T>(self) -> Option<T> where
Self: TryInto<T>,
Consume self to return `Some` equivalent value of `Option<T>`.
### impl<T> DecodeAll for T where
T: Decode,
#### fn decode_all(input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
### impl<T> DecodeLimit for T where
T: Decode,
#### fn decode_all_with_depth_limit(limit: u32, input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
#### fn decode_with_depth_limit<I>(limit: u32, input: &mut I) -> Result<T, Error> where
I: Input,
Decode `Self` with the given maximum recursion depth and advance `input` by the number of bytes consumed.
### impl<T> Downcast for T where
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any + 'static, Global>
Convert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn into_any_rc(self: Rc<T>) -> Rc<dyn Any + 'static>
Convert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.
#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&mut Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.
### impl<T> DowncastSync for T where
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync + 'static>
Convert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, Outer> IsWrappedBy<Outer> for Twhere
Outer: AsRef<T> + AsMut<T> + From<T>,
T: From<Outer>,
#### fn from_ref(outer: &Outer) -> &T
Get a reference to the inner from the outer.
#### fn from_mut(outer: &mut Outer) -> &mut T
Get a mutable reference to the inner from the outer.
### impl<T> KeyedVec for T where
T: Codec,
#### fn to_keyed_vec(&self, prepend_key: &[u8]) -> Vec<u8, Global>
Return an encoding of `Self` prepended by given slice.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> SaturatedConversion for T
#### fn saturated_from<T>(t: T) -> Self where
Self: UniqueSaturatedFrom<T>,
Convert from a value of `T` into an equivalent instance of `Self`.
#### fn saturated_into<T>(self) -> T where
Self: UniqueSaturatedInto<T>,
Consume self to return an equivalent value of `T`.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<S, T> UncheckedInto<T> for S where
T: UncheckedFrom<S>,
#### fn unchecked_into(self) -> T
The counterpart to `unchecked_from`.
### impl<T, S> UniqueSaturatedInto<T> for S where
T: Bounded,
S: TryInto<T>,
#### fn unique_saturated_into(self) -> T
Consume self to return an equivalent value of `T`.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<S> Codec for S where
S: Decode + Encode,
### impl<T> EncodeLike<&&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&T> for Twhere
T: Encode,
### impl<T> EncodeLike<&mut T> for Twhere
T: Encode,
### impl<T> EncodeLike<Arc<T>> for Twhere
T: Encode,
### impl<T> EncodeLike<Box<T, Global>> for Twhere
T: Encode,
### impl<T> EncodeLike<Rc<T>> for Twhere
T: Encode,
### impl<S> FullCodec for Swhere
S: Decode + FullEncode,
### impl<S> FullEncode for Swhere
S: Encode + EncodeLike<S>,
### impl<T> MaybeRefUnwindSafe for Twhere
T: RefUnwindSafe,
### impl<T> StaticTypeInfo for Twhere
T: TypeInfo + 'static,
{"Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Enum sp_beefy::Keyring
===
```
pub enum Keyring {
Alice,
Bob,
Charlie,
Dave,
Eve,
Ferdie,
One,
Two,
}
```
Set of test accounts using `crate::crypto` types.
Variants
---
### Alice
### Bob
### Charlie
### Dave
### Eve
### Ferdie
### One
### Two
Implementations
---
### impl Keyring
#### pub fn sign(self, msg: &[u8]) -> Signature
Sign `msg`.
#### pub fn pair(self) -> Pair
Return key pair.
#### pub fn public(self) -> Public
Return public key.
#### pub fn to_seed(self) -> String
Return seed string.
#### pub fn from_public(who: &Public) -> Option<Keyring>
Get Keyring from public key.
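A round-trip sketch: every variant deterministically derives its key pair from its seed string, so signing and key lookup need no external state. It is assumed here that `from_public` returns `None` for keys outside this test set:
```
use sp_beefy::Keyring;

fn main() {
    let msg = b"beefy commitment bytes";

    // Sign with Alice's deterministic test key.
    let signature = Keyring::Alice.sign(msg);

    // A public key maps back to the keyring entry it came from.
    let public = Keyring::Alice.public();
    assert_eq!(Keyring::from_public(&public), Some(Keyring::Alice));

    println!("Alice's seed: {}", Keyring::Alice.to_seed());
    let _ = signature; // verification would go through the crate's crypto types
}
```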
Trait Implementations
---
### impl Clone for Keyring
#### fn clone(&self) -> Keyring
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Keyring
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Keyring
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### fn from(k: Keyring) -> Self
Converts to this type from the input type.
### impl From<Keyring> for Pair
#### fn from(k: Keyring) -> Self
Converts to this type from the input type.
### impl From<Keyring> for Public
#### fn from(k: Keyring) -> Self
Converts to this type from the input type.
### impl Hash for Keyring
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### type Iterator = KeyringIter
#### fn iter() -> KeyringIter
### impl PartialEq<Keyring> for Keyring
#### fn eq(&self, other: &Keyring) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Copy for Keyring
### impl Eq for Keyring
### impl StructuralEq for Keyring
### impl StructuralPartialEq for Keyring
Auto Trait Implementations
---
### impl RefUnwindSafe for Keyring
### impl Send for Keyring
### impl Sync for Keyring
### impl Unpin for Keyring
### impl UnwindSafe for Keyring
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> CallHasher for T where
T: Hash + ?Sized,
#### default fn get_hash<H, B>(value: &H, build_hasher: &B) -> u64 where
H: Hash + ?Sized,
B: BuildHasher,
### impl<T> CheckedConversion for T
#### fn checked_from<T>(t: T) -> Option<Self> where
Self: TryFrom<T>,
Convert from a value of `T` into an equivalent instance of `Option<Self>`.
#### fn checked_into<T>(self) -> Option<T> where
Self: TryInto<T>,
Consume self to return `Some` equivalent value of `Option<T>`.
### impl<T> Downcast for T where
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any + 'static, Global>
Convert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn into_any_rc(self: Rc<T>) -> Rc<dyn Any + 'static>
Convert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`'s vtable from `&Trait`'s.
#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&mut Any`. This is needed since Rust cannot generate `&mut Any`'s vtable from `&mut Trait`'s.
### impl<T> DowncastSync for T where
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync + 'static>
Convert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.
### impl<T> DynClone for T where
T: Clone,
#### fn __clone_box(&self, _: Private) -> *mut ()
### impl<Q, K> Equivalent<K> for Q where
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<Q, K> Equivalent<K> for Q where
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, Outer> IsWrappedBy<Outer> for T where
Outer: AsRef<T> + AsMut<T> + From<T>,
T: From<Outer>,
#### fn from_ref(outer: &Outer) -> &T
Get a reference to the inner from the outer.
#### fn from_mut(outer: &mut Outer) -> &mut T
Get a mutable reference to the inner from the outer.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> SaturatedConversion for T
#### fn saturated_from<T>(t: T) -> Self where
Self: UniqueSaturatedFrom<T>,
Convert from a value of `T` into an equivalent instance of `Self`.
#### fn saturated_into<T>(self) -> T where
Self: UniqueSaturatedInto<T>,
Consume self to return an equivalent value of `T`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for T where
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<S, T> UncheckedInto<T> for S where
T: UncheckedFrom<S>,
#### fn unchecked_into(self) -> T
The counterpart to `unchecked_from`.
### impl<T, S> UniqueSaturatedInto<T> for S where
T: Bounded,
S: TryInto<T>,
#### fn unique_saturated_into(self) -> T
Consume self to return an equivalent value of `T`.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
T: 'static + Debug + Display + Send + Sync,
### impl<T> MaybeDebug for T where
T: Debug,
### impl<T> MaybeDebug for T where
T: Debug,
### impl<T> MaybeDisplay for T where
T: Display,
### impl<T> MaybeHash for T where
T: Hash,
### impl<T> MaybeHash for T where
T: Hash,
### impl<T> MaybeRefUnwindSafe for T where
T: RefUnwindSafe,
### impl<T> Member for T where
T: Send + Sync + Debug + Eq + PartialEq<T> + Clone + 'static,
{"KeyringIter":"<h3>Notable traits for <code><a class=\"struct\" href=\"struct.KeyringIter.html\" title=\"struct sp_beefy::KeyringIter\">KeyringIter</a></code></h3><pre><code><span class=\"where fmt-newline\">impl <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/iter/traits/iterator/trait.Iterator.html\" title=\"trait core::iter::traits::iterator::Iterator\">Iterator</a> for <a class=\"struct\" href=\"struct.KeyringIter.html\" title=\"struct sp_beefy::KeyringIter\">KeyringIter</a></span><span class=\"where fmt-newline\"> type <a href=\"https://doc.rust-lang.org/nightly/core/iter/traits/iterator/trait.Iterator.html#associatedtype.Item\" class=\"associatedtype\">Item</a> = <a class=\"enum\" href=\"enum.Keyring.html\" title=\"enum sp_beefy::Keyring\">Keyring</a>;</span>"}
Enum sp_beefy::VersionedFinalityProof
===
```
pub enum VersionedFinalityProof<N, S> {
V1(SignedCommitment<N, S>),
}
```
A SignedCommitment with a version number.
This variant will be appended to the block justifications for the block for which the signed commitment has been generated.
Note that this enum is subject to change in the future with introduction of additional cryptographic primitives to BEEFY.
Variants
---
### V1(SignedCommitment<N, S>)
Current active version
Trait Implementations
---
### impl<N: Clone, S: Clone> Clone for VersionedFinalityProof<N, S>
#### fn clone(&self) -> VersionedFinalityProof<N, S>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl<N: Debug, S: Debug> Debug for VersionedFinalityProof<N, S>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<N, S> Decode for VersionedFinalityProof<N, S> where
SignedCommitment<N, S>: Decode,
#### fn decode<__CodecInputEdqy: Input>(
__codec_input_edqy: &mut __CodecInputEdqy
) -> Result<Self, Error>
Attempt to deserialise the value from input.
#### fn skip<I>(input: &mut I) -> Result<(), Error> where
I: Input,
Attempt to skip the encoded value from input.
### impl<N, S> Encode for VersionedFinalityProof<N, S> where
SignedCommitment<N, S>: Encode,
#### fn encode_to<__CodecOutputEdqy: Output + ?Sized>(
&self,
__codec_dest_edqy: &mut __CodecOutputEdqy
)
Convert self to a slice and append it to the destination.
#### fn size_hint(&self) -> usize
If possible give a hint of expected size of the encoding.
#### fn using_encoded<R, F>(&self, f: F) -> R where
F: FnOnce(&[u8]) -> R,
Convert self to a slice and then invoke the given closure with it.
#### fn encoded_size(&self) -> usize
Calculates the encoded size.
### impl<N, S> From<SignedCommitment<N, S>> for VersionedFinalityProof<N, S>
#### fn from(commitment: SignedCommitment<N, S>) -> Self
Converts to this type from the input type.
### impl<N: PartialEq, S: PartialEq> PartialEq<VersionedFinalityProof<N, S>> for VersionedFinalityProof<N, S>
#### fn eq(&self, other: &VersionedFinalityProof<N, S>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<N, S> EncodeLike<VersionedFinalityProof<N, S>> for VersionedFinalityProof<N, S> where
SignedCommitment<N, S>: Encode,
### impl<N, S> StructuralPartialEq for VersionedFinalityProof<N, S>
Auto Trait Implementations
---
### impl<N, S> RefUnwindSafe for VersionedFinalityProof<N, S> where
N: RefUnwindSafe,
S: RefUnwindSafe,
### impl<N, S> Send for VersionedFinalityProof<N, S> where
N: Send,
S: Send,
### impl<N, S> Sync for VersionedFinalityProof<N, S> where
N: Sync,
S: Sync,
### impl<N, S> Unpin for VersionedFinalityProof<N, S> where
N: Unpin,
S: Unpin,
### impl<N, S> UnwindSafe for VersionedFinalityProof<N, S> where
N: UnwindSafe,
S: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> CheckedConversion for T
#### fn checked_from<T>(t: T) -> Option<Self> where
Self: TryFrom<T>,
Convert from a value of `T` into an equivalent instance of `Option<Self>`.
#### fn checked_into<T>(self) -> Option<T> where
Self: TryInto<T>,
Consume self to return `Some` equivalent value of `Option<T>`.
### impl<T> DecodeAll for T where
T: Decode,
#### fn decode_all(input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
### impl<T> DecodeLimit for T where
T: Decode,
#### fn decode_all_with_depth_limit(limit: u32, input: &mut &[u8]) -> Result<T, Error>
Decode `Self` and consume all of the given input data.
#### fn decode_with_depth_limit<I>(limit: u32, input: &mut I) -> Result<T, Error> where
I: Input,
Decode `Self` with the given maximum recursion depth and advance `input` by the number of bytes consumed.
### impl<T> Downcast for T where
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any + 'static, Global>
Convert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn into_any_rc(self: Rc<T>) -> Rc<dyn Any + 'static>
Convert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.
#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`'s vtable from `&Trait`'s.
#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&mut Any`. This is needed since Rust cannot generate `&mut Any`'s vtable from `&mut Trait`'s.
### impl<T> DowncastSync for T where
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync + 'static>
Convert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.
### impl<T> DynClone for T where
T: Clone,
#### fn __clone_box(&self, _: Private) -> *mut ()
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> FullLeaf for T where
T: Encode + Decode + Clone + PartialEq<T> + Debug,
#### fn using_encoded<R, F>(&self, f: F, _compact: bool) -> R where
F: FnOnce(&[u8]) -> R,
Encode the leaf either in its full or compact form.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, Outer> IsWrappedBy<Outer> for T where
Outer: AsRef<T> + AsMut<T> + From<T>,
T: From<Outer>,
#### fn from_ref(outer: &Outer) -> &T
Get a reference to the inner from the outer.
#### fn from_mut(outer: &mut Outer) -> &mut T
Get a mutable reference to the inner from the outer.
### impl<T> KeyedVec for T where
T: Codec,
#### fn to_keyed_vec(&self, prepend_key: &[u8]) -> Vec<u8, Global>
Return an encoding of `Self` prepended by given slice.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> SaturatedConversion for T
#### fn saturated_from<T>(t: T) -> Self where
Self: UniqueSaturatedFrom<T>,
Convert from a value of `T` into an equivalent instance of `Self`.
#### fn saturated_into<T>(self) -> T where
Self: UniqueSaturatedInto<T>,
Consume self to return an equivalent value of `T`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<S, T> UncheckedInto<T> for S where
T: UncheckedFrom<S>,
#### fn unchecked_into(self) -> T
The counterpart to `unchecked_from`.
### impl<T, S> UniqueSaturatedInto<T> for S where
T: Bounded,
S: TryInto<T>,
#### fn unique_saturated_into(self) -> T
Consume self to return an equivalent value of `T`.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<S> Codec for S where
S: Decode + Encode,
### impl<T> EncodeLike<&&T> for T where
T: Encode,
### impl<T> EncodeLike<&T> for T where
T: Encode,
### impl<T> EncodeLike<&mut T> for T where
T: Encode,
### impl<T> EncodeLike<Arc<T>> for T where
T: Encode,
### impl<T> EncodeLike<Box<T, Global>> for T where
T: Encode,
### impl<'a, T> EncodeLike<Cow<'a, T>> for T where
T: ToOwned + Encode,
### impl<T> EncodeLike<Rc<T>> for T where
T: Encode,
### impl<S> FullCodec for S where
S: Decode + FullEncode,
### impl<S> FullEncode for S where
S: Encode + EncodeLike<S>,
### impl<T> MaybeDebug for T where
T: Debug,
### impl<T> MaybeDebug for T where
T: Debug,
### impl<T> MaybeRefUnwindSafe for T where
T: RefUnwindSafe,
{"Vec<u8, Global>":"<h3>Notable traits for <code><a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A></code></h3><pre><code><span class=\"where fmt-newline\">impl<A> <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/io/trait.Write.html\" title=\"trait std::io::Write\">Write</a> for <a class=\"struct\" href=\"https://doc.rust-lang.org/nightly/alloc/vec/struct.Vec.html\" title=\"struct alloc::vec::Vec\">Vec</a><<a class=\"primitive\" href=\"https://doc.rust-lang.org/nightly/std/primitive.u8.html\">u8</a>, A><span class=\"where fmt-newline\">where\n A: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/alloc/trait.Allocator.html\" title=\"trait core::alloc::Allocator\">Allocator</a>,</span></span>"}
Constant sp_beefy::BEEFY_ENGINE_ID
===
```
pub const BEEFY_ENGINE_ID: ConsensusEngineId;
```
The `ConsensusEngineId` of BEEFY.
Constant sp_beefy::GENESIS_AUTHORITY_SET_ID
===
```
pub const GENESIS_AUTHORITY_SET_ID: u64 = 0;
```
Authority set id starts with zero at BEEFY pallet genesis.
Constant sp_beefy::KEY_TYPE
===
```
pub const KEY_TYPE: KeyTypeId;
```
Key type for BEEFY module.
Trait sp_beefy::BeefyApi
===
```
pub trait BeefyApi<Block: BlockT>: Core<Block> {
// Provided methods
fn beefy_genesis(
&self,
__runtime_api_at_param__: &BlockId<Block>
) -> Result<Option<NumberFor<Block>>, ApiError> { ... }
fn beefy_genesis_with_context(
&self,
__runtime_api_at_param__: &BlockId<Block>,
context: ExecutionContext
) -> Result<Option<NumberFor<Block>>, ApiError> { ... }
fn validator_set(
&self,
__runtime_api_at_param__: &BlockId<Block>
) -> Result<Option<ValidatorSet<AuthorityId>>, ApiError> { ... }
fn validator_set_with_context(
&self,
__runtime_api_at_param__: &BlockId<Block>,
context: ExecutionContext
) -> Result<Option<ValidatorSet<AuthorityId>>, ApiError> { ... }
fn submit_report_equivocation_unsigned_extrinsic(
&self,
__runtime_api_at_param__: &BlockId<Block>,
equivocation_proof: EquivocationProof<NumberFor<Block>, AuthorityId, Signature>,
key_owner_proof: OpaqueKeyOwnershipProof
) -> Result<Option<()>, ApiError> { ... }
fn submit_report_equivocation_unsigned_extrinsic_with_context(
&self,
__runtime_api_at_param__: &BlockId<Block>,
context: ExecutionContext,
equivocation_proof: EquivocationProof<NumberFor<Block>, AuthorityId, Signature>,
key_owner_proof: OpaqueKeyOwnershipProof
) -> Result<Option<()>, ApiError> { ... }
fn generate_key_ownership_proof(
&self,
__runtime_api_at_param__: &BlockId<Block>,
set_id: ValidatorSetId,
authority_id: AuthorityId
) -> Result<Option<OpaqueKeyOwnershipProof>, ApiError> { ... }
fn generate_key_ownership_proof_with_context(
&self,
__runtime_api_at_param__: &BlockId<Block>,
context: ExecutionContext,
set_id: ValidatorSetId,
authority_id: AuthorityId
) -> Result<Option<OpaqueKeyOwnershipProof>, ApiError> { ... }
}
```
API necessary for BEEFY voters.
Provided Methods
---
#### fn beefy_genesis(
&self,
__runtime_api_at_param__: &BlockId<Block>
) -> Result<Option<NumberFor<Block>>, ApiError>
Return the block number where BEEFY consensus is enabled/started.
#### fn beefy_genesis_with_context(
&self,
__runtime_api_at_param__: &BlockId<Block>,
context: ExecutionContext
) -> Result<Option<NumberFor<Block>>, ApiError>
Return the block number where BEEFY consensus is enabled/started.
#### fn validator_set(
&self,
__runtime_api_at_param__: &BlockId<Block>
) -> Result<Option<ValidatorSet<AuthorityId>>, ApiError>
Return the current active BEEFY validator set.
#### fn validator_set_with_context(
&self,
__runtime_api_at_param__: &BlockId<Block>,
context: ExecutionContext
) -> Result<Option<ValidatorSet<AuthorityId>>, ApiError>
Return the current active BEEFY validator set.
#### fn submit_report_equivocation_unsigned_extrinsic(
&self,
__runtime_api_at_param__: &BlockId<Block>,
equivocation_proof: EquivocationProof<NumberFor<Block>, AuthorityId, Signature>,
key_owner_proof: OpaqueKeyOwnershipProof
) -> Result<Option<()>, ApiError>
Submits an unsigned extrinsic to report an equivocation. The caller must provide the equivocation proof and a key ownership proof (should be obtained using `generate_key_ownership_proof`). The extrinsic will be unsigned and should only be accepted for local authorship (not to be broadcast to the network). This method returns `None` when creation of the extrinsic fails, e.g. if equivocation reporting is disabled for the given runtime (i.e. this method is hardcoded to return `None`). Only useful in an offchain context.
#### fn submit_report_equivocation_unsigned_extrinsic_with_context(
&self,
__runtime_api_at_param__: &BlockId<Block>,
context: ExecutionContext,
equivocation_proof: EquivocationProof<NumberFor<Block>, AuthorityId, Signature>,
key_owner_proof: OpaqueKeyOwnershipProof
) -> Result<Option<()>, ApiError>
Submits an unsigned extrinsic to report an equivocation. The caller must provide the equivocation proof and a key ownership proof (should be obtained using `generate_key_ownership_proof`). The extrinsic will be unsigned and should only be accepted for local authorship (not to be broadcast to the network). This method returns `None` when creation of the extrinsic fails, e.g. if equivocation reporting is disabled for the given runtime (i.e. this method is hardcoded to return `None`). Only useful in an offchain context.
#### fn generate_key_ownership_proof(
&self,
__runtime_api_at_param__: &BlockId<Block>,
set_id: ValidatorSetId,
authority_id: AuthorityId
) -> Result<Option<OpaqueKeyOwnershipProof>, ApiError>
Generates a proof of key ownership for the given authority in the given set. An example usage of this module is coupled with the session historical module to prove that a given authority key is tied to a given staking identity during a specific session. Proofs of key ownership are necessary for submitting equivocation reports.
NOTE: even though the API takes a `set_id` as parameter, the current implementation ignores this parameter and instead relies on this method being called at the correct block height, i.e. any point at which the given set id is live on-chain. Future implementations will instead use indexed data through an offchain worker, not requiring older states to be available.
#### fn generate_key_ownership_proof_with_context(
&self,
__runtime_api_at_param__: &BlockId<Block>,
context: ExecutionContext,
set_id: ValidatorSetId,
authority_id: AuthorityId
) -> Result<Option<OpaqueKeyOwnershipProof>, ApiError>
Generates a proof of key ownership for the given authority in the given set. An example usage of this module is coupled with the session historical module to prove that a given authority key is tied to a given staking identity during a specific session. Proofs of key ownership are necessary for submitting equivocation reports.
NOTE: even though the API takes a `set_id` as parameter, the current implementation ignores this parameter and instead relies on this method being called at the correct block height, i.e. any point at which the given set id is live on-chain. Future implementations will instead use indexed data through an offchain worker, not requiring older states to be available.
Trait Implementations
---
### impl<Block: BlockT> RuntimeApiInfo for dyn BeefyApi<Block>
#### const ID: [u8; 8] = _
The identifier of the runtime api.
#### const VERSION: u32 = 1u32
The version of the runtime api.
Implementors
---
Trait sp_beefy::BeefyAuthorityId
===
```
pub trait BeefyAuthorityId<MsgHash: Hash>: RuntimeAppPublic {
// Required method
fn verify(
&self,
signature: &<Self as RuntimeAppPublic>::Signature,
msg: &[u8]
) -> bool;
}
```
Trait representing BEEFY authority id, including custom signature verification.
Accepts a custom hashing fn for the message and a custom converter fn for the signer.
Required Methods
---
#### fn verify(
&self,
signature: &<Self as RuntimeAppPublic>::Signature,
msg: &[u8]
) -> bool
Verify a signature.
Return `true` if signature over `msg` is valid for this id.
Implementors
---
### impl<MsgHash: Hash> BeefyAuthorityId<MsgHash> for AuthorityId where
<MsgHash as Hash>::Output: Into<[u8; 32]>,
Trait sp_beefy::OnNewValidatorSet
===
```
pub trait OnNewValidatorSet<AuthorityId> {
// Required method
fn on_new_validator_set(
validator_set: &ValidatorSet<AuthorityId>,
next_validator_set: &ValidatorSet<AuthorityId>
);
}
```
New BEEFY validator set notification hook.
Required Methods
---
#### fn on_new_validator_set(
validator_set: &ValidatorSet<AuthorityId>,
next_validator_set: &ValidatorSet<AuthorityId>
)
Function called by the pallet when BEEFY validator set changes.
Implementations on Foreign Types
---
### impl<AuthorityId> OnNewValidatorSet<AuthorityId> for ()
No-op implementation of OnNewValidatorSet.
#### fn on_new_validator_set(
_: &ValidatorSet<AuthorityId>,
_: &ValidatorSet<AuthorityId>
)
Implementors
---
Trait sp_beefy::PayloadProvider
===
```
pub trait PayloadProvider<B: Block> {
// Required method
fn payload(&self, header: &B::Header) -> Option<Payload>;
}
```
Trait for custom BEEFY payload providers.
Required Methods
---
#### fn payload(&self, header: &B::Header) -> Option<Payload>
Provide BEEFY payload if available for `header`.
Implementors
---
### impl<B, R> PayloadProvider<B> for MmrRootProvider<B, R> where
B: Block,
R: ProvideRuntimeApi<B>,
R::Api: MmrApi<B, MmrRootHash, NumberFor<B>>,
Function sp_beefy::check_commitment_signature
===
```
pub fn check_commitment_signature<Number, Id, MsgHash>(
commitment: &Commitment<Number>,
authority_id: &Id,
signature: &<Id as RuntimeAppPublic>::Signature
) -> bool where
Id: BeefyAuthorityId<MsgHash>,
Number: Clone + Encode + PartialEq,
MsgHash: Hash,
```
Check a commitment signature by encoding the commitment and verifying the provided signature using the expected authority id.
Function sp_beefy::check_equivocation_proof
===
```
pub fn check_equivocation_proof<Number, Id, MsgHash>(
report: &EquivocationProof<Number, Id, <Id as RuntimeAppPublic>::Signature>
) -> bool where
Id: BeefyAuthorityId<MsgHash> + PartialEq,
Number: Clone + Encode + PartialEq,
MsgHash: Hash,
```
Verifies the equivocation proof by making sure that both votes target different blocks and that their signatures are valid.
Function sp_beefy::generate_equivocation_proof
===
```
pub fn generate_equivocation_proof(
vote1: (u64, Payload, ValidatorSetId, &Keyring),
vote2: (u64, Payload, ValidatorSetId, &Keyring)
) -> EquivocationProof<u64, Public, Signature>
```
Create a new `EquivocationProof` based on given arguments.
Type Definition sp_beefy::AuthorityIndex
===
```
pub type AuthorityIndex = u32;
```
The index of an authority.
Type Definition sp_beefy::MmrRootHash
===
```
pub type MmrRootHash = H256;
```
The type used to represent an MMR root hash.
Type Definition sp_beefy::ValidatorSetId
===
```
pub type ValidatorSetId = u64;
```
A typedef for validator set id. |
gecko | cran | R | Package ‘gecko’
August 31, 2023
Type Package
Title Geographical Ecology and Conservation Knowledge Online
Version 0.1.2
Depends R (>= 4.1.0)
Imports terra, sp, grDevices, graphics, stats, utils, geosphere,
methods
BugReports https://github.com/VascoBranco/gecko/issues
Author <NAME> [cre, aut] (<https://orcid.org/0000-0001-7797-3183>),
<NAME> [aut] (<https://orcid.org/0000-0001-8119-9960>),
<NAME> [ctb] (<https://orcid.org/0000-0003-2439-1168>)
Maintainer <NAME> <<EMAIL>>
Description Includes a collection of geographical analysis functions aimed primarily at ecology
and conservation science studies, allowing processing of both point and raster data. Future
versions will integrate species threat datasets developed by the authors.
License GPL-2
Encoding UTF-8
RoxygenNote 7.2.3
Repository CRAN
NeedsCompilation no
Date/Publication 2023-08-31 11:10:02 UTC
R topics documented:
clean
create.east
create.lat
create.long
create.north
distance
eoo
gecko.examples
map.draw
move
outliers
reduce
thin
clean Uniformize raster layers.
Description
Crop raster layers to minimum size possible and uniformize NA values across layers.
Usage
clean(layers)
Arguments
layers Raster* object as defined by package raster.
Details
Excludes all marginal rows and columns with only NA values and changes values to NA if they are
NA in any of the layers.
Value
A Raster* object, same class as layers.
Examples
data = gecko.examples("gecko.layers")
terra::plot(clean(data))
create.east Create eastness layer.
Description
Create a layer depicting eastness based on an elevation layer.
Usage
create.east(dem)
Arguments
dem RasterLayer object of elevation (a digital elevation model - DEM) as defined by
package raster.
Details
Using elevation, aspect can be calculated. Yet, it is a circular variable (0 = 360) and has to be
converted to northness and eastness to be useful for modelling.
Value
A RasterLayer object.
Examples
data = gecko.examples("gecko.layers")
terra::plot(create.east(data[[3]]))
create.lat Create latitude layer.
Description
Create a layer depicting latitude based on any other.
Usage
create.lat(layers)
Arguments
layers Raster* object as defined by package raster.
Details
Using latitude (and longitude) in models may help limiting the extrapolation of the predicted area
much beyond known areas.
Value
A RasterLayer object.
Examples
data = gecko.examples("gecko.layers")
terra::plot(create.lat(data[[1]]))
create.long Create longitude layer.
Description
Create a layer depicting longitude based on any other.
Usage
create.long(layers)
Arguments
layers Raster* object as defined by package raster.
Details
Using longitude (and latitude) in models may help limiting the extrapolation of the predicted area
much beyond known areas.
Value
A RasterLayer object.
Examples
data = gecko.examples("gecko.layers")
terra::plot(create.long(data))
create.north Create northness layer.
Description
Create a layer depicting northness based on an elevation layer.
Usage
create.north(dem)
Arguments
dem RasterLayer object of elevation (a digital elevation model - DEM) as defined by
package raster.
Details
Using elevation, aspect can be calculated. Yet, it is a circular variable (0 = 360) and has to be
converted to northness and eastness to be useful for modelling.
Value
A RasterLayer object.
Examples
data = gecko.examples("gecko.layers")
terra::plot(create.north(data[[3]]))
distance Create distance layer.
Description
Creates a layer depicting distances to records using the minimum or average distance, the distance
to the minimum convex polygon, or a distance taking into account a cost surface.
Usage
distance(longlat, layers, type = "minimum")
Arguments
longlat Matrix of longitude and latitude or eastness and northness (two columns in this
order) of species occurrence records.
layers Raster* object as defined by package raster to serve as model to create distance
layer.
type text string indicating whether the output should be the "minimum", "average" or
"mcp" distance to all records. "mcp" means the distance to the minimum convex
polygon encompassing all records.
Details
Using distance to records in models may help limiting the extrapolation of the predicted area much
beyond known areas.
Value
A RasterLayer object.
Examples
userpar <- par(no.readonly = TRUE)
layers = gecko.examples("gecko.layers")
alt = layers[[3]]
records = gecko.examples("gecko.records")
par(mfrow=c(3,2))
terra::plot(alt)
points(records)
terra::plot(distance(records, alt))
terra::plot(distance(records, alt, type = "average"))
par(userpar)
eoo Extent of Occurrence (EOO).
Description
Calculates the Extent of Occurrence of a species based on either records or predicted distribution.
Usage
eoo(spData)
Arguments
spData One of three options: 1) matrix of longitude and latitude (two columns)
of each occurrence record; 2) matrix of easting and northing (two columns, e.g.
UTM) of each occurrence record in meters; 3) RasterLayer object of predicted
distribution (either 0/1 or probabilistic values).
Details
EOO is calculated as the minimum convex polygon covering all known or predicted sites for the
species.
Value
A single value in km2 or a vector with lower confidence limit, consensus and upper confidence limit
(probabilities 0.975, 0.5 and 0.025 respectively).
gecko.examples Example data packaged with *gecko*
Description
Load data included in the package. This includes *gecko.records*, a matrix of longitude and
latitude (two columns) occurrence records for Hogna maderiana (Walckenaer, 1837); *gecko.range*, a
SpatRaster object, as defined by package terra, of the geographic range of Hogna maderiana
(Walckenaer, 1837); *gecko.layers*, a SpatRaster object with layers representing the average annual
temperature, total annual precipitation, altitude and landcover for Madeira Island (Fick & Hijmans
2017, Tuanmu & Jetz 2014); and *worldborders*, a vector of world country borders.
Usage
gecko.examples(data = NULL)
Arguments
data Name of data in quotes, e.g. "gecko.records". If NULL, the example files
will be listed.
Source
This function is inspired by ‘palmerpenguins::path_to_file()’ which in turn is based on ‘readxl::readxl_example()’.
Examples
gecko.examples()
gecko.examples("gecko.range")
map.draw Map creation.
Description
Creates maps ready to print in pdf or other formats.
Usage
map.draw(
longlat = NULL,
layer,
spName,
borders = FALSE,
scale = TRUE,
legend = FALSE,
sites = TRUE,
mcp = FALSE,
print = FALSE
)
Arguments
longlat Matrix of longitude and latitude or eastness and northness (two columns in this
order) of each occurrence record.
layer RasterLayer object representing the presence/absence map for the species.
spName String of species name.
borders If TRUE country borders are drawn.
scale If TRUE a distance scale in km is drawn.
legend If TRUE the legend for the map is drawn.
sites If TRUE the record locations are drawn.
mcp If TRUE the minimum convex polygon representing the Extent of Occurrence
is drawn.
print If TRUE a pdf is saved instead of the output to the console.
move Move records to closest non-NA cell.
Description
Identifies and moves presence records to cells with environmental values.
Usage
move(longlat, layers, buffer = 0)
Arguments
longlat Matrix of longitude and latitude or eastness and northness (two columns in this
order) of species occurrence records.
layers Raster* object as defined by package raster.
buffer Maximum distance in map units that a record will move. If 0 all NA records will
be changed.
Details
Often records are in coastal or other areas for which no environmental data is available. This
function moves such records to the closest cells with data so that no information is lost during
modelling.
Value
A matrix with new coordinate values.
Examples
rast <- terra::rast(matrix(c(rep(NA,100), rep(1,100), rep(NA,100)), ncol = 15))
pts <- cbind(runif(100, 0, 0.55), runif(100, 0, 1))
terra::plot(rast)
points(pts)
pts <- move(pts, rast)
terra::plot(rast)
points(pts)
outliers Visual detection of outliers.
Description
Draws plots of sites in geographical (longlat) and environmental (2-axis PCA) space.
Usage
outliers(longlat, layers)
Arguments
longlat Matrix of longitude and latitude or eastness and northness (two columns in this
order) of species occurrence records.
layers Raster* object as defined by package raster. It can be any set of environmental
layers thought to allow the identification of environmental outliers.
Details
Erroneous data sources or errors in transcriptions may introduce outliers that can be easily detected
by looking at simple graphs of geographical or environmental space.
Value
A data.frame with coordinate values and distance to centroid in pca is returned. Two plots are
drawn for visual inspection. The environmental plot includes row numbers for easy identification
of possible outliers.
Examples
records = gecko.examples("gecko.records")
layers = gecko.examples("gecko.layers")
outliers(records, layers[[1:3]])
reduce Reduce dimensionality of raster layers.
Description
Reduce the number of layers by either performing a PCA on them or by eliminating highly
correlated ones.
Usage
reduce(layers, method = "pca", n = NULL, thres = NULL)
Arguments
layers Raster* object as defined by package raster.
method Either Principal Components Analysis ("pca", default) or Pearson’s correlation
("cor").
n Number of layers to reduce to.
thres Value for pairwise Pearson’s correlation above which one of the layers (randomly
selected) is eliminated.
Details
Using a large number of explanatory variables in models with few records may lead to overfitting.
This function helps to avoid that as much as possible. If both n and thres are given, n has priority. If
method is not recognized and the layers come from the read function, only landcover is reduced by
keeping only the dominant landuse of each cell.
Value
A RasterStack object.
thin Spatial thinning of occurrence records.
Description
Thinning of records with minimum distances either absolute or relative to the species range.
Usage
thin(longlat, distance = 0.01, relative = TRUE, runs = 100)
Arguments
longlat Matrix of longitude and latitude or eastness and northness (two columns in this
order) of species occurrence records.
distance Distance either in relative terms (proportion of maximum distance between any
two records) or in raster units.
relative If TRUE, represents the proportion of maximum distance between any two
records. If FALSE, is in raster units.
runs Number of runs
Details
Clumped distribution records due to ease of accessibility of sites, emphasis of sampling on certain
areas in the past, etc. may bias species distribution models. The algorithm used here eliminates
records closer than a given distance to any other record. The choice of records to eliminate is
random, so a number of runs are made and the one keeping more of the original records is chosen.
Value
A matrix of species occurrence records separated by at least the given distance.
Examples
userpar <- par(no.readonly = TRUE)
records <- matrix(sample(100), ncol = 2)
par(mfrow=c(1,2))
graphics::plot(records)
records <- thin(records, 0.1)
graphics::plot(records)
par(userpar) |
github.com/pkg/sftp | go | Go | README
[¶](#section-readme)
---
### sftp
The `sftp` package provides support for file system operations on remote ssh servers using the SFTP subsystem. It also implements an SFTP server for serving files from the filesystem.
![CI Status](https://github.com/pkg/sftp/workflows/CI/badge.svg?branch=master&event=push) [![Go Reference](https://pkg.go.dev/badge/github.com/pkg/sftp.svg)](https://pkg.go.dev/github.com/pkg/sftp)
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package sftp implements the SSH File Transfer Protocol as described in
<https://filezilla-project.org/specs/draft-ietf-secsh-filexfer-02.txt>
Example [¶](#example-package)
```
package main
import (
"log"
"github.com/pkg/sftp"
"golang.org/x/crypto/ssh"
)
func main() {
var conn *ssh.Client
// open an SFTP session over an existing ssh connection.
client, err := sftp.NewClient(conn)
if err != nil {
log.Fatal(err)
}
defer client.Close()
// walk a directory
w := client.Walk("/home/user")
for w.Step() {
if w.Err() != nil {
continue
}
log.Println(w.Path())
}
// leave your mark
f, err := client.Create("hello.txt")
if err != nil {
log.Fatal(err)
}
if _, err := f.Write([]byte("Hello world!")); err != nil {
log.Fatal(err)
}
f.Close()
// check it's there
fi, err := client.Lstat("hello.txt")
if err != nil {
log.Fatal(err)
}
log.Println(fi)
}
```
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [func Join(elem ...string) string](#Join)
* [func Match(pattern, name string) (matched bool, err error)](#Match)
* [func SetSFTPExtensions(extensions ...string) error](#SetSFTPExtensions)
* [func Split(p string) (dir, file string)](#Split)
* [type Client](#Client)
* + [func NewClient(conn *ssh.Client, opts ...ClientOption) (*Client, error)](#NewClient)
+ [func NewClientPipe(rd io.Reader, wr io.WriteCloser, opts ...ClientOption) (*Client, error)](#NewClientPipe)
* + [func (c *Client) Chmod(path string, mode os.FileMode) error](#Client.Chmod)
+ [func (c *Client) Chown(path string, uid, gid int) error](#Client.Chown)
+ [func (c *Client) Chtimes(path string, atime time.Time, mtime time.Time) error](#Client.Chtimes)
+ [func (c *Client) Close() error](#Client.Close)
+ [func (c *Client) Create(path string) (*File, error)](#Client.Create)
+ [func (c *Client) Getwd() (string, error)](#Client.Getwd)
+ [func (c *Client) Glob(pattern string) (matches []string, err error)](#Client.Glob)
+ [func (c *Client) HasExtension(name string) (string, bool)](#Client.HasExtension)
+ [func (c *Client) Join(elem ...string) string](#Client.Join)
+ [func (c *Client) Link(oldname, newname string) error](#Client.Link)
+ [func (c *Client) Lstat(p string) (os.FileInfo, error)](#Client.Lstat)
+ [func (c *Client) Mkdir(path string) error](#Client.Mkdir)
+ [func (c *Client) MkdirAll(path string) error](#Client.MkdirAll)
+ [func (c *Client) Open(path string) (*File, error)](#Client.Open)
+ [func (c *Client) OpenFile(path string, f int) (*File, error)](#Client.OpenFile)
+ [func (c *Client) PosixRename(oldname, newname string) error](#Client.PosixRename)
+ [func (c *Client) ReadDir(p string) ([]os.FileInfo, error)](#Client.ReadDir)
+ [func (c *Client) ReadLink(p string) (string, error)](#Client.ReadLink)
+ [func (c *Client) RealPath(path string) (string, error)](#Client.RealPath)
+ [func (c *Client) Remove(path string) error](#Client.Remove)
+ [func (c *Client) RemoveAll(path string) error](#Client.RemoveAll)
+ [func (c *Client) RemoveDirectory(path string) error](#Client.RemoveDirectory)
+ [func (c *Client) Rename(oldname, newname string) error](#Client.Rename)
+ [func (c *Client) Stat(p string) (os.FileInfo, error)](#Client.Stat)
+ [func (c *Client) StatVFS(path string) (*StatVFS, error)](#Client.StatVFS)
+ [func (c *Client) Symlink(oldname, newname string) error](#Client.Symlink)
+ [func (c *Client) Truncate(path string, size int64) error](#Client.Truncate)
+ [func (c *Client) Wait() error](#Client.Wait)
+ [func (c *Client) Walk(root string) *fs.Walker](#Client.Walk)
* [type ClientOption](#ClientOption)
* + [func MaxConcurrentRequestsPerFile(n int) ClientOption](#MaxConcurrentRequestsPerFile)
+ [func MaxPacket(size int) ClientOption](#MaxPacket)
+ [func MaxPacketChecked(size int) ClientOption](#MaxPacketChecked)
+ [func MaxPacketUnchecked(size int) ClientOption](#MaxPacketUnchecked)
+ [func UseConcurrentReads(value bool) ClientOption](#UseConcurrentReads)
+ [func UseConcurrentWrites(value bool) ClientOption](#UseConcurrentWrites)
+ [func UseFstat(value bool) ClientOption](#UseFstat)
* [type File](#File)
* + [func (f *File) Chmod(mode os.FileMode) error](#File.Chmod)
+ [func (f *File) Chown(uid, gid int) error](#File.Chown)
+ [func (f *File) Close() error](#File.Close)
+ [func (f *File) Name() string](#File.Name)
+ [func (f *File) Read(b []byte) (int, error)](#File.Read)
+ [func (f *File) ReadAt(b []byte, off int64) (int, error)](#File.ReadAt)
+ [func (f *File) ReadFrom(r io.Reader) (int64, error)](#File.ReadFrom)
+ [func (f *File) ReadFromWithConcurrency(r io.Reader, concurrency int) (read int64, err error)](#File.ReadFromWithConcurrency)
+ [func (f *File) Seek(offset int64, whence int) (int64, error)](#File.Seek)
+ [func (f *File) Stat() (os.FileInfo, error)](#File.Stat)
+ [func (f *File) Sync() error](#File.Sync)
+ [func (f *File) Truncate(size int64) error](#File.Truncate)
+ [func (f *File) Write(b []byte) (int, error)](#File.Write)
+ [func (f *File) WriteAt(b []byte, off int64) (written int, err error)](#File.WriteAt)
+ [func (f *File) WriteTo(w io.Writer) (written int64, err error)](#File.WriteTo)
* [type FileAttrFlags](#FileAttrFlags)
* [type FileCmder](#FileCmder)
* [type FileInfoExtendedData](#FileInfoExtendedData)
* [type FileInfoUidGid](#FileInfoUidGid)
* [type FileLister](#FileLister)
* [type FileOpenFlags](#FileOpenFlags)
* [type FileReader](#FileReader)
* [type FileStat](#FileStat)
* + [func (a FileStat) FileMode() os.FileMode](#FileStat.FileMode)
* [type FileWriter](#FileWriter)
* [type Handlers](#Handlers)
* + [func InMemHandler() Handlers](#InMemHandler)
* [type ListerAt](#ListerAt)
* [type LstatFileLister](#LstatFileLister)
* [type NameLookupFileLister](#NameLookupFileLister)
* [type OpenFileWriter](#OpenFileWriter)
* [type PosixRenameFileCmder](#PosixRenameFileCmder)
* [type ReadlinkFileLister](#ReadlinkFileLister)
* [type RealPathFileLister](#RealPathFileLister)
* [type Request](#Request)
* + [func NewRequest(method, path string) *Request](#NewRequest)
* + [func (r *Request) AttrFlags() FileAttrFlags](#Request.AttrFlags)
+ [func (r *Request) Attributes() *FileStat](#Request.Attributes)
+ [func (r *Request) Context() context.Context](#Request.Context)
+ [func (r *Request) Pflags() FileOpenFlags](#Request.Pflags)
+ [func (r *Request) WithContext(ctx context.Context) *Request](#Request.WithContext)
* [type RequestServer](#RequestServer)
* + [func NewRequestServer(rwc io.ReadWriteCloser, h Handlers, options ...RequestServerOption) *RequestServer](#NewRequestServer)
* + [func (rs *RequestServer) Close() error](#RequestServer.Close)
+ [func (rs *RequestServer) Serve() error](#RequestServer.Serve)
* [type RequestServerOption](#RequestServerOption)
* + [func WithRSAllocator() RequestServerOption](#WithRSAllocator)
+ [func WithStartDirectory(startDirectory string) RequestServerOption](#WithStartDirectory)
* [type Server](#Server)
* + [func NewServer(rwc io.ReadWriteCloser, options ...ServerOption) (*Server, error)](#NewServer)
* + [func (svr *Server) Serve() error](#Server.Serve)
* [type ServerOption](#ServerOption)
* + [func ReadOnly() ServerOption](#ReadOnly)
+ [func WithAllocator() ServerOption](#WithAllocator)
+ [func WithDebug(w io.Writer) ServerOption](#WithDebug)
+ [func WithServerWorkingDirectory(workDir string) ServerOption](#WithServerWorkingDirectory)
* [type StatExtended](#StatExtended)
* [type StatVFS](#StatVFS)
* + [func (p *StatVFS) FreeSpace() uint64](#StatVFS.FreeSpace)
+ [func (p *StatVFS) MarshalBinary() ([]byte, error)](#StatVFS.MarshalBinary)
+ [func (p *StatVFS) TotalSpace() uint64](#StatVFS.TotalSpace)
* [type StatVFSFileCmder](#StatVFSFileCmder)
* [type StatusError](#StatusError)
* + [func (s *StatusError) Error() string](#StatusError.Error)
+ [func (s *StatusError) FxCode() fxerr](#StatusError.FxCode)
* [type TransferError](#TransferError)
* [type WriterAtReaderAt](#WriterAtReaderAt)
#### Examples [¶](#pkg-examples)
* [Package](#example-package)
* [Client.Mkdir (Parents)](#example-Client.Mkdir-Parents)
* [File.ReadFrom (Bufio)](#example-File.ReadFrom-Bufio)
* [NewClientPipe](#example-NewClientPipe)
### Constants [¶](#pkg-constants)
```
const (
ErrSSHFxOk = fxerr(sshFxOk)
ErrSSHFxEOF = fxerr(sshFxEOF)
ErrSSHFxNoSuchFile = fxerr(sshFxNoSuchFile)
ErrSSHFxPermissionDenied = fxerr(sshFxPermissionDenied)
ErrSSHFxFailure = fxerr(sshFxFailure)
ErrSSHFxBadMessage = fxerr(sshFxBadMessage)
ErrSSHFxNoConnection = fxerr(sshFxNoConnection)
ErrSSHFxConnectionLost = fxerr(sshFxConnectionLost)
ErrSSHFxOpUnsupported = fxerr(sshFxOPUnsupported)
)
```
Error types that match the SFTP's SSH_FXP_STATUS codes. Gives you more direct control of the errors being sent vs. letting the library work them out from the standard os/io errors.
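These are most useful on the server side of this package: a request-server handler can surface a specific status code by returning one of these values directly. A minimal sketch, assuming a custom `FileReader` backend (the type name and the `/private/` restriction are invented for illustration):
```
package main

import (
	"io"
	"strings"

	"github.com/pkg/sftp"
)

// denyPrivate is a hypothetical read backend that rejects part of the tree.
type denyPrivate struct{}

// Fileread implements the FileReader interface. Returning
// ErrSSHFxPermissionDenied sends SSH_FX_PERMISSION_DENIED to the client
// verbatim, instead of a code derived from a wrapped os/io error.
func (denyPrivate) Fileread(r *sftp.Request) (io.ReaderAt, error) {
	if strings.HasPrefix(r.Filepath, "/private/") {
		return nil, sftp.ErrSSHFxPermissionDenied
	}
	return strings.NewReader("hello\n"), nil
}

func main() {}
```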
```
const (
ErrSshFxOk = [ErrSSHFxOk](#ErrSSHFxOk)
ErrSshFxEof = [ErrSSHFxEOF](#ErrSSHFxEOF)
ErrSshFxNoSuchFile = [ErrSSHFxNoSuchFile](#ErrSSHFxNoSuchFile)
ErrSshFxPermissionDenied = [ErrSSHFxPermissionDenied](#ErrSSHFxPermissionDenied)
ErrSshFxFailure = [ErrSSHFxFailure](#ErrSSHFxFailure)
ErrSshFxBadMessage = [ErrSSHFxBadMessage](#ErrSSHFxBadMessage)
ErrSshFxNoConnection = [ErrSSHFxNoConnection](#ErrSSHFxNoConnection)
ErrSshFxConnectionLost = [ErrSSHFxConnectionLost](#ErrSSHFxConnectionLost)
ErrSshFxOpUnsupported = [ErrSSHFxOpUnsupported](#ErrSSHFxOpUnsupported)
)
```
Deprecated error types, these are aliases for the new ones, please use the new ones directly
```
const EBADF = [syscall](/syscall).[EBADF](/syscall#EBADF)
```
```
const S_IFMT = [syscall](/syscall).[S_IFMT](/syscall#S_IFMT)
```
```
const (
// SftpServerWorkerCount defines the number of workers for the SFTP server
SftpServerWorkerCount = 8
)
```
### Variables [¶](#pkg-variables)
```
var (
// ErrInternalInconsistency indicates the packets sent and the data queued to be
// written to the file don't match up. It is an unusual error and usually is
// caused by bad behavior server side or connection issues. The error is
// limited in scope to the call where it happened, the client object is still
// OK to use as long as the connection is still open.
ErrInternalInconsistency = [errors](/errors).[New](/errors#New)("internal inconsistency")
// InternalInconsistency alias for ErrInternalInconsistency.
//
// Deprecated: please use ErrInternalInconsistency
InternalInconsistency = [ErrInternalInconsistency](#ErrInternalInconsistency)
)
```
```
var ErrBadPattern = [path](/path).[ErrBadPattern](/path#ErrBadPattern)
```
ErrBadPattern indicates a globbing pattern was malformed.
```
var MaxFilelist [int64](/builtin#int64) = 100
```
MaxFilelist is the max number of files to return in a readdir batch.
### Functions [¶](#pkg-functions)
####
func [Join](https://github.com/pkg/sftp/blob/v1.13.6/match.go#L129) [¶](#Join)
```
func Join(elem ...[string](/builtin#string)) [string](/builtin#string)
```
Join joins any number of path elements into a single path, separating them with slashes.
This is an alias for path.Join from the standard library,
offered so that callers need not import the path package.
For details, see <https://golang.org/pkg/path/#Join>.
####
func [Match](https://github.com/pkg/sftp/blob/v1.13.6/match.go#L16) [¶](#Match)
```
func Match(pattern, name [string](/builtin#string)) (matched [bool](/builtin#bool), err [error](/builtin#error))
```
Match reports whether name matches the shell pattern.
This is an alias for path.Match from the standard library,
offered so that callers need not import the path package.
For details, see <https://golang.org/pkg/path/#Match>.
####
func [SetSFTPExtensions](https://github.com/pkg/sftp/blob/v1.13.6/sftp.go#L247) [¶](#SetSFTPExtensions)
added in v1.11.0
```
func SetSFTPExtensions(extensions ...[string](/builtin#string)) [error](/builtin#error)
```
SetSFTPExtensions allows to customize the supported server extensions.
See the variable supportedSFTPExtensions for supported extensions.
This method accepts a slice of sshExtensionPair names, for example 'posix-rename@openssh.com'.
If an invalid extension is given an error will be returned and nothing will be changed
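A short sketch of narrowing the advertised set to a single extension; this must run before a server is created (the extension name shown is one of the package's supported extensions):
```
package main

import (
	"log"

	"github.com/pkg/sftp"
)

func main() {
	// Servers created after this call advertise only posix-rename;
	// passing a name outside the supported set returns an error.
	if err := sftp.SetSFTPExtensions("posix-rename@openssh.com"); err != nil {
		log.Fatal(err)
	}
}
```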
####
func [Split](https://github.com/pkg/sftp/blob/v1.13.6/match.go#L31) [¶](#Split)
```
func Split(p [string](/builtin#string)) (dir, file [string](/builtin#string))
```
Split splits the path p immediately following the final slash,
separating it into a directory and file name component.
This is an alias for path.Split from the standard library,
offered so that callers need not import the path package.
For details, see <https://golang.org/pkg/path/#Split>.
### Types [¶](#pkg-types)
####
type [Client](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L163) [¶](#Client)
```
type Client struct {
// contains filtered or unexported fields
}
```
Client represents an SFTP session on a *ssh.ClientConn SSH connection.
Multiple Clients can be active on a single SSH connection, and a Client may be called concurrently from multiple Goroutines.
Client implements the github.com/kr/fs.FileSystem interface.
####
func [NewClient](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L181) [¶](#NewClient)
```
func NewClient(conn *[ssh](/golang.org/x/crypto/ssh).[Client](/golang.org/x/crypto/ssh#Client), opts ...[ClientOption](#ClientOption)) (*[Client](#Client), [error](/builtin#error))
```
NewClient creates a new SFTP client on conn, using zero or more option functions.
####
func [NewClientPipe](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L204) [¶](#NewClientPipe)
```
func NewClientPipe(rd [io](/io).[Reader](/io#Reader), wr [io](/io).[WriteCloser](/io#WriteCloser), opts ...[ClientOption](#ClientOption)) (*[Client](#Client), [error](/builtin#error))
```
NewClientPipe creates a new SFTP client given a Reader and a WriteCloser.
This can be used for connecting to an SFTP server over TCP/TLS or by using the system's ssh client program (e.g. via exec.Command).
Example [¶](#example-NewClientPipe)
```
package main
import (
"fmt"
"log"
"os"
"os/exec"
"github.com/pkg/sftp"
)
func main() {
// Connect to a remote host and request the sftp subsystem via the 'ssh'
// command. This assumes that passwordless login is correctly configured.
cmd := exec.Command("ssh", "example.com", "-s", "sftp")
// send errors from ssh to stderr
cmd.Stderr = os.Stderr
// get stdin and stdout
wr, err := cmd.StdinPipe()
if err != nil {
log.Fatal(err)
}
rd, err := cmd.StdoutPipe()
if err != nil {
log.Fatal(err)
}
// start the process
if err := cmd.Start(); err != nil {
log.Fatal(err)
}
defer cmd.Wait()
// open the SFTP session
client, err := sftp.NewClientPipe(rd, wr)
if err != nil {
log.Fatal(err)
}
// read a directory
list, err := client.ReadDir("/")
if err != nil {
log.Fatal(err)
}
// print contents
for _, item := range list {
fmt.Println(item.Name())
}
// close the connection
client.Close()
}
```
####
func (*Client) [Chmod](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L568) [¶](#Client.Chmod)
```
func (c *[Client](#Client)) Chmod(path [string](/builtin#string), mode [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
Chmod changes the permissions of the named file.
Chmod does not apply a umask, because even retrieving the umask is not possible in a portable way without causing a race condition. Callers should mask off umask bits, if desired.
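For example, applying a umask client-side before sending the mode; a sketch, with connection setup as in the package example and an illustrative umask value:
```
package main

import (
	"log"
	"os"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

func main() {
	var conn *ssh.Client // an established SSH connection is assumed
	client, err := sftp.NewClient(conn)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Chmod sends the mode bits as-is, so mask off umask bits yourself.
	umask := os.FileMode(0o022)         // illustrative value
	mode := os.FileMode(0o666) &^ umask // 0o644 after masking
	if err := client.Chmod("upload.txt", mode); err != nil {
		log.Fatal(err)
	}
}
```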
####
func (*Client) [Chown](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L554) [¶](#Client.Chown)
```
func (c *[Client](#Client)) Chown(path [string](/builtin#string), uid, gid [int](/builtin#int)) [error](/builtin#error)
```
Chown changes the user and group owners of the named file.
####
func (*Client) [Chtimes](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L544) [¶](#Client.Chtimes)
```
func (c *[Client](#Client)) Chtimes(path [string](/builtin#string), atime [time](/time).[Time](/time#Time), mtime [time](/time).[Time](/time#Time)) [error](/builtin#error)
```
Chtimes changes the access and modification times of the named file.
####
func (*Client) [Close](https://github.com/pkg/sftp/blob/v1.13.6/conn.go#L61) [¶](#Client.Close)
```
func (c *Client) Close() [error](/builtin#error)
```
Close closes the SFTP session.
####
func (*Client) [Create](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L258) [¶](#Client.Create)
```
func (c *[Client](#Client)) Create(path [string](/builtin#string)) (*[File](#File), [error](/builtin#error))
```
Create creates the named file mode 0666 (before umask), truncating it if it already exists. If successful, methods on the returned File can be used for I/O; the associated file descriptor has mode O_RDWR. If you need more control over the flags/mode used to open the file see client.OpenFile.
Note that some SFTP servers (eg. AWS Transfer) do not support opening files read/write at the same time. For those services you will need to use
`client.OpenFile(os.O_WRONLY|os.O_CREATE|os.O_TRUNC)`.
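A sketch of the write-only variant for such servers (connection setup as in the package example; the file name is illustrative):
```
package main

import (
	"log"
	"os"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

func main() {
	var conn *ssh.Client // an established SSH connection is assumed
	client, err := sftp.NewClient(conn)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Write-only create/truncate, for servers that reject O_RDWR.
	f, err := client.OpenFile("upload.txt", os.O_WRONLY|os.O_CREATE|os.O_TRUNC)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := f.Write([]byte("payload")); err != nil {
		log.Fatal(err)
	}
}
```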
####
func (*Client) [Getwd](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L855) [¶](#Client.Getwd)
```
func (c *[Client](#Client)) Getwd() ([string](/builtin#string), [error](/builtin#error))
```
Getwd returns the current working directory of the server. Operations involving relative paths will be based at this location.
####
func (*Client) [Glob](https://github.com/pkg/sftp/blob/v1.13.6/match.go#L43) [¶](#Client.Glob)
```
func (c *[Client](#Client)) Glob(pattern [string](/builtin#string)) (matches [][string](/builtin#string), err [error](/builtin#error))
```
Glob returns the names of all files matching pattern or nil if there is no matching file. The syntax of patterns is the same as in Match. The pattern may describe hierarchical names such as
/usr/*/bin/ed.
Glob ignores file system errors such as I/O errors reading directories.
The only possible returned error is ErrBadPattern, when pattern is malformed.
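A sketch (connection setup as in the package example; the pattern is illustrative):
```
package main

import (
	"fmt"
	"log"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

func main() {
	var conn *ssh.Client // an established SSH connection is assumed
	client, err := sftp.NewClient(conn)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// List *.log files one level below /var/log; directory read errors
	// are ignored, so err is non-nil only for a malformed pattern.
	matches, err := client.Glob("/var/log/*/*.log")
	if err != nil {
		log.Fatal(err) // ErrBadPattern
	}
	for _, m := range matches {
		fmt.Println(m)
	}
}
```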
####
func (*Client) [HasExtension](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L314) [¶](#Client.HasExtension)
added in v1.13.0
```
func (c *[Client](#Client)) HasExtension(name [string](/builtin#string)) ([string](/builtin#string), [bool](/builtin#bool))
```
HasExtension checks whether the server supports a named extension.
The first return value is the extension data reported by the server
(typically a version number).
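For example, probing for the statvfs extension before calling StatVFS; a sketch, with connection setup as in the package example:
```
package main

import (
	"fmt"
	"log"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

func main() {
	var conn *ssh.Client // an established SSH connection is assumed
	client, err := sftp.NewClient(conn)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The returned data is typically a version string such as "2".
	if data, ok := client.HasExtension("statvfs@openssh.com"); ok {
		fmt.Println("statvfs supported, version:", data)
	}
}
```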
####
func (*Client) [Join](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L725) [¶](#Client.Join)
```
func (c *[Client](#Client)) Join(elem ...[string](/builtin#string)) [string](/builtin#string)
```
Join joins any number of path elements into a single path, adding a separating slash if necessary. The result is Cleaned; in particular, all empty strings are ignored.
####
func (*Client) [Link](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L467) [¶](#Client.Link)
added in v1.11.0
```
func (c *[Client](#Client)) Link(oldname, newname [string](/builtin#string)) [error](/builtin#error)
```
Link creates a hard link at 'newname', pointing at the same inode as 'oldname'
####
func (*Client) [Lstat](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L413) [¶](#Client.Lstat)
```
func (c *[Client](#Client)) Lstat(p [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
Lstat returns a FileInfo structure describing the file specified by path 'p'.
If 'p' is a symbolic link, the returned FileInfo structure describes the symbolic link.
####
func (*Client) [Mkdir](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L862) [¶](#Client.Mkdir)
```
func (c *[Client](#Client)) Mkdir(path [string](/builtin#string)) [error](/builtin#error)
```
Mkdir creates the specified directory. An error will be returned if a file or directory with the specified path already exists, or if the directory's parent folder does not exist (the method cannot create complete paths).
Example (Parents) [¶](#example-Client.Mkdir-Parents)
```
package main
import (
"fmt"
"log"
"os"
"path"
"strings"
"github.com/pkg/sftp"
"golang.org/x/crypto/ssh"
)
func main() {
// Example of mimicking 'mkdir --parents'; i.e. recursively create
// directories and don't error if any directory already exists.
var conn *ssh.Client
client, err := sftp.NewClient(conn)
if err != nil {
log.Fatal(err)
}
defer client.Close()
sshFxFailure := uint32(4)
mkdirParents := func(client *sftp.Client, dir string) (err error) {
var parents string
if path.IsAbs(dir) {
// Otherwise, an absolute path given below would be turned into a relative one
// by splitting on "/"
parents = "/"
}
for _, name := range strings.Split(dir, "/") {
if name == "" {
// Paths with double-/ in them should just move along
// this will also catch the case of the first character being a "/", i.e. an absolute path
continue
}
parents = path.Join(parents, name)
err = client.Mkdir(parents)
if status, ok := err.(*sftp.StatusError); ok {
if status.Code == sshFxFailure {
var fi os.FileInfo
fi, err = client.Stat(parents)
if err == nil {
if !fi.IsDir() {
return fmt.Errorf("file exists: %s", parents)
}
}
}
}
if err != nil {
break
}
}
return err
}
err = mkdirParents(client, "/tmp/foo/bar")
if err != nil {
log.Fatal(err)
}
}
```
####
func (*Client) [MkdirAll](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L883) [¶](#Client.MkdirAll)
```
func (c *[Client](#Client)) MkdirAll(path [string](/builtin#string)) [error](/builtin#error)
```
MkdirAll creates a directory named path, along with any necessary parents,
and returns nil, or else returns an error.
If path is already a directory, MkdirAll does nothing and returns nil.
If path contains a regular file, an error is returned.
####
func (*Client) [Open](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L583) [¶](#Client.Open)
```
func (c *[Client](#Client)) Open(path [string](/builtin#string)) (*[File](#File), [error](/builtin#error))
```
Open opens the named file for reading. If successful, methods on the returned file can be used for reading; the associated file descriptor has mode O_RDONLY.
####
func (*Client) [OpenFile](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L590) [¶](#Client.OpenFile)
```
func (c *[Client](#Client)) OpenFile(path [string](/builtin#string), f [int](/builtin#int)) (*[File](#File), [error](/builtin#error))
```
OpenFile is the generalized open call; most users will use Open or Create instead. It opens the named file with specified flag (O_RDONLY etc.). If successful, methods on the returned File can be used for I/O.
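For example, a minimal sketch (assuming an established `*sftp.Client` named `client`):
```
// Open (or create) a remote file for reading and writing.
f, err := client.OpenFile("/tmp/data.bin", os.O_RDWR|os.O_CREATE)
if err != nil {
	log.Fatal(err)
}
defer f.Close()
```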
####
func (*Client) [PosixRename](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L803) [¶](#Client.PosixRename)
```
func (c *[Client](#Client)) PosixRename(oldname, newname [string](/builtin#string)) [error](/builtin#error)
```
PosixRename renames a file using the posix-rename@openssh.com extension which will replace newname if it already exists.
####
func (*Client) [ReadDir](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L326) [¶](#Client.ReadDir)
```
func (c *[Client](#Client)) ReadDir(p [string](/builtin#string)) ([][os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
ReadDir reads the directory named by dirname and returns a list of directory entries.
####
func (*Client) [ReadLink](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L438) [¶](#Client.ReadLink)
```
func (c *[Client](#Client)) ReadLink(p [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
ReadLink reads the target of a symbolic link.
####
func (*Client) [RealPath](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L825) [¶](#Client.RealPath)
added in v1.13.0
```
func (c *[Client](#Client)) RealPath(path [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
RealPath can be used to have the server canonicalize any given path name to an absolute path.
This is useful for converting path names containing ".." components,
or relative pathnames without a leading slash into absolute paths.
####
func (*Client) [Remove](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L730) [¶](#Client.Remove)
```
func (c *[Client](#Client)) Remove(path [string](/builtin#string)) [error](/builtin#error)
```
Remove removes the specified file or directory. An error will be returned if no file or directory with the specified path exists, or if the specified directory is not empty.
####
func (*Client) [RemoveAll](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L929) [¶](#Client.RemoveAll)
added in v1.13.6
```
func (c *[Client](#Client)) RemoveAll(path [string](/builtin#string)) [error](/builtin#error)
```
RemoveAll recursively deletes the files and subdirectories within the given directory.
An error will be returned if no file or directory with the specified path exists.
####
func (*Client) [RemoveDirectory](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L765) [¶](#Client.RemoveDirectory)
```
func (c *[Client](#Client)) RemoveDirectory(path [string](/builtin#string)) [error](/builtin#error)
```
RemoveDirectory removes a directory path.
####
func (*Client) [Rename](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L783) [¶](#Client.Rename)
```
func (c *[Client](#Client)) Rename(oldname, newname [string](/builtin#string)) [error](/builtin#error)
```
Rename renames a file.
####
func (*Client) [Stat](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L403) [¶](#Client.Stat)
```
func (c *[Client](#Client)) Stat(p [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
Stat returns a FileInfo structure describing the file specified by path 'p'.
If 'p' is a symbolic link, the returned FileInfo structure describes the referent file.
####
func (*Client) [StatVFS](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L691) [¶](#Client.StatVFS)
```
func (c *[Client](#Client)) StatVFS(path [string](/builtin#string)) (*[StatVFS](#StatVFS), [error](/builtin#error))
```
StatVFS retrieves VFS statistics from a remote host.
It implements the statvfs@openssh.com SSH_FXP_EXTENDED feature from <http://www.opensource.apple.com/source/OpenSSH/OpenSSH-175/openssh/PROTOCOL?txt>.
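A short sketch (assuming an established `*sftp.Client` named `client` and a server that supports the statvfs@openssh.com extension):
```
vfs, err := client.StatVFS("/")
if err != nil {
	log.Fatal(err)
}
// FreeSpace and TotalSpace are derived from the raw statvfs fields.
fmt.Printf("free %d of %d bytes\n", vfs.FreeSpace(), vfs.TotalSpace())
```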
####
func (*Client) [Symlink](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L486) [¶](#Client.Symlink)
```
func (c *[Client](#Client)) Symlink(oldname, newname [string](/builtin#string)) [error](/builtin#error)
```
Symlink creates a symbolic link at 'newname', pointing at target 'oldname'.
####
func (*Client) [Truncate](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L576) [¶](#Client.Truncate)
```
func (c *[Client](#Client)) Truncate(path [string](/builtin#string), size [int64](/builtin#int64)) [error](/builtin#error)
```
Truncate sets the size of the named file. Although it may be safely assumed that if the size is less than its current size it will be truncated to fit,
the SFTP protocol does not specify what behavior the server should do when setting size greater than the current size.
####
func (*Client) [Wait](https://github.com/pkg/sftp/blob/v1.13.6/conn.go#L55) [¶](#Client.Wait)
added in v1.10.0
```
func (c *Client) Wait() [error](/builtin#error)
```
Wait blocks until the conn has shut down, and return the error causing the shutdown. It can be called concurrently from multiple goroutines.
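A small sketch (assuming `client` is an established `*sftp.Client`):
```
// Log when the underlying connection goes away.
go func() {
	err := client.Wait()
	log.Println("sftp connection closed:", err)
}()
```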
####
func (*Client) [Walk](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L320) [¶](#Client.Walk)
```
func (c *[Client](#Client)) Walk(root [string](/builtin#string)) *[fs](/github.com/kr/fs).[Walker](/github.com/kr/fs#Walker)
```
Walk returns a new Walker rooted at root.
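For example, a sketch that prints every path under a remote directory (assuming `client` is an established `*sftp.Client`):
```
w := client.Walk("/remote/dir")
for w.Step() {
	if w.Err() != nil {
		continue
	}
	fmt.Println(w.Path())
}
```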
####
type [ClientOption](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L35) [¶](#ClientOption)
```
type ClientOption func(*[Client](#Client)) [error](/builtin#error)
```
A ClientOption is a function which applies configuration to a Client.
####
func [MaxConcurrentRequestsPerFile](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L91) [¶](#MaxConcurrentRequestsPerFile)
```
func MaxConcurrentRequestsPerFile(n [int](/builtin#int)) [ClientOption](#ClientOption)
```
MaxConcurrentRequestsPerFile sets the maximum concurrent requests allowed for a single file.
The default maximum concurrent requests is 64.
####
func [MaxPacket](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L84) [¶](#MaxPacket)
```
func MaxPacket(size [int](/builtin#int)) [ClientOption](#ClientOption)
```
MaxPacket sets the maximum size of the payload, measured in bytes.
This option only accepts sizes servers should support, i.e. <= 32768 bytes.
This is a synonym for MaxPacketChecked that provides backward compatibility.
If you get the error "failed to send packet header: EOF" when copying a large file, try lowering this number.
The default packet size is 32768 bytes.
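A sketch (assuming `conn` is an established `*ssh.Client`):
```
// Use a smaller packet size for a server that cannot handle the
// 32768-byte default.
client, err := sftp.NewClient(conn, sftp.MaxPacket(1<<14))
if err != nil {
	log.Fatal(err)
}
defer client.Close()
```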
####
func [MaxPacketChecked](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L44) [¶](#MaxPacketChecked)
```
func MaxPacketChecked(size [int](/builtin#int)) [ClientOption](#ClientOption)
```
MaxPacketChecked sets the maximum size of the payload, measured in bytes.
This option only accepts sizes servers should support, i.e. <= 32768 bytes.
If you get the error "failed to send packet header: EOF" when copying a large file, try lowering this number.
The default packet size is 32768 bytes.
####
func [MaxPacketUnchecked](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L66) [¶](#MaxPacketUnchecked)
```
func MaxPacketUnchecked(size [int](/builtin#int)) [ClientOption](#ClientOption)
```
MaxPacketUnchecked sets the maximum size of the payload, measured in bytes.
It accepts sizes larger than the 32768 bytes all servers should support.
Only use a setting higher than 32768 if your application always connects to the same server or after sufficiently broad testing.
If you get the error "failed to send packet header: EOF" when copying a large file, try lowering this number.
The default packet size is 32768 bytes.
####
func [UseConcurrentReads](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L127) [¶](#UseConcurrentReads)
added in v1.13.0
```
func UseConcurrentReads(value [bool](/builtin#bool)) [ClientOption](#ClientOption)
```
UseConcurrentReads allows the Client to perform concurrent Reads.
Concurrent reads are generally safe to use and not using them will degrade performance, so this option is enabled by default.
When enabled, WriteTo will use Stat/Fstat to get the file size and determine how many concurrent workers to use.
Some "read once" servers will delete the file if they receive a stat call on an open file, and then the download will fail.
Disabling concurrent reads will let you download files from these servers.
If concurrent reads are disabled, the UseFstat option is ignored.
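A sketch for such "read once" servers (assuming `conn` is an established `*ssh.Client`):
```
client, err := sftp.NewClient(conn, sftp.UseConcurrentReads(false))
if err != nil {
	log.Fatal(err)
}
defer client.Close()
```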
####
func [UseConcurrentWrites](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L109) [¶](#UseConcurrentWrites)
added in v1.13.0
```
func UseConcurrentWrites(value [bool](/builtin#bool)) [ClientOption](#ClientOption)
```
UseConcurrentWrites allows the Client to perform concurrent Writes.
Using concurrency while doing writes requires special consideration.
A write to a later offset in a file after an error could end up with a file length longer than what was successfully written.
When using this option, if you receive an error during `io.Copy` or `io.WriteTo`,
you may need to `Truncate` the target Writer to avoid “holes” in the data written.
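One possible recovery, sketched under the assumption that `dst` is an `*sftp.File` opened for writing and `src` is an `io.Reader`:
```
n, err := dst.ReadFrom(src)
if err != nil {
	// A later chunk may have extended the file past the last byte
	// reported as written, so cut the file back to that length.
	if terr := dst.Truncate(n); terr != nil {
		log.Println("truncate:", terr)
	}
}
```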
####
func [UseFstat](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L151) [¶](#UseFstat)
added in v1.11.0
```
func UseFstat(value [bool](/builtin#bool)) [ClientOption](#ClientOption)
```
UseFstat sets whether to use Fstat or Stat when File.WriteTo is called
(usually when copying files).
Some servers limit the number of open files, and calling Stat after opening the file will return an error from the server. Setting this flag will call Fstat instead of Stat, which is supposed to be called on an open file handle.
This has been observed with IBM Sterling SFTP servers that have the "extractability" level set to 1, meaning only one file can be opened at any given time.
If the server you are working with still has an issue with both Stat and Fstat calls, you can always open a file and read it until the end.
Another reason to read the file to its end when Fstat doesn't work is that some servers (typically mainframes that map the file to a message in a queue) automatically delete a file once it has been read in full.
####
type [File](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L967) [¶](#File)
```
type File struct {
// contains filtered or unexported fields
}
```
File represents a remote file.
####
func (*File) [Chmod](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1927) [¶](#File.Chmod)
```
func (f *[File](#File)) Chmod(mode [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
Chmod changes the permissions of the current file.
See Client.Chmod for details.
####
func (*File) [Chown](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1920) [¶](#File.Chown)
```
func (f *[File](#File)) Chown(uid, gid [int](/builtin#int)) [error](/builtin#error)
```
Chown changes the uid/gid of the current file.
####
func (*File) [Close](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L978) [¶](#File.Close)
```
func (f *[File](#File)) Close() [error](/builtin#error)
```
Close closes the File, rendering it unusable for I/O. It returns an error, if any.
####
func (*File) [Name](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L983) [¶](#File.Name)
```
func (f *[File](#File)) Name() [string](/builtin#string)
```
Name returns the name of the file as presented to Open or Create.
####
func (*File) [Read](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L996) [¶](#File.Read)
```
func (f *[File](#File)) Read(b [][byte](/builtin#byte)) ([int](/builtin#int), [error](/builtin#error))
```
Read reads up to len(b) bytes from the File. It returns the number of bytes read and an error, if any. Read follows io.Reader semantics, so when Read encounters an error or EOF condition after successfully reading n > 0 bytes,
it returns the number of bytes read.
To maximise throughput for transferring the entire file (especially over high latency links) it is recommended to use WriteTo rather than calling Read multiple times. io.Copy will do this automatically.
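For example, a download sketch (assuming `client` is an established `*sftp.Client`):
```
src, err := client.Open("/remote/big.log")
if err != nil {
	log.Fatal(err)
}
defer src.Close()

dst, err := os.Create("big.log")
if err != nil {
	log.Fatal(err)
}
defer dst.Close()

// io.Copy discovers File.WriteTo and batches the reads.
if _, err := io.Copy(dst, src); err != nil {
	log.Fatal(err)
}
```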
####
func (*File) [ReadAt](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1064) [¶](#File.ReadAt)
added in v1.12.0
```
func (f *[File](#File)) ReadAt(b [][byte](/builtin#byte), off [int64](/builtin#int64)) ([int](/builtin#int), [error](/builtin#error))
```
ReadAt reads up to len(b) bytes from the File at a given offset `off`. It returns the number of bytes read and an error, if any. ReadAt follows io.ReaderAt semantics,
so the file offset is not altered during the read.
####
func (*File) [ReadFrom](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1814) [¶](#File.ReadFrom)
```
func (f *[File](#File)) ReadFrom(r [io](/io).[Reader](/io#Reader)) ([int64](/builtin#int64), [error](/builtin#error))
```
ReadFrom reads data from r until EOF and writes it to the file. The return value is the number of bytes read. Any error except io.EOF encountered during the read is also returned.
This method is preferred over calling Write multiple times to maximise throughput for transferring the entire file,
especially over high-latency links.
Example (Bufio) [¶](#example-File.ReadFrom-Bufio)
```
package main
import (
"bufio"
"io"
"github.com/pkg/sftp"
)
func main() {
// Using Bufio to buffer writes going to an sftp.File won't buffer, as it
// skips buffering if the underlying writer supports ReadFrom. The
// workaround is to wrap your writer in a struct that only implements
// io.Writer.
//
// For background see github.com/pkg/sftp/issues/125
var data_source io.Reader
var f *sftp.File
type writerOnly struct{ io.Writer }
bw := bufio.NewWriter(writerOnly{f}) // no ReadFrom()
bw.ReadFrom(data_source)
}
```
####
func (*File) [ReadFromWithConcurrency](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1668) [¶](#File.ReadFromWithConcurrency)
added in v1.13.1
```
func (f *[File](#File)) ReadFromWithConcurrency(r [io](/io).[Reader](/io#Reader), concurrency [int](/builtin#int)) (read [int64](/builtin#int64), err [error](/builtin#error))
```
ReadFromWithConcurrency implements ReaderFrom,
but uses the given concurrency to issue multiple requests at the same time.
Giving a concurrency of less than one will default to the Client’s max concurrency.
Otherwise, the given concurrency will be capped by the Client's max concurrency.
####
func (*File) [Seek](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1893) [¶](#File.Seek)
```
func (f *[File](#File)) Seek(offset [int64](/builtin#int64), whence [int](/builtin#int)) ([int64](/builtin#int64), [error](/builtin#error))
```
Seek implements io.Seeker by setting the client offset for the next Read or Write. It returns the next offset read. Seeking before or after the end of the file is undefined. Seeking relative to the end calls Stat.
####
func (*File) [Stat](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1452) [¶](#File.Stat)
```
func (f *[File](#File)) Stat() ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
Stat returns the FileInfo structure describing the file. It returns an error, if any.
####
func (*File) [Sync](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1934) [¶](#File.Sync)
added in v1.13.0
```
func (f *[File](#File)) Sync() [error](/builtin#error)
```
Sync requests a flush of the contents of a File to stable storage.
Sync requires the server to support the fsync@openssh.com extension.
####
func (*File) [Truncate](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1956) [¶](#File.Truncate)
```
func (f *[File](#File)) Truncate(size [int64](/builtin#int64)) [error](/builtin#error)
```
Truncate sets the size of the current file. Although it may be safely assumed that if the size is less than its current size it will be truncated to fit,
the SFTP protocol does not specify what behavior the server should do when setting size greater than the current size.
We send a SSH_FXP_FSETSTAT here since we have a file handle.
####
func (*File) [Write](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1468) [¶](#File.Write)
```
func (f *[File](#File)) Write(b [][byte](/builtin#byte)) ([int](/builtin#int), [error](/builtin#error))
```
Write writes len(b) bytes to the File. It returns the number of bytes written and an error, if any. Write returns a non-nil error when n !=
len(b).
To maximise throughput for transferring the entire file (especially over high latency links) it is recommended to use ReadFrom rather than calling Write multiple times. io.Copy will do this automatically.
####
func (*File) [WriteAt](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1629) [¶](#File.WriteAt)
added in v1.13.0
```
func (f *[File](#File)) WriteAt(b [][byte](/builtin#byte), off [int64](/builtin#int64)) (written [int](/builtin#int), err [error](/builtin#error))
```
WriteAt writes up to len(b) bytes to the File at a given offset `off`. It returns the number of bytes written and an error, if any. WriteAt follows io.WriterAt semantics,
so the file offset is not altered during the write.
####
func (*File) [WriteTo](https://github.com/pkg/sftp/blob/v1.13.6/client.go#L1257) [¶](#File.WriteTo)
```
func (f *[File](#File)) WriteTo(w [io](/io).[Writer](/io#Writer)) (written [int64](/builtin#int64), err [error](/builtin#error))
```
WriteTo writes the file to the given Writer.
The return value is the number of bytes written.
Any error encountered during the write is also returned.
This method is preferred over calling Read multiple times to maximise throughput for transferring the entire file,
especially over high latency links.
####
type [FileAttrFlags](https://github.com/pkg/sftp/blob/v1.13.6/request-attrs.go#L34) [¶](#FileAttrFlags)
```
type FileAttrFlags struct {
Size, UidGid, Permissions, Acmodtime [bool](/builtin#bool)
}
```
FileAttrFlags indicates which SFTP file attributes were passed. When a flag is true, the corresponding attribute should be available from the FileStat object returned by the Attributes method. Used with SetStat.
####
type [FileCmder](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L54) [¶](#FileCmder)
```
type FileCmder interface {
Filecmd(*[Request](#Request)) [error](/builtin#error)
}
```
FileCmder should return an error. Note: in cases of an error, the error text will be sent to the client.
Called for Methods: Setstat, Rename, Rmdir, Mkdir, Link, Symlink, Remove
####
type [FileInfoExtendedData](https://github.com/pkg/sftp/blob/v1.13.6/attrs.go#L81) [¶](#FileInfoExtendedData)
added in v1.13.6
```
type FileInfoExtendedData interface {
[os](/os).[FileInfo](/os#FileInfo)
Extended() [][StatExtended](#StatExtended)
}
```
FileInfoExtendedData extends os.FileInfo and adds a callback for extended data retrieval.
####
type [FileInfoUidGid](https://github.com/pkg/sftp/blob/v1.13.6/attrs.go#L74) [¶](#FileInfoUidGid)
added in v1.13.6
```
type FileInfoUidGid interface {
[os](/os).[FileInfo](/os#FileInfo)
Uid() [uint32](/builtin#uint32)
Gid() [uint32](/builtin#uint32)
}
```
FileInfoUidGid extends os.FileInfo and adds callbacks for Uid and Gid retrieval,
as an alternative to *syscall.Stat_t objects on unix systems.
####
type [FileLister](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L82) [¶](#FileLister)
```
type FileLister interface {
Filelist(*[Request](#Request)) ([ListerAt](#ListerAt), [error](/builtin#error))
}
```
FileLister should return an object that fulfils the ListerAt interface. Note: in cases of an error, the error text will be sent to the client.
Called for Methods: List, Stat, Readlink
Since Filelist returns an os.FileInfo, this can make it non-ideal for implementing Readlink.
This is because the Name receiver method defined by that interface defines that it should only return the base name.
However, Readlink is required to be capable of returning essentially any arbitrary valid path relative or absolute.
In order to implement this more expressive requirement, implement [ReadlinkFileLister](#ReadlinkFileLister) which will then be used instead.
####
type [FileOpenFlags](https://github.com/pkg/sftp/blob/v1.13.6/request-attrs.go#L10) [¶](#FileOpenFlags)
```
type FileOpenFlags struct {
Read, Write, Append, Creat, Trunc, Excl [bool](/builtin#bool)
}
```
FileOpenFlags defines Open and Write Flags. They correlate directly with os.OpenFile flags
(<https://golang.org/pkg/os/#pkg-constants>).
####
type [FileReader](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L26) [¶](#FileReader)
```
type FileReader interface {
Fileread(*[Request](#Request)) ([io](/io).[ReaderAt](/io#ReaderAt), [error](/builtin#error))
}
```
FileReader should return an io.ReaderAt for the filepath. Note: in cases of an error, the error text will be sent to the client.
Called for Methods: Get
####
type [FileStat](https://github.com/pkg/sftp/blob/v1.13.6/attrs.go#L49) [¶](#FileStat)
```
type FileStat struct {
Size [uint64](/builtin#uint64)
Mode [uint32](/builtin#uint32)
Mtime [uint32](/builtin#uint32)
Atime [uint32](/builtin#uint32)
UID [uint32](/builtin#uint32)
GID [uint32](/builtin#uint32)
Extended [][StatExtended](#StatExtended)
}
```
FileStat holds the original unmarshalled values from a call to READDIR or
*STAT. It is exported for the purposes of accessing the raw values via os.FileInfo.Sys(). It is also used server side to store the unmarshalled values for SetStat.
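A sketch of reading the raw values (assuming `client` is an established `*sftp.Client`; the concrete type behind Sys() is the exported FileStat described above):
```
fi, err := client.Stat("/remote/file")
if err != nil {
	log.Fatal(err)
}
if st, ok := fi.Sys().(*sftp.FileStat); ok {
	fmt.Printf("uid=%d gid=%d mtime=%d\n", st.UID, st.GID, st.Mtime)
}
```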
####
func (FileStat) [FileMode](https://github.com/pkg/sftp/blob/v1.13.6/request-attrs.go#L54) [¶](#FileStat.FileMode)
```
func (a [FileStat](#FileStat)) FileMode() [os](/os).[FileMode](/os#FileMode)
```
FileMode returns the Mode SFTP file attributes wrapped as os.FileMode
####
type [FileWriter](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L38) [¶](#FileWriter)
```
type FileWriter interface {
Filewrite(*[Request](#Request)) ([io](/io).[WriterAt](/io#WriterAt), [error](/builtin#error))
}
```
FileWriter should return an io.WriterAt for the filepath.
The request server code will call Close() on the returned io.WriterAt object if an io.Closer type assertion succeeds.
Note in cases of an error, the error text will be sent to the client.
Note when receiving an Append flag it is important to not open files using O_APPEND if you plan to use WriteAt, as they conflict.
Called for Methods: Put, Open
####
type [Handlers](https://github.com/pkg/sftp/blob/v1.13.6/request-server.go#L16) [¶](#Handlers)
```
type Handlers struct {
FileGet [FileReader](#FileReader)
FilePut [FileWriter](#FileWriter)
FileCmd [FileCmder](#FileCmder)
FileList [FileLister](#FileLister)
}
```
Handlers contains the 4 SFTP server request handlers.
####
func [InMemHandler](https://github.com/pkg/sftp/blob/v1.13.6/request-example.go#L24) [¶](#InMemHandler)
```
func InMemHandler() [Handlers](#Handlers)
```
InMemHandler returns a Handlers object with the test handlers.
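A sketch of serving one session with these handlers (assuming `channel` is an `io.ReadWriteCloser` from an accepted "sftp" subsystem request):
```
server := sftp.NewRequestServer(channel, sftp.InMemHandler())
if err := server.Serve(); err == io.EOF {
	server.Close()
}
```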
####
type [ListerAt](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L148) [¶](#ListerAt)
```
type ListerAt interface {
ListAt([][os](/os).[FileInfo](/os#FileInfo), [int64](/builtin#int64)) ([int](/builtin#int), [error](/builtin#error))
}
```
ListerAt does for file lists what io.ReaderAt does for files, i.e. a []os.FileInfo buffer is passed to the ListAt function and the entries that are populated in the buffer will be passed to the client.
ListAt should return the number of entries copied and an io.EOF error if at end of list.
This is testable by comparing how many you copied to how many could be copied (e.g. n < len(ls) in the sketch below).
The copy() builtin is best for the copying.
Uid and gid information will on unix systems be retrieved from [os.FileInfo.Sys](/os#FileInfo.Sys)
if this function returns a [syscall.Stat_t](/syscall#Stat_t) when called on a populated entry.
Alternatively, if the entry implements [FileInfoUidGid](#FileInfoUidGid), it will be used for uid and gid information.
If a populated entry implements [FileInfoExtendedData](#FileInfoExtendedData), extended attributes will also be returned to the client.
Note: in cases of an error, the error text will be sent to the client.
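A minimal sketch of such an implementation, an in-memory slice type (hypothetical, not part of the package API), following the n < len(ls) rule above:
```
type listerat []os.FileInfo

func (l listerat) ListAt(ls []os.FileInfo, offset int64) (int, error) {
	if offset >= int64(len(l)) {
		return 0, io.EOF
	}
	n := copy(ls, l[offset:])
	if n < len(ls) {
		return n, io.EOF
	}
	return n, nil
}
```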
####
type [LstatFileLister](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L89) [¶](#LstatFileLister)
added in v1.13.0
```
type LstatFileLister interface {
[FileLister](#FileLister)
Lstat(*[Request](#Request)) ([ListerAt](#ListerAt), [error](/builtin#error))
}
```
LstatFileLister is a FileLister that implements the Lstat method.
If this interface is implemented, Lstat requests will call it; otherwise they will be handled in the same way as Stat.
####
type [NameLookupFileLister](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L128) [¶](#NameLookupFileLister)
added in v1.13.3
```
type NameLookupFileLister interface {
[FileLister](#FileLister)
LookupUserName([string](/builtin#string)) [string](/builtin#string)
LookupGroupName([string](/builtin#string)) [string](/builtin#string)
}
```
NameLookupFileLister is a FileLister that implements the LookupUserName and LookupGroupName methods.
If this interface is implemented, then longname ls formatting will use these to convert usernames and groupnames.
####
type [OpenFileWriter](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L46) [¶](#OpenFileWriter)
added in v1.13.0
```
type OpenFileWriter interface {
[FileWriter](#FileWriter)
OpenFile(*[Request](#Request)) ([WriterAtReaderAt](#WriterAtReaderAt), [error](/builtin#error))
}
```
OpenFileWriter is a FileWriter that implements the generic OpenFile method.
You need to implement this optional interface if you want to be able to read and write from/to the same handle.
Called for Methods: Open
####
type [PosixRenameFileCmder](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L61) [¶](#PosixRenameFileCmder)
added in v1.13.0
```
type PosixRenameFileCmder interface {
[FileCmder](#FileCmder)
PosixRename(*[Request](#Request)) [error](/builtin#error)
}
```
PosixRenameFileCmder is a FileCmder that implements the PosixRename method.
If this interface is implemented, PosixRename requests will call it; otherwise they will be handled in the same way as Rename.
####
type [ReadlinkFileLister](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L115) [¶](#ReadlinkFileLister)
added in v1.13.6
```
type ReadlinkFileLister interface {
[FileLister](#FileLister)
Readlink([string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
}
```
ReadlinkFileLister is a FileLister that implements the Readlink method.
By implementing the Readlink method, it is possible to return any arbitrary valid path relative or absolute.
This allows giving a better response than via the default FileLister (which is limited to os.FileInfo, whose Name method should only return the base name of a file).
####
type [RealPathFileLister](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L107) [¶](#RealPathFileLister)
added in v1.13.1
```
type RealPathFileLister interface {
[FileLister](#FileLister)
RealPath([string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
}
```
RealPathFileLister is a FileLister that implements the Realpath method.
The built-in RealPath implementation does not resolve symbolic links.
By implementing this interface you can customize the returned path and, for example, resolve symbolic links if needed for your use case.
You have to return an absolute POSIX path.
Up to v1.13.5 the signature for the RealPath method was:
#### RealPath(string) string [¶](#hdr-RealPath_string__string)
We have added a legacyRealPathFileLister that implements the old method to ensure that your code does not break.
You should use the new method signature to avoid future issues.
####
type [Request](https://github.com/pkg/sftp/blob/v1.13.6/request.go#L125) [¶](#Request)
```
type Request struct {
// Get, Put, Setstat, Stat, Rename, Remove
// Rmdir, Mkdir, List, Readlink, Link, Symlink
Method [string](/builtin#string)
Filepath [string](/builtin#string)
Flags [uint32](/builtin#uint32)
Attrs [][byte](/builtin#byte) // convert to sub-struct
Target [string](/builtin#string) // for renames and sym-links
// contains filtered or unexported fields
}
```
Request contains the data and state for the incoming service request.
####
func [NewRequest](https://github.com/pkg/sftp/blob/v1.13.6/request.go#L144) [¶](#NewRequest)
```
func NewRequest(method, path [string](/builtin#string)) *[Request](#Request)
```
NewRequest creates a new Request object.
####
func (*Request) [AttrFlags](https://github.com/pkg/sftp/blob/v1.13.6/request-attrs.go#L49) [¶](#Request.AttrFlags)
```
func (r *[Request](#Request)) AttrFlags() [FileAttrFlags](#FileAttrFlags)
```
AttrFlags returns a FileAttrFlags boolean struct based on the bitmap/uint32 file attribute flags from the SFTP packet.
####
func (*Request) [Attributes](https://github.com/pkg/sftp/blob/v1.13.6/request-attrs.go#L60) [¶](#Request.Attributes)
```
func (r *[Request](#Request)) Attributes() *[FileStat](#FileStat)
```
Attributes parses the file attributes byte blob and returns them in a FileStat object.
####
func (*Request) [Context](https://github.com/pkg/sftp/blob/v1.13.6/request.go#L205) [¶](#Request.Context)
```
func (r *[Request](#Request)) Context() [context](/context).[Context](/context#Context)
```
Context returns the request's context. To change the context,
use WithContext.
The returned context is always non-nil; it defaults to the background context.
For incoming server requests, the context is canceled when the request is complete or the client's connection closes.
####
func (*Request) [Pflags](https://github.com/pkg/sftp/blob/v1.13.6/request-attrs.go#L27) [¶](#Request.Pflags)
```
func (r *[Request](#Request)) Pflags() [FileOpenFlags](#FileOpenFlags)
```
Pflags converts the bitmap/uint32 from SFTP Open packet pflag values,
into a FileOpenFlags struct with booleans set for flags set in bitmap.
####
func (*Request) [WithContext](https://github.com/pkg/sftp/blob/v1.13.6/request.go#L214) [¶](#Request.WithContext)
```
func (r *[Request](#Request)) WithContext(ctx [context](/context).[Context](/context#Context)) *[Request](#Request)
```
WithContext returns a copy of r with its context changed to ctx.
The provided ctx must be non-nil.
####
type [RequestServer](https://github.com/pkg/sftp/blob/v1.13.6/request-server.go#L24) [¶](#RequestServer)
```
type RequestServer struct {
Handlers [Handlers](#Handlers)
// contains filtered or unexported fields
}
```
RequestServer abstracts the sftp protocol with an HTTP-request-like protocol.
####
func [NewRequestServer](https://github.com/pkg/sftp/blob/v1.13.6/request-server.go#L62) [¶](#NewRequestServer)
```
func NewRequestServer(rwc [io](/io).[ReadWriteCloser](/io#ReadWriteCloser), h [Handlers](#Handlers), options ...[RequestServerOption](#RequestServerOption)) *[RequestServer](#RequestServer)
```
NewRequestServer creates/allocates/returns new RequestServer.
Normally there will be one server per user-session.
####
func (*RequestServer) [Close](https://github.com/pkg/sftp/blob/v1.13.6/request-server.go#L126) [¶](#RequestServer.Close)
```
func (rs *[RequestServer](#RequestServer)) Close() [error](/builtin#error)
```
Close closes the read/write/closer to trigger exiting the main server loop.
####
func (*RequestServer) [Serve](https://github.com/pkg/sftp/blob/v1.13.6/request-server.go#L160) [¶](#RequestServer.Serve)
```
func (rs *[RequestServer](#RequestServer)) Serve() [error](/builtin#error)
```
Serve serves requests for the user session.
####
type [RequestServerOption](https://github.com/pkg/sftp/blob/v1.13.6/request-server.go#L38) [¶](#RequestServerOption)
added in v1.12.0
```
type RequestServerOption func(*[RequestServer](#RequestServer))
```
A RequestServerOption is a function which applies configuration to a RequestServer.
####
func [WithRSAllocator](https://github.com/pkg/sftp/blob/v1.13.6/request-server.go#L44) [¶](#WithRSAllocator)
added in v1.12.0
```
func WithRSAllocator() [RequestServerOption](#RequestServerOption)
```
WithRSAllocator enables the allocator.
After processing a packet, the allocated slices are kept in memory and reused for new packets.
The allocator is experimental.
####
func [WithStartDirectory](https://github.com/pkg/sftp/blob/v1.13.6/request-server.go#L54) [¶](#WithStartDirectory)
added in v1.13.5
```
func WithStartDirectory(startDirectory [string](/builtin#string)) [RequestServerOption](#RequestServerOption)
```
WithStartDirectory sets a start directory to use as base for relative paths.
If unset, the default is "/".
####
type [Server](https://github.com/pkg/sftp/blob/v1.13.6/server.go#L28) [¶](#Server)
```
type Server struct {
// contains filtered or unexported fields
}
```
Server is an SSH File Transfer Protocol (sftp) server.
This is intended to provide the sftp subsystem to an ssh server daemon.
This implementation currently supports most of sftp server protocol version 3, as specified at <https://filezilla-project.org/specs/draft-ietf-secsh-filexfer-02.txt>
####
func [NewServer](https://github.com/pkg/sftp/blob/v1.13.6/server.go#L77) [¶](#NewServer)
```
func NewServer(rwc [io](/io).[ReadWriteCloser](/io#ReadWriteCloser), options ...[ServerOption](#ServerOption)) (*[Server](#Server), [error](/builtin#error))
```
NewServer creates a new Server instance around the provided streams, serving content from the root of the filesystem. Optionally, ServerOption functions may be specified to further configure the Server.
A subsequent call to Serve() is required to begin serving files over SFTP.
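A sketch (assuming `channel` is an `io.ReadWriteCloser` for the session):
```
// Serve the local filesystem read-only.
server, err := sftp.NewServer(channel, sftp.ReadOnly())
if err != nil {
	log.Fatal(err)
}
if err := server.Serve(); err == io.EOF {
	server.Close()
}
```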
####
func (*Server) [Serve](https://github.com/pkg/sftp/blob/v1.13.6/server.go#L331) [¶](#Server.Serve)
```
func (svr *[Server](#Server)) Serve() [error](/builtin#error)
```
Serve serves SFTP connections until the streams stop or the SFTP subsystem is stopped. It returns nil if the server exits cleanly.
####
type [ServerOption](https://github.com/pkg/sftp/blob/v1.13.6/server.go#L101) [¶](#ServerOption)
```
type ServerOption func(*[Server](#Server)) [error](/builtin#error)
```
A ServerOption is a function which applies configuration to a Server.
####
func [ReadOnly](https://github.com/pkg/sftp/blob/v1.13.6/server.go#L112) [¶](#ReadOnly)
```
func ReadOnly() [ServerOption](#ServerOption)
```
ReadOnly configures a Server to serve files in read-only mode.
####
func [WithAllocator](https://github.com/pkg/sftp/blob/v1.13.6/server.go#L123) [¶](#WithAllocator)
added in v1.12.0
```
func WithAllocator() [ServerOption](#ServerOption)
```
WithAllocator enables the allocator.
After processing a packet, the allocated slices are kept in memory and reused for new packets.
The allocator is experimental.
####
func [WithDebug](https://github.com/pkg/sftp/blob/v1.13.6/server.go#L104) [¶](#WithDebug)
```
func WithDebug(w [io](/io).[Writer](/io#Writer)) [ServerOption](#ServerOption)
```
WithDebug enables Server debugging output to the supplied io.Writer.
####
func [WithServerWorkingDirectory](https://github.com/pkg/sftp/blob/v1.13.6/server.go#L135) [¶](#WithServerWorkingDirectory)
added in v1.13.6
```
func WithServerWorkingDirectory(workDir [string](/builtin#string)) [ServerOption](#ServerOption)
```
WithServerWorkingDirectory sets a working directory to use as base for relative paths.
If unset, the default is the current working directory (os.Getwd).
####
type [StatExtended](https://github.com/pkg/sftp/blob/v1.13.6/attrs.go#L60) [¶](#StatExtended)
```
type StatExtended struct {
ExtType [string](/builtin#string)
ExtData [string](/builtin#string)
}
```
StatExtended contains additional, extended information for a FileStat.
####
type [StatVFS](https://github.com/pkg/sftp/blob/v1.13.6/packet.go#L1109) [¶](#StatVFS)
```
type StatVFS struct {
ID [uint32](/builtin#uint32)
Bsize [uint64](/builtin#uint64) /* file system block size */
Frsize [uint64](/builtin#uint64) /* fundamental fs block size */
Blocks [uint64](/builtin#uint64) /* number of blocks (unit f_frsize) */
Bfree [uint64](/builtin#uint64) /* free blocks in file system */
Bavail [uint64](/builtin#uint64) /* free blocks for non-root */
Files [uint64](/builtin#uint64) /* total file inodes */
Ffree [uint64](/builtin#uint64) /* free file inodes */
Favail [uint64](/builtin#uint64) /* free file inodes for non-root */
Fsid [uint64](/builtin#uint64) /* file system id */
Flag [uint64](/builtin#uint64) /* bit mask of f_flag values */
Namemax [uint64](/builtin#uint64) /* maximum filename length */
}
```
A StatVFS contains statistics about a filesystem.
####
func (*StatVFS) [FreeSpace](https://github.com/pkg/sftp/blob/v1.13.6/packet.go#L1130) [¶](#StatVFS.FreeSpace)
```
func (p *[StatVFS](#StatVFS)) FreeSpace() [uint64](/builtin#uint64)
```
FreeSpace calculates the amount of free space in a filesystem.
####
func (*StatVFS) [MarshalBinary](https://github.com/pkg/sftp/blob/v1.13.6/packet.go#L1145) [¶](#StatVFS.MarshalBinary)
```
func (p *[StatVFS](#StatVFS)) MarshalBinary() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalBinary encodes the StatVFS as an SSH_FXP_EXTENDED_REPLY packet.
####
func (*StatVFS) [TotalSpace](https://github.com/pkg/sftp/blob/v1.13.6/packet.go#L1125) [¶](#StatVFS.TotalSpace)
```
func (p *[StatVFS](#StatVFS)) TotalSpace() [uint64](/builtin#uint64)
```
TotalSpace calculates the amount of total space in a filesystem.
####
type [StatVFSFileCmder](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L69) [¶](#StatVFSFileCmder)
added in v1.13.0
```
type StatVFSFileCmder interface {
[FileCmder](#FileCmder)
StatVFS(*[Request](#Request)) (*[StatVFS](#StatVFS), [error](/builtin#error))
}
```
StatVFSFileCmder is a FileCmder that implements the StatVFS method.
You need to implement this interface if you want to handle statvfs requests.
Please also be sure that the statvfs@openssh.com extension is enabled.
####
type [StatusError](https://github.com/pkg/sftp/blob/v1.13.6/sftp.go#L220) [¶](#StatusError)
```
type StatusError struct {
Code [uint32](/builtin#uint32)
// contains filtered or unexported fields
}
```
A StatusError is returned when an SFTP operation fails, and provides additional information about the failure.
####
func (*StatusError) [Error](https://github.com/pkg/sftp/blob/v1.13.6/sftp.go#L225) [¶](#StatusError.Error)
```
func (s *[StatusError](#StatusError)) Error() [string](/builtin#string)
```
####
func (*StatusError) [FxCode](https://github.com/pkg/sftp/blob/v1.13.6/sftp.go#L230) [¶](#StatusError.FxCode)
added in v1.11.0
```
func (s *[StatusError](#StatusError)) FxCode() fxerr
```
FxCode returns the error code typed to match against the exported codes.
####
type [TransferError](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L155) [¶](#TransferError)
added in v1.11.0
```
type TransferError interface {
TransferError(err [error](/builtin#error))
}
```
TransferError is an optional interface that readerAt and writerAt can implement to be notified about the error causing Serve() to exit with the request still open.
####
type [WriterAtReaderAt](https://github.com/pkg/sftp/blob/v1.13.6/request-interfaces.go#L10) [¶](#WriterAtReaderAt)
added in v1.13.0
```
type WriterAtReaderAt interface {
[io](/io).[WriterAt](/io#WriterAt)
[io](/io).[ReaderAt](/io#ReaderAt)
}
```
WriterAtReaderAt defines the interface to return when a file is to be opened for reading and writing.
Notion to Markdown
===
Convert Notion pages, blocks, and lists of blocks to markdown (supports nesting) using **[notion-sdk-js](https://github.com/makenotion/notion-sdk-js)**
> **Note:** Before getting started, create [an integration and find the token](https://www.notion.so/my-integrations).
Todo
---
* [x] heading
* [x] images
* [x] quotes
* [x] links
* [x] bullets
* [x] todo
* [x] inline code
* [x] code block
* [x] strikethrough, underline, bold, italic
* [x] nested blocks
* [ ] pages inside pages/child page
* [x] embeds, bookmarks, videos, files (converted to links)
* [ ] tables
* [x] divider
* [x] equation block (converted to code blocks)
* [x] convert returned markdown object to string (`toMarkdownString()`)
* [x] typescript support
* [ ] add tests
Install
---
```
$ npm install notion-to-md
```
Usage
---
> **Note:** Details on methods can be found in [API section](https://github.com/souvikinator/notion-to-md#api)
###
converting markdown objects to markdown string
This is how the notion page looks for this example:
```
const fs = require("fs");
const { Client } = require("@notionhq/client");
const { NotionToMarkdown } = require("notion-to-md");
// or
// import { NotionToMarkdown } from "notion-to-md";
const notion = new Client({
  auth: "your integration token",
});
// passing notion client to the option
const n2m = new NotionToMarkdown({ notionClient: notion });
(async () => {
const mdblocks = await n2m.pageToMarkdown("target_page_id");
const mdString = n2m.toMarkdownString(mdblocks);
//writing to file
fs.writeFile("test.md", mdString, (err) => {
console.log(err);
});
})();
```
**Output:**
###
converting page to markdown object
Example notion page:
```
const { Client } = require("@notionhq/client");
const { NotionToMarkdown } = require("notion-to-md");
const notion = new Client({
auth: "your integration token",
});
// passing notion client to the option
const n2m = new NotionToMarkdown({ notionClient: notion });
(async () => {
// notice second argument, totalPage.
const x = await n2m.pageToMarkdown("target_page_id", 2);
console.log(x);
})();
```
**Output:**
```
[
{
"parent": "# heading 1",
"children": []
},
{
"parent": "- bullet 1",
"children": [
{
"parent": "- bullet 1.1",
"children": []
},
{
"parent": "- bullet 1.2",
"children": []
}
]
},
{
"parent": "- bullet 2",
"children": []
},
{
"parent": "- [ ] check box 1",
"children": [
{
"parent": "- [x] check box 1.2",
"children": []
},
{
"parent": "- [ ] check box 1.3",
"children": []
}
]
},
{
"parent": "- [ ] checkbox 2",
"children": []
}
]
```
###
converting list of blocks to markdown object
same notion page as before
```
const { Client } = require("@notionhq/client");
const { NotionToMarkdown } = require("notion-to-md");
const notion = new Client({
auth: "your integration token",
});
// passing notion client to the option
const n2m = new NotionToMarkdown({ notionClient: notion });
(async () => {
  // get all blocks in the page
  const { results } = await notion.blocks.children.list({
    block_id: "target_page_id",
});
//convert to markdown
const x = await n2m.blocksToMarkdown(results);
console.log(x);
})();
```
**Output**: same as before
###
Converting a single block to markdown string
* only takes a single notion block and returns corresponding markdown string
* nesting is ignored
* independent of @notionhq/client
```
const { NotionToMarkdown } = require("notion-to-md");
// notion client not required
const n2m = new NotionToMarkdown();
const result = n2m.blockToMarkdown(block);
console.log(result);
```
**result**:
```
![image](https://media.giphy.com/media/Ju7l5y9osyymQ/giphy.gif)
```
API
---
> ###
> `toMarkdownString(mdBlock[])`
> * takes output of `pageToMarkdown` or `blocksToMarkdown` as argument
> * converts to markdown string.
> ###
> `pageToMarkdown(id,totalPage)`
> * Uses `blocksToMarkdown` internally.
> * takes the page `id` as input and converts all the blocks in the page to the corresponding markdown objects
> * `totalPage` is the number of "retrieve block children" requests to make, i.e. `page_size` maximum = `totalPage * 100`.
> ###
> totalPage
> Default value is `1` which means only `100` blocks will be converted to markdown and rest will be ignored (due to notion api limitations, ref: [#9](https://github.com/souvikinator/notion-to-md/pull/9)).
> ###
> How to use the `totalPage` arg?
> * if the notion page contains less than or equal to `100` blocks then the `totalPage` arg is not required.
> * if the notion page contains `150` blocks then the `totalPage` argument should be greater than or equal to `2`, leading to `pageSize = 2 * 100` and rendering all `150` blocks.
> ###
> `blocksToMarkdown(blocks,totalPage)`
> **Note**: requires **notion-sdk-js** unlike `blockToMarkdown`
> * `blocks`: array of notion blocks
> * `totalPage`: the number of "retrieve block children" requests to make, i.e. `page_size` maximum = `totalPage * 100`.
> * deals with **nested blocks**
> * uses `blockToMarkdown` internally.
> ###
> `blockToMarkdown(block)`
> * Takes single notion block and converts to markdown string
> * does not deal with nested notion blocks
> * This method doesn't require the `notion-sdk-js`.
> * Refer docs to know more about [notion blocks](https://developers.notion.com/reference/block)
Contribution
---
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.
License
---
[MIT](https://choosealicense.com/licenses/mit/)
Readme
---
### Keywords
* notion
* markdown
* md
* notion
* notion-api
* notion-to-md
* notion2md
Zipper [GitHub Actions CI](https://github.com/inaka/zipper)
===
Generic zipper implementation in Erlang.
[Zippers: what are they good for?](#zippers-what-are-they-good-for)
---
Zippers let you traverse immutable data structures with ease and flexibility.
###
[Contact Us](#contact-us)
If you find any **bugs** or have a **problem** while using this library, please
[open an issue](https://github.com/inaka/zipper/issues/new) in this repo (or a pull request :)).
And you can check all of our open-source projects at [inaka.github.io](https://inaka.github.io)
[Usage](#usage)
---
For a map tree structure like the following:
```
Root = #{type => planet,
attrs => #{name => "Earth"},
children => [
#{type => continent,
attrs => #{name => "America"},
children => [
#{type => country,
attrs => #{name => "Argentina"},
children => []},
#{type => country,
attrs => #{name => "Brasil"},
children => []}
]
},
#{type => continent,
attrs => #{name => "Europe"},
children => [
#{type => country,
attrs => #{name => "Sweden"},
children => []},
#{type => country,
attrs => #{name => "England"},
children => []}
]
}
]
}.
```
You can build a zipper by providing three simple functions:
* `IsBranchFun`: takes a node and returns `true` if it is a branch node or
`false` otherwise.
* `ChildrenFun`: takes a node and returns a list of its children.
* `MakeNodeFun`: takes a node and a list of children and returns a new node containing the supplied list as children.
This is an example of how you would define a zipper and then use it to traverse the map tree structure above:
```
%% Create the zipper
IsBranchFun = fun
                (#{children := [_ | _]}) -> true;
                (_) -> false
              end,
ChildrenFun = fun(Node) -> maps:get(children, Node) end,
MakeNodeFun = fun(Node, Children) -> Node#{children => Children} end,
Zipper = zipper:new(IsBranchFun, ChildrenFun, MakeNodeFun, Root),

%% Traverse the zipper with next
Zipper1 = zipper:next(Zipper),
Zipper2 = zipper:next(Zipper1),

%% Get the current zipper node
Argentina = zipper:node(Zipper2),
io:format("~p", [Argentina]),
%%= #{type => country,
%%=   attrs => #{name => "Argentina"},
%%=   children => []}

%% Go up and get the node
Zipper3 = zipper:up(Zipper2),
America = zipper:node(Zipper3),
io:format("~p", [America]).
%%= #{type => continent,
%%=   attrs => #{name => "America"},
%%=   children => [#{...}, #{...}]}
```
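Beyond step-by-step traversal, the zipper also supports whole-tree operations such as `fold/3` and `map/2`. A short sketch, reusing `Zipper` from above (the exact order of the `Names` list is an implementation detail of the traversal):
```
%% Count every node in the tree
Count = zipper:fold(fun(_Node, Acc) -> Acc + 1 end, 0, Zipper),
%% Count =:= 7

%% Collect the name of every node
Names = zipper:map(fun(#{attrs := #{name := Name}}) -> Name end, Zipper).
%% Names =:= ["Earth", "America", "Argentina", "Brasil",
%%            "Europe", "Sweden", "England"]
```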
[Tests](#tests)
---
Circular dependency in test environment ([Katana Test](https://github.com/inaka/katana-test) ->
[Elvis Core](https://github.com/inaka/elvis_core) -> [Zipper](https://github.com/inaka/zipper)) is fixed by including Zipper as a dep in the test profile in `rebar.config`
```
...
{profiles, [
{test, [
{deps, [
%% The tag will be replaced by the rebar.config.script
{zipper, {git, "https://github.com/inaka/zipper.git", {tag, "irrelevant"}}},
...
]}
]}
]}.
...
```
but then, we still replace the tag with the current branch. This is done in `rebar.config.script`.
Therefore, it's really important to have the branch updated and pushed to github before running the tests with `rebar3 ct`.
[References](#references)
---
* [The Zipper, GERARD HUET](https://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-fp/docs/huet-zipper.pdf)
* [clojure.zip](https://clojure.github.io/clojure/clojure.zip-api.html#clojure.zip/zipper)
[API Reference](api-reference.html)
[Next Page →
LICENSE](license.html)
zipper
===
Generic Zipper Implementation. Zippers let you traverse immutable data structures with ease and flexibility.
[Summary](#summary)
===
[Types](#types)
---
[children_fun/1](#t:children_fun/1)
[info/1](#t:info/1)
[is_branch_fun/1](#t:is_branch_fun/1)
[make_node_fun/1](#t:make_node_fun/1)
[operation/0](#t:operation/0)
[zipper/1](#t:zipper/1)
[Functions](#functions)
---
[append_child(T, Zipper)](#append_child/2)
Adds a node as the rightmost child of the current one.
[children(Zipper)](#children/1)
Returns the list of children zippers.
[down(Zipper)](#down/1)
Returns the zipper in the first child, if any.
[edit(Fun, Args, Zipper)](#edit/3)
Edits the current node by applying the given function. The parameters of said function will be [Node | Args].
[filter(Pred, Zipper)](#filter/2)
Returns a list of all the nodes in the zipper that match a predicate.
[fmap(Fun, Args, Zipper)](#fmap/3)
Returns the root of the tree, where the value of each node (after the current location of Zipper) is replaced with the result from applying Fun to the node as the first argument and Args as additional arguments.
[fold(Fun, A, Zipper)](#fold/3)
Applies Fun recursively on the zipper. The arguments of Fun will be (Node, Acc) where Acc is the result of the previous call or the initial value provided.
[insert_child(T, Zipper)](#insert_child/2)
Adds a node as the leftmost child of the current one.
[insert_left(T, Zipper)](#insert_left/2)
Inserts a node to the left of the current one.
[insert_right(T, Zipper)](#insert_right/2)
Inserts a node to the right of the current one.
[is_branch(_)](#is_branch/1)
Is this node a branch?
[is_end(Zipper)](#is_end/1)
Is it the end of the zipper traversal?
[left(Zipper)](#left/1)
Returns the zipper on the left, if any.
[leftmost(Zipper)](#leftmost/1)
Returns the leftmost zipper in the current zipper.
[map(Fun, Zipper)](#map/2)
Applies a function to all nodes of the zipper. Returns a list with the results.
[new(IsBranch, Children, MakeNode, T)](#new/4)
Builds a new zipper with nodes of type T.
[next(Zipper)](#next/1)
Returns the next zipper.
[node(_)](#node/1)
Returns the value of the current node in the zipper.
[prev(Zipper)](#prev/1)
Returns the previous zipper.
[remove(_)](#remove/1)
Removes current node from zipper. Moves down, if possible. If not, it moves to the rightmost node.
[replace(T, Zipper)](#replace/2)
Replaces the current node.
[right(Zipper)](#right/1)
Returns the zipper on the right, if any.
[rightmost(Zipper)](#rightmost/1)
Returns the rightmost zipper in the current zipper.
[root(Zipper)](#root/1)
Returns the node on the root of the zipper.
[size(Zipper)](#size/1)
Returns the size of the zipper.
[traverse(Rest, Zipper)](#traverse/2)
Traverses the zipper following the given list of operations. If, at some point, an operation is invalid, it will crash.
[up(Zipper)](#up/1)
Returns the zipper in the parent node, if possible.
[Types](#types)
===
[Functions](#functions)
===
zipper_default
===
[Summary](#summary)
===
[Types](#types)
---
[bin_tree_node/1](#t:bin_tree_node/1)
[Functions](#functions)
---
[bin_tree(Root)](#bin_tree/1)
Generates a zipper for binary trees.
[list(Root)](#list/1)
Generates a zipper for lists.
[map_tree(M, CK)](#map_tree/2)
Generates a zipper for maps.
[Types](#types)
===
[Functions](#functions)
===
Crate aws_sdk_applicationinsights
===
**Please Note: The SDK is currently in Developer Preview and is intended strictly for feedback purposes only. Do not use this SDK for production workloads.**
Amazon CloudWatch Application Insights is a service that helps you detect common problems with your applications. It enables you to pinpoint the source of issues in your applications (built with technologies such as Microsoft IIS, .NET, and Microsoft SQL Server), by providing key insights into detected problems.
After you onboard your application, CloudWatch Application Insights identifies, recommends, and sets up metrics and logs. It continuously analyzes and correlates your metrics and logs for unusual behavior to surface actionable problems with your application. For example, if your application is slow and unresponsive and leading to HTTP 500 errors in your Application Load Balancer (ALB), Application Insights informs you that a memory pressure problem with your SQL Server database is occurring. It bases this analysis on impactful metrics and log errors.
### Getting Started
> Examples are available for many services and operations, check out the
> examples folder in GitHub.
The SDK provides one crate per AWS service. You must add Tokio as a dependency within your Rust project to execute asynchronous code. To add `aws-sdk-applicationinsights` to your project, add the following to your **Cargo.toml** file:
```
[dependencies]
aws-config = "0.56.1"
aws-sdk-applicationinsights = "0.33.0"
tokio = { version = "1", features = ["full"] }
```
Then in code, a client can be created with the following:
```
use aws_sdk_applicationinsights as applicationinsights;
#[::tokio::main]
async fn main() -> Result<(), applicationinsights::Error> {
let config = aws_config::load_from_env().await;
let client = aws_sdk_applicationinsights::Client::new(&config);
// ... make some calls with the client
Ok(())
}
```
See the client documentation for information on what calls can be made, and the inputs and outputs for each of those calls.
### Using the SDK
Until the SDK is released, we will be adding information about using the SDK to the Developer Guide. Feel free to suggest additional sections for the guide by opening an issue and describing what you are trying to do.
### Getting Help
* GitHub discussions - For ideas, RFCs & general questions
* GitHub issues - For bug reports & feature requests
* Generated Docs (latest version)
* Usage examples
Crate Organization
---
The entry point for most customers will be `Client`, which exposes one method for each API offered by Amazon CloudWatch Application Insights. The return value of each of these methods is a “fluent builder”,
where the different inputs for that API are added by builder-style function call chaining,
followed by calling `send()` to get a `Future` that will result in either a successful output or a `SdkError`.
Some of these API inputs may be structs or enums to provide more complex structured information.
These structs and enums live in `types`. There are some simpler types for representing data such as date times or binary blobs that live in `primitives`.
All types required to configure a client via the `Config` struct live in `config`.
The `operation` module has a submodule for every API, and in each submodule is the input, output, and error type for that API, as well as builders to construct each of those.
There is a top-level `Error` type that encompasses all the errors that the client can return. Any other error type can be converted to this `Error` type via the
`From` trait.
The other modules within this crate are not required for normal usage.
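As a hedged sketch of this pattern (the `list_applications` operation and the `max_results`, `application_info_list`, and `resource_group_name` names are assumed here for illustration; check the generated docs for the exact names in your SDK version):
```
use aws_sdk_applicationinsights as applicationinsights;

#[::tokio::main]
async fn main() -> Result<(), applicationinsights::Error> {
    let config = aws_config::load_from_env().await;
    let client = applicationinsights::Client::new(&config);

    // Fluent builder: chain inputs, then call send() to obtain a Future.
    let resp = client.list_applications().max_results(10).send().await?;

    // Output accessor names are assumed for illustration.
    for app in resp.application_info_list().unwrap_or_default() {
        println!("{:?}", app.resource_group_name());
    }
    Ok(())
}
```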
Modules
---
* clientClient for calling Amazon CloudWatch Application Insights.
* configConfiguration for Amazon CloudWatch Application Insights.
* errorCommon errors and error handling utilities.
* metaInformation about this crate.
* operationAll operations that this crate can perform.
* primitivesPrimitives such as `Blob` or `DateTime` used by other types.
* typesData structures used by operation inputs/outputs.
Structs
---
* ClientClient for Amazon CloudWatch Application Insights
* ConfigConfiguration for a aws_sdk_applicationinsights service client.
Enums
---
* ErrorAll possible error types for this service.
Crate aws_sdk_applicationinsights
===
**Please Note: The SDK is currently in Developer Preview and is intended strictly for feedback purposes only. Do not use this SDK for production workloads.**
Amazon CloudWatch Application Insights is a service that helps you detect common problems with your applications. It enables you to pinpoint the source of issues in your applications (built with technologies such as Microsoft IIS, .NET, and Microsoft SQL Server), by providing key insights into detected problems.
After you onboard your application, CloudWatch Application Insights identifies, recommends, and sets up metrics and logs. It continuously analyzes and correlates your metrics and logs for unusual behavior to surface actionable problems with your application. For example, if your application is slow and unresponsive and leading to HTTP 500 errors in your Application Load Balancer (ALB), Application Insights informs you that a memory pressure problem with your SQL Server database is occurring. It bases this analysis on impactful metrics and log errors.
### Getting Started
> Examples are available for many services and operations, check out the
> examples folder in GitHub.
The SDK provides one crate per AWS service. You must add Tokio as a dependency within your Rust project to execute asynchronous code. To add `aws-sdk-applicationinsights` to your project, add the following to your **Cargo.toml** file:
```
[dependencies]
aws-config = "0.56.1"
aws-sdk-applicationinsights = "0.33.0"
tokio = { version = "1", features = ["full"] }
```
Then in code, a client can be created with the following:
```
use aws_sdk_applicationinsights as applicationinsights;
#[::tokio::main]
async fn main() -> Result<(), applicationinsights::Error> {
let config = aws_config::load_from_env().await;
let client = aws_sdk_applicationinsights::Client::new(&config);
// ... make some calls with the client
Ok(())
}
```
See the client documentation for information on what calls can be made, and the inputs and outputs for each of those calls.
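As a sketch, the skeleton above can be filled in with a first call; the output accessors used here (`application_info_list`, `resource_group_name`) are the ones documented for `ListApplications` further down:
```
use aws_sdk_applicationinsights as applicationinsights;

#[::tokio::main]
async fn main() -> Result<(), applicationinsights::Error> {
    let config = aws_config::load_from_env().await;
    let client = applicationinsights::Client::new(&config);
    // List the applications that Application Insights is monitoring.
    let resp = client.list_applications().send().await?;
    for app in resp.application_info_list().unwrap_or_default() {
        println!("resource group: {:?}", app.resource_group_name());
    }
    Ok(())
}
```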
### Using the SDK
Until the SDK is released, we will be adding information about using the SDK to the Developer Guide. Feel free to suggest additional sections for the guide by opening an issue and describing what you are trying to do.
### Getting Help
* GitHub discussions - For ideas, RFCs & general questions
* GitHub issues - For bug reports & feature requests
* Generated Docs (latest version)
* Usage examples
Crate Organization
---
The entry point for most customers will be `Client`, which exposes one method for each API offered by Amazon CloudWatch Application Insights. The return value of each of these methods is a “fluent builder”,
where the different inputs for that API are added by builder-style function call chaining,
followed by calling `send()` to get a `Future` that will result in either a successful output or a `SdkError`.
Some of these API inputs may be structs or enums to provide more complex structured information.
These structs and enums live in `types`. There are some simpler types for representing data such as date times or binary blobs that live in `primitives`.
All types required to configure a client via the `Config` struct live in `config`.
The `operation` module has a submodule for every API, and in each submodule is the input, output, and error type for that API, as well as builders to construct each of those.
There is a top-level `Error` type that encompasses all the errors that the client can return. Any other error type can be converted to this `Error` type via the
`From` trait.
The other modules within this crate are not required for normal usage.
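As a sketch of how these pieces fit together (the resource group name is a placeholder), the `?` operator below relies on exactly that `From` conversion to turn the operation's `SdkError` into the top-level `Error`:
```
use aws_sdk_applicationinsights as applicationinsights;

async fn show_app(client: &applicationinsights::Client) -> Result<(), applicationinsights::Error> {
    // Fluent builder -> send() -> Future, with `?` converting SdkError into Error.
    let output = client
        .describe_application()
        .resource_group_name("my-resource-group") // placeholder
        .send()
        .await?;
    println!("{:?}", output.application_info());
    Ok(())
}
```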
Modules
---
* `client`: Client for calling Amazon CloudWatch Application Insights.
* `config`: Configuration for Amazon CloudWatch Application Insights.
* `error`: Common errors and error handling utilities.
* `meta`: Information about this crate.
* `operation`: All operations that this crate can perform.
* `primitives`: Primitives such as `Blob` or `DateTime` used by other types.
* `types`: Data structures used by operation inputs/outputs.
Structs
---
* `Client`: Client for Amazon CloudWatch Application Insights.
* `Config`: Configuration for an aws_sdk_applicationinsights service client.
Enums
---
* `Error`: All possible error types for this service.
Struct aws_sdk_applicationinsights::client::Client
===
```
pub struct Client { /* private fields */ }
```
Client for Amazon CloudWatch Application Insights
Client for invoking operations on Amazon CloudWatch Application Insights. Each operation on Amazon CloudWatch Application Insights is a method on this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_applicationinsights::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific configuration that can be set on the `Config` that is absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_applicationinsights::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
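A sketch of that customized route, where the region string is only a placeholder for your own:
```
// Customize the default config resolution before building the client.
let sdk_config = aws_config::from_env()
    .region("us-east-1") // placeholder region
    .load()
    .await;
let client = aws_sdk_applicationinsights::Client::new(&sdk_config);
```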
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `AddWorkload` operation has a `Client::add_workload` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.add_workload()
.resource_group_name("example")
.send()
.await;
```
The underlying HTTP requests made by an operation can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
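Continuing the example, a minimal sketch of inspecting `result` using the `AddWorkload` output fields documented below:
```
match result {
    Ok(output) => println!("workload id: {:?}", output.workload_id()),
    Err(err) => eprintln!("AddWorkload failed: {}", err),
}
```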
Implementations
---
### impl Client
#### pub fn add_workload(&self) -> AddWorkloadFluentBuilder
Constructs a fluent builder for the `AddWorkload` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `workload_configuration(WorkloadConfiguration)` / `set_workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On success, responds with `AddWorkloadOutput` with field(s):
+ `workload_id(Option<String>)`: The ID of the workload.
+ `workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<AddWorkloadError>`
### impl Client
#### pub fn create_application(&self) -> CreateApplicationFluentBuilder
Constructs a fluent builder for the `CreateApplication` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `ops_center_enabled(bool)` / `set_ops_center_enabled(Option<bool>)`: When set to `true`, creates opsItems for any problems detected on an application.
+ `cwe_monitor_enabled(bool)` / `set_cwe_monitor_enabled(Option<bool>)`: Indicates whether Application Insights can listen to CloudWatch events for the application resources, such as `instance terminated`, `failed deployment`, and others.
+ `ops_item_sns_topic_arn(impl Into<String>)` / `set_ops_item_sns_topic_arn(Option<String>)`: The SNS topic provided to Application Insights that is associated to the created opsItem. Allows you to receive notifications for updates to the opsItem.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: List of tags to add to the application. Each tag consists of a required tag key (`Key`) and an associated tag value (`Value`). The maximum length of a tag key is 128 characters. The maximum length of a tag value is 256 characters.
+ `auto_config_enabled(bool)` / `set_auto_config_enabled(Option<bool>)`: Indicates whether Application Insights automatically configures unmonitored resources in the resource group.
+ `auto_create(bool)` / `set_auto_create(Option<bool>)`: Configures all of the resources in the resource group by applying the recommended configurations.
+ `grouping_type(GroupingType)` / `set_grouping_type(Option<GroupingType>)`: Application Insights can create applications based on a resource group or on an account. To create an account-based application using all of the resources in the account, set this parameter to `ACCOUNT_BASED`.
* On success, responds with `CreateApplicationOutput` with field(s):
+ `application_info(Option<ApplicationInfo>)`: Information about the application.
* On failure, responds with `SdkError<CreateApplicationError>`
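A minimal sketch of this operation with a couple of the options above (the resource group name is a placeholder, and `client` is assumed to be in scope inside an async context):
```
let created = client
    .create_application()
    .resource_group_name("my-resource-group") // placeholder
    .auto_config_enabled(true)
    .ops_center_enabled(false)
    .send()
    .await?;
println!("{:?}", created.application_info());
```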
### impl Client
#### pub fn create_component(&self) -> CreateComponentFluentBuilder
Constructs a fluent builder for the `CreateComponent` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `resource_list(impl Into<String>)` / `set_resource_list(Option<Vec<String>>)`: The list of resource ARNs that belong to the component.
* On success, responds with `CreateComponentOutput`
* On failure, responds with `SdkError<CreateComponentError>`
### impl Client
#### pub fn create_log_pattern(&self) -> CreateLogPatternFluentBuilder
Constructs a fluent builder for the `CreateLogPattern` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `pattern_name(impl Into<String>)` / `set_pattern_name(Option<String>)`: The name of the log pattern.
+ `pattern(impl Into<String>)` / `set_pattern(Option<String>)`: The log pattern. The pattern must be DFA compatible. Patterns that utilize forward lookahead or backreference constructions are not supported.
+ `rank(i32)` / `set_rank(Option<i32>)`: Rank of the log pattern. Must be a value between `1` and `1,000,000`. The patterns are sorted by rank, so we recommend that you set your highest priority patterns with the lowest rank. A pattern of rank `1` will be the first to get matched to a log line. A pattern of rank `1,000,000` will be last to get matched. When you configure custom log patterns from the console, a `Low` severity pattern translates to a `750,000` rank. A `Medium` severity pattern translates to a `500,000` rank. And a `High` severity pattern translates to a `250,000` rank. Rank values less than `1` or greater than `1,000,000` are reserved for AWS-provided patterns.
* On success, responds with `CreateLogPatternOutput` with field(s):
+ `log_pattern(Option<LogPattern>)`: The successfully created log pattern.
+ `resource_group_name(Option<String>)`: The name of the resource group.
* On failure, responds with `SdkError<CreateLogPatternError>`
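Because lower ranks are matched first, a high-priority custom pattern might be registered as in this sketch (all names are placeholders):
```
client
    .create_log_pattern()
    .resource_group_name("my-resource-group") // placeholder
    .pattern_set_name("default-set")          // placeholder
    .pattern_name("error-lines")              // placeholder
    .pattern(".*ERROR.*")                     // must stay DFA compatible
    .rank(250_000) // same rank the console assigns to High severity patterns
    .send()
    .await?;
```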
### impl Client
#### pub fn delete_application(&self) -> DeleteApplicationFluentBuilder
Constructs a fluent builder for the `DeleteApplication` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
* On success, responds with `DeleteApplicationOutput`
* On failure, responds with `SdkError<DeleteApplicationError>`
### impl Client
#### pub fn delete_component(&self) -> DeleteComponentFluentBuilder
Constructs a fluent builder for the `DeleteComponent` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
* On success, responds with `DeleteComponentOutput`
* On failure, responds with `SdkError<DeleteComponentError>`
### impl Client
#### pub fn delete_log_pattern(&self) -> DeleteLogPatternFluentBuilder
Constructs a fluent builder for the `DeleteLogPattern` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `pattern_name(impl Into<String>)` / `set_pattern_name(Option<String>)`: The name of the log pattern.
* On success, responds with `DeleteLogPatternOutput`
* On failure, responds with `SdkError<DeleteLogPatternError>`
### impl Client
#### pub fn describe_application(&self) -> DescribeApplicationFluentBuilder
Constructs a fluent builder for the `DescribeApplication` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeApplicationOutput` with field(s):
+ `application_info(Option<ApplicationInfo>)`: Information about the application.
* On failure, responds with `SdkError<DescribeApplicationError>`
### impl Client
#### pub fn describe_component(&self) -> DescribeComponentFluentBuilder
Constructs a fluent builder for the `DescribeComponent` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeComponentOutput` with field(s):
+ `application_component(Option<ApplicationComponent>)`: Describes a standalone resource or similarly grouped resources that the application is made up of.
+ `resource_list(Option<Vec<String>>)`: The list of resource ARNs that belong to the component.
* On failure, responds with `SdkError<DescribeComponentError>`
### impl Client
#### pub fn describe_component_configuration(
&self
) -> DescribeComponentConfigurationFluentBuilder
Constructs a fluent builder for the `DescribeComponentConfiguration` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeComponentConfigurationOutput` with field(s):
+ `monitor(Option<bool>)`: Indicates whether the application component is monitored.
+ `tier(Option<Tier>)`: The tier of the application component. Supported tiers include `DOT_NET_CORE`, `DOT_NET_WORKER`, `DOT_NET_WEB`, `SQL_SERVER`, and `DEFAULT`.
+ `component_configuration(Option<String>)`: The configuration settings of the component. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<DescribeComponentConfigurationError>`
### impl Client
#### pub fn describe_component_configuration_recommendation(
&self
) -> DescribeComponentConfigurationRecommendationFluentBuilder
Constructs a fluent builder for the `DescribeComponentConfigurationRecommendation` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `tier(Tier)` / `set_tier(Option<Tier>)`: The tier of the application component.
+ `recommendation_type(RecommendationType)` / `set_recommendation_type(Option<RecommendationType>)`: The recommended configuration type.
* On success, responds with `DescribeComponentConfigurationRecommendationOutput` with field(s):
+ `component_configuration(Option<String>)`: The recommended configuration settings of the component. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<DescribeComponentConfigurationRecommendationError>`
### impl Client
#### pub fn describe_log_pattern(&self) -> DescribeLogPatternFluentBuilder
Constructs a fluent builder for the `DescribeLogPattern` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `pattern_name(impl Into<String>)` / `set_pattern_name(Option<String>)`: The name of the log pattern.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeLogPatternOutput` with field(s):
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(Option<String>)`: The AWS account ID for the resource group owner.
+ `log_pattern(Option<LogPattern>)`: The successfully created log pattern.
* On failure, responds with `SdkError<DescribeLogPatternError>`
### impl Client
#### pub fn describe_observation(&self) -> DescribeObservationFluentBuilder
Constructs a fluent builder for the `DescribeObservation` operation.
* The fluent builder is configurable:
+ `observation_id(impl Into<String>)` / `set_observation_id(Option<String>)`: The ID of the observation.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeObservationOutput` with field(s):
+ `observation(Option<Observation>)`: Information about the observation.
* On failure, responds with `SdkError<DescribeObservationError>`
### impl Client
#### pub fn describe_problem(&self) -> DescribeProblemFluentBuilder
Constructs a fluent builder for the `DescribeProblem` operation.
* The fluent builder is configurable:
+ `problem_id(impl Into<String>)` / `set_problem_id(Option<String>)`: The ID of the problem.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the owner of the resource group affected by the problem.
* On success, responds with `DescribeProblemOutput` with field(s):
+ `problem(Option<Problem>)`: Information about the problem.
* On failure, responds with `SdkError<DescribeProblemError>`
### impl Client
#### pub fn describe_problem_observations(
&self
) -> DescribeProblemObservationsFluentBuilder
Constructs a fluent builder for the `DescribeProblemObservations` operation.
* The fluent builder is configurable:
+ `problem_id(impl Into<String>)` / `set_problem_id(Option<String>)`: The ID of the problem.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeProblemObservationsOutput` with field(s):
+ `related_observations(Option<RelatedObservations>)`: Observations related to the problem.
* On failure, responds with `SdkError<DescribeProblemObservationsError>`
### impl Client
#### pub fn describe_workload(&self) -> DescribeWorkloadFluentBuilder
Constructs a fluent builder for the `DescribeWorkload` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `workload_id(impl Into<String>)` / `set_workload_id(Option<String>)`: The ID of the workload.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the workload owner.
* On success, responds with `DescribeWorkloadOutput` with field(s):
+ `workload_id(Option<String>)`: The ID of the workload.
+ `workload_remarks(Option<String>)`: If logging is supported for the resource type, shows whether the component has configured logs to be monitored.
+ `workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<DescribeWorkloadError>`
### impl Client
#### pub fn list_applications(&self) -> ListApplicationsFluentBuilder
Constructs a fluent builder for the `ListApplications` operation.
This operation supports pagination; see `into_paginator()` and the paginator sketch after this operation's details.
* The fluent builder is configurable:
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListApplicationsOutput` with field(s):
+ `application_info_list(Option<Vec<ApplicationInfo>>)`: The list of applications.
+ `next_token(Option<String>)`: The token used to retrieve the next page of results. This value is `null` when there are no more results to return.
* On failure, responds with `SdkError<ListApplicationsError>`
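Rather than threading `NextToken` by hand, `into_paginator()` turns the operation into a stream of pages; a sketch, inside a function returning `Result<(), applicationinsights::Error>`:
```
let mut pages = client
    .list_applications()
    .max_results(20)
    .into_paginator()
    .send();
// Each stream item is one page: a Result holding ListApplicationsOutput.
while let Some(page) = pages.next().await {
    let page = page?;
    for app in page.application_info_list().unwrap_or_default() {
        println!("{:?}", app.resource_group_name());
    }
}
```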
### impl Client
#### pub fn list_components(&self) -> ListComponentsFluentBuilder
Constructs a fluent builder for the `ListComponents` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListComponentsOutput` with field(s):
+ `application_component_list(Option<Vec<ApplicationComponent>>)`: The list of application components.
+ `next_token(Option<String>)`: The token to request the next page of results.
* On failure, responds with `SdkError<ListComponentsError>`
### impl Client
#### pub fn list_configuration_history(
&self
) -> ListConfigurationHistoryFluentBuilder
Constructs a fluent builder for the `ListConfigurationHistory` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: Resource group to which the application belongs.
+ `start_time(DateTime)` / `set_start_time(Option<DateTime>)`: The start time of the event.
+ `end_time(DateTime)` / `set_end_time(Option<DateTime>)`: The end time of the event.
+ `event_status(ConfigurationEventStatus)` / `set_event_status(Option<ConfigurationEventStatus>)`: The status of the configuration update event. Possible values include INFO, WARN, and ERROR.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results returned by `ListConfigurationHistory` in paginated output. When this parameter is used, `ListConfigurationHistory` returns only `MaxResults` in a single page along with a `NextToken` response element. The remaining results of the initial request can be seen by sending another `ListConfigurationHistory` request with the returned `NextToken` value. If this parameter is not used, then `ListConfigurationHistory` returns all results.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The `NextToken` value returned from a previous paginated `ListConfigurationHistory` request where `MaxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `NextToken` value. This value is `null` when there are no more results to return.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListConfigurationHistoryOutput` with field(s):
+ `event_list(Option<Vec<ConfigurationEvent>>)`: The list of configuration events and their corresponding details.
+ `next_token(Option<String>)`: The `NextToken` value to include in a future `ListConfigurationHistory` request. When the results of a `ListConfigurationHistory` request exceed `MaxResults`, this value can be used to retrieve the next page of results. This value is `null` when there are no more results to return.
* On failure, responds with `SdkError<ListConfigurationHistoryError>`
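The `MaxResults`/`NextToken` contract described above can also be driven by hand, as in this sketch (the resource group name is a placeholder; `client` is in scope, inside a function returning `Result<(), applicationinsights::Error>`):
```
let mut next_token: Option<String> = None;
loop {
    let resp = client
        .list_configuration_history()
        .resource_group_name("my-resource-group") // placeholder
        .max_results(50)
        .set_next_token(next_token.take())
        .send()
        .await?;
    for event in resp.event_list().unwrap_or_default() {
        println!("{:?}", event.event_status());
    }
    // A missing NextToken means the last page has been reached.
    match resp.next_token() {
        Some(token) => next_token = Some(token.to_string()),
        None => break,
    }
}
```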
### impl Client
#### pub fn list_log_pattern_sets(&self) -> ListLogPatternSetsFluentBuilder
Constructs a fluent builder for the `ListLogPatternSets` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListLogPatternSetsOutput` with field(s):
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(Option<String>)`: The AWS account ID for the resource group owner.
+ `log_pattern_sets(Option<Vec<String>>)`: The list of log pattern sets.
+ `next_token(Option<String>)`: The token used to retrieve the next page of results. This value is `null` when there are no more results to return.
* On failure, responds with `SdkError<ListLogPatternSetsError>`
### impl Client
#### pub fn list_log_patterns(&self) -> ListLogPatternsFluentBuilder
Constructs a fluent builder for the `ListLogPatterns` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListLogPatternsOutput` with field(s):
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(Option<String>)`: The AWS account ID for the resource group owner.
+ `log_patterns(Option<Vec<LogPattern>>)`: The list of log patterns.
+ `next_token(Option<String>)`: The token used to retrieve the next page of results. This value is `null` when there are no more results to return.
* On failure, responds with `SdkError<ListLogPatternsError>`
### impl Client
#### pub fn list_problems(&self) -> ListProblemsFluentBuilder
Constructs a fluent builder for the `ListProblems` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `start_time(DateTime)` / `set_start_time(Option<DateTime>)`: The time when the problem was detected, in epoch seconds. If you don’t specify a time frame for the request, problems within the past seven days are returned.
+ `end_time(DateTime)` / `set_end_time(Option<DateTime>)`: The time when the problem ended, in epoch seconds. If not specified, problems within the past seven days are returned.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `visibility(Visibility)` / `set_visibility(Option<Visibility>)`: Specifies whether or not you can view the problem. If not specified, visible and ignored problems are returned.
* On success, responds with `ListProblemsOutput` with field(s):
+ `problem_list(Option<Vec<Problem>>)`: The list of problems.
+ `next_token(Option<String>)`: The token used to retrieve the next page of results. This value is `null` when there are no more results to return.
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On failure, responds with `SdkError<ListProblemsError>`
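To scope the query to an explicit window instead of the default seven days, the `DateTime` primitive can be built from epoch seconds; a sketch with placeholder values:
```
use aws_sdk_applicationinsights::primitives::DateTime;

let problems = client
    .list_problems()
    .resource_group_name("my-resource-group")       // placeholder
    .start_time(DateTime::from_secs(1_692_000_000)) // placeholder epoch seconds
    .end_time(DateTime::from_secs(1_692_086_400))   // placeholder epoch seconds
    .send()
    .await?;
for problem in problems.problem_list().unwrap_or_default() {
    println!("{:?}: {:?}", problem.id(), problem.title());
}
```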
### impl Client
#### pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder
Constructs a fluent builder for the `ListTagsForResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: The Amazon Resource Name (ARN) of the application that you want to retrieve tag information for.
* On success, responds with `ListTagsForResourceOutput` with field(s):
+ `tags(Option<Vec<Tag>>)`: An array that lists all the tags that are associated with the application. Each tag consists of a required tag key (`Key`) and an associated tag value (`Value`).
* On failure, responds with `SdkError<ListTagsForResourceError>`
### impl Client
#### pub fn list_workloads(&self) -> ListWorkloadsFluentBuilder
Constructs a fluent builder for the `ListWorkloads` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID of the owner of the workload.
* On success, responds with `ListWorkloadsOutput` with field(s):
+ `workload_list(Option<Vec<Workload>>)`: The list of workloads.
+ `next_token(Option<String>)`: The token to request the next page of results.
* On failure, responds with `SdkError<ListWorkloadsError>`
### impl Client
#### pub fn remove_workload(&self) -> RemoveWorkloadFluentBuilder
Constructs a fluent builder for the `RemoveWorkload` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `workload_id(impl Into<String>)` / `set_workload_id(Option<String>)`: The ID of the workload.
* On success, responds with `RemoveWorkloadOutput`
* On failure, responds with `SdkError<RemoveWorkloadError>`
### impl Client
#### pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the `TagResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: The Amazon Resource Name (ARN) of the application that you want to add one or more tags to.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: A list of tags that to add to the application. A tag consists of a required tag key (`Key`) and an associated tag value (`Value`). The maximum length of a tag key is 128 characters. The maximum length of a tag value is 256 characters.
* On success, responds with `TagResourceOutput`
* On failure, responds with `SdkError<TagResourceError>`
### impl Client
#### pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the `UntagResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: The Amazon Resource Name (ARN) of the application that you want to remove one or more tags from.
+ `tag_keys(impl Into<String>)` / `set_tag_keys(Option<Vec<String>>)`: The tags (tag keys) that you want to remove from the resource. When you specify a tag key, the action removes both that key and its associated tag value.
To remove more than one tag from the application, append the `TagKeys` parameter and argument for each additional tag to remove, separated by an ampersand.
* On success, responds with `UntagResourceOutput`
* On failure, responds with `SdkError<UntagResourceError>`
### impl Client
#### pub fn update_application(&self) -> UpdateApplicationFluentBuilder
Constructs a fluent builder for the `UpdateApplication` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `ops_center_enabled(bool)` / `set_ops_center_enabled(Option<bool>)`: When set to `true`, creates opsItems for any problems detected on an application.
+ `cwe_monitor_enabled(bool)` / `set_cwe_monitor_enabled(Option<bool>)`: Indicates whether Application Insights can listen to CloudWatch events for the application resources, such as `instance terminated`, `failed deployment`, and others.
+ `ops_item_sns_topic_arn(impl Into<String>)` / `set_ops_item_sns_topic_arn(Option<String>)`: The SNS topic provided to Application Insights that is associated to the created opsItem. Allows you to receive notifications for updates to the opsItem.
+ `remove_sns_topic(bool)` / `set_remove_sns_topic(Option<bool>)`: Disassociates the SNS topic from the opsItem created for detected problems.
+ `auto_config_enabled(bool)` / `set_auto_config_enabled(Option<bool>)`: Turns auto-configuration on or off.
* On success, responds with `UpdateApplicationOutput` with field(s):
+ `application_info(Option<ApplicationInfo>)`: Information about the application.
* On failure, responds with `SdkError<UpdateApplicationError>`
### impl Client
#### pub fn update_component(&self) -> UpdateComponentFluentBuilder
Constructs a fluent builder for the `UpdateComponent` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `new_component_name(impl Into<String>)` / `set_new_component_name(Option<String>)`: The new name of the component.
+ `resource_list(impl Into<String>)` / `set_resource_list(Option<Vec<String>>)`: The list of resource ARNs that belong to the component.
* On success, responds with `UpdateComponentOutput`
* On failure, responds with `SdkError<UpdateComponentError>`
### impl Client
#### pub fn update_component_configuration(
&self
) -> UpdateComponentConfigurationFluentBuilder
Constructs a fluent builder for the `UpdateComponentConfiguration` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `monitor(bool)` / `set_monitor(Option<bool>)`: Indicates whether the application component is monitored.
+ `tier(Tier)` / `set_tier(Option<Tier>)`: The tier of the application component.
+ `component_configuration(impl Into<String>)` / `set_component_configuration(Option<String>)`: The configuration settings of the component. The value is the escaped JSON of the configuration. For more information about the JSON format, see Working with JSON. You can send a request to `DescribeComponentConfigurationRecommendation` to see the recommended configuration for a component. For the complete format of the component configuration file, see Component Configuration.
+ `auto_config_enabled(bool)` / `set_auto_config_enabled(Option<bool>)`: Automatically configures the component by applying the recommended configurations.
* On success, responds with `UpdateComponentConfigurationOutput`
* On failure, responds with `SdkError<UpdateComponentConfigurationError>`
### impl Client
#### pub fn update_log_pattern(&self) -> UpdateLogPatternFluentBuilder
Constructs a fluent builder for the `UpdateLogPattern` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `pattern_name(impl Into<String>)` / `set_pattern_name(Option<String>)`: The name of the log pattern.
+ `pattern(impl Into<String>)` / `set_pattern(Option<String>)`: The log pattern. The pattern must be DFA compatible. Patterns that utilize forward lookahead or backreference constructions are not supported.
+ `rank(i32)` / `set_rank(Option<i32>)`: Rank of the log pattern. Must be a value between `1` and `1,000,000`. The patterns are sorted by rank, so we recommend that you set your highest priority patterns with the lowest rank. A pattern of rank `1` will be the first to get matched to a log line. A pattern of rank `1,000,000` will be last to get matched. When you configure custom log patterns from the console, a `Low` severity pattern translates to a `750,000` rank. A `Medium` severity pattern translates to a `500,000` rank. And a `High` severity pattern translates to a `250,000` rank. Rank values less than `1` or greater than `1,000,000` are reserved for AWS-provided patterns.
* On success, responds with `UpdateLogPatternOutput` with field(s):
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `log_pattern(Option<LogPattern>)`: The successfully created log pattern.
* On failure, responds with `SdkError<UpdateLogPatternError>`
### impl Client
#### pub fn update_problem(&self) -> UpdateProblemFluentBuilder
Constructs a fluent builder for the `UpdateProblem` operation.
* The fluent builder is configurable:
+ `problem_id(impl Into<String>)` / `set_problem_id(Option<String>)`: The ID of the problem.
+ `update_status(UpdateStatus)` / `set_update_status(Option<UpdateStatus>)`: The status of the problem. Arguments can be passed only for problems that show a status of `RECOVERING`.
+ `visibility(Visibility)` / `set_visibility(Option<Visibility>)`: The visibility of a problem. When you pass a value of `IGNORED`, the problem is removed from the default view, and all notifications for the problem are suspended. When `VISIBLE` is passed, the `IGNORED` action is reversed.
* On success, responds with `UpdateProblemOutput`
* On failure, responds with `SdkError<UpdateProblemError>`
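For instance, muting a noisy problem and suspending its notifications might look like this sketch (the problem ID is a placeholder; `Visibility` lives in `types`):
```
use aws_sdk_applicationinsights::types::Visibility;

client
    .update_problem()
    .problem_id("p-1234567890") // placeholder problem ID
    .visibility(Visibility::Ignored)
    .send()
    .await?;
```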
### impl Client
#### pub fn update_workload(&self) -> UpdateWorkloadFluentBuilder
Constructs a fluent builder for the `UpdateWorkload` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `workload_id(impl Into<String>)` / `set_workload_id(Option<String>)`: The ID of the workload.
+ `workload_configuration(WorkloadConfiguration)` / `set_workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On success, responds with `UpdateWorkloadOutput` with field(s):
+ `workload_id(Option<String>)`: The ID of the workload.
+ `workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<UpdateWorkloadError>`
### impl Client
#### pub fn from_conf(conf: Config) -> Self
Creates a new client from the service `Config`.
##### Panics
This method will panic if the `conf` has retry or timeouts enabled without a `sleep_impl`.
If you experience this panic, it can be fixed by setting the `sleep_impl`, or by disabling retries and timeouts.
#### pub fn config(&self) -> &Config
Returns the client’s configuration.
### impl Client
#### pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
##### Panics
* This method will panic if the `sdk_config` is missing an async sleep implementation. If you experience this panic, set the `sleep_impl` on the Config passed into this function to fix it.
* This method will panic if the `sdk_config` is missing an HTTP connector. If you experience this panic, set the
`http_connector` on the Config passed into this function to fix it.
Trait Implementations
---
### impl Clone for Client
#### fn clone(&self) -> Client
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Client
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Client
### impl Send for Client
### impl Sync for Client
### impl Unpin for Client
### impl !UnwindSafe for Client
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct aws_sdk_applicationinsights::Client
===
```
pub struct Client { /* private fields */ }
```
Client for Amazon CloudWatch Application Insights
Client for invoking operations on Amazon CloudWatch Application Insights. Each operation on Amazon CloudWatch Application Insights is a method on this this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_applicationinsights::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific that can be set on the `Config` that is absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_applicationinsights::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `AddWorkload` operation has a `Client::add_workload`, function which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.add_workload()
.resource_group_name("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Implementations
---
### impl Client
#### pub fn add_workload(&self) -> AddWorkloadFluentBuilder
Constructs a fluent builder for the `AddWorkload` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `workload_configuration(WorkloadConfiguration)` / `set_workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On success, responds with `AddWorkloadOutput` with field(s):
+ `workload_id(Option<String>)`: The ID of the workload.
+ `workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<AddWorkloadError>`
### impl Client
#### pub fn create_application(&self) -> CreateApplicationFluentBuilder
Constructs a fluent builder for the `CreateApplication` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `ops_center_enabled(bool)` / `set_ops_center_enabled(Option<bool>)`: When set to `true`, creates opsItems for any problems detected on an application.
+ `cwe_monitor_enabled(bool)` / `set_cwe_monitor_enabled(Option<bool>)`: Indicates whether Application Insights can listen to CloudWatch events for the application resources, such as `instance terminated`, `failed deployment`, and others.
+ `ops_item_sns_topic_arn(impl Into<String>)` / `set_ops_item_sns_topic_arn(Option<String>)`: The SNS topic provided to Application Insights that is associated to the created opsItem. Allows you to receive notifications for updates to the opsItem.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: List of tags to add to the application. tag key (`Key`) and an associated tag value (`Value`). The maximum length of a tag key is 128 characters. The maximum length of a tag value is 256 characters.
+ `auto_config_enabled(bool)` / `set_auto_config_enabled(Option<bool>)`: Indicates whether Application Insights automatically configures unmonitored resources in the resource group.
+ `auto_create(bool)` / `set_auto_create(Option<bool>)`: Configures all of the resources in the resource group by applying the recommended configurations.
+ `grouping_type(GroupingType)` / `set_grouping_type(Option<GroupingType>)`: Application Insights can create applications based on a resource group or on an account. To create an account-based application using all of the resources in the account, set this parameter to `ACCOUNT_BASED`.
* On success, responds with `CreateApplicationOutput` with field(s):
+ `application_info(Option<ApplicationInfo>)`: Information about the application.
* On failure, responds with `SdkError<CreateApplicationError>`
### impl Client
#### pub fn create_component(&self) -> CreateComponentFluentBuilder
Constructs a fluent builder for the `CreateComponent` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `resource_list(impl Into<String>)` / `set_resource_list(Option<Vec<String>>)`: The list of resource ARNs that belong to the component.
* On success, responds with `CreateComponentOutput`
* On failure, responds with `SdkError<CreateComponentError>`
### impl Client
#### pub fn create_log_pattern(&self) -> CreateLogPatternFluentBuilder
Constructs a fluent builder for the `CreateLogPattern` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `pattern_name(impl Into<String>)` / `set_pattern_name(Option<String>)`: The name of the log pattern.
+ `pattern(impl Into<String>)` / `set_pattern(Option<String>)`: The log pattern. The pattern must be DFA compatible. Patterns that utilize forward lookahead or backreference constructions are not supported.
+ `rank(i32)` / `set_rank(Option<i32>)`: Rank of the log pattern. Must be a value between `1` and `1,000,000`. The patterns are sorted by rank, so we recommend that you set your highest priority patterns with the lowest rank. A pattern of rank `1` will be the first to get matched to a log line. A pattern of rank `1,000,000` will be last to get matched. When you configure custom log patterns from the console, a `Low` severity pattern translates to a `750,000` rank. A `Medium` severity pattern translates to a `500,000` rank. And a `High` severity pattern translates to a `250,000` rank. Rank values less than `1` or greater than `1,000,000` are reserved for AWS-provided patterns.
* On success, responds with `CreateLogPatternOutput` with field(s):
+ `log_pattern(Option<LogPattern>)`: The successfully created log pattern.
+ `resource_group_name(Option<String>)`: The name of the resource group.
* On failure, responds with `SdkError<CreateLogPatternError>`
### impl Client
#### pub fn delete_application(&self) -> DeleteApplicationFluentBuilder
Constructs a fluent builder for the `DeleteApplication` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
* On success, responds with `DeleteApplicationOutput`
* On failure, responds with `SdkError<DeleteApplicationError>`
### impl Client
#### pub fn delete_component(&self) -> DeleteComponentFluentBuilder
Constructs a fluent builder for the `DeleteComponent` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
* On success, responds with `DeleteComponentOutput`
* On failure, responds with `SdkError<DeleteComponentError>`
### impl Client
#### pub fn delete_log_pattern(&self) -> DeleteLogPatternFluentBuilder
Constructs a fluent builder for the `DeleteLogPattern` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `pattern_name(impl Into<String>)` / `set_pattern_name(Option<String>)`: The name of the log pattern.
* On success, responds with `DeleteLogPatternOutput`
* On failure, responds with `SdkError<DeleteLogPatternError>`
### impl Client
#### pub fn describe_application(&self) -> DescribeApplicationFluentBuilder
Constructs a fluent builder for the `DescribeApplication` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeApplicationOutput` with field(s):
+ `application_info(Option<ApplicationInfo>)`: Information about the application.
* On failure, responds with `SdkError<DescribeApplicationError>`
### impl Client
#### pub fn describe_component(&self) -> DescribeComponentFluentBuilder
Constructs a fluent builder for the `DescribeComponent` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeComponentOutput` with field(s):
+ `application_component(Option<ApplicationComponent>)`: Describes a standalone resource or similarly grouped resources that the application is made up of.
+ `resource_list(Option<Vec<String>>)`: The list of resource ARNs that belong to the component.
* On failure, responds with `SdkError<DescribeComponentError>`
### impl Client
#### pub fn describe_component_configuration(
&self
) -> DescribeComponentConfigurationFluentBuilder
Constructs a fluent builder for the `DescribeComponentConfiguration` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeComponentConfigurationOutput` with field(s):
+ `monitor(Option<bool>)`: Indicates whether the application component is monitored.
+ `tier(Option<Tier>)`: The tier of the application component. Supported tiers include `DOT_NET_CORE`, `DOT_NET_WORKER`, `DOT_NET_WEB`, `SQL_SERVER`, and `DEFAULT`.
+ `component_configuration(Option<String>)`: The configuration settings of the component. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<DescribeComponentConfigurationError>`
### impl Client
#### pub fn describe_component_configuration_recommendation(
&self
) -> DescribeComponentConfigurationRecommendationFluentBuilder
Constructs a fluent builder for the `DescribeComponentConfigurationRecommendation` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `tier(Tier)` / `set_tier(Option<Tier>)`: The tier of the application component.
+ `recommendation_type(RecommendationType)` / `set_recommendation_type(Option<RecommendationType>)`: The recommended configuration type.
* On success, responds with `DescribeComponentConfigurationRecommendationOutput` with field(s):
+ `component_configuration(Option<String>)`: The recommended configuration settings of the component. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<DescribeComponentConfigurationRecommendationError>`
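A hedged sketch tying this builder to `UpdateComponentConfiguration` below: fetch the recommended escaped-JSON configuration for a component, then apply it unchanged. The resource group and component names are placeholders, and `Tier::Default` is an arbitrary choice:

```
use aws_sdk_applicationinsights::{types::Tier, Client, Error};

// A sketch: request the recommended configuration and apply it as-is.
async fn apply_recommended_config(client: &Client) -> Result<(), Error> {
    let rec = client
        .describe_component_configuration_recommendation()
        .resource_group_name("my-resource-group") // placeholder
        .component_name("my-component")           // placeholder
        .tier(Tier::Default)
        .send()
        .await?;
    if let Some(config_json) = rec.component_configuration() {
        client
            .update_component_configuration()
            .resource_group_name("my-resource-group")
            .component_name("my-component")
            .tier(Tier::Default)
            .monitor(true)
            .component_configuration(config_json)
            .send()
            .await?;
    }
    Ok(())
}
```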
### impl Client
#### pub fn describe_log_pattern(&self) -> DescribeLogPatternFluentBuilder
Constructs a fluent builder for the `DescribeLogPattern` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `pattern_name(impl Into<String>)` / `set_pattern_name(Option<String>)`: The name of the log pattern.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeLogPatternOutput` with field(s):
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(Option<String>)`: The AWS account ID for the resource group owner.
+ `log_pattern(Option<LogPattern>)`: The successfully created log pattern.
* On failure, responds with `SdkError<DescribeLogPatternError>`
### impl Client
#### pub fn describe_observation(&self) -> DescribeObservationFluentBuilder
Constructs a fluent builder for the `DescribeObservation` operation.
* The fluent builder is configurable:
+ `observation_id(impl Into<String>)` / `set_observation_id(Option<String>)`: The ID of the observation.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeObservationOutput` with field(s):
+ `observation(Option<Observation>)`: Information about the observation.
* On failure, responds with `SdkError<DescribeObservationError>`
### impl Client
#### pub fn describe_problem(&self) -> DescribeProblemFluentBuilder
Constructs a fluent builder for the `DescribeProblem` operation.
* The fluent builder is configurable:
+ `problem_id(impl Into<String>)` / `set_problem_id(Option<String>)`: The ID of the problem.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the owner of the resource group affected by the problem.
* On success, responds with `DescribeProblemOutput` with field(s):
+ `problem(Option<Problem>)`: Information about the problem.
* On failure, responds with `SdkError<DescribeProblemError>`
### impl Client
#### pub fn describe_problem_observations(
&self
) -> DescribeProblemObservationsFluentBuilder
Constructs a fluent builder for the `DescribeProblemObservations` operation.
* The fluent builder is configurable:
+ `problem_id(impl Into<String>)` / `set_problem_id(Option<String>)`: The ID of the problem.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `DescribeProblemObservationsOutput` with field(s):
+ `related_observations(Option<RelatedObservations>)`: Observations related to the problem.
* On failure, responds with `SdkError<DescribeProblemObservationsError>`
### impl Client
#### pub fn describe_workload(&self) -> DescribeWorkloadFluentBuilder
Constructs a fluent builder for the `DescribeWorkload` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `workload_id(impl Into<String>)` / `set_workload_id(Option<String>)`: The ID of the workload.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the workload owner.
* On success, responds with `DescribeWorkloadOutput` with field(s):
+ `workload_id(Option<String>)`: The ID of the workload.
+ `workload_remarks(Option<String>)`: If logging is supported for the resource type, shows whether the component has configured logs to be monitored.
+ `workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<DescribeWorkloadError>`
### impl Client
#### pub fn list_applications(&self) -> ListApplicationsFluentBuilder
Constructs a fluent builder for the `ListApplications` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListApplicationsOutput` with field(s):
+ `application_info_list(Option<Vec<ApplicationInfo>>)`: The list of applications.
+ `next_token(Option<String>)`: The token used to retrieve the next page of results. This value is `null` when there are no more results to return.
* On failure, responds with `SdkError<ListApplicationsError>`
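A sketch of the pagination flow mentioned above. Depending on the SDK version, `into_paginator().send()` returns a pagination stream whose pages are drained with `next()`; the accessor names follow the output fields listed here:

```
use aws_sdk_applicationinsights::{Client, Error};

// A sketch: walk every page of ListApplications.
async fn print_all_applications(client: &Client) -> Result<(), Error> {
    let mut pages = client.list_applications().into_paginator().send();
    // Each page is a Result<ListApplicationsOutput, SdkError<ListApplicationsError>>.
    while let Some(page) = pages.next().await {
        let page = page?;
        for app in page.application_info_list().unwrap_or_default() {
            println!("{:?}", app.resource_group_name());
        }
    }
    Ok(())
}
```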
### impl Client
#### pub fn list_components(&self) -> ListComponentsFluentBuilder
Constructs a fluent builder for the `ListComponents` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListComponentsOutput` with field(s):
+ `application_component_list(Option<Vec<ApplicationComponent>>)`: The list of application components.
+ `next_token(Option<String>)`: The token to request the next page of results.
* On failure, responds with `SdkError<ListComponentsError>`
### impl Client
#### pub fn list_configuration_history(
&self
) -> ListConfigurationHistoryFluentBuilder
Constructs a fluent builder for the `ListConfigurationHistory` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: Resource group to which the application belongs.
+ `start_time(DateTime)` / `set_start_time(Option<DateTime>)`: The start time of the event.
+ `end_time(DateTime)` / `set_end_time(Option<DateTime>)`: The end time of the event.
+ `event_status(ConfigurationEventStatus)` / `set_event_status(Option<ConfigurationEventStatus>)`: The status of the configuration update event. Possible values include INFO, WARN, and ERROR.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results returned by `ListConfigurationHistory` in paginated output. When this parameter is used, `ListConfigurationHistory` returns only `MaxResults` in a single page along with a `NextToken` response element. The remaining results of the initial request can be seen by sending another `ListConfigurationHistory` request with the returned `NextToken` value. If this parameter is not used, then `ListConfigurationHistory` returns all results.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The `NextToken` value returned from a previous paginated `ListConfigurationHistory` request where `MaxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `NextToken` value. This value is `null` when there are no more results to return.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListConfigurationHistoryOutput` with field(s):
+ `event_list(Option<Vec<ConfigurationEvent>>)`: The list of configuration events and their corresponding details.
+ `next_token(Option<String>)`: The `NextToken` value to include in a future `ListConfigurationHistory` request. When the results of a `ListConfigurationHistory` request exceed `MaxResults`, this value can be used to retrieve the next page of results. This value is `null` when there are no more results to return.
* On failure, responds with `SdkError<ListConfigurationHistoryError>`
### impl Client
#### pub fn list_log_pattern_sets(&self) -> ListLogPatternSetsFluentBuilder
Constructs a fluent builder for the `ListLogPatternSets` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListLogPatternSetsOutput` with field(s):
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(Option<String>)`: The AWS account ID for the resource group owner.
+ `log_pattern_sets(Option<Vec<String>>)`: The list of log pattern sets.
+ `next_token(Option<String>)`: The token used to retrieve the next page of results. This value is `null` when there are no more results to return.
* On failure, responds with `SdkError<ListLogPatternSetsError>`
### impl Client
#### pub fn list_log_patterns(&self) -> ListLogPatternsFluentBuilder
Constructs a fluent builder for the `ListLogPatterns` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On success, responds with `ListLogPatternsOutput` with field(s):
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(Option<String>)`: The AWS account ID for the resource group owner.
+ `log_patterns(Option<Vec<LogPattern>>)`: The list of log patterns.
+ `next_token(Option<String>)`: The token used to retrieve the next page of results. This value is `null` when there are no more results to return.
* On failure, responds with `SdkError<ListLogPatternsError>`
### impl Client
#### pub fn list_problems(&self) -> ListProblemsFluentBuilder
Constructs a fluent builder for the `ListProblems` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID for the resource group owner.
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `start_time(DateTime)` / `set_start_time(Option<DateTime>)`: The time when the problem was detected, in epoch seconds. If you don’t specify a time frame for the request, problems within the past seven days are returned.
+ `end_time(DateTime)` / `set_end_time(Option<DateTime>)`: The time when the problem ended, in epoch seconds. If not specified, problems within the past seven days are returned.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `visibility(Visibility)` / `set_visibility(Option<Visibility>)`: Specifies whether or not you can view the problem. If not specified, visible and ignored problems are returned.
* On success, responds with `ListProblemsOutput` with field(s):
+ `problem_list(Option<Vec<Problem>>)`: The list of problems.
+ `next_token(Option<String>)`: The token used to retrieve the next page of results. This value is `null` when there are no more results to return.
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `account_id(Option<String>)`: The AWS account ID for the resource group owner.
* On failure, responds with `SdkError<ListProblemsError>`
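A sketch of a time-bounded `ListProblems` call. The epoch-second timestamps and the resource group name are illustrative placeholders; `DateTime::from_secs` comes from the crate's `primitives` module:

```
use aws_sdk_applicationinsights::{primitives::DateTime, Client, Error};

// A sketch: list problems detected inside a fixed one-day window.
async fn problems_in_window(client: &Client) -> Result<(), Error> {
    let output = client
        .list_problems()
        .resource_group_name("my-resource-group") // placeholder
        .start_time(DateTime::from_secs(1_700_000_000)) // placeholder epoch seconds
        .end_time(DateTime::from_secs(1_700_086_400))
        .send()
        .await?;
    for problem in output.problem_list().unwrap_or_default() {
        println!("{:?}: {:?}", problem.id(), problem.title());
    }
    Ok(())
}
```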
### impl Client
#### pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder
Constructs a fluent builder for the `ListTagsForResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: The Amazon Resource Name (ARN) of the application that you want to retrieve tag information for.
* On success, responds with `ListTagsForResourceOutput` with field(s):
+ `tags(Option<Vec<Tag>>)`: An array that lists all the tags that are associated with the application. Each tag consists of a required tag key (`Key`) and an associated tag value (`Value`).
* On failure, responds with `SdkError<ListTagsForResourceError>`
### impl Client
#### pub fn list_workloads(&self) -> ListWorkloadsFluentBuilder
Constructs a fluent builder for the `ListWorkloads` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token to request the next page of results.
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The AWS account ID of the owner of the workload.
* On success, responds with `ListWorkloadsOutput` with field(s):
+ `workload_list(Option<Vec<Workload>>)`: The list of workloads.
+ `next_token(Option<String>)`: The token to request the next page of results.
* On failure, responds with `SdkError<ListWorkloadsError>`
### impl Client
#### pub fn remove_workload(&self) -> RemoveWorkloadFluentBuilder
Constructs a fluent builder for the `RemoveWorkload` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `workload_id(impl Into<String>)` / `set_workload_id(Option<String>)`: The ID of the workload.
* On success, responds with `RemoveWorkloadOutput`
* On failure, responds with `SdkError<RemoveWorkloadError>`
### impl Client
#### pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the `TagResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: The Amazon Resource Name (ARN) of the application that you want to add one or more tags to.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: A list of tags to add to the application. A tag consists of a required tag key (`Key`) and an associated tag value (`Value`). The maximum length of a tag key is 128 characters. The maximum length of a tag value is 256 characters.
* On success, responds with `TagResourceOutput`
* On failure, responds with `SdkError<TagResourceError>`
### impl Client
#### pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the `UntagResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: The Amazon Resource Name (ARN) of the application that you want to remove one or more tags from.
+ `tag_keys(impl Into<String>)` / `set_tag_keys(Option<Vec<String>>)`: The tags (tag keys) that you want to remove from the resource. When you specify a tag key, the action removes both that key and its associated tag value.
To remove more than one tag from the application, append the `TagKeys` parameter and argument for each additional tag to remove, separated by an ampersand.
* On success, responds with `UntagResourceOutput`
* On failure, responds with `SdkError<UntagResourceError>`
### impl Client
#### pub fn update_application(&self) -> UpdateApplicationFluentBuilder
Constructs a fluent builder for the `UpdateApplication` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `ops_center_enabled(bool)` / `set_ops_center_enabled(Option<bool>)`: When set to `true`, creates opsItems for any problems detected on an application.
+ `cwe_monitor_enabled(bool)` / `set_cwe_monitor_enabled(Option<bool>)`: Indicates whether Application Insights can listen to CloudWatch events for the application resources, such as `instance terminated`, `failed deployment`, and others.
+ `ops_item_sns_topic_arn(impl Into<String>)` / `set_ops_item_sns_topic_arn(Option<String>)`: The SNS topic provided to Application Insights that is associated to the created opsItem. Allows you to receive notifications for updates to the opsItem.
+ `remove_sns_topic(bool)` / `set_remove_sns_topic(Option<bool>)`: Disassociates the SNS topic from the opsItem created for detected problems.
+ `auto_config_enabled(bool)` / `set_auto_config_enabled(Option<bool>)`: Turns auto-configuration on or off.
* On success, responds with `UpdateApplicationOutput` with field(s):
+ `application_info(Option<ApplicationInfo>)`: Information about the application.
* On failure, responds with `SdkError<UpdateApplicationError>`
### impl Client
#### pub fn update_component(&self) -> UpdateComponentFluentBuilder
Constructs a fluent builder for the `UpdateComponent` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `new_component_name(impl Into<String>)` / `set_new_component_name(Option<String>)`: The new name of the component.
+ `resource_list(impl Into<String>)` / `set_resource_list(Option<Vec<String>>)`: The list of resource ARNs that belong to the component.
* On success, responds with `UpdateComponentOutput`
* On failure, responds with `SdkError<UpdateComponentError>`
### impl Client
#### pub fn update_component_configuration(
&self
) -> UpdateComponentConfigurationFluentBuilder
Constructs a fluent builder for the `UpdateComponentConfiguration` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `monitor(bool)` / `set_monitor(Option<bool>)`: Indicates whether the application component is monitored.
+ `tier(Tier)` / `set_tier(Option<Tier>)`: The tier of the application component.
+ `component_configuration(impl Into<String>)` / `set_component_configuration(Option<String>)`: The configuration settings of the component. The value is the escaped JSON of the configuration. For more information about the JSON format, see Working with JSON. You can send a request to `DescribeComponentConfigurationRecommendation` to see the recommended configuration for a component. For the complete format of the component configuration file, see Component Configuration.
+ `auto_config_enabled(bool)` / `set_auto_config_enabled(Option<bool>)`: Automatically configures the component by applying the recommended configurations.
* On success, responds with `UpdateComponentConfigurationOutput`
* On failure, responds with `SdkError<UpdateComponentConfigurationError>`
### impl Client
#### pub fn update_log_pattern(&self) -> UpdateLogPatternFluentBuilder
Constructs a fluent builder for the `UpdateLogPattern` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `pattern_set_name(impl Into<String>)` / `set_pattern_set_name(Option<String>)`: The name of the log pattern set.
+ `pattern_name(impl Into<String>)` / `set_pattern_name(Option<String>)`: The name of the log pattern.
+ `pattern(impl Into<String>)` / `set_pattern(Option<String>)`: The log pattern. The pattern must be DFA compatible. Patterns that utilize forward lookahead or backreference constructions are not supported.
+ `rank(i32)` / `set_rank(Option<i32>)`: Rank of the log pattern. Must be a value between `1` and `1,000,000`. The patterns are sorted by rank, so we recommend that you set your highest priority patterns with the lowest rank. A pattern of rank `1` will be the first to get matched to a log line. A pattern of rank `1,000,000` will be last to get matched. When you configure custom log patterns from the console, a `Low` severity pattern translates to a `750,000` rank. A `Medium` severity pattern translates to a `500,000` rank. And a `High` severity pattern translates to a `250,000` rank. Rank values less than `1` or greater than `1,000,000` are reserved for AWS-provided patterns.
* On success, responds with `UpdateLogPatternOutput` with field(s):
+ `resource_group_name(Option<String>)`: The name of the resource group.
+ `log_pattern(Option<LogPattern>)`: The successfully created log pattern.
* On failure, responds with `SdkError<UpdateLogPatternError>`
### impl Client
#### pub fn update_problem(&self) -> UpdateProblemFluentBuilder
Constructs a fluent builder for the `UpdateProblem` operation.
* The fluent builder is configurable:
+ `problem_id(impl Into<String>)` / `set_problem_id(Option<String>)`: The ID of the problem.
+ `update_status(UpdateStatus)` / `set_update_status(Option<UpdateStatus>)`: The status of the problem. Arguments can be passed for only problems that show a status of `RECOVERING`.
+ `visibility(Visibility)` / `set_visibility(Option<Visibility>)`: The visibility of a problem. When you pass a value of `IGNORED`, the problem is removed from the default view, and all notifications for the problem are suspended. When `VISIBLE` is passed, the `IGNORED` action is reversed.
* On success, responds with `UpdateProblemOutput`
* On failure, responds with `SdkError<UpdateProblemError>`
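A sketch of suppressing a noisy problem via the visibility flag described above; the problem ID is supplied by the caller:

```
use aws_sdk_applicationinsights::{types::Visibility, Client, Error};

// A sketch: remove a problem from the default view and suspend its notifications.
async fn ignore_problem(client: &Client, problem_id: &str) -> Result<(), Error> {
    client
        .update_problem()
        .problem_id(problem_id)
        .visibility(Visibility::Ignored)
        .send()
        .await?;
    Ok(())
}
```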
### impl Client
#### pub fn update_workload(&self) -> UpdateWorkloadFluentBuilder
Constructs a fluent builder for the `UpdateWorkload` operation.
* The fluent builder is configurable:
+ `resource_group_name(impl Into<String>)` / `set_resource_group_name(Option<String>)`: The name of the resource group.
+ `component_name(impl Into<String>)` / `set_component_name(Option<String>)`: The name of the component.
+ `workload_id(impl Into<String>)` / `set_workload_id(Option<String>)`: The ID of the workload.
+ `workload_configuration(WorkloadConfiguration)` / `set_workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On success, responds with `UpdateWorkloadOutput` with field(s):
+ `workload_id(Option<String>)`: The ID of the workload.
+ `workload_configuration(Option<WorkloadConfiguration>)`: The configuration settings of the workload. The value is the escaped JSON of the configuration.
* On failure, responds with `SdkError<UpdateWorkloadError>`
### impl Client
#### pub fn from_conf(conf: Config) -> Self
Creates a new client from the service `Config`.
##### Panics
This method will panic if the `conf` has retry or timeouts enabled without a `sleep_impl`.
If you experience this panic, it can be fixed by setting the `sleep_impl`, or by disabling retries and timeouts.
#### pub fn config(&self) -> &Config
Returns the client’s configuration.
### impl Client
#### pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
##### Panics
* This method will panic if the `sdk_config` is missing an async sleep implementation. If you experience this panic, set the `sleep_impl` on the Config passed into this function to fix it.
* This method will panic if the `sdk_config` is missing an HTTP connector. If you experience this panic, set the `http_connector` on the Config passed into this function to fix it.
Trait Implementations
---
### impl Clone for Client
#### fn clone(&self) -> Client
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Client
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Client
### impl Send for Client
### impl Sync for Client
### impl Unpin for Client
### impl !UnwindSafe for Client
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Type Alias aws_sdk_applicationinsights::error::SdkError
===
```
pub type SdkError<E, R = HttpResponse> = SdkError<E, R>;
```
Error type returned by the client.
Aliased Type
---
```
enum SdkError<E, R = HttpResponse> {
ConstructionFailure(ConstructionFailure),
TimeoutError(TimeoutError),
DispatchFailure(DispatchFailure),
ResponseError(ResponseError<R>),
ServiceError(ServiceError<E, R>),
}
```
Variants
---
### ConstructionFailure(ConstructionFailure)
The request failed during construction. It was not dispatched over the network.
### TimeoutError(TimeoutError)
The request failed due to a timeout. The request MAY have been sent and received.
### DispatchFailure(DispatchFailure)
The request failed during dispatch. An HTTP response was not received. The request MAY have been sent.
### ResponseError(ResponseError<R>)
A response was received, but it was not parseable according to the protocol (for example, the server hung up without sending a complete response).
### ServiceError(ServiceError<E, R>)
An error response was received from the service.
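A sketch of handling these variants. Since the enum may grow, a wildcard arm is kept; `context.err()` exposes the modeled service error:

```
use aws_sdk_applicationinsights::{error::SdkError, Client};

// A sketch: separate transport-level failures from modeled service errors.
async fn check_reachability(client: &Client) {
    match client.list_applications().send().await {
        Ok(_) => println!("service reachable"),
        Err(SdkError::TimeoutError(_)) => eprintln!("request timed out"),
        Err(SdkError::ServiceError(context)) => {
            eprintln!("service rejected the request: {:?}", context.err());
        }
        Err(other) => eprintln!("request failed: {other}"),
    }
}
```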
Trait Implementations
---
### impl<E, R> ProvideErrorMetadata for SdkError<E, R> where E: ProvideErrorMetadata
#### fn meta(&self) -> &ErrorMetadata
Returns error metadata, which includes the error code, message, request ID, and potentially additional information.
#### fn code(&self) -> Option<&str>
Returns the error code if it’s available.
#### fn message(&self) -> Option<&str>
Returns the error message, if there is one.
### impl<E, R> RequestId for SdkError<E, R> where R: HttpHeaders
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
Module aws_sdk_applicationinsights::types
===
Data structures used by operation inputs/outputs.
Modules
---
* builders: Builders.
* error: Error types that Amazon CloudWatch Application Insights can respond with.
Structs
---
* ApplicationComponent: Describes a standalone resource or similarly grouped resources that the application is made up of.
* ApplicationInfo: Describes the status of the application.
* ConfigurationEvent: The event information.
* LogPattern: An object that defines the log patterns that belong to a `LogPatternSet`.
* Observation: Describes an anomaly or error with the application.
* Problem: Describes a problem that is detected by correlating observations.
* RelatedObservations: Describes observations related to the problem.
* Tag: An object that defines the tags associated with an application. A *tag* is a label that you optionally define and associate with an application. Tags can help you categorize and manage resources in different ways, such as by purpose, owner, environment, or other criteria.
* Workload: Describes the workloads on a component.
* WorkloadConfiguration: The configuration of the workload.
Enums
---
Each enum below carries the same forward-compatibility note: when writing a match expression against it, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
* CloudWatchEventSource
* ConfigurationEventResourceType
* ConfigurationEventStatus
* DiscoveryType
* FeedbackKey
* FeedbackValue
* GroupingType
* LogFilter
* OsType
* RecommendationType
* ResolutionMethod
* SeverityLevel
* Status
* Tier
* UpdateStatus
* Visibility
Module aws_sdk_applicationinsights::primitives
===
Primitives such as `Blob` or `DateTime` used by other types.
Structs
---
* DateTime: DateTime in time.
* UnknownVariantValue: Opaque struct used as inner data for the `Unknown` variant defined in enums in the crate.
Enums
---
* DateTimeFormat: Formats for representing a `DateTime` in the Smithy protocols.
Struct aws_sdk_applicationinsights::Config
===
```
pub struct Config { /* private fields */ }
```
Configuration for an aws_sdk_applicationinsights service client.
Service configuration allows for customization of endpoints, region, credentials providers,
and retry configuration. Generally, it is constructed automatically for you from a shared configuration loaded by the `aws-config` crate. For example:
```
// Load a shared config from the environment
let shared_config = aws_config::from_env().load().await;
// The client constructor automatically converts the shared config into the service config
let client = Client::new(&shared_config);
```
The service config can also be constructed manually using its builder.
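A sketch of that manual path, using the builder methods documented below. The region value is an illustrative choice, credentials would normally come from a provider or the environment, and per the `from_conf` Panics note above, enabling retries or timeouts also requires a `sleep_impl`:

```
use aws_sdk_applicationinsights::{config::Region, Client, Config};

// A sketch: build the service config by hand, then construct a client from it.
fn build_client() -> Client {
    let config = Config::builder()
        .region(Region::new("us-east-1")) // illustrative region
        .build();
    Client::from_conf(config)
}
```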
Implementations
---
### impl Config
#### pub fn builder() -> Builder
Constructs a config builder.
#### pub fn to_builder(&self) -> Builder
Converts this config back into a builder so that it can be tweaked.
#### pub fn http_connector(&self) -> Option<SharedHttpConnector>
Returns the `SharedHttpConnector` to use when making requests, if any.
#### pub fn endpoint_resolver(&self) -> SharedEndpointResolver
Returns the endpoint resolver.
#### pub fn retry_config(&self) -> Option<&RetryConfig>
Returns a reference to the retry configuration contained in this config, if any.
#### pub fn sleep_impl(&self) -> Option<SharedAsyncSleep>
Returns a cloned shared async sleep implementation from this config, if any.
#### pub fn timeout_config(&self) -> Option<&TimeoutConfig>
Returns a reference to the timeout configuration contained in this config, if any.
#### pub fn interceptors(&self) -> impl Iterator<Item = SharedInterceptor> + '_
Returns interceptors currently registered by the user.
#### pub fn time_source(&self) -> Option<SharedTimeSource>
Returns the time source used for this service.
#### pub fn app_name(&self) -> Option<&AppName>
Returns the name of the app that is using the client, if it was provided.
This *optional* name is used to identify the application in the user agent that gets sent along with requests.
#### pub fn invocation_id_generator(&self) -> Option<SharedInvocationIdGenerator>
Returns the invocation ID generator if one was given in config.
The invocation ID generator generates ID values for the `amz-sdk-invocation-id` header. By default, this will be a random UUID. Overriding it may be useful in tests that examine the HTTP request and need to be deterministic.
#### pub fn new(config: &SdkConfig) -> Self
Creates a new service config from a shared `config`.
#### pub fn signing_service(&self) -> &'static str
The signature version 4 service signing name to use in the credential scope when signing requests.
The signing service may be overridden by the `Endpoint`, or by specifying a custom
`SigningService` during operation construction.
#### pub fn region(&self) -> Option<&Region>
Returns the AWS region, if it was provided.
#### pub fn credentials_cache(&self) -> Option<SharedCredentialsCache>
Returns the credentials cache.
Trait Implementations
---
### impl Clone for Config
#### fn clone(&self) -> Config
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Config
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<&SdkConfig> for Config
#### fn from(sdk_config: &SdkConfig) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Config
### impl Send for Config
### impl Sync for Config
### impl Unpin for Config
### impl !UnwindSafe for Config
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_applicationinsights::config
===
Configuration for Amazon CloudWatch Application Insights.
Modules
---
* endpoint: Types needed to configure endpoint resolution.
* interceptors: Types needed to implement `Interceptor`.
* retry: Retry configuration.
* timeout: Timeout configuration.
Structs
---
* AppName: App name that can be configured with an AWS SDK client to become part of the user agent string.
* Builder: Builder for creating a `Config`.
* Config: Configuration for an aws_sdk_applicationinsights service client.
* ConfigBag: Layered configuration structure.
* Credentials: AWS SDK credentials.
* Region: The region to send requests to.
* RuntimeComponents: Components that can only be set in runtime plugins that the orchestrator uses directly to call an operation.
* SharedAsyncSleep: Wrapper type for a sharable `AsyncSleep`.
* SharedInterceptor: Interceptor wrapper that may be shared.
* Sleep: Future returned by `AsyncSleep`.
Traits
---
* AsyncSleep: Async trait with a `sleep` function.
* Interceptor: An interceptor allows injecting code into the SDK’s request execution pipeline.
Module aws_sdk_applicationinsights::operation
===
All operations that this crate can perform.
Modules
---
* add_workload: Types for the `AddWorkload` operation.
* create_application: Types for the `CreateApplication` operation.
* create_component: Types for the `CreateComponent` operation.
* create_log_pattern: Types for the `CreateLogPattern` operation.
* delete_application: Types for the `DeleteApplication` operation.
* delete_component: Types for the `DeleteComponent` operation.
* delete_log_pattern: Types for the `DeleteLogPattern` operation.
* describe_application: Types for the `DescribeApplication` operation.
* describe_component: Types for the `DescribeComponent` operation.
* describe_component_configuration: Types for the `DescribeComponentConfiguration` operation.
* describe_component_configuration_recommendation: Types for the `DescribeComponentConfigurationRecommendation` operation.
* describe_log_pattern: Types for the `DescribeLogPattern` operation.
* describe_observation: Types for the `DescribeObservation` operation.
* describe_problem: Types for the `DescribeProblem` operation.
* describe_problem_observations: Types for the `DescribeProblemObservations` operation.
* describe_workload: Types for the `DescribeWorkload` operation.
* list_applications: Types for the `ListApplications` operation.
* list_components: Types for the `ListComponents` operation.
* list_configuration_history: Types for the `ListConfigurationHistory` operation.
* list_log_pattern_sets: Types for the `ListLogPatternSets` operation.
* list_log_patterns: Types for the `ListLogPatterns` operation.
* list_problems: Types for the `ListProblems` operation.
* list_tags_for_resource: Types for the `ListTagsForResource` operation.
* list_workloads: Types for the `ListWorkloads` operation.
* remove_workload: Types for the `RemoveWorkload` operation.
* tag_resource: Types for the `TagResource` operation.
* untag_resource: Types for the `UntagResource` operation.
* update_application: Types for the `UpdateApplication` operation.
* update_component: Types for the `UpdateComponent` operation.
* update_component_configuration: Types for the `UpdateComponentConfiguration` operation.
* update_log_pattern: Types for the `UpdateLogPattern` operation.
* update_problem: Types for the `UpdateProblem` operation.
* update_workload: Types for the `UpdateWorkload` operation.
Traits
---
* RequestId: Implementers add a function to return an AWS request ID.
Enum aws_sdk_applicationinsights::Error
===
```
#[non_exhaustive]pub enum Error {
AccessDeniedException(AccessDeniedException),
BadRequestException(BadRequestException),
InternalServerException(InternalServerException),
ResourceInUseException(ResourceInUseException),
ResourceNotFoundException(ResourceNotFoundException),
TagsAlreadyExistException(TagsAlreadyExistException),
TooManyTagsException(TooManyTagsException),
ValidationException(ValidationException),
Unhandled(Unhandled),
}
```
All possible error types for this service.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### AccessDeniedException(AccessDeniedException)
User does not have permissions to perform this action.
### BadRequestException(BadRequestException)
The request is not understood by the server.
### InternalServerException(InternalServerException)
The server encountered an internal error and is unable to complete the request.
### ResourceInUseException(ResourceInUseException)
The resource is already created or in use.
### ResourceNotFoundException(ResourceNotFoundException)
The resource does not exist in the customer account.
### TagsAlreadyExistException(TagsAlreadyExistException)
Tags are already registered for the specified application ARN.
### TooManyTagsException(TooManyTagsException)
The number of the provided tags is beyond the limit, or the number of total tags you are trying to attach to the specified resource exceeds the limit.
### ValidationException(ValidationException)
The parameter is not valid.
### Unhandled(Unhandled)
An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
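A sketch of matching this enum; because it is `#[non_exhaustive]`, the wildcard arm is mandatory:

```
use aws_sdk_applicationinsights::Error;

// A sketch: map a few well-known variants to short descriptions.
fn describe_error(err: &Error) -> &'static str {
    match err {
        Error::AccessDeniedException(_) => "missing permissions",
        Error::ResourceNotFoundException(_) => "no such resource",
        Error::ValidationException(_) => "invalid parameter",
        _ => "other failure", // required: the enum is non-exhaustive
    }
}
```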
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬 This is a nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports.
### impl From<E> for Error
`Error` implements `From` for each operation error type: AddWorkloadError, CreateApplicationError, CreateComponentError, CreateLogPatternError, DeleteApplicationError, DeleteComponentError, DeleteLogPatternError, DescribeApplicationError, DescribeComponentConfigurationError, DescribeComponentConfigurationRecommendationError, DescribeComponentError, DescribeLogPatternError, DescribeObservationError, DescribeProblemError, DescribeProblemObservationsError, DescribeWorkloadError, ListApplicationsError, ListComponentsError, ListConfigurationHistoryError, ListLogPatternSetsError, ListLogPatternsError, ListProblemsError, ListTagsForResourceError, ListWorkloadsError, and RemoveWorkloadError. Each impl provides
#### fn from(err: E) -> Self
Converts to this type from the input type.
### impl<R> From<SdkError<E, R>> for Error where R: Send + Sync + Debug + 'static
`Error` likewise implements `From<SdkError<E, R>>` for each operation error type E, covering the AddWorkload, CreateApplication, CreateComponent, CreateLogPattern, DeleteApplication, DeleteComponent, DeleteLogPattern, DescribeApplication, DescribeComponentConfiguration, DescribeComponentConfigurationRecommendation, DescribeComponent, DescribeLogPattern, DescribeObservation, DescribeProblem, DescribeProblemObservations, DescribeWorkload, ListApplications, ListComponents, ListConfigurationHistory, ListLogPatternSets, ListLogPatterns, ListProblems, ListTagsForResource, ListWorkloads, RemoveWorkload, TagResource, UntagResource, UpdateApplication, UpdateComponentConfiguration, UpdateComponent, UpdateLogPattern, and UpdateProblem operations. Each impl provides
#### fn from(err: SdkError<E, R>) -> Self
Converts to this type from the input type.
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<UpdateProblemError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<UpdateWorkloadError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<UpdateWorkloadError, R>) -> Self
Converts to this type from the input type.### impl From<TagResourceError> for Error
#### fn from(err: TagResourceError) -> Self
Converts to this type from the input type.### impl From<UntagResourceError> for Error
#### fn from(err: UntagResourceError) -> Self
Converts to this type from the input type.### impl From<UpdateApplicationError> for Error
#### fn from(err: UpdateApplicationError) -> Self
Converts to this type from the input type.### impl From<UpdateComponentConfigurationError> for Error
#### fn from(err: UpdateComponentConfigurationError) -> Self
Converts to this type from the input type.### impl From<UpdateComponentError> for Error
#### fn from(err: UpdateComponentError) -> Self
Converts to this type from the input type.### impl From<UpdateLogPatternError> for Error
#### fn from(err: UpdateLogPatternError) -> Self
Converts to this type from the input type.### impl From<UpdateProblemError> for Error
#### fn from(err: UpdateProblemError) -> Self
Converts to this type from the input type.### impl From<UpdateWorkloadError> for Error
#### fn from(err: UpdateWorkloadError) -> Self
Converts to this type from the input type.### impl RequestId for Error
#### fn request_id(&self) -> Option<&strReturns the request ID, or `None` if the service could not be reached.Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_applicationinsights::client
===
Client for calling Amazon CloudWatch Application Insights.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_applicationinsights::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` that are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_applicationinsights::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `AddWorkload` operation has a `Client::add_workload` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.add_workload()
.resource_group_name("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
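Putting the pieces together, a complete operation call might look like the following sketch. The `list_applications` operation is real for this service (see the error conversions above), but the output accessor names used here are assumptions that can vary between SDK versions:

```
async fn show_apps() -> Result<(), aws_sdk_applicationinsights::Error> {
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_applicationinsights::Client::new(&config);
    // send() yields an SdkError on failure, which converts into this crate's Error.
    let resp = client.list_applications().send().await?;
    // Assumed accessors: application_info_list() and resource_group_name().
    for app in resp.application_info_list().unwrap_or_default() {
        println!("{:?}", app.resource_group_name());
    }
    Ok(())
}
```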
Modules
---
* customize: Operation customization and supporting types.
Structs
---
* Client: Client for Amazon CloudWatch Application Insights
Module aws_sdk_applicationinsights::error
===
Common errors and error handling utilities.
Structs
---
* DisplayErrorContext: Provides a `Display` impl for an `Error` that outputs the full error context
Traits
---
* ProvideErrorMetadata: Trait to retrieve error metadata from a result
Type Aliases
---
* BoxError: A boxed error that is `Send` and `Sync`.
* SdkError: Error type returned by the client.
Module aws_sdk_applicationinsights::meta
===
Information about this crate.
Statics
---
* PKG_VERSION: Crate version number. |
fitConic | cran | R | Package ‘fitConic’
August 28, 2023
Title Fit Data to Any Conic Section
Version 1.2.1
Date 2023-08-28
Description Fit data to an ellipse, hyperbola, or parabola. Bootstrapping is available when needed.
The conic curve can be rotated through an arbitrary angle and the fit will still succeed. Helper
functions are provided to convert generator coefficients from one style to another, generate test
data sets, rotate conic section parameters, and so on. References include <NAME> (2014)
``Fitting ellipses, circles, and lines by least squares'' <https://people.cas.uab.edu/~mosya/cl/>;
<NAME>, <NAME>, <NAME> (1999) ``Direct Least Squares Fitting of Ellipses'' IEEE Trans.
PAMI, Vol. 21, pages 476-480; <NAME>, <NAME>, and <NAME> (2014) ``Fitting
quadratic curves to data points'', British Journal of Mathematics & Computer Science, 4, 33-60;
<NAME> and <NAME> (2011) ``Least squares fitting of quadratic curves and surfaces'',
Computer Vision, Editor S. R. Yoshida, Nova Science Publishers, pp. 285-302.
Imports pracma
License LGPL-3
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [ctb],
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-28 17:40:12 UTC
R topics documented:
fitConic-package
AtoG
bootEllipse
bootHyperbola
createConic
doWeights
fhyp
fitConic
fitParabola
JmatrixLMA
Residuals.ellipse
Residuals.hyperbola
rotateA
fitConic-package Fit Data to Any Conic Section
Description
Fit data to an ellipse, hyperbola, or parabola. Bootstrapping is available when needed. The conic
curve can be rotated through an arbitrary angle and the fit will still succeed. Helper functions are
provided to convert generator coefficients from one style to another, generate test data sets, rotate
conic section parameters, and so on. References include <NAME> (2014) "Fitting ellipses,
circles, and lines by least squares" <https://people.cas.uab.edu/~mosya/cl/>; <NAME>, <NAME>,
<NAME> (1999) "Direct Least Squares Fitting of Ellipses" IEEE Trans. PAMI, Vol. 21,
pages 476-480; <NAME>, <NAME>, and <NAME> (2014) "Fitting quadratic curves to data points",
British Journal of Mathematics & Computer Science, 4, 33-60; <NAME> and <NAME> (2011)
"Least squares fitting of quadratic curves and surfaces", Computer Vision, Editor <NAME>,
Nova Science Publishers, pp. 285-302.
Details
The DESCRIPTION file:
Package: fitConic
Title: Fit Data to Any Conic Section
Version: 1.2.1
Date: 2023-08-28
Authors@R: c(person(given = "Carl", family = "Witthoft", role = c("aut","cre"), email= "<EMAIL>") ,person(gi
Description: Fit data to an ellipse, hyperbola, or parabola. Bootstrapping is available when needed. The conic curve can be
Imports: pracma
License: LGPL-3
Author: <NAME> [aut, cre], <NAME> [ctb], <NAME> [ctb]
Maintainer: <NAME> <<EMAIL>>
The main function is fitConic .
Author(s)
NA, based on code provided in the references and in conicfit::fit.conicLMA()
Maintainer: NA
References
https://www.mathworks.com/matlabcentral/answers/80541 for the RANSAC-style search to fit rotated parabolas.
https://math.stackexchange.com/questions/426150 for detailed ellipse parametric equations.
https://math.stackexchange.com/questions/2800817 for "focus/directrix/eccentricity" information.
https://people.cas.uab.edu/~mosya/cl/ and the folks referred to there, for fitConicLMA.
https://en.wikipedia.org/wiki/Ellipse for several parameter conversion formulas.
<NAME>, <NAME>, <NAME>, "Direct Least Squares Fitting of Ellipses", IEEE Trans.
PAMI, Vol. 21, pages 476-480 (1999)
Halir R, Flusser J (1998) Proceedings of the 6th International Conference in Central Europe on
Computer Graphics and Visualization, Numerically stable direct least squares fitting of ellipses
(WSCG, Plzen, Czech Republic), pp 125-132.
AtoG A Set Of Functions To Convert Among Various Conic-Section-Defining Parameter Sets.
Description
AtoG: Convert from full quadratic "ABCDEF" to focus, axis, angle "hvab theta" parameters.
GtoA: Convert from "hvab theta" to "ABCDEF" parameters.
parab3toA: Simple conversion from a + bx + cx^2 to "ABCDEF" parameters.
FEDtoA: Convert focus, eccentricity, and directrix to "ABCDEF" parameters.
Usage
AtoG(parA, tol = 1e-06)
GtoA(parG, conicType = c("e", "h"))
parab3toA(ADF, theta = 0)
FEDtoA(focus = c(0, 0), directrix = c(1, 0, 1), eccentricity = 0.5)
Arguments
parA The six coefficients in the quadratic Ax^2 + Bxy + Cy^2 +Dx + Ey +F = 0
tol A small value, used to check whether small coefficient values might be actually
zero. See "Details."
parG a five-element vector "h,v,a,b,theta" . See "Details" for the standard equation
form for this.
conicType Because the ’hvab’ equation has a sign difference for ellipses vs. hyperbolas, it
is necessary to indicate which kind of input is intended. See "Details."
focus location of the conic section's focus.
directrix the 3-element directrix.
eccentricity the eccentricity of the conic section.
ADF The A, D, F coefficients in the standard quadratic. Thus, the x^2 term, the x term,
and the constant term.
theta An angle by which the entire parabola is to be rotated.
Details
The tol input for AtoG checks two conditions. First, is B practically zero, in which case B is set
to exactly zero, implying no rotation of the conic section. Second, is B^2 - 4*A*C almost zero,
implying that the conic is probably a parabola, and conversion to ’hvab’ form is not useful.
The "hvab" form for describing an ellipse or a hyperbola looks like [Center(1:2), Axes(1:2)/2, angle A], to fill the equation
((x-h)cosA + (y-v)sinA)^2/a^2 + ((x-h)sinA - (y-v)cosA)^2/b^2 = 1. The lengths of the axes are 2*a and 2*b.
A discussion of the focus/directrix/eccentricity form of a conic section is rather lengthy and not presented here. One short introduction can be found at
https://en.wikipedia.org/wiki/Conic_section#Eccentricity,_focus_and_directrix
Value
for AtoG,
parG c(h,v,a,b,theta)
exitCode a value used in fitConic. 1,2, or 3 for ellipse, hyperbola, parabola
conicType matching exitCode with a char "e", "h", or "p"
for GtoA
parA the ABCDEF coefficients of the general quadratic
exitCode a value used in fitConic. 1,2, or 3 for ellipse, hyperbola, parabola
conicType matching exitCode with a char "e", "h", or "p"
for FEDtoA, the ABCDEF coefficients of the general quadratic
for parab3toA,
parA the ABCDEF coefficients of the general quadratic
exitCode always numeric 3, a value used in fitConic
conicType always char "p"
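For illustration, a round trip between the two parameter forms (a sketch with arbitrarily chosen values: an axis-aligned ellipse centered at the origin with semi-axes 5 and 3):

resG <- GtoA(c(0, 0, 5, 3, 0), 'e')
resG$parA # the ABCDEF coefficients
back <- AtoG(resG$parA)
back$parG # recovers c(0, 0, 5, 3, 0)
back$conicType # "e"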
bootEllipse Simple, Medium-Quality Ellipse Fitting Function
Description
This function generates a half-decent fit to the source data. It is intended only for internal use, to
bootstrap the higher-quality fitConic function.
Usage
bootEllipse(x, y = NULL, ...)
Arguments
x vector of x-values, or a Nx2 array of x and y values. In the latter case, the input
y is ignored.
y vector of y-values.
... possible other arguments to be passed to future upgrades
Details
This can be used as a Q&D ellipse fitting algorithm, but is intended only for internal use by
fitConic, providing that function with an initial estimate for the ellipse’s defining parameter set.
Value
parA 6-element set with estimate of the "ABCDEF" coefficients for the general quadratic
equation
centroid estimate of the ellipse’s centroid
Author(s)
<NAME>, <<EMAIL>>
References
This is a revision of the function EllipseDirectFit in package conicfit by Jose Gama, with minor upgrades. Original MATLAB code by: <NAME> https://www.mathworks.com/matlabcentral/fileexchange/22684-ellipse-fit-direct-method
<NAME>, <NAME>, <NAME>, "Direct Least Squares Fitting of Ellipses", IEEE Trans. PAMI, Vol. 21, pages 476-480 (1999)
Halir R, Flusser J (1998) Proceedings of the 6th International Conference in Central Europe on Computer Graphics and Visualization, Numerically stable direct least squares fitting of ellipses (WSCG, Plzen, Czech Republic), pp 125-132.
See Also
fitConic , createConic
bootHyperbola A Function to Attempt a Crude Fit of Data to a Hyperbola
Description
This function is not intended for direct use. It attempts to generate an approximate fit of a data set
to a hyperbola, returning a parameter set for use in initializing the main function fitConic.
Usage
bootHyperbola(x, y = NULL, maxiter = 10000, ...)
Arguments
x vector of x-values, or a Nx2 array of x and y values. In the latter case, the input
y is ignored.
y vector of y-values.
maxiter A ’safety’ limiter on the number of iterations to try before giving up.
... possible other arguments to be passed to future upgrades
Value
parA the new 6-parameter set defining the non-rotated conic.
parAr the new 6-parameter set defining the rotated conic.
theta the angle of rotation between ParA and ParAr
fitdat the information returned from optim
Author(s)
<NAME>, <<EMAIL>>
See Also
fitConic
createConic Create A Conic Section Dataset Based on Parameter Set
Description
Given a vector of x-values and a parameter set defining a conic section, produce an array of x- and
y- values, optionally with noise added, for the specified conic section.
Usage
createConic(x, param, conicType, ranFun = NULL, noise = 1, seedit = NULL, tol = 1e-06)
Arguments
x Vector of (real) values
param Either a 6-value set representing the standard quadratic form Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0,
or a 5-value set representing the "hvab,theta" form ((x-h)cosA + (y-v)sinA)^2/a^2 + ((x-h)sinA - (y-v)cosA)^2/b^2 = 1.
In the latter case the value conicType is required.
conicType Either the character "e" for ellipse or "h" for hyperbola. Only required if the
"hvab,theta" form is used in param .
ranFun If random noise is to be added to the calculated y-values, provide a vectorized
function which takes a single input (x). See Details.
noise Optional argument to multiply the output of ranFun .
seedit Optional argument to set a starting seed for ranFun to use.
tol A (small) value used to decide whether various parameter terms are so small
that they should be zero. This is used to facilitate distinguishing, e.g., parabolas
from hyperbolas.
Details
When supplied ranFun is used as follows. y <- y + ranFun(y)*noise . Make sure any function
supplied fits that form (no other input argument required; only a vector returned).
Value
An N x 2 array of the x,y pairs. Warning: since there are often two possible y-values for a given
x-value (these being quadratic equations), the array does contain duplicate x-values. This may
"annoy" some other packages’ functions which don’t allow that sort of repeated value. If this
presents a problem, I’d recommend applying a very small amount of noise to the x-values in this
output.
Author(s)
<NAME> <<EMAIL>>
Examples
# create noisy ellipse
parGr <- c(-2.3,4.2,5,3,pi/4)
xe <-seq(-8,9,by=.05)
elipGrn <- createConic(xe, parGr, 'e',ranFun=rnorm, noise=0.25)
elipGr <- createConic(xe, parGr, 'e')
plot(elipGrn, pch='.',cex = 4, asp = TRUE) #, xlim = c(-5,8), ylim = c(0,7))
lines(elipGr,col='green')
doWeights Function to Apply Weights to Data
Description
This function applies an integer weight set to an array of (x,y) data points. It normally is only called
from fitConic but can be applied directly to a dataset if desired.
Usage
doWeights(XY, weights)
Arguments
XY A Nx2 array of data representing (x,y) pairs
weights A vector of weights the same length as the number of rows in XY. At this time,
only nonnegative integer values are allowed. Doubles are rounded and negative
values are set to zero. A zero weight will remove the matching data value from
the dataset.
Value
A new Nx2 array. Basically, each row in the input XY is repeated weights[j] times.
Author(s)
<NAME> <<EMAIL>>
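A short illustration with made-up data:

XY <- cbind(1:4, c(2, 4, 6, 8))
doWeights(XY, c(0, 1, 2, 3))
# row 1 is dropped; row 2 appears once, row 3 twice, row 4 three times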
fhyp Internal Functions to Perform Bootstrap Fitting Operations
Description
These functions are not intended for external use. fhyp and fhypopt support the parent function
bootHyperbola by providing functions for optimize to use. The functions costparab and costparabxy
similarly provide functions for optim to use inside the function fitParabola.
Usage
fhyp(xy, b3, Ang)
costparabxy(theta, xy)
costparab(theta, xy)
Arguments
xy A Nx2 array of data
b3 Three of the parameters describing a hyperbola. These three are the "other pa-
rameters" fed to optim
Ang The initial angle of rotation, also optimized during the process.
theta The angle of rotation of the parabola for this run of optimize
Value
various combinations of "cost" values, i.e. Figure of Merit, used to determine the optimal set of
coefficients, along with datasets where necessary.
Author(s)
<NAME> <<EMAIL>>
fitConic Fit Data to A Conic Section Curve
Description
This function fits data to an ellipse, hyperbola, or parabola. It can do so without any initial condi-
tions, or can accept initial parameter values when known.
Usage
fitConic(X, Y = NULL, parInit = NULL, conicType = c("e", "h", "p"),
weights = NULL, LambdaIni = 1, epsilonP = 1e-06, epsilonF = 1e-06, IterMAX = 20000)
Arguments
X vector of x-values, or a Nx2 array of x and y values. In the latter case, the input
y is ignored.
Y vector of y-values.
parInit Optional. A vector either of six values representing an initial guess at the
"ABCDEF" coefficients of the quadratic, or five values representing an initial
guess at the "hvab,theta" coefficients. In the latter case, a value of either "e" or
"h" is required for conicType. See the Details section for more information.
conicType If parInit is either NULL or the "hvab,theta" option, conicType is required.
Enter either "e", "h", or "p" for fitting to ellipse, hyperbola, or parabola.
weights Optional vector of weights to apply to data. Must be same length as the input
data. Only non-negative integer weights are allowed. See the Details section.
LambdaIni A control parameter used in the fitting algorithm. Typically there is no reason to
change from the default value.
epsilonP A tolerance value to determine whether convergence has occurred.
epsilonF A tolerance parameter for determining when to adjust lambda away from the
input value LambdaIni.
IterMAX A "safety" value to avoid loop thrashing when convergence isn’t taking place.
Details
parInit, when supplied, is either a 6-value set representing the standard quadratic form Ax^2 +
Bxy + Cy^2 +Dx + Ey +F = 0 or a 5-value set representing the "hvab,theta" form ((x-h)cosA
+(y-v)sinA)^2/a^2 + ((x-h)sinA-(y-v)cosA)^2/b^2 = 1 . In the latter case the value conicType is
required, because ellipses and hyperbolas have a different sign for the y-term. In most cases, the
bootstrapper tools work well enough to allow the main algorithm to fit to an ellipse or hyperbola.
However, "knowledge is power." If you have a good idea approximately what the ParIni values
are, entering them will help avoid convergence to the wrong local minimum. The algorithm branch
which fits data to parabolas does not use or need initialization, as it uses a RANSAC-type search to
find the best rotation angle, and then does a simple quadratic polynomial fit. The weights input is
restricted to nonnegative integers at this time. Doubles are rounded and negative values are set to
zero. A zero weight will remove the matching data value from the dataset.
Value
parA vector of the six "ABCDEF" coefficients
RSS 'root sum square' figure of merit describing the relative fit quality
iters number of iterations at convergence
exitCode 1 means ellipse, 2 means hyperbola, 3 means parabola. If other values show up (possibly -1, 0, 4), most likely the dataset led to a degenerate case such as a line fit.
Author(s)
<NAME> <<EMAIL>>
References
https://people.cas.uab.edu/~mosya/cl/ for information on the original "LMA" fitting algorithm.
https://math.stackexchange.com/questions/426150 and https://math.stackexchange.com/questions/2800817 for various related equations concerning conic sections.
https://en.wikipedia.org/wiki/Ellipse for several parameter conversion formulas.
See Also
createConic , fitParabola
Examples
##-create a hyperbola, add noise
Ang = 0.42 #radians
xh <- seq(-20,20,by=0.1)
parAxyh <- c(0, 1, 0, -2, 4, -15 )
parAxyhr <- rotateA(parAxyh, Ang)$parA
newxyr <-createConic(xh,parAxyhr)
newxyrn <- createConic(xh,parAxyhr,ranFun=rnorm, noise= 0.05)
plot(newxyr, t = 'l',asp=TRUE)
points(newxyrn, pch = '.', cex = 3)
# Now find the hyperbola for that dataset
hypfitr <-fitConic(newxyrn, conicType = 'h')
hypdatr <- createConic(xh, hypfitr$parA)
lines(hypdatr, col='red')
fitParabola Fit Data to Parabola
Description
This function fits a data set to a parabola, including any rotation angle.
Usage
fitParabola(x, y = NULL, searchAngle = c(-pi/2, pi/2), ...)
Arguments
x vector of x-values, or a Nx2 array of x and y values. In the latter case, the input
y is ignored.
y vector of y-values.
searchAngle Optional pair of angles, in radians, defining the limits of the search range to
find the rotation angle of the parabola. Usually the default range -pi/2:+pi/2
works acceptable.
... For possible future expansion to pass to additional features.
Details
fitParabola starts by doing a RANSAC-style search to find the optimum rotation angle. Once
that is chosen, the data are rotated by that angle and a simple polynomial fit to the (rotated) vertical
parabola is done.
Value
vertex calculated vertex of the parabola
theta angle of rotation relative to a vertical parabola
parA the "ABCDEF" coefficients of the fitted parabola
parQ the coefficients of the derotated parabola’s simple quadratic polynomial, highest
power first
cost final value of the "cost" parameter used for optimization
Note
When the function fitConic is called with instructions to fit to a parabola, it passes the inputs to
fitParabola and does nothing else. For parabolic data, then, either function will give the same
result.
Author(s)
<NAME> <<EMAIL>>
References
Some of the code is based on https://www.mathworks.com/matlabcentral/answers/80541
See Also
createConic
Examples
# Create vertical parabola with some noise
parP <-c(.5,0,0,2,-1,4)
xp <- seq(-5,5,by=0.05)
partest <-createConic(xp,param = parP,ranFun = rnorm, noise = 1)
plot(partest, pch= '.',asp=TRUE, cex=3)
# rotate the data
partestr <-xyrot(partest,theta = -.35)
points(partestr,col='green',pch='.',cex=3)
# do the fit
parfit <-fitParabola(partestr)
points(parfit$vertex,pch='X',col='blue')
parout <- createConic(xp,parfit$parA)
lines(parout,col='red')
JmatrixLMA Calculate a Jacobian Matrix
Description
Calculate the Jacobian matrix with the original dataset and the current version of fitted data. This is
not intended for external use. It is called from fitConic
Usage
JmatrixLMA(XY, parA, XYproj)
Arguments
XY The original input dataset
parA The current set of ABCDEF quadratic equation coefficients.
XYproj The current calculated dataset based on the latest iteration of the coefficient set.
Value
Res residuals based on the norm of XY - XYproj
J matrix of values for each input data point corresponding to the terms in the
general quadratic Ax^2 + Bxy + Cy^2 +Dx + Ey +F
Author(s)
<NAME> <<EMAIL>>
References
This is a copy of JmatrixLMA with some validation steps added.
Residuals.ellipse Calculate Residual Error For Current Coefficients
Description
This function is not intended for external use. It is called from fitConic when iterating to find the
best-fit ellipse.
Usage
Residuals.ellipse(XY, parG)
Arguments
XY The x,y dataset
parG The "G-parameter" set for the current iteration.
Value
RSS Figure of merit, the ’norm’ of the difference between the input XY data and the
output "XYproj" data generated.
XYproj Calculated dataset to be used in generating the Jacobian matrix for the next
iteration of fitConic
Author(s)
<NAME> <<EMAIL>>
References
This is a slightly modified (and debugged) version of Residuals.ellipse
Residuals.hyperbola Calculate Residual Error For Current Coefficients
Description
This function is not intended for external use. It is called from fitConic when iterating to find the
best-fit hyperbola.
Usage
Residuals.hyperbola(XY, parG)
Arguments
XY The x,y dataset
parG The "G-parameter" set for the current iteration.
Value
RSS Figure of merit, the ’norm’ of the difference between the input XY data and the
output "XYproj" data generated.
XYproj Calculated dataset to be used in generating the Jacobian matrix for the next
iteration of fitConic
Author(s)
<NAME> <<EMAIL>>
References
This is a slightly modified (and debugged) version of Residuals.hyperbola
rotateA Rotate Conic Section Equation Parameters Or A Dataset, With Respect
To X-Y Axes.
Description
rotateA Takes as input "parA," the 6 values of the general quadratic Ax^2 + Bxy + Cy^2 +Dx + Ey
+F = 0 , and applies a rotation angle to the coefficient set. derotateA calculates the rotation angle
required to change the conic section defined by ’parA’ into one that is orthogonal to the Cartesian
axes. xyrot is a simple function to rotate the coordinate system by theta.
Usage
rotateA(parA, theta)
derotateA(parA, ACmin = 1e-05)
xyrot(x, y = NULL, theta)
Arguments
parA the 6 values of the general quadratic Ax^2 + Bxy + Cy^2 +Dx + Ey +F = 0
theta the angle, in radians, to rotate the conic section.
ACmin A tolerance parameter for deciding that the product of parameters A and C is
actually zero (in which case the type of conic section is more likely a parabola
or a degenerate case)
x Either a vector of x-coordinates or a Nx2 array of x and y coordinates, in which
case the y-input is ignored
y A vector of y-coordinates.
Details
derotateA uses the following standard formula to calculate the angle. Derotate means to remove
the xy term, i.e. force B = 0 . Some algebra shows that cot(2theta) = (A-C)/B and thus tan(2theta)
= B/(A-C)
For xyrot, the internal xy.coords is used. If you enter only a vector for x and nothing for y, this
will feed the new vectors 1:N for x and x-input for y to the rotator, which is probably not useful.
Value
For derotateA,
parA the new 6-parameter set defining the derotated conic.
theta the derived angle by which the parameter set was rotated
For rotateA
parA the new 6-parameter set defining the rotated conic.
theta the angle by which the parameter set was rotated
For xyrot a Nx2 array of the x,y coordinates of the rotated data set.
Author(s)
<NAME>, <<EMAIL>>
See Also
createConic
Examples
# make an ellipse and derotate it
parGr <- c(-2.3,4.2,5,3,pi/4)
xe <-seq(-8,9,by=.05)
elipGr <- createConic(xe, parGr, 'e')
plot(elipGr, t= 'l', asp = TRUE)
# convert to ABCDEF form
parAr <- GtoA(parGr,'e')
elipAr <- createConic(xe,parAr$parA)
points(elipAr,pch='.',col='red')
# remove rotation angle
parAd <- derotateA(parAr$parA)
# returns theta = pi/4, how much the ellipse had been rotated by
elipAd <-createConic(xe,parAd$parA)
lines(elipAd)
# rotate back
parAdr <- rotateA(parAd$parA, parAd$theta)
elipAdr <-createConic(xe,parAdr$parA)
lines(elipAdr,lty=3, lwd = 3, col='green') |
github.com/nicholas10128/zinx | go | Go | README
---
###
English | [简体中文](https://github.com/aceld/zinx/blob/v1.2.1/README-CN.md)
[![License](https://img.shields.io/badge/License-GPL%203.0-black.svg)](https://github.com/aceld/zinx/blob/v1.2.1/LICENSE)
[![Discord](https://img.shields.io/badge/zinx-Discord-blue.svg)](https://discord.gg/xQ8Xxfyfcz)
[![Gitter](https://img.shields.io/badge/zinx-Gitter-green.svg)](https://gitter.im/zinx_go/community)
[![zinx tutorial](https://img.shields.io/badge/ZinxTutorial-YuQue-red.svg)](https://www.yuque.com/aceld/npyr8s/bgftov)
[![Original Book of Zinx](https://img.shields.io/badge/OriginalBook-YuQue-black.svg)](https://www.yuque.com/aceld)
Zinx is a lightweight concurrent server framework based on Golang.
#### Document
[< Zinx Wiki : English >](https://github.com/aceld/zinx/wiki)
[< Zinx 文档 : 简体中文>](https://www.yuque.com/aceld/tsgooa/sbvzgczh3hqz8q3l)
> **Note**:
> Zinx has been widely used in many enterprises for development purposes, including message forwarding for backend modules, long-linked game servers, and message handling plugins for web frameworks.
> Zinx is positioned as a framework with concise code that allows developers to quickly understand the internal details of the framework and easily customize it based on their own enterprise scenarios.
---
#### Source of Zinx
##### Github
Git: <https://github.com/aceld/zinx>
##### Gitee (China)
Git: <https://gitee.com/Aceld/zinx>
##### Website
<http://zinx.me>
---
#### Online Tutorial
| platform | Entry |
| --- | --- |
| | [Zinx Framework tutorial-Lightweight server based on Golang](https://dev.to/aceld/1building-basic-services-with-zinx-framework-296e) |
| | [《Golang轻量级并发服务器框架zinx》](https://www.yuque.com/aceld) |
#### Online Tutorial Video
| platform | online video |
| --- | --- |
| | [zinx-BiliBili](https://www.bilibili.com/video/av71067087) |
| | [zinx-BiliBili](https://www.douyin.com/video/6983301202939333891) |
| | [zinx-youtube](https://www.youtube.com/watch?v=U95iF-HMWsU&list=PL_GrAPKmuajzeNI8HBTi-k5NQO1g0rM-A) |
#### I. One word that has been said before
Why did we create Zinx? Although there are many Golang application frameworks for servers, few lightweight enterprise frameworks target gaming or other long-lived-connection domains.
The purpose of designing Zinx is to provide a complete outline of how to write a TCP server based on Golang, so that more Golang enthusiasts can learn and understand this field in a straightforward manner.
The development of the Zinx framework project is synchronized with the creation of learning tutorials, and all the incremental and iterative thinking involved in the development process is incorporated into the tutorials. This approach avoids overwhelming beginners with a complete framework that they may find difficult to grasp all at once.
The tutorials will be iterated version by version, with each version adding small increments of functionality, allowing a beginner to gradually and comprehensively learn about the field of server frameworks.
Of course, we hope that more people will join Zinx and provide us with valuable feedback, enabling Zinx to become a truly enterprise-level server framework. Thank you for your attention!
##### Reply from chatGPT(AI)
![what-is-zinx](https://user-images.githubusercontent.com/7778936/209745848-acfc14eb-74cd-4513-b386-8bc6e0bcc09f.png)
![compare-zinx](https://user-images.githubusercontent.com/7778936/209745864-7d8984b0-bd73-4109-b4ec-aec152f8f8e8.png)
##### The honor of zinx
###### GVP Most Valuable Open Source Project of the Year at OSCHINA
![GVP-zinx](https://s2.ax1x.com/2019/10/13/uvYVBV.jpg)
###### Stargazers over time
[![Stargazers over time](https://api.star-history.com/svg?repos=aceld/zinx&type=Date)](#readme-zinx)
#### II. Zinx architecture
![Zinx框架](https://user-images.githubusercontent.com/7778936/220058446-0ad45112-2225-4b71-b0d8-69a7f3cee5ca.jpg)
![流程图](https://raw.githubusercontent.com/wenyoufu/testaaaaaa/master/%E6%B5%81%E7%A8%8B%E5%9B%BE-en.jpg)
![zinx-start](https://user-images.githubusercontent.com/7778936/126594039-98dddd10-ec6a-4881-9e06-a09ec34f1af7.gif)
#### III. Zinx development API documentation
##### (1) QuickStart
[<Zinx's TCP Debugging Tool>](https://github.com/xxl6097/tcptest)
DownLoad zinx Source
```
$go get github.com/aceld/zinx
```
> note: Golang Version 1.16+
###### Zinx-Server
```
package main
import (
"fmt"
"github.com/aceld/zinx/ziface"
"github.com/aceld/zinx/znet"
)
// PingRouter MsgId=1
type PingRouter struct {
znet.BaseRouter
}
// Ping Handle MsgId=1
func (r *PingRouter) Handle(request ziface.IRequest) {
//read client data
fmt.Println("recv from client : msgId=", request.GetMsgID(), ", data=", string(request.GetData()))
}
func main() {
//1 Create a server service
s := znet.NewServer()
//2 configure routing
s.AddRouter(1, &PingRouter{})
//3 start service
s.Serve()
}
```
Run Server
```
$ go run server.go
```
```
██
▀▀
████████ ████ ██▄████▄ ▀██ ██▀
▄█▀ ██ ██▀ ██ ████
▄█▀ ██ ██ ██ ▄██▄
▄██▄▄▄▄▄ ▄▄▄██▄▄▄ ██ ██ ▄█▀▀█▄
▀▀▀▀▀▀▀▀ ▀▀▀▀▀▀▀▀ ▀▀ ▀▀ ▀▀▀ ▀▀▀
┌──────────────────────────────────────────────────────┐
│ [Github] https://github.com/aceld │
│ [tutorial] https://www.yuque.com/aceld/npyr8s/bgftov │
└──────────────────────────────────────────────────────┘
[Zinx] Version: V1.0, MaxConn: 12000, MaxPacketSize: 4096
=== Zinx Global Config ===
Host: 0.0.0.0
TCPPort: 8999
Name: ZinxServerApp
Version: V1.0
MaxPacketSize: 4096
MaxConn: 12000
WorkerPoolSize: 10
MaxWorkerTaskLen: 1024
MaxMsgChanLen: 1024
ConfFilePath: /Users/Aceld/go/src/zinx-usage/quick_start/conf/zinx.json
LogDir: /Users/Aceld/go/src/zinx-usage/quick_start/log
LogFile:
LogIsolationLevel: 0
HeartbeatMax: 10
===
2023/03/09 18:39:49 [INFO]msghandler.go:61: Add api msgID = 1
2023/03/09 18:39:49 [INFO]server.go:112: [START] Server name: ZinxServerApp,listenner at IP: 0.0.0.0, Port 8999 is starting
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 0 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 1 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 3 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 2 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 4 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 6 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 7 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 8 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 9 is started.
2023/03/09 18:39:49 [INFO]msghandler.go:66: Worker ID = 5 is started.
2023/03/09 18:39:49 [INFO]server.go:134: [START] start Zinx server ZinxServerApp succ, now listenning...
```
###### Zinx-Client
```
package main
import (
"fmt"
"github.com/aceld/zinx/ziface"
"github.com/aceld/zinx/znet"
"time"
)
// Client custom business
func pingLoop(conn ziface.IConnection) {
for {
err := conn.SendMsg(1, []byte("Ping...Ping...Ping...[FromClient]"))
if err != nil {
fmt.Println(err)
break
}
time.Sleep(1 * time.Second)
}
}
// Executed when a connection is created
func onClientStart(conn ziface.IConnection) {
fmt.Println("onClientStart is Called ... ")
go pingLoop(conn)
}
func main() {
//Create a client client
client := znet.NewClient("127.0.0.1", 8999)
//Set the hook function after the link is successfully established
client.SetOnConnStart(onClientStart)
//start the client
client.Start()
//Prevent the process from exiting, waiting for an interrupt signal
select {}
}
```
Run Client
```
$ go run client.go
2023/03/09 19:04:54 [INFO]client.go:73: [START] Zinx Client LocalAddr: 127.0.0.1:55294, RemoteAddr: 127.0.0.1:8999
2023/03/09 19:04:54 [INFO]connection.go:354: ZINX CallOnConnStart....
```
Terminal of Zinx Print:
```
recv from client : msgId= 1 , data= Ping...Ping...Ping...[FromClient]
recv from client : msgId= 1 , data= Ping...Ping...Ping...[FromClient]
recv from client : msgId= 1 , data= Ping...Ping...Ping...[FromClient]
recv from client : msgId= 1 , data= Ping...Ping...Ping...[FromClient]
recv from client : msgId= 1 , data= Ping...Ping...Ping...[FromClient]
recv from client : msgId= 1 , data= Ping...Ping...Ping...[FromClient]
...
```
##### (2) Zinx configuration file
```
{
"Name":"zinx v-0.10 demoApp",
"Host":"0.0.0.0",
"TCPPort":9090,
"MaxConn":3,
"WorkerPoolSize":10,
"LogDir": "./mylog",
"LogFile":"app.log",
"LogSaveDays":15,
"LogCons": true,
"LogIsolationLevel":0
}
```
`Name`:Server Application Name
`Host`:Server IP
`TCPPort`:Server listening port
`MaxConn`:Maximum number of client links allowed
`WorkerPoolSize`:Maximum number of working Goroutines in the work task pool
`LogDir`: Log folder
`LogFile`: Log file name (if not provided, log information is printed to Stderr)
`LogIsolationLevel`: Log Isolation Level - 0: full on; 1: debug off; 2: debug/info off; 3: debug/info/warn off
---
###### Developers
| **Zinx** | **Authors** |
| --- | --- |
| [zinx](https://github.com/aceld/zinx) | 刘丹冰([@aceld](https://github.com/aceld)) 张超([@zhngcho](https://github.com/zhngcho)) 高智辉Roger([@adsian](https://github.com/adsian)) 胡贵建([@huguijian](https://github.com/huguijian)) 张继瑀([@kstwoak](https://github.com/kstwoak)) 夏小力([@xxl6097](https://github.com/xxl6097)) 李志成([@clukboy](https://github.com/clukboy))姚承政([@hcraM41](https://github.com/hcraM41))李国杰([@LI-GUOJIE](https://github.com/LI-GUOJIE)) |
| [zinx(C++)](https://github.com/marklion/zinx) | 刘洋([@marklion](https://github.com/marklion)) |
| [zinx(Lua)](https://github.com/huqitt/zinx-lua) | 胡琪([@huqitt](https://github.com/huqitt)) |
| [ginx(Java)](https://github.com/ModuleCode/ginx) | ModuleCode([@ModuleCode](https://github.com/ModuleCode)) |
---
Thanks to all the developers who contributed to Zinx!
[![](https://contrib.rocks/image?repo=aceld/zinx)](https://github.com/aceld/zinx/graphs/contributors)
---
##### About the author
`name`:`Aceld(刘丹冰)`
`mail`:
[<EMAIL>](mailto:<EMAIL>)
`github`: <https://github.com/aceld>
`original work`: <https://www.yuque.com/aceld>
##### Join the Zinx community
| platform | Entry |
| --- | --- |
| | <https://discord.gg/xQ8Xxfyfcz> |
| | 加微信: `ace_ld` 或扫二维码,备注`zinx`即可。 |
None |
ROCaggregator | cran | R | Package ‘ROCaggregator’
October 12, 2022
Title Aggregate Multiple ROC Curves into One Global ROC
Version 1.0.1
Description Aggregates multiple Receiver Operating Characteristic (ROC) curves
obtained from different sources into one global ROC. Additionally, it’s
also possible to calculate the aggregated precision-recall (PR) curve.
License MIT + file LICENSE
Encoding UTF-8
RoxygenNote 7.1.1
Imports utils, magrittr
Suggests testthat (>= 3.0.0), mockery, mockr, knitr, rmarkdown, ROCR,
pROC, pracma, stats
Config/testthat/edition 3
VignetteBuilder knitr
URL https://gitlab.com/UM-CDS/general-tools/rocaggregator
BugReports https://gitlab.com/UM-CDS/general-tools/rocaggregator/-/issues
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-3047-7630>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-08-10 09:10:14 UTC
R topics documented:
partial_cm
precision_recall_curve
roc_curve
shift_vector
partial_cm Compute the global confusion matrix from the FPR and TPR obtained
from each node
Description
Compute the global confusion matrix from the FPR and TPR obtained from each node
Usage
partial_cm(
fpr,
tpr,
thresholds,
negative_count,
total_count,
descending = FALSE
)
Arguments
fpr list - False positive rates for each individual ROC
tpr list - True positive rates for each individual ROC
thresholds list - Thresholds used to compute the fpr and tpr
negative_count list - Total number of samples corresponding to the negative case
total_count list - Total number of samples
descending thresholds in descending order?
Value
global confusion matrix and thresholds
precision_recall_curve
Compute the precision recall curve
Description
Compute the precision recall curve
Usage
precision_recall_curve(fpr, tpr, thresholds, negative_count, total_count)
Arguments
fpr list - False positive rates for each individual ROC.
tpr list - True positive rates for each individual ROC.
thresholds list - Thresholds used to compute the fpr and tpr.
negative_count vector - Total number of samples corresponding to the negative case.
total_count vector - Total number of samples.
Value
list with the global precision, recall, and thresholds (increasing)
roc_curve Compute Receiver operating characteristic (ROC)
Description
Compute Receiver operating characteristic (ROC)
Usage
roc_curve(fpr, tpr, thresholds, negative_count, total_count)
Arguments
fpr list - False positive rates for each individual ROC
tpr list - True positive rates for each individual ROC
thresholds list - Thresholds used to compute the fpr and tpr
negative_count vector - Total number of samples corresponding to the negative case
total_count vector - Total number of samples
Value
list with the global fpr, tpr, and thresholds (decreasing)
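An illustrative call, sketched with made-up results from two nodes (thresholds decreasing, as most ROC implementations produce):

fpr <- list(c(0, 0.5, 1), c(0, 0.25, 1))
tpr <- list(c(0, 0.75, 1), c(0, 0.5, 1))
thr <- list(c(2, 1, 0), c(2, 1, 0))
roc <- roc_curve(fpr, tpr, thr, negative_count = c(10, 20), total_count = c(25, 40))
# roc holds the aggregated fpr, tpr, and (decreasing) thresholds;
# precision_recall_curve() takes the same arguments and returns the global PR curve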
shift_vector Shift a vector left or right according to the value provided
Description
Shift a vector left or right according to the value provided
Usage
shift_vector(x, n)
Arguments
x the vector
n shift
Value
the vector shifted
Examples
shift_vector(c(1,2,3,4), 1)
shift_vector(c(1,2,3,4), -1) |
github.com/asim/go-micro/plugins/broker/stan/v4 | go | Go | None
Documentation
---
### Overview
Package stan provides a NATS Streaming broker
### Index
* func AckOnSuccess() broker.SubscribeOption
* func ClientID(clientID string) broker.Option
* func ClusterID(clusterID string) broker.Option
* func ConnectRetry(v bool) broker.Option
* func ConnectTimeout(td time.Duration) broker.Option
* func DurableName(name string) broker.Option
* func NewBroker(opts ...broker.Option) broker.Broker
* func Options(opts stan.Options) broker.Option
* func ServerSubscriberOption(opts ...stan.SubscriptionOption) server.SubscriberOption
* func SubscribeContext(ctx context.Context) broker.SubscribeOption
* func SubscribeOption(opts ...stan.SubscriptionOption) broker.SubscribeOption
### Constants
This section is empty.
### Variables
This section is empty.
### Functions
#### func [AckOnSuccess](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L53)
```
func AckOnSuccess() broker.SubscribeOption
```
AckOnSuccess will automatically acknowledge messages when no error is returned
#### func [ClientID](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L29)
```
func ClientID(clientID string) broker.Option
```
ClientID specifies the client id to connect with
#### func [ClusterID](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L22)
```
func ClusterID(clusterID string) broker.Option
```
ClusterID specifies the cluster id to connect to
#### func [ConnectRetry](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L67)
```
func ConnectRetry(v bool) broker.Option
```
ConnectRetry reconnects to the broker in case of errors
#### func [ConnectTimeout](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L60)
```
func ConnectTimeout(td time.Duration) broker.Option
```
ConnectTimeout sets the timeout for connecting to the broker: -1 for infinite, or a time.Duration value
#### func [DurableName](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L74)
```
func DurableName(name string) broker.Option
```
DurableName sets the DurableName for the subscriber
#### func [NewBroker](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/stan.go#L381)
```
func NewBroker(opts ...broker.Option) broker.Broker
```
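A minimal wiring sketch (assumes a NATS Streaming server on the default local address; `broker.Addrs`, `Connect`, and `Subscribe` come from go-micro's `broker` package):

```
package main

import (
	"fmt"

	stan "github.com/asim/go-micro/plugins/broker/stan/v4"
	"go-micro.dev/v4/broker"
)

func main() {
	// Build a STAN-backed broker using the options documented above.
	b := stan.NewBroker(
		broker.Addrs("nats://127.0.0.1:4222"),
		stan.ClusterID("test-cluster"),
		stan.ClientID("example-client"),
	)
	if err := b.Connect(); err != nil {
		fmt.Println("connect error:", err)
		return
	}
	// AckOnSuccess: messages are acknowledged only when the handler returns nil.
	if _, err := b.Subscribe("events", func(e broker.Event) error {
		fmt.Println("received:", string(e.Message().Body))
		return nil
	}, stan.AckOnSuccess()); err != nil {
		fmt.Println("subscribe error:", err)
	}
}
```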
#### func [Options](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L15)
```
func Options(opts stan.Options) broker.Option
```
Options accepts stan.Options
#### func [ServerSubscriberOption](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L39)
```
func ServerSubscriberOption(opts ...stan.SubscriptionOption) server.SubscriberOption
```
#### func [SubscribeContext](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L46)
```
func SubscribeContext(ctx context.Context) broker.SubscribeOption
```
SubscribeContext sets the context for broker.SubscribeOption
#### func [SubscribeOption](https://github.com/asim/go-micro/blob/plugins/broker/stan/v4.7.0/plugins/broker/stan/options.go#L35)
```
func SubscribeOption(opts ...stan.SubscriptionOption) broker.SubscribeOption
```
### Types
This section is empty. |
rhai_rustler | hex | Erlang | rhai_rustler
===
[![CI](https://github.com/fabriziosestito/rhai_rustler/actions/workflows/main.yaml/badge.svg)](https://github.com/fabriziosestito/rhai_rustler/actions/workflows/main.yaml)
[![Rust CI](https://github.com/fabriziosestito/rhai_rustler/actions/workflows/rust-ci.yaml/badge.svg)](https://github.com/fabriziosestito/rhai_rustler/actions/workflows/rust-ci.yaml)
[![NIFs precompilation](https://github.com/fabriziosestito/rhai_rustler/actions/workflows/release.yaml/badge.svg)](https://github.com/fabriziosestito/rhai_rustler/actions/workflows/release.yaml)
[![Hex.pm](https://img.shields.io/hexpm/v/rhai_rustler.svg)](https://hex.pm/packages/rhai_rustler)
[![Hex Docs](https://img.shields.io/badge/hex-docs-purple.svg)](https://hexdocs.pm/rhai_rustler/)
Elixir NIF bindings for Rhai, a tiny, simple and fast embedded scripting language for Rust that gives you a safe and easy way to add scripting to your applications.
Please refer to [The Rhai Book](https://rhai.rs/book/index.html) for extended information about the language.
[Installation](#installation)
---
Add `:rhai_rustler` to the list of dependencies in `mix.exs`:
```
def deps do
[
{:rhai_rustler, "~> 1.0.0"}
]
end
```
[Features](#features)
---
`rhai_rustler` exposes a subset of the Rhai API to Elixir:
* Engine - [Rhai Book](https://rhai.rs/book/engine/index.html) - [docs.rs](https://docs.rs/rhai/latest/rhai/struct.Engine.html)
* Scope - [Rhai book](https://rhai.rs/book/engine/scope.html) - [docs.rs](https://docs.rs/rhai/latest/rhai/struct.Scope.html)
* AST - [Rhai book](https://rhai.rs/book/engine/ast.html) - [docs.rs](https://docs.rs/rhai/latest/rhai/struct.Ast.html)
Note that not all the Rhai API features are supported. For instance, advanced and low-level APIs are not exposed.
If any usage patterns become apparent, they will be included in the future.
Please refer to [NIF bindings](nif-bindings.html) to see the methods supported by the Elixir NIF.
The Elixir NIF provides a way to extend Rhai with external native Rust modules, see: [Extending rhai_rustler with external native Rust modules](#extending-rhai_rustler-with-external-native-rust-modules) and [rhai_dylib](https://github.com/rhaiscript/rhai-dylib) for more information.
To check the supported types conversion, see [Type conversion table](#type-conversion-table).
[Usage patterns](#usage-patterns)
---
###
["Hello Rhai"](#hello-rhai)
```
engine = Rhai.Engine.new()
{:ok, "Hello Rhai!"} = Rhai.Engine.eval(engine, "\"Hello Rhai!\"")
```
###
[Eval](#eval)
```
engine = Rhai.Engine.new()
# Simple evaluation
{:ok, 2} = Rhai.Engine.eval(engine, "1 + 1")
# Evaluation with scope
scope = Rhai.Scope.new() |> Rhai.Scope.push("a", 10) |> Rhai.Scope.push("b", 3)
{:ok, 30} = Rhai.Engine.eval_with_scope(engine, scope, "a * b")
```
###
[AST](#ast)
```
engine = Rhai.Engine.new()
scope = Rhai.Scope.new() |> Rhai.Scope.push_constant("a", 10) |> Rhai.Scope.push_constant("b", 3)
{:ok, %Rhai.AST{} = ast} = Rhai.Engine.compile_with_scope(engine, scope, "a * b")
{:ok, 30} = Rhai.Engine.eval_ast(engine, ast)
# AST can be shared between engines
task = Task.async(fn -> Rhai.Engine.eval_ast(Rhai.Engine.new(), ast) end)
{:ok, 30} = Task.await(task)
```
###
[Raw Engine](#raw-engine)
```
engine = Rhai.Engine.new_raw()
# Returns an error since BasicArrayPackage is not registered
{:error, {:function_not_found, _}} = Rhai.Engine.eval(engine, "[1, 2, 3].find(|x| x > 2)")
# Please refer to https://rhai.rs/book/rust/packages/builtin.html for more information about packages
engine = Rhai.Engine.register_package(engine, :basic_array)
{:ok, 3} = Rhai.Engine.eval(engine, "[1, 2, 3].find(|x| x > 2)")
```
###
[Extending rhai_rustler with external native Rust modules](#extending-rhai_rustler-with-external-native-rust-modules)
`rhai_rustler` utilizes the `[rhai_dylib](https://github.com/rhaiscript/rhai-dylib)` library to expand the capabilities of Rhai by loading external native Rust modules. This allows users to introduce new functions, custom types, and operators.
[test_dylib_module](https://github.com/fabriziosestito/rhai_rustler/tree/main/native/test_dylib_module) serves as an example of how to create a dylib module. A [dummy rustler module](https://github.com/fabriziosestito/rhai_rustler/blob/main/test/support/test_dylib_module.ex) is employed to trigger the compilation process. This same approach can be adopted in real-world projects, such as when distributing the dylib module as a Hex package.
[Type conversion table](#type-conversion-table)
---
Elixir Types are converted to Rhai types (and back) as follows:
| Elixir | Rhai |
| --- | --- |
| integer() | Integer |
| float() | Float |
| float() | Decimal |
| bool() | Boolean |
| String.t() | String |
| String.t() | Char |
| list() | Array |
| tuple() | Array |
| %{ String.t() => Rhai.Any.t() } | Object map |
| nil() | Empty |
| pid() | Empty (not supported) |
| ref() | Empty (not supported) |
| fun() | Empty (not supported) |
| map() | Empty (not supported) |
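As a sketch of the table in practice, a Rhai object map comes back as an Elixir map with string keys. Note the escaped `\#{...}` in the script string: Rhai's object-map literal syntax collides with Elixir string interpolation, so the `#` must be escaped:

```
engine = Rhai.Engine.new()
{:ok, %{"a" => 1, "b" => [2.5, true]}} = Rhai.Engine.eval(engine, "\#{a: 1, b: [2.5, true]}")
```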
[Rustler precompiled](#rustler-precompiled)
---
By default, **you don't need the Rust toolchain installed** because the lib will try to download a precompiled NIF file.
In case you want to force compilation set the
`RHAI_RUSTLER_FORCE_BUILD` environment variable to `true` or `1`.
Precompiled NIFs are available for the following platforms:
* aarch64-apple-darwin
* x86_64-apple-darwin
* x86_64-unknown-linux-gnu
* x86_64-unknown-linux-musl
* arm-unknown-linux-gnueabihf
* aarch64-unknown-linux-gnu
* aarch64-unknown-linux-musl
* x86_64-pc-windows-msvc
* x86_64-pc-windows-gnu
###
[Release flow](#release-flow)
Please follow [this guide](https://hexdocs.pm/rustler_precompiled/precompilation_guide.html#the-release-flow) when releasing a new version of the library.
[License](#license)
---
This library is licensed under Apache 2.0 License. See [LICENSE](license.html) for details.
[Links](#links)
---
* [rhai](https://github.com/rhaiscript/rhai) The Rust crate doing most of the dirty work.
* [RustlerPrecompiled](https://github.com/philss/rustler_precompiled) Use precompiled NIFs from trusted sources in your Elixir code.
* [NimbleLZ4](https://github.com/whatyouhide/nimble_lz4) Major inspiration for the RustlerPrecompiled GitHub actions workflow and general setup.
[API Reference](api-reference.html)
[Next Page →
NIF Bindings](nif-bindings.html)
Rhai.AST
===
Compiled AST (abstract syntax tree) of a Rhai script.
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[clear_functions(ast)](#clear_functions/1)
Clear all function definitions in the AST.
[clear_source(ast)](#clear_source/1)
Clear the source.
[clear_statements(ast)](#clear_statements/1)
Clear all statements in the AST, leaving only function definitions.
[clone_functions_only(ast)](#clone_functions_only/1)
Clone the AST’s functions into a new AST. No statements are cloned.
[combine(ast1, ast2)](#combine/2)
Combine one AST with another. The second AST is consumed.
[empty()](#empty/0)
Create an empty AST.
[has_functions?(ast)](#has_functions?/1)
Does this AST contain script-defined functions?
[merge(ast1, ast2)](#merge/2)
Merge two ASTs into one. Both ASTs are untouched and a new, merged, version is returned.
[set_source(ast, source)](#set_source/2)
Set the source.
[source(ast)](#source/1)
Get the source if any.
[Types](#types)
===
[Functions](#functions)
===
Rhai.Any
===
Rhai types
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Types](#types)
===
Rhai.Engine
===
Rhai main scripting engine.
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[allow_anonymous_fn?(engine)](#allow_anonymous_fn?/1)
Is anonymous function allowed? Default is `true`.
[allow_if_expression?(engine)](#allow_if_expression?/1)
Is if-expression allowed? Default is `true`.
[allow_loop_expressions?(engine)](#allow_loop_expressions?/1)
Are loop-expression allowed? Default is `true`.
[allow_looping?(engine)](#allow_looping?/1)
Is looping allowed? Default is `true`.
[allow_shadowing?(engine)](#allow_shadowing?/1)
Is shadowing allowed? Default is `true`.
[allow_statement_expression?(engine)](#allow_statement_expression?/1)
Is statement_expression allowed? Default is `true`.
[allow_switch_expression?(engine)](#allow_switch_expression?/1)
Is `switch` expression allowed? Default is `true`.
[call_fn(engine, scope, ast, name, args)](#call_fn/5)
Call a script function defined in an AST with multiple arguments.
[compact_script(engine, script)](#compact_script/2)
Compact a script to eliminate insignificant whitespaces and comments.
This is useful to prepare a script for further compressing.
The output script is semantically identical to the input script, except smaller in size.
Unlike other uglifiers and minifiers, this method does not rename variables nor perform any optimization on the input script.
[compile(engine, script)](#compile/2)
Compile a string into an AST, which can be used later for evaluation.
[compile_expression(engine, script)](#compile_expression/2)
Compile a string containing an expression into an AST, which can be used later for evaluation.
[compile_expression_with_scope(engine, scope, script)](#compile_expression_with_scope/3)
Compile a string containing an expression into an AST using own scope, which can be used later for evaluation.
[compile_file(engine, path)](#compile_file/2)
Compile a script file into an AST, which can be used later for evaluation.
[compile_file_with_scope(engine, scope, script)](#compile_file_with_scope/3)
Compile a script file into an AST using own scope, which can be used later for evaluation.
[compile_into_self_contained(engine, scope, script)](#compile_into_self_contained/3)
Compile a string into an AST using own scope, which can be used later for evaluation, embedding all imported modules.
Modules referred to by import statements containing literal string paths are eagerly resolved via the current module resolver and embedded into the resultant AST. When it is evaluated later, import statements directly recall pre-resolved modules and the resolution process is not performed again.
[compile_scripts_with_scope(engine, scope, script)](#compile_scripts_with_scope/3)
When passed a list of strings, first join the strings into one large script, and then compile them into an AST using own scope, which can be used later for evaluation.
[compile_with_scope(engine, scope, script)](#compile_with_scope/3)
Compile a string into an AST using own scope, which can be used later for evaluation.
[disable_symbol(engine, symbol)](#disable_symbol/2)
Disable a particular keyword or operator in the language.
[ensure_data_size_within_limits(engine, value)](#ensure_data_size_within_limits/2)
Return an error if the size of a Dynamic is out of limits (if any).
[eval(engine, script)](#eval/2)
Evaluate a string as a script, returning the result value or an error.
[eval_ast(engine, ast)](#eval_ast/2)
Evaluate an AST, returning the result value or an error.
[eval_ast_with_scope(engine, scope, ast)](#eval_ast_with_scope/3)
Evaluate an AST with own scope, returning the result value or an error.
[eval_expression(engine, script)](#eval_expression/2)
Evaluate a string containing an expression, returning the result value or an error.
[eval_expression_with_scope(engine, scope, script)](#eval_expression_with_scope/3)
Evaluate a string containing an expression with own scope, returning the result value or an error.
[eval_file(engine, path)](#eval_file/2)
Evaluate a script file, returning the result value or an error.
[eval_file_with_scope(engine, scope, path)](#eval_file_with_scope/3)
Evaluate a script file with own scope, returning the result value or an error.
[eval_with_scope(engine, scope, script)](#eval_with_scope/3)
Evaluate a string as a script with own scope, returning the result value or an error.
[fail_on_invalid_map_property?(engine)](#fail_on_invalid_map_property?/1)
Whether an error is raised if an object map property does not exist.
[fast_operators?(engine)](#fast_operators?/1)
Is fast operators mode enabled? Default is `false`.
[max_array_size(engine)](#max_array_size/1)
The maximum length of arrays (0 for unlimited).
[max_call_levels(engine)](#max_call_levels/1)
The maximum levels of function calls allowed for a script (to avoid infinite recursion and stack overflows).
[max_expr_depth(engine)](#max_expr_depth/1)
The depth limit for expressions (0 for unlimited).
[max_function_expr_depth(engine)](#max_function_expr_depth/1)
The depth limit for expressions in functions (0 for unlimited).
[max_map_size(engine)](#max_map_size/1)
The maximum size of object maps (0 for unlimited).
[max_modules(engine)](#max_modules/1)
The maximum number of imported modules allowed for a script.
[max_operations(engine)](#max_operations/1)
The maximum number of operations allowed for a script to run (0 for unlimited).
[max_string_size(engine)](#max_string_size/1)
The maximum length, in bytes, of strings (0 for unlimited).
[new()](#new/0)
Create a new Engine
[new_raw()](#new_raw/0)
Create a new Engine with minimal built-in functions.
[optimization_level(engine)](#optimization_level/1)
The current optimization level. It controls whether and how the Engine will optimize an AST after compilation.
[optimize_ast(engine, scope, ast, optimization_level)](#optimize_ast/4)
Optimize the AST with constants defined in an external Scope.
An optimized copy of the AST is returned while the original AST is consumed.
[register_custom_operator(engine, operator, precedence)](#register_custom_operator/3)
Register a custom operator with a precedence into the language.
[register_custom_operator!(engine, operator, precedence)](#register_custom_operator!/3)
Register a custom operator with a precedence into the language.
[register_global_module(engine, path)](#register_global_module/2)
Register a shared dylib Module into the global namespace of Engine.
[register_global_module!(engine, path)](#register_global_module!/2)
Register a shared dylib Module into the global namespace of Engine.
[register_package(engine, package)](#register_package/2)
Register the package with an Engine.
[register_static_module(engine, namespace, path)](#register_static_module/3)
Register a shared Module into the namespace of Engine.
[register_static_module!(engine, namespace, path)](#register_static_module!/3)
Register a shared Module into the namespace of Engine.
[run(engine, script)](#run/2)
Evaluate a string as script.
[run_ast(engine, ast)](#run_ast/2)
Evaluate an AST.
[run_ast_with_scope(engine, scope, ast)](#run_ast_with_scope/3)
Evaluate an AST with own scope.
[run_file(engine, path)](#run_file/2)
Evaluate a file.
[run_file_with_scope(engine, scope, path)](#run_file_with_scope/3)
Evaluate a file with own scope.
[run_with_scope(engine, scope, script)](#run_with_scope/3)
Evaluate a string as script with own scope.
[set_allow_anonymous_fn(engine, enable)](#set_allow_anonymous_fn/2)
Set whether anonymous function is allowed.
[set_allow_if_expression(engine, enable)](#set_allow_if_expression/2)
Set whether `if`-expression is allowed.
[set_allow_loop_expressions(engine, enable)](#set_allow_loop_expressions/2)
Set whether loop expressions are allowed.
[set_allow_looping(engine, enable)](#set_allow_looping/2)
Set whether looping is allowed.
[set_allow_shadowing(engine, enable)](#set_allow_shadowing/2)
Set whether shadowing is allowed.
[set_allow_statement_expression(engine, enable)](#set_allow_statement_expression/2)
Set whether statement_expression is allowed.
[set_allow_switch_expression(engine, enable)](#set_allow_switch_expression/2)
Set whether `switch` expression is allowed.
[set_fail_on_invalid_map_property(engine, enable)](#set_fail_on_invalid_map_property/2)
Set whether to raise error if an object map property does not exist.
[set_fast_operators(engine, enable)](#set_fast_operators/2)
Set whether fast operators mode is enabled.
[set_max_array_size(engine, max_size)](#set_max_array_size/2)
Set the maximum length of arrays (0 for unlimited).
[set_max_call_levels(engine, levels)](#set_max_call_levels/2)
Set the maximum levels of function calls allowed for a script in order to avoid infinite recursion and stack overflows.
[set_max_expr_depths(engine, max_expr_depth, max_function_expr_depth)](#set_max_expr_depths/3)
Set the depth limits for expressions (0 for unlimited).
[set_max_map_size(engine, size)](#set_max_map_size/2)
Set the maximum size of object maps (0 for unlimited).
[set_max_modules(engine, modules)](#set_max_modules/2)
Set the maximum number of imported modules allowed for a script.
[set_max_operations(engine, operations)](#set_max_operations/2)
Set the maximum number of operations allowed for a script to run to avoid consuming too much resources (0 for unlimited).
[set_max_string_size(engine, string_size)](#set_max_string_size/2)
Set the maximum length, in bytes, of strings (0 for unlimited).
[set_module_resolvers(engine, module_resolvers)](#set_module_resolvers/2)
Set the module resolution services used by the Engine.
[set_optimization_level(engine, optimization_level)](#set_optimization_level/2)
Control whether and how the Engine will optimize an AST after compilation.
[set_strict_variables(engine, enable)](#set_strict_variables/2)
Set whether strict variables mode is enabled.
[strict_variables?(engine)](#strict_variables?/1)
Is strict variables mode enabled? Default is `false`.
[Types](#types)
===
[Functions](#functions)
===
Rhai.Error
===
Rhai error types
[Summary](#summary)
===
[Types](#types)
---
[error()](#t:error/0)
[t()](#t:t/0)
[Types](#types)
===
Rhai.Package
===
Rhai package types
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Types](#types)
===
Rhai.Scope
===
Type containing information about the current scope. Useful for keeping state between Engine evaluation runs.
Scope implements the [Enumerable](https://hexdocs.pm/elixir/1.12/Enumerable.html) protocol.
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[clear(scope)](#clear/1)
Empty the Scope.
[clone_visible(scope)](#clone_visible/1)
Clone the Scope, keeping only the last instances of each variable name. Shadowed variables are omitted in the copy.
[constant?(scope, name)](#constant?/2)
Check if the named entry in the Scope is constant.
Search starts backwards from the last, stopping at the first entry matching the specified name.
Returns nil if no entry matching the specified name is found.
[contains?(scope, name)](#contains?/2)
Does the Scope contain the entry?
[empty?(scope)](#empty?/1)
Returns true if this Scope contains no variables.
[get_value(scope, name)](#get_value/2)
Get the value of an entry in the Scope, starting from the last.
[len(scope)](#len/1)
Get the number of entries inside the Scope.
[new()](#new/0)
Create a new Scope
[pop(scope)](#pop/1)
Remove the last entry from the Scope.
[pop!(scope)](#pop!/1)
Remove the last entry from the Scope.
[push(scope, name, value)](#push/3)
Add (push) a new entry to the Scope.
[push_constant(scope, name, value)](#push_constant/3)
Add (push) a new constant to the Scope.
[remove(scope, name)](#remove/2)
Remove the last entry in the Scope by the specified name and return its value.
[rewind(scope, size)](#rewind/2)
Truncate (rewind) the Scope to a previous size.
[set_or_push(scope, name, value)](#set_or_push/3)
Update the value of the named entry in the Scope if it already exists and is not constant.
Push a new entry with the value into the Scope if the name doesn’t exist or if the existing entry is constant.
[set_value(scope, name, value)](#set_value/3)
Update the value of the named entry in the Scope.
[set_value!(scope, name, value)](#set_value!/3)
Update the value of the named entry in the Scope.
[with_capacity(capacity)](#with_capacity/1)
Create a new Scope with a particular capacity.
[Types](#types)
===
[Functions](#functions)
=== |
sigs.k8s.io/cluster-api-provider-aws | go | Go | README
[¶](#section-readme)
---
### Kubernetes Cluster API Provider AWS
![](https://github.com/kubernetes/kubernetes/raw/master/logo/logo.png)[![Powered by AWS Cloud Computing](https://d0.awsstatic.com/logos/powered-by-aws.png)](https://aws.amazon.com/opensource/)
[![](https://godoc.org/sigs.k8s.io/cluster-api-provider-aws?status.svg)](https://godoc.org/sigs.k8s.io/cluster-api-provider-aws)
[![](https://goreportcard.com/badge/sigs.k8s.io/cluster-api-provider-aws)](https://goreportcard.com/report/sigs.k8s.io/cluster-api-provider-aws)
[![](https://img.shields.io/badge/join%20slack-%23cluster--api--aws-brightgreen)](http://slack.k8s.io/)
[![](https://bestpractices.coreinfrastructure.org/projects/5688/badge)](https://bestpractices.coreinfrastructure.org/projects/5688)
---
Kubernetes-native declarative infrastructure for AWS.
#### What is the Cluster API Provider AWS
The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative, Kubernetes-style APIs to cluster creation, configuration and management.
The API itself is shared across multiple cloud providers allowing for true AWS hybrid deployments of Kubernetes. It is built atop the lessons learned from previous cluster managers such as [kops](https://github.com/kubernetes/kops) and
[kubicorn](http://kubicorn.io/).
#### Documentation
Please see our [book](https://cluster-api-aws.sigs.k8s.io) for in-depth documentation.
#### Launching a Kubernetes cluster on AWS
Check out the [Cluster API Quick Start](https://cluster-api.sigs.k8s.io/user/quick-start.html) for launching a cluster on AWS.
#### Features
* Native Kubernetes manifests and API
* Manages the bootstrapping of VPCs, gateways, security groups and instances.
* Choice of Linux distribution among Amazon Linux 2, CentOS 7, Ubuntu(18.04, 20.04) and Flatcar using [pre-baked AMIs](https://cluster-api-aws.sigs.k8s.io/topics/images/built-amis.html).
* Deploys Kubernetes control planes into private subnets with a separate bastion server.
* Doesn't use SSH for bootstrapping nodes.
* Installs only the minimal components to bootstrap a control plane and workers.
* Supports control planes on EC2 instances.
* [EKS support](https://cluster-api-aws.sigs.k8s.io/topics/eks/index.html)
---
#### Compatibility with Cluster API and Kubernetes Versions
This provider's versions are compatible with the following versions of Cluster API and support all Kubernetes versions that are supported by its compatible Cluster API version:
| | Cluster API v1alpha3 (v0.3) | Cluster API v1alpha4 (v0.4) | Cluster API v1beta1 (v1.x) |
| --- | --- | --- | --- |
| CAPA v1alpha3 `(v0.6)` | ✓ | ☓ | ☓ |
| CAPA v1alpha4 `(v0.7)` | ☓ | ✓ | ☓ |
| CAPA v1beta1 `(v1.x, main)` | ☓ | ☓ | ✓ |
(See [Kubernetes support matrix](https://cluster-api.sigs.k8s.io/reference/versions.html) of Cluster API versions).
---
#### Kubernetes versions with published AMIs
See [amis](https://cluster-api-aws.sigs.k8s.io/topics/images/amis.html) for the list of most recently published AMIs.
---
#### clusterawsadm
The `clusterawsadm` CLI tool provides bootstrapping, AMI, EKS, and controller related helpers.
`clusterawsadm` binaries are released with each release and can be found under the [assets](https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest) section.
---
#### Getting involved and contributing
Are you interested in contributing to cluster-api-provider-aws? We, the maintainers and community, would love your suggestions, contributions, and help!
Also, the maintainers can be contacted at any time to learn more about how to get involved.
In the interest of getting more new people involved we tag issues with
[`good first issue`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22).
These are typically issues that have smaller scope but are good ways to start to get acquainted with the codebase.
We also encourage ALL active community participants to act as if they are maintainers, even if you don't have "official" write permissions. This is a community effort, we are here to serve the Kubernetes community. If you have an active interest and you want to get involved, you have real power! Don't assume that the only people who can get things done around here are the "maintainers".
We also would love to add more "official" maintainers, so show us what you can do!
This repository uses the Kubernetes bots. See a full list of the commands [here](https://go.k8s.io/bot-commands).
##### Build the images locally
If you want to just build the CAPA containers locally, run
```
REGISTRY=docker.io/my-reg make docker-build
```
##### Tilt-based development environment
See [development](https://cluster-api-aws.sigs.k8s.io/development/development.html) section for details
##### Implementer office hours
Maintainers hold office hours every two weeks, with sessions open to all developers working on this project.
Office hours are hosted on a zoom video chat every other Monday at 09:00 (Pacific) / 12:00 (Eastern) / 17:00 (Europe/London),
and are published on the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com).
##### Other ways to communicate with the contributors
Please check in with us in the [#cluster-api-aws](https://kubernetes.slack.com/messages/CD6U2V71N) channel on Slack.
#### Github issues
##### Bugs
If you think you have found a bug please follow the instructions below.
* Please spend a small amount of time giving due diligence to the issue tracker. Your issue might be a duplicate.
* Get the logs from the cluster controllers. Please paste this into your issue.
* Open a [new issue](https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/new).
* Remember that users might be searching for your issue in the future, so please give it a meaningful title to help others.
* Feel free to reach out to the cluster-api community on the [kubernetes slack](https://kubernetes.slack.com/messages/CD6U2V71N).
##### Tracking new features
We also use the issue tracker to track features. If you have an idea for a feature, or think you can help cluster-api-provider-aws become even more awesome, follow the steps below.
* Open a [new issue](https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/new).
* Remember that users might be searching for your issue in the future, so please give it a meaningful title to help others.
* Clearly define the use case, using concrete examples. EG: I type `this` and cluster-api-provider-aws does `that`.
* Some of our larger features will require some design. If you would like to include a technical design for your feature please include it in the issue.
* After the new feature is well understood, and the design agreed upon, we can start coding the feature. We would love for you to code it. So please open up a **WIP** *(work in progress)* pull request, and happy coding.
> "Amazon Web Services, AWS, and the 'Powered by AWS' logo materials are
> trademarks of Amazon.com, Inc. or its affiliates in the United States
> and/or other countries."
#### Our Contributors
Thank you to all contributors and a special thanks to our current maintainers & reviewers:
| Maintainers | Reviewers |
| --- | --- |
| [@richardcase](https://github.com/richardcase) | [@Ankitasw](https://github.com/Ankitasw) |
| [@sedefsavas](https://github.com/sedefsavas) | [@dthorsen](https://github.com/dthorsen) |
| | [@dlipovetsky](https://github.com/dlipovetsky) |
| | [@pydctw](https://github.com/pydctw) |
| | [@shivi28](https://github.com/shivi28) |
and the previous/emeritus maintainers & reviewers:
| Emeritus Maintainers | Emeritus Reviewers |
| --- | --- |
| [@chuckha](https://github.com/chuckha) | [@ashish-amarnath](https://github.com/ashish-amarnath) |
| [@detiber](https://github.com/detiber) | [@davidewatson](https://github.com/davidewatson) |
| [@ncdc](https://github.com/ncdc) | [@enxebre](https://github.com/enxebre) |
| [@randomvariable](https://github.com/randomvariable) | [@ingvagabund](https://github.com/ingvagabund) |
| [@rudoi](https://github.com/rudoi) | [@michaelbeaumont](https://github.com/michaelbeaumont) |
| [@vincepri](https://github.com/vincepri) | [@sethp-nr](https://github.com/sethp-nr) |
All the CAPA contributors:
[![](https://contrib.rocks/image?repo=kubernetes-sigs/cluster-api-provider-aws)](https://github.com/kubernetes-sigs/cluster-api-provider-aws/graphs/contributors)
Documentation
[¶](#section-documentation)
---
![The Go Gopher](/static/shared/gopher/airplane-1200x945.svg)
There is no documentation for this package. |
github.com/grafana/loki | go | Go | README
[¶](#section-readme)
---
![Loki Logo](https://github.com/grafana/loki/raw/v1.6.1/docs/sources/logo_and_name.png)
[![Drone CI](https://cloud.drone.io/api/badges/grafana/loki/status.svg)](https://cloud.drone.io/grafana/loki)
[![CircleCI](https://circleci.com/gh/grafana/loki.svg?style=shield&circle-token=618193e5787b2951c1ea3352ad5f254f4f52313d)](https://circleci.com/gh/grafana/loki/tree/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/grafana/loki)](https://goreportcard.com/report/github.com/grafana/loki)
[![Slack](https://img.shields.io/badge/join%20slack-%23loki-brightgreen.svg)](https://slack.grafana.com/)
### Loki: like Prometheus, but for logs.
Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by [Prometheus](https://prometheus.io/).
It is designed to be very cost effective and easy to operate.
It does not index the contents of the logs, but rather a set of labels for each log stream.
Compared to other log aggregation systems, Loki:
* does not do full text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
* indexes and groups log streams using the same labels you’re already using with Prometheus, enabling you to seamlessly switch between metrics and logs.
* is an especially good fit for storing [Kubernetes](https://kubernetes.io/) Pod logs. Metadata such as Pod labels is automatically scraped and indexed.
* has native support in Grafana (needs Grafana v6.0).
A Loki-based logging stack consists of 3 components:
* `promtail` is the agent, responsible for gathering logs and sending them to Loki.
* `loki` is the main server, responsible for storing logs and processing queries.
* [Grafana](https://github.com/grafana/grafana) for querying and displaying the logs.
Loki is like Prometheus, but for logs: we prefer a multidimensional label-based approach to indexing, and want a single-binary, easy to operate system with no dependencies.
Loki differs from Prometheus by focusing on logs instead of metrics, and delivering logs via push, instead of pull.
#### Getting started
* [Installing Loki](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/installation/README.md)
* [Installing Promtail](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/clients/promtail/installation.md)
* [Getting Started Guide](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/getting-started/README.md)
#### Upgrading
* [Upgrading Loki](https://github.com/grafana/loki/blob/master/docs/operations/upgrade.md)
##### Documentation
* [master](https://github.com/grafana/loki/blob/v1.6.1/docs/README.md)
* [v1.6.0](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/_index.md)
* [v1.5.0](https://github.com/grafana/loki/tree/v1.5.0/docs/README.md)
* [v1.4.1](https://github.com/grafana/loki/tree/v1.4.1/docs/README.md)
* [v1.4.0](https://github.com/grafana/loki/tree/v1.4.0/docs/README.md)
* [v1.3.0](https://github.com/grafana/loki/tree/v1.3.0/docs/README.md)
* [v1.2.0](https://github.com/grafana/loki/tree/v1.2.0/docs/README.md)
* [v1.1.0](https://github.com/grafana/loki/tree/v1.1.0/docs/README.md)
* [v1.0.0](https://github.com/grafana/loki/tree/v1.0.0/docs/README.md)
Commonly used sections (from the latest release v1.6.0):
* [API documentation](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/api/_index.md) for alternative ways of getting logs into Loki.
* [Labels](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/getting-started/labels.md)
* [Operations](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/operations/_index.md) for important aspects of running Loki.
* [Promtail](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/clients/promtail/_index.md) is an agent which can tail your log files and push them to Loki.
* [Pipelines](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/clients/promtail/pipelines.md) for detailed log processing pipeline documentation
* [Docker Logging Driver](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/clients/docker-driver/_index.md) is a docker plugin to send logs directly to Loki from Docker containers.
* [LogCLI](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/getting-started/logcli.md) on how to query your logs without Grafana.
* [Loki Canary](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/operations/loki-canary.md) for monitoring your Loki installation for missing logs.
* [Troubleshooting](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/getting-started/troubleshooting.md) for help around frequent error messages.
* [Loki in Grafana](https://github.com/grafana/loki/tree/v1.6.0/docs/sources/getting-started/grafana.md) for how to set up a Loki datasource in Grafana and query your logs.
#### Getting Help
If you have any questions or feedback regarding Loki:
* Ask a question on the Loki Slack channel. To invite yourself to the Grafana Slack, visit <https://slack.grafana.com/> and join the #loki channel.
* [File an issue](https://github.com/grafana/loki/issues/new) for bugs, issues and feature suggestions.
* Send an email to [<EMAIL>](mailto:<EMAIL>), or use the [web interface](https://groups.google.com/forum/#!forum/lokiproject).
* UI issues should be filed directly in [Grafana](https://github.com/grafana/grafana/issues/new).
Your feedback is always welcome.
#### Further Reading
* The original [design doc](https://docs.google.com/document/d/11tjK_lvp1-SVsFZjgOTr1vV3-q6vBAsZYIQ5ZeYBkyM/view) for Loki is a good source for discussion of the motivation and design decisions.
* <NAME>'s March 2019 DevOpsDays Vancouver talk "[Grafana Loki: Log Aggregation for Incident Investigations](https://grafana.com/blog/2019/05/06/how-loki-correlates-metrics-and-logs-and-saves-you-money/)".
* Grafana Labs blog post "[How We Designed Loki to Work Easily Both as Microservices and as Monoliths](https://grafana.com/blog/2019/04/15/how-we-designed-loki-to-work-easily-both-as-microservices-and-as-monoliths/)".
* <NAME>'s early-2019 CNCF Paris/FOSDEM talk "[Grafana Loki: like Prometheus, but for logs](https://fosdem.org/2019/schedule/event/loki_prometheus_for_logs/)" ([slides](https://speakerdeck.com/grafana/grafana-loki-like-prometheus-but-for-logs), [video](https://mirror.as35701.net/video.fosdem.org/2019/UB2.252A/loki_prometheus_for_logs.mp4)).
* <NAME>'s KubeCon 2018 talk "[On the OSS Path to Full Observability with Grafana](https://kccna18.sched.com/event/GrXC/on-the-oss-path-to-full-observability-with-grafana-david-kaltschmidt-grafana-labs)" ([slides](https://speakerdeck.com/davkal/on-the-path-to-full-observability-with-oss-and-launch-of-loki), [video](https://www.youtube.com/watch?v=U7C5SpRtK74&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU&index=346)) on how Loki fits into a cloud-native environment.
* <NAME>'s blog post "[Loki: Prometheus-inspired, open source logging for cloud natives](https://grafana.com/blog/2018/12/12/loki-prometheus-inspired-open-source-logging-for-cloud-natives/)" on details of the Loki architecture.
* <NAME>'s blog post "[Closer look at Grafana's user interface for Loki](https://grafana.com/blog/2019/01/02/closer-look-at-grafanas-user-interface-for-loki/)" on the ideas that went into the logging user interface.
#### Contributing
Refer to [CONTRIBUTING.md](https://github.com/grafana/loki/blob/v1.6.1/CONTRIBUTING.md)
##### Building from source
Loki can be run in a single host, no-dependencies mode using the following commands.
You need `go` [v1.10+](https://golang.org/dl/) installed locally.
```
$ go get github.com/grafana/loki
$ cd $GOPATH/src/github.com/grafana/loki # GOPATH is $HOME/go by default.
$ go build ./cmd/loki
$ ./loki -config.file=./cmd/loki/loki-local-config.yaml
...
```
To build Promtail on non-Linux platforms, use the following command:
```
$ go build ./cmd/promtail
```
On Linux, Promtail requires the systemd headers to be installed for Journal support.
With Journal support on Ubuntu, run with the following commands:
```
$ sudo apt install -y libsystemd-dev
$ go build ./cmd/promtail
```
With Journal support on CentOS, run with the following commands:
```
$ sudo yum install -y systemd-devel
$ go build ./cmd/promtail
```
Otherwise, to build Promtail without Journal support, run `go build`
with CGO disabled:
```
$ CGO_ENABLED=0 go build ./cmd/promtail
```
#### License
Apache License 2.0, see [LICENSE](https://github.com/grafana/loki/blob/v1.6.1/LICENSE).
None |
GillespieSSA2 | cran | R | Package ‘GillespieSSA2’
January 23, 2023
Type Package
Title Gillespie's Stochastic Simulation Algorithm for Impatient People
Version 0.3.0
Description A fast, scalable, and versatile framework for
simulating large systems with Gillespie's Stochastic Simulation
Algorithm ('SSA'). This package is the spiritual successor to the
'GillespieSSA' package originally written by <NAME>.
Benefits of this package include major speed improvements (>100x),
easier to understand documentation, and many unit tests that try to
ensure the package works as intended. Cannoodt and Saelens et al. (2021)
<doi:10.1038/s41467-021-24152-2>.
License GPL (>= 3)
URL https://rcannood.github.io/GillespieSSA2/,
https://github.com/rcannood/GillespieSSA2
BugReports https://github.com/rcannood/GillespieSSA2/issues
Depends R (>= 3.3)
Imports assertthat, dplyr, dynutils, Matrix, methods, purrr, Rcpp (>=
0.12.3), RcppXPtrUtils, readr, rlang, stringr, tidyr
Suggests covr, ggplot2, GillespieSSA, knitr, rmarkdown, testthat (>=
2.1.0)
LinkingTo Rcpp
VignetteBuilder knitr
Encoding UTF-8
RoxygenNote 7.2.2
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-3641-729X>),
<NAME> [aut] (<https://orcid.org/0000-0002-7114-6248>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-01-23 19:20:02 UTC
R topics documented:
compile_reactions
GillespieSSA2
ode_em
plot_ssa
port_reactions
print.SSA_reaction
reaction
ssa
ssa_btl
ssa_etl
ssa_exact
compile_reactions Precompile the reactions
Description
By precompiling the reactions, you can run multiple SSA simulations repeatedly without having to
recompile the reactions every time.
Usage
compile_reactions(
reactions,
state_ids,
params,
buffer_ids = NULL,
hardcode_params = FALSE,
fun_by = 10000L,
debug = FALSE
)
Arguments
reactions ’reaction’ A list of multiple reaction() objects.
state_ids [character] The names of the states in the correct order.
params [named numeric] Constants that are used in the propensity functions.
buffer_ids [character] The order of any buffer calculations that are made as part of the
propensity functions.
hardcode_params
[logical] Whether or not to hardcode the values of params in the compilation
of the propensity functions. Setting this to TRUE will result in a minor sacrifice
in accuracy for a minor increase in performance.
fun_by [integer] Combine this number of propensity functions into one function.
debug [logical] Whether to print the resulting C++ code before compiling.
Value
A list of objects solely to be used by ssa().
• x[["state_change"]]: A sparse matrix of reaction effects.
• x[["reaction_ids"]]: The names of the reactions.
• x[["buffer_ids"]]: A set of buffer variables found in the propensity functions.
• x[["buffer_size"]]: The minimum size of the buffer required.
• x[["function_pointers"]]: A list of compiled propensity functions.
• x[["hardcode_params"]]: Whether the parameters were hard coded into the source code.
Examples
initial_state <- c(prey = 1000, predators = 1000)
params <- c(c1 = 10, c2 = 0.01, c3 = 10)
reactions <- list(
# propensity function effects name for reaction
reaction(~c1 * prey, c(prey = +1), "prey_up"),
reaction(~c2 * prey * predators, c(prey = -1, predators = +1), "predation"),
reaction(~c3 * predators, c(predators = -1), "pred_down")
)
compiled_reactions <- compile_reactions(
reactions = reactions,
state_ids = names(initial_state),
params = params
)
out <-
ssa(
initial_state = initial_state,
reactions = compiled_reactions,
params = params,
method = ssa_exact(),
final_time = 5,
census_interval = .001,
verbose = TRUE
)
plot_ssa(out)
GillespieSSA2 GillespieSSA2: Gillespie’s Stochastic Simulation Algorithm for impatient people.
Description
GillespieSSA2 is a fast, scalable, and versatile framework for simulating large systems with Gillespie’s Stochastic Simulation Algorithm (SSA). This package is the spiritual successor to the GillespieSSA package originally written by <NAME>.
Details
GillespieSSA2 has the following added benefits:
• The whole algorithm is run in Rcpp which results in major speed improvements (>100x). Even
your propensity functions (reactions) are being compiled to Rcpp!
• Parameters and variables have been renamed to make them easier to understand.
• Many unit tests try to ensure that the code works as intended.
The SSA methods currently implemented are: Exact (ssa_exact()), Explicit tau-leaping (ssa_etl()),
and the Binomial tau-leaping (ssa_btl()).
The stochastic simulation algorithm
The stochastic simulation algorithm (SSA) is a procedure for constructing simulated trajectories of finite populations in continuous time. If $X_i(t)$ is the number of individuals in population $i$ ($i = 1, \ldots, N$) at time $t$, the SSA estimates the state vector $X(t) \equiv (X_1(t), \ldots, X_N(t))$, given that the system initially (at time $t_0$) was in state $X(t_0) = x_0$.
Reactions are single instantaneous events changing at least one of the populations (e.g. birth, death, movement, collision, predation, infection, etc). These cause the state of the system to change over time.
The SSA procedure samples the time $\tau$ to the next reaction $R_j$ ($j = 1, \ldots, M$) and updates the system state $X(t)$ accordingly.
Each reaction $R_j$ is characterized mathematically by two quantities: its state-change vector $\nu_j$ and its propensity function $a_j(x)$. The state-change vector is defined as $\nu_j \equiv (\nu_{1j}, \ldots, \nu_{Nj})$, where $\nu_{ij}$ is the change in the number of individuals in population $i$ caused by one reaction of type $j$. The propensity function is defined as $a_j(x)$, where $a_j(x)\,dt$ is the probability that a particular reaction $j$ will occur in the next infinitesimal time interval $[t, t + dt]$.
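For reference, the exact (direct) method of Gillespie (1977), as used by ssa_exact(), draws the waiting time and the index of the next reaction as follows (a standard statement of the method, reproduced here for context):

$$a_0(x) = \sum_{j=1}^{M} a_j(x), \qquad \tau = \frac{1}{a_0(x)} \ln\frac{1}{u}, \qquad \Pr(\text{next reaction} = R_j) = \frac{a_j(x)}{a_0(x)},$$

where $u \sim \mathrm{Unif}(0, 1)$; the state is then updated as $X(t + \tau) = X(t) + \nu_j$.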
Contents of this package
• ssa(): The main entry point for running an SSA simulation.
• plot_ssa(): A standard visualisation for generating an overview plot fo the output.
• ssa_exact(), ssa_etl(), ssa_btl(): Different SSA algorithms.
• ode_em(): An ODE algorithm.
• compile_reactions(): A function for precompiling the reactions.
See Also
ssa() for more explanation on how to use GillespieSSA2
ode_em Euler-Maruyama method (EM)
Description
Euler-Maruyama method implementation of the ODE.
Usage
ode_em(tau = 0.01, noise_strength = 2)
Arguments
tau tau parameter
noise_strength noise_strength parameter
Value
an object to be used by ssa().
plot_ssa Simple plotting of ssa output
Description
Provides basic functionally for simple and quick time series plot of simulation output from ssa().
Usage
plot_ssa(
ssa_out,
state = TRUE,
propensity = FALSE,
buffer = FALSE,
firings = FALSE,
geom = c("point", "step")
)
Arguments
ssa_out Data object returned by ssa().
state Whether or not to plot the state values.
propensity Whether or not to plot the propensity values.
buffer Whether or not to plot the buffer values.
firings Whether or not to plot the reaction firings values.
geom Which geom to use, must be one of "point", "step".
port_reactions Port GillespieSSA parameters to GillespieSSA2
Description
This is a helper function to transform GillespieSSA-style parameters to GillespieSSA2.
Usage
port_reactions(x0, a, nu)
Arguments
x0 The x0 parameter of GillespieSSA::ssa().
a The a parameter of GillespieSSA::ssa().
nu The nu parameter of GillespieSSA::ssa().
Value
A set of reaction()s to be used by ssa().
Examples
x0 <- c(Y1 = 1000, Y2 = 1000)
a <- c("c1*Y1","c2*Y1*Y2","c3*Y2")
nu <- matrix(c(+1,-1,0,0,+1,-1),nrow=2,byrow=TRUE)
port_reactions(x0, a, nu)
print.SSA_reaction Print various SSA objects
Description
Print various SSA objects
Usage
## S3 method for class 'SSA_reaction'
print(x, ...)
## S3 method for class 'SSA_method'
print(x, ...)
Arguments
x An SSA reaction or SSA method
... Not used
reaction Define a reaction
Description
During an SSA simulation, at any infinitesimal time interval, a reaction will occur with a probability
defined according to its propensity. If it does, then it will change the state vector according to its
effects.
Usage
reaction(propensity, effect, name = NA_character_)
Arguments
propensity [character/formula] A character or formula representation of the propensity
function, written in C++.
effect [named integer vector] The change in state caused by this reaction.
name [character] A name for this reaction (Optional). May only contain characters
matching [A-Za-z0-9_].
Details
It is possible to use ’buffer’ values in order to speed up the computation of the propensity functions.
For instance, instead of "(c3 * s1) / (1 + c3 * s1)", it is possible to write "buf = c3 * s1; buf / (buf + 1)".
Value
[SSA_reaction] This object describes a single reaction as part of an SSA simulation. It contains
the following member values:
• r[["propensity"]]: The propensity function as a character.
• r[["effect"]]: The change in state caused by this reaction.
• r[["name"]]: The name of the reaction, NA_character_ if no name was provided.
Examples
# propensity effect
reaction(~ c1 * s1, c(s1 = -1))
reaction("c2 * s1 * s1", c(s1 = -2, s2 = +1))
reaction("buf = c3 * s1; buf / (buf + 1)", c(s1 = +2))
ssa Invoking the stochastic simulation algorithm
Description
Main interface function to the implemented SSA methods. Runs a single realization of a predefined
system. For a detailed explanation on how to set up your first SSA system, check the introduction
vignette: vignette("an_introduction", package = "GillespieSSA2"). If you’re transitioning
from GillespieSSA to GillespieSSA2, check out the corresponding vignette: vignette("converting_from_GillespieSSA", package = "GillespieSSA2").
Usage
ssa(
initial_state,
reactions,
final_time,
params = NULL,
method = ssa_exact(),
census_interval = 0,
stop_on_neg_state = TRUE,
max_walltime = Inf,
log_propensity = FALSE,
log_firings = FALSE,
log_buffer = FALSE,
verbose = FALSE,
console_interval = 1,
sim_name = NA_character_,
return_simulator = FALSE
)
Arguments
initial_state [named numeric vector] The initial state to start the simulation with.
reactions A list of reactions, see reaction().
final_time [numeric] The final simulation time.
params [named numeric vector] Constant parameters to be used in the propensity
functions.
method [ssa_method] Which SSA algorithm to use. Must be one of: ssa_exact(),
ssa_btl(), or ssa_etl().
census_interval
[numeric] The approximate interval between recording the state of the system.
Setting this parameter to 0 will cause each state to be recorded, and to Inf will
cause only the end state to be recorded.
stop_on_neg_state
[logical] Whether or not to stop the simulation when a negative value in the state has occurred. This can occur, for instance, in the ssa_etl() method.
max_walltime [numeric] The maximum duration (in seconds) that the simulation is allowed to run before being terminated.
log_propensity [logical] Whether or not to store the propensity values at each census.
log_firings [logical] Whether or not to store number of firings of each reaction between
censuses.
log_buffer [logical] Whether or not to store the buffer at each census.
verbose [logical] If TRUE, intermediary information pertaining to the simulation will
be displayed.
console_interval
[numeric] The approximate interval between intermediary information outputs.
sim_name [character] An optional name for the simulation.
return_simulator
Whether to return the simulator itself, instead of the output.
Details
Substantial improvements in speed and accuracy can be obtained by adjusting the additional (and
optional) ssa arguments. By default ssa uses conservative parameters (e.g. ssa_exact()) which
prioritise computational accuracy over computational speed.
Approximate methods (ssa_etl() and ssa_btl()) are not foolproof! Some tweaking might be
required for a stochastic model to run appropriately.
Value
Returns a list containing the output of the simulation:
• out[["time"]]: [numeric] The simulation time at which a census was performed.
• out[["state"]]: [numeric matrix] The number of individuals at those time points.
• out[["propensity"]]: [numeric matrix] If log_propensity is TRUE, the propensity
value of each reaction at each time point.
• out[["firings"]]: [numeric matrix] If log_firings is TRUE, the number of firings be-
tween two time points.
• out[["buffer"]]: [numeric matrix] If log_buffer is TRUE, the buffer values at each time
point.
• out[["stats"]]: [data frame] Various stats:
– $method: The name of the SSA method used.
– $sim_name: The name of the simulation, if provided.
– $sim_time_exceeded: Whether the simulation stopped because the final simulation time
was reached.
– $all_zero_state: Whether an extinction has occurred.
– $negative_state: Whether a negative state has occurred. If an SSA method other than
ssa_etl() is used, this indicates a mistake in the provided reaction effects.
– $all_zero_propensity: Whether the simulation stopped because all propensity values
are zero.
– $negative_propensity: Whether a negative propensity value has occurred. If so, there
is likely a mistake in the provided reaction propensity functions.
– $walltime_exceeded: Whether the simulation stopped because the maximum execution
time has been reached.
– $walltime_elapsed: The duration of the simulation.
– $num_steps: The number of steps performed.
– $dtime_mean: The mean time increment per step.
– $dtime_sd: The standard deviation of time increments.
– $firings_mean: The mean number of firings per step.
– $firings_sd: The standard deviation of the number of firings.
See Also
GillespieSSA2 for a high level explanation of the package
Examples
initial_state <- c(prey = 1000, predators = 1000)
params <- c(c1 = 10, c2 = 0.01, c3 = 10)
reactions <- list(
# propensity function effects name for reaction
reaction(~c1 * prey, c(prey = +1), "prey_up"),
reaction(~c2 * prey * predators, c(prey = -1, predators = +1), "predation"),
reaction(~c3 * predators, c(predators = -1), "pred_down")
)
out <-
ssa(
initial_state = initial_state,
reactions = reactions,
params = params,
method = ssa_exact(),
final_time = 5,
census_interval = .001,
verbose = TRUE
)
plot_ssa(out)
ssa_btl Binomial tau-leap method (BTL)
Description
Binomial tau-leap method implementation of the SSA as described by Chatterjee et al. (2005).
Usage
ssa_btl(mean_firings = 10)
Arguments
mean_firings A coarse-graining factor of how many firings will occur at each iteration on average. Depending on the propensity functions, a large value for mean_firings can result in warnings being generated and a loss of accuracy.
Value
an object to be used by ssa().
References
<NAME>., <NAME>., and <NAME>. 2005. Binomial distribution based tau-leap
accelerated stochastic simulation. J. Chem. Phys. 122:024112. doi: 10.1063/1.1833357.
ssa_etl Explicit tau-leap method (ETL)
Description
Explicit tau-leap method implementation of the SSA as described by Gillespie (2001). Note that this method does not attempt to select an appropriate value for tau, nor does it implement the estimated-midpoint technique.
Usage
ssa_etl(tau = 0.3)
Arguments
tau the step-size (default 0.3).
Value
an object to be used by ssa().
References
<NAME>. 2001. Approximate accelerated stochastic simulation of chemically reacting systems.
J. Chem. Phys. 115:1716-1733. doi: 10.1063/1.1378322.
ssa_exact Exact method
Description
Exact method implementation of the SSA as described by Gillespie (1977).
Usage
ssa_exact()
Value
an object to be used by ssa().
References
<NAME>. 1977. Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem.
81:2340. doi: 10.1021/j100540a008 |
diskernet | npm | JavaScript | DiskerNet: An Archive of Your Online Journey
===
DiskerNet empowers you to be the master archivist of your own internet browsing. As a robust, lightweight tool, DiskerNet seamlessly connects to your browser, saving and organizing your online discoveries in real-time. With an option to archive everything or only bookmark-worthy content, DiskerNet places you in full control of your browsing history. No special plugins or extensions required.
Why DiskerNet?
---
* **Access**: Keep track of your online finds without breaking a sweat.
* **Efficiency**: Find your saved content fast, saving you time for more exploration.
* **Flexibility**: Share your archive with others or maintain your digital solitude.
* **Simplicity**: No frills, no fuss. DiskerNet is straightforward to use, requiring no extra tools or plugins.
* **Organization**: Search through everything you've archived with full text search of all archived content. Your own personal search engine.
Latest Updates
---
**Local SSL Certificates Now Supported!** 🔒 🎉
Ensure your DiskerNet server runs over TLS with our support for local SSL certificates.
Licensing
---
DiskerNet is protected under the AGPL-3.0.
Get DiskerNet
---
[Download a release](https://github.com/crisdosyago/Diskernet/releases)
or ...
Install via **[npm](https://www.npmjs.com/package/diskernet)**:
```
$ npm i -g diskernet@latest
```
or...
**Build your own binaries:**
```
$ git clone https://github.com/crisdosyago/DiskerNet
$ cd DiskerNet
$ npm i
$ ./scripts/build_setup.sh
$ ./scripts/compile.sh
$ cd bin/
```
Contributions welcome! Get involved. :)
Navigate your digital world with DiskerNet. Download and start archiving today!
Readme
---
### Keywords
* archivist
* library |
wellcome-aws-utils | readthedoc | Python | Wellcome AWS Utils 1.0.0 documentation
[Wellcome AWS Utils](index.html#document-index)
---
Welcome to Wellcome AWS Utils’s documentation![¶](#welcome-to-wellcome-aws-utils-s-documentation)
===
This package is a collection of utilities written at Wellcome for interacting with AWS.
Some of these utilities are very specific to Wellcome projects, others are more generic and should be generally useful.
The best place to start is the [API reference](index.html#document-api), which describes all the utilities the package provides.
It’s a bit sparse at the moment, but hopefully we’ll expand it soon!
API reference[¶](#api-reference)
---
### Deployment utilities[¶](#deployment-utilities)
Shared library to help surface ECS deployment information.
*class* `wellcome_aws_utils.deployment_utils.``Deployment`(*deployment_key*, *deployment_status*, *color*, *created_at*, *task_definition*)[¶](#wellcome_aws_utils.deployment_utils.Deployment)
`color`[¶](#wellcome_aws_utils.deployment_utils.Deployment.color)
Alias for field number 2
`created_at`[¶](#wellcome_aws_utils.deployment_utils.Deployment.created_at)
Alias for field number 3
`deployment_key`[¶](#wellcome_aws_utils.deployment_utils.Deployment.deployment_key)
Alias for field number 0
`deployment_status`[¶](#wellcome_aws_utils.deployment_utils.Deployment.deployment_status)
Alias for field number 1
`task_definition`[¶](#wellcome_aws_utils.deployment_utils.Deployment.task_definition)
Alias for field number 4
*class* `wellcome_aws_utils.deployment_utils.``DeploymentKey`(*id*, *service_arn*)[¶](#wellcome_aws_utils.deployment_utils.DeploymentKey)
`id`[¶](#wellcome_aws_utils.deployment_utils.DeploymentKey.id)
Alias for field number 0
`service_arn`[¶](#wellcome_aws_utils.deployment_utils.DeploymentKey.service_arn)
Alias for field number 1
### DynamoDB events[¶](#dynamodb-events)
*class* `wellcome_aws_utils.dynamo_event.``DynamoEventType`[[source]](_modules/wellcome_aws_utils/dynamo_event.html#DynamoEventType)[¶](#wellcome_aws_utils.dynamo_event.DynamoEventType)
An enumeration.
### DynamoDB utilities[¶](#dynamodb-utilities)
`wellcome_aws_utils.dynamo_utils.``change_dynamo_capacity`(*client*, *table_name*, *desired_capacity*)[[source]](_modules/wellcome_aws_utils/dynamo_utils.html#change_dynamo_capacity)[¶](#wellcome_aws_utils.dynamo_utils.change_dynamo_capacity)
Given the name of a DynamoDB table and a desired capacity, update the read/write capacity of the table and every secondary index.
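For example, a minimal sketch of scaling a table before a bulk job; the boto3 client is standard, but the table name below is a placeholder:
```
import boto3

from wellcome_aws_utils.dynamo_utils import change_dynamo_capacity

client = boto3.client("dynamodb")

# Raise the read/write capacity of the table and every one of its
# secondary indexes; "example-table" is an illustrative name.
change_dynamo_capacity(client, table_name="example-table", desired_capacity=10)
```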
### ECS[¶](#ecs)
*exception* `wellcome_aws_utils.ecs_utils.``EcsThrottleException`[[source]](_modules/wellcome_aws_utils/ecs_utils.html#EcsThrottleException)[¶](#wellcome_aws_utils.ecs_utils.EcsThrottleException)
`wellcome_aws_utils.ecs_utils.``clone_task_definition`(*client*, *task_definition*)[[source]](_modules/wellcome_aws_utils/ecs_utils.html#clone_task_definition)[¶](#wellcome_aws_utils.ecs_utils.clone_task_definition)
Given a task definition ARN, clone the associated task.
Returns the new task definition ARN.
`wellcome_aws_utils.ecs_utils.``describe_cluster`(*ecs_client*, *cluster_arn*)[[source]](_modules/wellcome_aws_utils/ecs_utils.html#describe_cluster)[¶](#wellcome_aws_utils.ecs_utils.describe_cluster)
Given a cluster ARN attempts to find a matching cluster description.
Returns a cluster description.
`wellcome_aws_utils.ecs_utils.``describe_service`(*ecs_client*, *cluster_arn*, *service_arn*)[[source]](_modules/wellcome_aws_utils/ecs_utils.html#describe_service)[¶](#wellcome_aws_utils.ecs_utils.describe_service)
Given a cluster ARN and service ARN, attempts to find a matching service description.
Returns a service description.
`wellcome_aws_utils.ecs_utils.``get_cluster_arns`(*ecs_client*)[[source]](_modules/wellcome_aws_utils/ecs_utils.html#get_cluster_arns)[¶](#wellcome_aws_utils.ecs_utils.get_cluster_arns)
Extract the list of cluster ARNs in this account.
Returns a list of cluster ARNs.
`wellcome_aws_utils.ecs_utils.``get_latest_task_definition`(*client*, *cluster*, *service*)[[source]](_modules/wellcome_aws_utils/ecs_utils.html#get_latest_task_definition)[¶](#wellcome_aws_utils.ecs_utils.get_latest_task_definition)
Given the name of a cluster and a service, return the ARN for its latest task definition.
`wellcome_aws_utils.ecs_utils.``get_service_arns`(*ecs_client*, *cluster_arn*)[[source]](_modules/wellcome_aws_utils/ecs_utils.html#get_service_arns)[¶](#wellcome_aws_utils.ecs_utils.get_service_arns)
Given a cluster ARN, extracts the associated service ARNs.
Returns a list of service ARNs.
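Taken together with describe_service(), these helpers support a simple cluster/service discovery sweep; a sketch (nothing here is specific to any real account):
```
import boto3

from wellcome_aws_utils.ecs_utils import (
    describe_service,
    get_cluster_arns,
    get_service_arns,
)

ecs_client = boto3.client("ecs")

# Walk every cluster in the account and print a description
# of each service found in it.
for cluster_arn in get_cluster_arns(ecs_client):
    for service_arn in get_service_arns(ecs_client, cluster_arn):
        print(describe_service(ecs_client, cluster_arn, service_arn))
```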
`wellcome_aws_utils.ecs_utils.``identify_cluster_by_app_name`(*client*, *app_name*)[[source]](_modules/wellcome_aws_utils/ecs_utils.html#identify_cluster_by_app_name)[¶](#wellcome_aws_utils.ecs_utils.identify_cluster_by_app_name)
Given the name of one of our applications (e.g. api, calm_adapter),
return the ARN of the cluster the task runs on.
`wellcome_aws_utils.ecs_utils.``run_task`(*ecs_client*, *cluster_name*, *task_definition*, *started_by*, *container_name='app'*, *command=[]*)[[source]](_modules/wellcome_aws_utils/ecs_utils.html#run_task)[¶](#wellcome_aws_utils.ecs_utils.run_task)
Run a given command against a named container in a task definition on a particular cluster.
Returns the response from calling run_task
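A sketch of invoking it against a placeholder cluster and task definition (all names are illustrative):
```
import boto3

from wellcome_aws_utils.ecs_utils import run_task

ecs_client = boto3.client("ecs")

# Run a one-off command in the default "app" container of a task
# definition; cluster and task definition names are placeholders.
response = run_task(
    ecs_client,
    cluster_name="example-cluster",
    task_definition="example-task:1",
    started_by="manual-invocation",
    command=["python", "manage.py", "migrate"],
)
```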
### S3[¶](#s3)
`wellcome_aws_utils.s3_utils.``is_object`(*bucket*, *key*)[[source]](_modules/wellcome_aws_utils/s3_utils.html#is_object)[¶](#wellcome_aws_utils.s3_utils.is_object)
Checks if an object exists in S3. Returns True/False.
Parameters:
* **bucket** – Bucket of the object to check.
* **key** – Key of the object to check.
`wellcome_aws_utils.s3_utils.``copy_object`(*src_bucket*, *src_key*, *dst_bucket*, *dst_key*, *lazy=False*)[[source]](_modules/wellcome_aws_utils/s3_utils.html#copy_object)[¶](#wellcome_aws_utils.s3_utils.copy_object)
Copy an object from one S3 bucket to another.
Parameters:
* **src_bucket** – Bucket of the source object.
* **src_key** – Key of the source object.
* **dst_bucket** – Bucket of the destination object.
* **dst_key** – Key of the destination object.
* **lazy** – Do a lazy copy. This means that the object will only be copied if the destination object does not exist, or exists but has a different ETag from the source object.
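For instance, a lazy copy guarded by an existence check; every bucket and key below is a placeholder:
```
from wellcome_aws_utils.s3_utils import copy_object, is_object

# Only copy if the destination is missing or has a different ETag;
# all bucket/key names here are illustrative.
if is_object(bucket="source-bucket", key="reports/2018.json"):
    copy_object(
        src_bucket="source-bucket",
        src_key="reports/2018.json",
        dst_bucket="backup-bucket",
        dst_key="reports/2018.json",
        lazy=True,
    )
```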
`wellcome_aws_utils.s3_utils.``parse_s3_record`(*event*)[[source]](_modules/wellcome_aws_utils/s3_utils.html#parse_s3_record)[¶](#wellcome_aws_utils.s3_utils.parse_s3_record)
Extracts a simple subset of an S3 update event.
`wellcome_aws_utils.s3_utils.``write_objects_to_s3`(*bucket*, *key*, *objects*)[[source]](_modules/wellcome_aws_utils/s3_utils.html#write_objects_to_s3)[¶](#wellcome_aws_utils.s3_utils.write_objects_to_s3)
Given an iterable of objects that can be serialised as JSON, serialise them as JSON, and write them to a file in S3, one per line.
Parameters:
* **bucket** – S3 bucket to upload the new file to.
* **key** – S3 key to upload the new file to.
* **objects** – An iterable of objects that can be serialised as JSON.
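A sketch of serialising a handful of records to a newline-delimited JSON file (bucket and key are placeholders):
```
from wellcome_aws_utils.s3_utils import write_objects_to_s3

records = [
    {"id": "b1234", "title": "Example record"},
    {"id": "b5678", "title": "Another record"},
]

# Each object is serialised as JSON and written as one line of the file.
write_objects_to_s3(bucket="example-bucket", key="records.ndjson", objects=records)
```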
### SNS[¶](#sns)
*class* `wellcome_aws_utils.sns_utils.``EnhancedJSONEncoder`(*, *skipkeys=False*, *ensure_ascii=True*, *check_circular=True*, *allow_nan=True*, *sort_keys=False*, *indent=None*, *separators=None*, *default=None*)[[source]](_modules/wellcome_aws_utils/sns_utils.html#EnhancedJSONEncoder)[¶](#wellcome_aws_utils.sns_utils.EnhancedJSONEncoder)
`default`(*obj*)[[source]](_modules/wellcome_aws_utils/sns_utils.html#EnhancedJSONEncoder.default)[¶](#wellcome_aws_utils.sns_utils.EnhancedJSONEncoder.default)
Implement this method in a subclass such that it returns a serializable object for `o`, or calls the base implementation
(to raise a `TypeError`).
For example, to support arbitrary iterators, you could implement default like this:
```
def default(self, o):
try:
iterable = iter(o)
except TypeError:
pass
else:
return list(iterable)
# Let the base class default method raise the TypeError
return JSONEncoder.default(self, o)
```
*class* `wellcome_aws_utils.sns_utils.``SNSEvent`(*subject*, *message*)[¶](#wellcome_aws_utils.sns_utils.SNSEvent)
`message`[¶](#wellcome_aws_utils.sns_utils.SNSEvent.message)
Alias for field number 1
`subject`[¶](#wellcome_aws_utils.sns_utils.SNSEvent.subject)
Alias for field number 0
`wellcome_aws_utils.sns_utils.``extract_json_message`(*event*)[[source]](_modules/wellcome_aws_utils/sns_utils.html#extract_json_message)[¶](#wellcome_aws_utils.sns_utils.extract_json_message)
Extracts a JSON message from an SNS event sent to a lambda
Deprecated in favour of extract_sns_messages_from_lambda_event
`wellcome_aws_utils.sns_utils.``extract_sns_messages_from_lambda_event`(*event*)[[source]](_modules/wellcome_aws_utils/sns_utils.html#extract_sns_messages_from_lambda_event)[¶](#wellcome_aws_utils.sns_utils.extract_sns_messages_from_lambda_event)
Extracts a JSON message from an SNS event sent to an AWS Lambda.
Parameters: **event** – An event sent to a Lambda from SNS.
Returns: A generator of SNSEvent instances.
`wellcome_aws_utils.sns_utils.``publish_sns_message`(*sns_client*, *topic_arn*, *message*, *subject='default-subject'*)[[source]](_modules/wellcome_aws_utils/sns_utils.html#publish_sns_message)[¶](#wellcome_aws_utils.sns_utils.publish_sns_message)
Given a topic ARN and a series of key-value pairs, publish the key-value data to the SNS topic.
### SQS[¶](#sqs)
`wellcome_aws_utils.sqs_utils.``get_messages`(*queue_url*, *delete=False*, *batch_size=10*)[[source]](_modules/wellcome_aws_utils/sqs_utils.html#get_messages)[¶](#wellcome_aws_utils.sqs_utils.get_messages)
Gets messages from an SQS queue. If `delete` is True, the messages are also deleted after they’ve been read.
Changelog[¶](#changelog)
---
This is a record of all releases of wellcome_aws_utils.
### 3.2.0 - 2020-07-15[¶](#id1)
Also handle SQS messages
### 3.1.0 - 2020-06-29[¶](#id2)
The adapters now only send an ID / Version, and we need to look that up in Dynamo to fetch the S3 object in the reporting code.
### 3.0.0 - 2020-06-25[¶](#id3)
The schema of VHS data has changed meaning the reporting_utils are currently broken. This updates to the latest VHS data format.
### 2.3.3 - 2019-07-26[¶](#id4)
Makes sure that the Elasticsearch doc is sent over as a string.
### 2.3.2 - 2019-05-21[¶](#id5)
Adds ability to switch AWS roles when fetching elasticsearch credentials
### 2.3.1 - 2019-05-21[¶](#id6)
Now with fixed Travis credentials.
### 2.3.0 - 2019-05-21[¶](#id7)
This release modifies the way that secrets are handled by lambdas in the reporting pipleine. Previously, secrets were passed to lambdas as environment variables, defined in terraform. We now fetch secrets from AWS secretsmanager as records move through the pipeline.
### 2.2.1 - 2018-11-23[¶](#id8)
A large number of records in the Sierra VHS contain a `reindexShard` parameter which is not expected when initialising a `HybridRecord()` object. `attrs` can’t handle data it doesn’t expect, and the records with `reindexShard` parameters therefore fail to pass through the pipeline.
We now throw away any unnecessary data in the received message, allowing originally dirty messages to pass through without issue.
### 2.2.0 - 2018-11-08[¶](#id9)
This release adds utils for the reporting pipeline.
The functions under `reporting_utils.py` describe a basic ETL pipeline from VHS to Elasticsearch, without a transformation specified. In this way, the shape of the pipeline remains independent of both the data within it and the transforms being applied.
As further data sources are added to the reporting pipeline and more Lambda functions are created, we keep repeated code to a minimum. In a new Lambda function, the user should specify a set of data-source-specific transformations in a `transform.py` file. The Lambda’s `main` can then remain minimal and generic:
### 2.1.3 - 2018-08-17[¶](#id10)
This fixes a bug in the `@log_on_error` decorator where the return value of the original function would be replaced by `None`. This decorator now preserves the original return value.
### 2.1.2 - 2018-06-26[¶](#id11)
Previously sending a message with `sns_utils.publish_sns_message` would print a message upon success.
Now this message is only logged at debug level.
### 2.1.1 - 2018-06-04[¶](#id12)
Now `@log_on_error` can be used to decorate functions with arbitrary arguments/keyword arguments.
### 2.1.0 - 2018-06-04[¶](#id13)
This adds a new method: `lambda_utils.log_on_error`. This can be used to decorate the main function for a Lambda, and logs the event/context if the Lambda throws an unexpected exception.
For example, running the following snippet:
```
@log_on_error def handler(event, context=None):
if event == {1: '1', 2: '2'}:
raise ValueError
handler(event={'foo': 'bar'})
handler(event='99 green bottles' * 99)
handler(event={1: '1', 2: '2'})
```
gives the following output:
This makes it easier to debug failed Lambdas, but without the expense of logging every event that a Lambda receives.
### 2.0.2 - 2018-06-04[¶](#id14)
Previously sending a message with `sns_utils.publish_sns_message` would log the entire SNS response.
Now the response is only logged if the SNS message is unsuccessful.
### 2.0.1 - 2018-01-12[¶](#id15)
This fixes a bug in `s3_utils.parse_s3_record`. If the key of a changed file included a character which is usually quoted in URLs (e.g. `+`),
a parsed record from the S3 event stream would use the URL-quoted form of the object key.
For example, a change to `s3://example/foo+bar` would become `foo%2Bbar`.
This version unquotes the key when parsing the event.
### 2.0.0 - 2017-11-29[¶](#id16)
Replacing the DynamoImageFactory and DynamoImage classes with DynamoEventFactory and DynamoEvent
* Perform quite a bit of sanity checking on event object received
* DynamoEvent can:
- return old and new images (if available)
- return modified keys only
- return deserialized or otherwise images and keys based on params
### 1.1.0 - 2017-11-15[¶](#id17)
Deprecates `sns_utils.extract_json_message` in favour of `sns_utils.extract_sns_messages_from_lambda_event`.
extract_sns_messages_from_lambda_event provides:
- better error reporting if the event is malformed
- loops over all available records from event not just the first
- returns subject along with the json decoded message
This release also adds `UnWellcomeException` which will be used as the base exception for new errors.
### 1.0.0 - 2017-11-07[¶](#id18)
First production release! |
@google-cloud/security-private-ca | npm | JavaScript | [Certificate Authority Service: Node.js Client](https://github.com/googleapis/google-cloud-node/tree/main/packages/google-cloud-security-privateca)
===
Privateca client for Node.js
A comprehensive list of changes in each version may be found in
[the CHANGELOG](https://github.com/googleapis/google-cloud-node/tree/main/packages/google-cloud-security-privateca/CHANGELOG.md).
* [Certificate Authority Service Node.js Client API Reference](https://cloud.google.com/nodejs/docs/reference/security-private-ca/latest)
* [Certificate Authority Service Documentation](https://cloud.google.com/certificate-authority-service)
* [github.com/googleapis/google-cloud-node/packages/google-cloud-security-privateca](https://github.com/googleapis/google-cloud-node/tree/main/packages/google-cloud-security-privateca)
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in [Client Libraries Explained](https://cloud.google.com/apis/docs/client-libraries-explained).
**Table of contents:**
* [Quickstart](#quickstart)
+ [Before you begin](#before-you-begin)
+ [Installing the client library](#installing-the-client-library)
+ [Using the client library](#using-the-client-library)
* [Samples](#samples)
* [Versioning](#versioning)
* [Contributing](#contributing)
* [License](#license)
Quickstart
---
### Before you begin
1. [Select or create a Cloud Platform project](https://console.cloud.google.com/project).
2. [Enable billing for your project](https://support.google.com/cloud/answer/6293499#enable-billing).
3. [Enable the Certificate Authority Service API](https://console.cloud.google.com/flows/enableapi?apiid=privateca.googleapis.com).
4. [Set up authentication with a service account](https://cloud.google.com/docs/authentication/getting-started) so you can access the API from your local workstation.
### Installing the client library
```
npm install @google-cloud/security-private-ca
```
### Using the client library
```
// Imports the Google Cloud client library
const {
CertificateAuthorityServiceClient,
} = require('@google-cloud/security-private-ca');
// TODO(developer): replace with your prefered project ID.
// const projectId = 'my-project'
// Creates a client const client = new CertificateAuthorityServiceClient();
async function listCertificates() {
const res = await client.listCertificates({
parent: `projects/${projectId}/locations/${location}/caPools/${name}`,
});
return res;
}
listCertificates();
```
Samples
---
Samples are in the [`samples/`](https://github.com/googleapis/google-cloud-node/tree/main/packages/google-cloud-security-privateca/samples) directory. Each sample's `README.md` has instructions for running its sample.
| Sample | Source Code | Try it |
| --- | --- | --- |
| Certificate_authority_service.activate_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.activate_certificate_authority.js) | |
| Certificate_authority_service.create_ca_pool | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.create_ca_pool.js) | |
| Certificate_authority_service.create_certificate | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.create_certificate.js) | |
| Certificate_authority_service.create_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.create_certificate_authority.js) | |
| Certificate_authority_service.create_certificate_template | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.create_certificate_template.js) | |
| Certificate_authority_service.delete_ca_pool | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.delete_ca_pool.js) | |
| Certificate_authority_service.delete_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.delete_certificate_authority.js) | |
| Certificate_authority_service.delete_certificate_template | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.delete_certificate_template.js) | |
| Certificate_authority_service.disable_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.disable_certificate_authority.js) | |
| Certificate_authority_service.enable_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.enable_certificate_authority.js) | |
| Certificate_authority_service.fetch_ca_certs | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.fetch_ca_certs.js) | |
| Certificate_authority_service.fetch_certificate_authority_csr | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.fetch_certificate_authority_csr.js) | |
| Certificate_authority_service.get_ca_pool | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.get_ca_pool.js) | |
| Certificate_authority_service.get_certificate | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.get_certificate.js) | |
| Certificate_authority_service.get_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.get_certificate_authority.js) | |
| Certificate_authority_service.get_certificate_revocation_list | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.get_certificate_revocation_list.js) | |
| Certificate_authority_service.get_certificate_template | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.get_certificate_template.js) | |
| Certificate_authority_service.list_ca_pools | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.list_ca_pools.js) | |
| Certificate_authority_service.list_certificate_authorities | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.list_certificate_authorities.js) | |
| Certificate_authority_service.list_certificate_revocation_lists | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.list_certificate_revocation_lists.js) | |
| Certificate_authority_service.list_certificate_templates | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.list_certificate_templates.js) | |
| Certificate_authority_service.list_certificates | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.list_certificates.js) | |
| Certificate_authority_service.revoke_certificate | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.revoke_certificate.js) | |
| Certificate_authority_service.undelete_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.undelete_certificate_authority.js) | |
| Certificate_authority_service.update_ca_pool | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.update_ca_pool.js) | |
| Certificate_authority_service.update_certificate | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.update_certificate.js) | |
| Certificate_authority_service.update_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.update_certificate_authority.js) | |
| Certificate_authority_service.update_certificate_revocation_list | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.update_certificate_revocation_list.js) | |
| Certificate_authority_service.update_certificate_template | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1/certificate_authority_service.update_certificate_template.js) | |
| Certificate_authority_service.activate_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.activate_certificate_authority.js) | |
| Certificate_authority_service.create_certificate | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.create_certificate.js) | |
| Certificate_authority_service.create_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.create_certificate_authority.js) | |
| Certificate_authority_service.disable_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.disable_certificate_authority.js) | |
| Certificate_authority_service.enable_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.enable_certificate_authority.js) | |
| Certificate_authority_service.fetch_certificate_authority_csr | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.fetch_certificate_authority_csr.js) | |
| Certificate_authority_service.get_certificate | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.get_certificate.js) | |
| Certificate_authority_service.get_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.get_certificate_authority.js) | |
| Certificate_authority_service.get_certificate_revocation_list | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.get_certificate_revocation_list.js) | |
| Certificate_authority_service.get_reusable_config | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.get_reusable_config.js) | |
| Certificate_authority_service.list_certificate_authorities | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.list_certificate_authorities.js) | |
| Certificate_authority_service.list_certificate_revocation_lists | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.list_certificate_revocation_lists.js) | |
| Certificate_authority_service.list_certificates | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.list_certificates.js) | |
| Certificate_authority_service.list_reusable_configs | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.list_reusable_configs.js) | |
| Certificate_authority_service.restore_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.restore_certificate_authority.js) | |
| Certificate_authority_service.revoke_certificate | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.revoke_certificate.js) | |
| Certificate_authority_service.schedule_delete_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.schedule_delete_certificate_authority.js) | |
| Certificate_authority_service.update_certificate | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.update_certificate.js) | |
| Certificate_authority_service.update_certificate_authority | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.update_certificate_authority.js) | |
| Certificate_authority_service.update_certificate_revocation_list | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/generated/v1beta1/certificate_authority_service.update_certificate_revocation_list.js) | |
| Quickstart | [source code](https://github.com/googleapis/google-cloud-node/blob/main/packages/google-cloud-security-privateca/samples/quickstart.js) | |
The [Certificate Authority Service Node.js Client API Reference](https://cloud.google.com/nodejs/docs/reference/security-private-ca/latest) documentation also contains samples.
Supported Node.js Versions
---
Our client libraries follow the [Node.js release schedule](https://github.com/nodejs/release#release-schedule).
Libraries are compatible with all current *active* and *maintenance* versions of Node.js.
If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.
Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:
* Legacy versions are not tested in continuous integration.
* Some security patches and features cannot be backported.
* Dependencies cannot be kept up-to-date.
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm [dist-tags](https://docs.npmjs.com/cli/dist-tag).
The dist-tags follow the naming convention `legacy-(version)`.
For example, `npm install @google-cloud/security-private-ca@legacy-8` installs client libraries for versions compatible with Node.js 8.
Versioning
---
This library follows [Semantic Versioning](http://semver.org/).
This library is considered to be **stable**. The code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against **stable** libraries are addressed with the highest priority.
More Information: [Google Cloud Platform Launch Stages](https://cloud.google.com/terms/launch-stages)
Contributing
---
Contributions welcome! See the [Contributing Guide](https://github.com/googleapis/google-cloud-node/blob/main/CONTRIBUTING.md).
Please note that this `README.md`, the `samples/README.md`,
and a variety of configuration files in this repository (including `.nycrc` and `tsconfig.json`)
are generated from a central template. To edit one of these files, make an edit to its templates in
[directory](https://github.com/googleapis/synthtool).
License
---
Apache Version 2.0
See [LICENSE](https://github.com/googleapis/google-cloud-node/blob/main/LICENSE)
Readme
---
### Keywords
* google apis client
* google api client
* google apis
* google api
* google
* google cloud platform
* google cloud
* cloud
* google privateca
* privateca
* certificate authority service |
importinegi | cran | R | Package ‘importinegi’
January 27, 2023
Title Download and Manage Open Data from INEGI
Version 1.2.0
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Description Download and manage data sets of statistical projects and geographic data created by In-
stituto Nacional de Estadistica y Geografia (INEGI). See <https://www.inegi.org.mx/>.
BugReports https://github.com/crenteriam/importinegi/issues
Depends R (>= 3.3.0)
License CC0
Encoding UTF-8
Imports foreign, dplyr, haven, rgdal, data.table, rio
Suggests knitr, markdown, testthat (>= 2.1.0)
Language es
RoxygenNote 7.1.0
VignetteBuilder knitr
NeedsCompilation no
Repository CRAN
Date/Publication 2023-01-27 16:00:03 UTC
R topics documented:
catalogo_ineg... 2
censo_municipa... 3
censo_poblacion_age... 4
censo_poblacion_ite... 5
censo_poblacion_muestr... 6
censo_poblacion_rura... 7
censo_poblacion_urban... 8
enigh_nuevaconstruccio... 9
eno... 10
sig_caminos_descarg... 11
sig_caminos_extra... 11
sig_marcoge... 12
catalogo_inegi Catalogo de proyectos estadisticos del INEGI
Description
Consulta el catalogo la Red Nacional de Metadatos del INEGI o los metadatos de un proyecto
estadistico en particular.
Usage
catalogo_inegi(id = NA)
Arguments
id Para acceder al diccionario de datos de una base de datos de la Red Nacional
de Metadatos del INEGI, utiliza el numero de identificacion unico (valor nu-
merico). El identificador unico se puede consultar en el catalogo de proyectos
estadisticos del INEGI (ver ejemplo).
Details
La funcion catalogo_inegi provee una lista de bases de datos con un identificador unico (id). Si
conoces el id de la base de datos, utilizalo en el parametro para acceder al libro de codigos y los
metadatos de la base de datos. Si no conoces el id de la base de datos a consultar, teclea la funcion
catalogo_inegi sin parametros para descargar la lista de bases de datos (Ver ejemplo).
Value
Data.frame
See Also
Consulta el repositorio la Red Nacional de Metadatos del INEGI.
Examples
# Accede al repositorio de la Red Nacional de Metadatos
# > de INEGI y almacenalo como una base de datos.
## Not run: dt.catalogo <- catalogo_inegi()
# Consulta metadatos de una base de datos.
## Not run: catalogo_inegi(id = 489)
censo_municipal Censo Nacional de Gobiernos Municipales y Delegacionales
Description
Descarga los datos del Censo Nacional de Gobiernos Municipales y Delegacionales (CNGMD),
Usage
censo_municipal(year = NA, fuente = NA, datos = NA)
Arguments
year Año del levantamiento del censo en formato numerico. Los años disponibles
son 2011, 2013, 2015 y 2017.
fuente Fuente de datos de las instituciones publicas de municipales y delegacionales en
formato alfanumerico. Las opciones son: ayuntamiento, administracion, seguri-
dad, justicia.
datos Base de datos producida por cada fuente de datos en formato alfanumerico. Las
opciones pueden ser, segun la fuente de datos: comision, estructura, integrantes,
actividades, funciones, marco, participacion, recursos, tramites, transparencia,
ejercicio, infraestructura, y recursos.
Details
El CNGMD es un proyecto estadistico sobre la gestion y desempeño de las entidades gubernamen-
tales mexicanas a nivel municipal. El CNGMD cubre cuatro tematicas: ayuntamiento, administra-
cion publica municipal, seguridad y justicia.
Value
Data.frame
Examples
# Consulta los metadatos del Censo Nacional de Gobiernos Municipales y Delegacionales
## Not run: censo_municipal()
# Descarga los microdatos de la estructura de los ayuntamientos en 2011
## Not run: estruct <- censo_municipal(year = 2011, fuente = "ayuntamiento", datos = "estructura")
censo_poblacion_ageb Censo de Poblacion - AGEB
Description
Descarga los datos del Censo de Poblacion y Vivienda al nivel de desagregacion AGEB y manzana
urbana.
Usage
censo_poblacion_ageb(
year = 2010,
estado = "Nacional",
totalestado = FALSE,
totalmunicipio = FALSE,
totallocalidad = FALSE,
totalageb = FALSE,
manzana = TRUE
)
Arguments
year Año del levantamiento del censo en formato numerico. El unio año disponible
en INEGI (incluyendo los conteos) para esta base de datos es 2010.
estado Define el nombre de la entidad federativa para descargar los datos, en formato al-
fanumerico. Utiliza "Nacional" para descargarlos a nivel nacional. Los nombres
de los estados deben ir capitalizados (y en su caso, con espacios), por ejemplo:
"Aguascalientes", "CDMX", "San Luis Potosi".
totalestado Resultados agregados a nivel entidad federativa. FALSE omite los resultados a
nivel entidad federativa.
totalmunicipio Resultados agregados a nivel municipio. FALSE omite los resultados a nivel
municipio.
totallocalidad Resultados agregados a nivel localidad urbana. FALSE omite los resultados a
nivel municipio.
totalageb Resultados agregados a nivel AGEB urbana. FALSE omite los resultados a nivel
AGEB.
manzana Si se requiere conservar unicamente los resultados a nivel agregado (p. ej. es-
tado, municipio o localidad), FALSE eliminara las observaciones por manzana.
Details
Esta base de datos tiene tres niveles de agregacion: entidades federativas, municipios, agebs y
manzanas (en zonas urbanas).
Value
Data.frame
Examples
# Consultar los datos del Censo a nivel AGEB y manzana urbana.
## Not run: censo_poblacion_ageb()
# Descargar los datos de CDMX de 2010.
## Not run: ageb = censo_poblacion_ageb(year = 2010, estado = "CDMX")
censo_poblacion_iter Censo de Poblacion - ITER
Description
Censo de Poblacion y Vivienda. Principales resultados por localidad (ITER).
Usage
censo_poblacion_iter(
year = "2010",
estado = "Nacional",
totalestado = FALSE,
totalmunicipio = FALSE,
localidades = TRUE
)
Arguments
year Año del levantamiento del censo en formato numerico. Los años disponibles
(incluyendo los conteos) son: 1990, 1995, 2000, 2005, 2010 y 2015
estado Define el nombre de la entidad federativa para descargar los datos, en formato al-
fanumerico. La funcion, por defecto utiliza la palabra "Nacional" para descargar
los datos de todos los estados. Los nombres de los estados deben ir capitaliza-
dos (y en su caso, con espacios), por ejemplo: "Aguascalientes", "CDMX", "San
Luis Potosi".
totalestado Resultados agregados a nivel entidad federativa. FALSE omite los resultados a
nivel entidad federativa.
totalmunicipio Resultados agregados a nivel municipio. FALSE omite los resultados a nivel
municipio.
localidades Si se requiere conservar unicamente los resultados a nivel agregado (estado o
municipio), FALSE eliminara las observaciones por localidad.
Details
Esta base de datos tiene dos niveles de agregacion: entidades federativas y municipios.
Value
Data.frame
Examples
# Consultar los datos ITER del Censo de Poblacion y Vivienda
## Not run: censo_poblacion_iter()
# Descargar los datos de CDMX de 2010.
## Not run: iter = censo_poblacion_iter(year = 2010, estado = "CDMX")
censo_poblacion_muestra
Censo de Poblacion - Muestra
Description
Censo de Poblacion y Vivienda. Muestra (cuestionario ampliado).
Usage
censo_poblacion_muestra(year = 2010, estado = NA, muestra = NA)
Arguments
year Año del levantamiento del censo en formato numerico. Los años disponibles
(incluyendo los conteos) son: 1990, 1995, 2005 y 2010.
estado Define el nombre de la entidad federativa para descargar los datos, en formato
alfanumerico. Los nombres de los estados deben ir capitalizados (y en su caso,
con espacios), por ejemplo: "Aguascalientes", "CDMX", "San Luis Potosi".
muestra Bases de datos disponibles Migrantes (1995, 2000 y 2010), Personas (1995,
2000, 2005 y 2010), Viviendas (2000, 2005 y 2010), Hogar (2005) y NA (1990).
Details
En la muestra del Censo la unidad de analisis puede ser personas, viviendas o migrantes. Por lo
tanto, ademas del año y el estado, un tercer parametro requerido es muestra, que representa la
unidad de analisis. Las unidades de analisis en este parámetro pueden ser Migrantes, Personas,
Viviendas u Hogar.
Value
Data.frame
Examples
# Descarga los datos de CDMX de 2010.
## Not run: muestra = censo_poblacion_muestra(year = 2010, estado = "CDMX", muestra = "Personas")
censo_poblacion_rural Censo de Poblacion - Localidades rurales
Description
Censo de Poblacion y Vivienda. Resultados sobre localidades con menos de 5 mil habitantes
Usage
censo_poblacion_rural(year = NA, estado = "Nacional")
Arguments
year Año del levantamiento del censo en formato numerico. El unico año disponible
en INEGI (incluyendo los conteos) es 2010.
estado Define el nombre de la entidad federativa para descargar los datos en formato al-
fanumerico. La funcion, por defecto utiliza la palabra "Nacional" para descargar
los datos de todos los estados. Los nombres de los estados deben ir capitaliza-
dos (y en su caso, con espacios), por ejemplo: "Aguascalientes", "CDMX", "San
Luis Potosi".
Details
Esta base de datos tiene dos niveles de agregacion: entidades federativas y municipios.
Value
Data.frame
Examples
# Descargar los datos de CDMX de 2010.
## Not run: rural = censo_poblacion_rural(year = 2010, estado = "CDMX")
censo_poblacion_urbano
Censo de Poblacion - Entorno urbano
Description
Censo de Poblacion y Vivienda. Resultados sobre infraestructura y caracteristicas del entorno ur-
bano.
Usage
censo_poblacion_urbano(year = NA, estado = NA)
Arguments
year Año del levantamiento del censo en formato numerico. Los años disponibles
(incluyendo los conteos) son: 2000, 2005, 2010 y 2015.
estado Define el nombre de la entidad federativa para descargar los datos en formato
alfanumerico. Los nombres de los estados deben ir capitalizados (y en su caso,
con espacios), por ejemplo: "Aguascalientes", "CDMX", "San Luis Potosi".
Details
Esta base de datos tiene dos niveles de agregacion: entidades federativas y municipios.
Value
Data.frame
Examples
# Consultar los datos del entorno urbano del Censo de Poblacion y Vivienda
## Not run: censo_poblacion_entorno()
# Descargar los datos de CDMX de 2010.
## Not run: urbano = censo_poblacion_entorno(year = 2010, estado = "CDMX")
enigh_nuevaconstruccion
ENIGH Nueva Construccion (2008-2014)
Description
Descarga datos de la Encuesta Nacional de Ingreso y Gasto de los Hogares, Nueva Construccion
(2008-2014).
Usage
enigh_nuevaconstruccion(year = NA, datos = NA)
Arguments
year Año de levantamiento de la encuesta en formato numerico. Los años disponibles
son 2008, 2010, 2012 y 2014
datos Base de datos a descargar "viviendas" "hogares" "concentrado" "erogaciones"
"gastohogar" "gastotarjetas" "poblacion" "ingresos" "gastopersona" "trabajos"
"agro" "noagro"
Details
La ENIGH provee informacion estadisticas sobre los ingresos y gastos de los hogares en cuanto
a su monto, procedencia y distribucion. Adicionalmente, la ENIGH provee informacion sobre las
caracteristicas socio-demograficas de los integrantes del hogar.
Value
Data.frame
Examples
# Descargar datos de hogares
## Not run: hogares14 = enigh_nuevaconstruccion(year = 2014, datos = "hogares")
enoe ENOE
Description
Encuesta Nacional de Ocupacion y Empleo (ENOE)
Usage
enoe(year = NA, trimestre = NA, integrar = FALSE)
Arguments
year Año de levantamiento de la encuesta en formato numerico.
trimestre Trimestre de levantamiento de la encuesta en formato alfanumerico. Las op-
ciones son: "trim1", "trim2", "trim3" y "trim4".
integrar FALSE: descarga por separado y en una lista las cinco bases de datos que com-
ponen la ENOE. TRUE: integra las cinco bases de datos en una sola, utilizando
el identificador unico del entrevistado.
Details
La ENOE es un proyecto estadistico de encuestas en hogares especializado en informacion sobre
el mercado laboral. La ENOE provee informacion trimestral sobre la fuerza laboral, la ocupacion,
subocupacion y desocupacion de los miembros del hogar encuestado.
Value
Data.frame
Examples
# Descargar las bases de datos de la ENOE 2009, Trimestre 1, sin integrar.
## Not run: enoe(year = 2009, trimestre = "trim1")
# Descargar las bases de datos de la ENOE 2009, Trimestre 1, integradas
## Not run: enoe(year = 2009, trimestre = "trim1", integrar = TRUE)
sig_caminos_descarga Red Nacional de Caminos - Descarga datos
Description
Descarga un una lista con todos los mapas de la Red Nacional de Caminos para un año especifico. El
objeto resultante de esta funcion es necesario para extraer, por separado, cada mapa con la funcion
sig_caminos_extrae().
Usage
sig_caminos_descarga(year = NA)
Arguments
year Año de referencia del mapa, en formato numerico (2016-2019).
Details
La Red Nacional de Caminos (RNC) provee informacion georreferenciada sobre las vias de comu-
nicacion inter-urbana e intra-urbana. Adicionalmente, contiene informacion sobre la infraestruc-
tura publica urbana (p. ej. tuneles, puentes, plazas de cobro, marcas de kilometraje, etc.), y la
infraestructura de otros medios de transporte (p. ej. transbordadores, aeropuertos, puertos y esta-
ciones de ferrocarril).
Value
Data.frame
Examples
# Descargar mapas de la RNC
## Not run: mapas.rnc = sig_caminos_descarga(year = 2019)
sig_caminos_extrae Red Nacional de Caminos - Extrae mapas
Description
Extrae cada uno de los mapas que componen de la Red Nacional de Caminos, previamente descar-
gados con la funcion sig_caminos_descarga().
Usage
sig_caminos_extrae(. = NA, mapa = NA)
Arguments
. Inserta el nombre del objecto previamente creado con la funcion sig_caminos_descarga().
mapa Mapa en formato alfanumerico. Las opciones son: estructura, localidad, manio-
bra_prohibida, plaza_cobro, poste_de_referencia, puente, red_vial, sitio_de_interes,
tarifas, transbordador, tred_localidad, tred_sitio_de_interes, runion.
Details
La Red Nacional de Caminos (RNC) provee informacion georreferenciada sobre las vias de comu-
nicacion inter-urbana e intra-urbana. Adicionalmente, contiene informacion sobre la infraestruc-
tura publica urbana (p. ej. tuneles, puentes, plazas de cobro, marcas de kilometraje, etc.), y la
infraestructura de otros medios de transporte (p. ej. transbordadores, aeropuertos, puertos y esta-
ciones de ferrocarril).
Value
Data.frame
Examples
# Descargar mapas de la RNC
## Not run: mapas.rnc = sig_caminos_descarga(year = 2019)
# Extraer el mapa de las plazas de cobro
## Not run: mapa.pzacobro = sig_caminos_extrae(mapas.rnc, mapa = "puente")
sig_marcogeo Marco Geoestadistico Nacional
Description
Extrae los mapas del Marco Geoestadistico Nacional.
Usage
sig_marcogeo(year = NA, mapa = NA, version = NA)
Arguments
year Año de referencia del mapa, en formato numerico. Años disponibles: 1995,
2000, 2005, 2007, 2009, 2010 y 2013.
mapa Mapa en formato alfanumerico. Las opciones son: entidades, municipios, ageb,
urbano, y rural.
version Especificar, en formato alfanumerico, la version para los años 2010 (4.3, 5.0,
5.0.A), 2017 (2010.0 o dejar en blanco) y 2018 (2010.0 o dejar en blanco). Para
el resto de los años, dejar en blanco.
Details
El Marco Geoestadistico Nacional (MGN) es un proyecto geoestadistico que presenta informacion
sobre la division política del territorio mexicano en sus diferentes niveles de gobierno (nacional,
estatal y municipal), asi como otras formas de clasificacion del territorio nacional.
Value
Data.frame
Examples
# Consultar los metadatos del Marco Geoestadistico Nacional
## Not run: sig_marcogeo()
# Descargar el mapa de munucipios para 2009
## Not run: mapa09 = sig_marcogeo(year = 2009, mapa = "municipios") |
lima1 | readthedoc | Python | Lima documentation
[Lima](#)
---
LImA : Library for Image Acquisition[¶](#lima-library-for-image-acquisition)
===
LImA (stands for **L** ibrary for **Im** age **A** cquisition) is a project for the unified control of 2D detectors. It is used in production in [ESRF Beamlines](https://www.esrf.eu/about/synchrotron-science/beamline) and in other places.
The architecture of the library aims at clearly separating hardware specific code from common software configuration and features, like setting standard acquisition parameters (exposure time, external trigger), file saving and image processing.
LImA is a C++ library but the library also comes with a [Python](http://python.org) binding. A [PyTango](http://github.com/tango-cs/pytango) device server for remote control is provided as well.
We provide Conda binary package for Windows and Linux for some cameras. Check out our [Conda channel](https://anaconda.org/esrf-bcu).
LImA is a very active project and many developments are ongoing and available from [GitHub](https://github.com/esrf-bliss/LImA). You can find stable version releases through git branches and tags on [Github releases](https://github.com/esrf-bliss/LImA/releases).
If you want to get in touch with the LIMA community, please send an email to [<EMAIL>](mailto:lima%40esrf.fr). You may also want to subscribe to our mailing list by sending a message to [<EMAIL>](mailto:sympa%40esrf.fr?subject=subscribe%20lima) with `subscribe lima` as subject.
For the latest changes, refers to the [`Release Notes`](_downloads/6d2be5d69baba64ffcc19c2c6eabc5d5/ReleaseNotes.txt).
Note that this documentation is also available in [pdf](http://readthedocs.org/projects/lima-doc/downloads/pdf/latest/) and [epub](http://readthedocs.org/projects/lima-doc/downloads/epub/latest/) format.
Requirements[¶](#requirements)
---
Some tools and libraries are required to build LImA for either Windows and Linux.
Note
All the dependencies, build or runtime, are available as [Conda](https://conda.io) packages for both Windows and Linux platforms.
### Build dependencies[¶](#build-dependencies)
* A C++ compiler (usually GCC for Linux and Visual Studio for Windows)
+ Visual Studio 2008 for x86 or x64 for python2.7.x
+ Visual Studio 2008 Express for x86 only for python2.7.x
+ Visual Studio 2015 or 2017 for x86 and x64 for python >= 3.5
* [CMake](https://cmake.org) >= 3.1
### Python dependencies[¶](#python-dependencies)
[LImA](https://lima1.readthedocs.io) is compatible with python 2 and 3.
* [numpy](http://pypi.python.org/pypi/numpy) >= 1.1
* [sip](https://www.riverbankcomputing.com/software/sip) >= 4.19
### Optional dependencies[¶](#optional-dependencies)
#### Saving format dependencies[¶](#saving-format-dependencies)
* [TIFF](http://www.libtiff.org/), Tag Image File Format (TIFF), a widely used format for storing image data ;
* [zlib](https://zlib.net/), a lossless data-compression library. For Windows, you can download the ESRF binary package [zlib-windows](http://ftp.esrf.fr/pub/bliss/lima/zlib-windows.zip) and install it under `C:\Program Files` ;
* [CBF](http://www.bernstein-plus-sons.com/software/CBF), a library for accessing Crystallographic Binary Files (CBF files) and Image-supporting CIF (imgCIF) files ;
* [HDF5](https://support.hdfgroup.org/HDF5), a data model, library, and file format for storing and managing data ;
* [CCfits](https://heasarc.gsfc.nasa.gov/fitsio/ccfits), [CFITSIO](https://heasarc.gsfc.nasa.gov/fitsio/fitsio.html), a library for reading and writing data files in FITS (Flexible Image Transport System) data format ;
* [LZ4](https://lz4.github.io/lz4) >= 1.9.1, a lossless compression algorithm ;
* [libconfig](http://www.hyperrealm.com/libconfig), a library for processing structured configuration files. For Windows, you can download the ESRF binary package [libconfig-windows](http://ftp.esrf.fr/pub/bliss/lima/libconfig-windows.zip) and install it under `C:\Program Files`.
#### PyTango server dependencies[¶](#pytango-server-dependencies)
* [PyTango](http://github.com/tango-cs/pytango), the Tango python binding
* [libtango](http://www.tango-controls.org/downloads/), the Tango toolkit
Build and Install[¶](#build-and-install)
---
### Install binary packages[¶](#install-binary-packages)
We provide [Conda](https://conda.io) binary packages for some cameras. This is, by far, the easiest way to get started with LImA! For instance:
Install first lastest miniconda3 (<https://docs.conda.io/en/latest/miniconda.html>)
Install mamba package in your “base” environment to speed up your future installations, the default conda installer is very slow, so we prefer to use mamba:
::conda install mamba
Install now the Lima camera package (e.g basler) at the same time you create the new environment for your Lima installation:
::mamba create -n basler -c conda-forge -c esrf-bcu lima-camera-basler
would install a fully loaded Lima and all its dependencies with the Basler camera plugin and SDK. The camera comes as a python module but is also C++ development package that includes header files and [CMake package config](https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html) files.
If you need to run the Python Tango device server you should install the Tango camera package:
```
mamba create -n basler -c conda-forge -c esrf-bcu lima-camera-basler-tango
```
Note
The runtime libraries of the camera’s SDK are provided as well but some cameras requires drivers or specific setups than needs to be installed manually.
### Build from source[¶](#build-from-source)
First, you need to [Get the Source](index.html#get-source). Two methods are provided to build LImA from source:
> * using our install script that aims to hide the complexity of [CMake](https://cmake.org);
> * using [CMake](https://cmake.org) directly for developers who are already acquainted with the tool and need the extra flexibility.
#### Using scripts[¶](#using-scripts)
The `install` scripts will run [CMake](https://cmake.org) to compile and/or install.
It accepts input arguments (see below) but it also uses a configuration file `scripts/config.txt`. Feel free to update this file for setting a permanent configuration for your own installation.
For Linux:
```
[sudo] install.sh
[--git]
[--install-prefix=<desired installation path>]
[--install-python-prefix=<desired python installation path>]
[options]
```
For Windows:
```
install.bat
[--install-prefix=<desired installation path>]
[--install-python-prefix=<desired python installation path>]
[options]
```
The `--git` (Linux only) option can be used to clone the required submodules as a prerequisite. Otherwise you should install the submodules manually with git commands, for instance:
```
$ git submodule init third-party/Processlib
$ git submodule init camera/basler
$ git submodule init applications/tango/python
$ git submodule update
```
Options are `<camera-name> <saving-format> python pytango-server`:
`<camera-name>` can be a combination of any of the following options:
```
andor|andor3|basler|prosilica|adsc|mythen3|ueye|xh|xspress3|ultra|
xpad|mythen|pco|marccd|pointgrey|imxpad|dexela|merlin|v4l2|
eiger|pixirad|hexitec|aviex|roperscientific|rayonixhs|espia|maxipix|frelon
```
`<saving-format>` can be a combination of any of the following options:
```
cbf|nxs|fits|edfgz|edflz4|tiff|hdf5
```
`python` will install the python module
`pytango-server` will install the [PyTango](http://github.com/tango-cs/pytango) server
For example, to install the Basler camera, use the TIFF output format, the python binding and the TANGO server, one would run:
```
$ sudo install.sh --git --install-prefix=./install --install-python-prefix=./install/python tiff basler python pytango-server
```
#### Using CMake[¶](#using-cmake)
Install first the project submodules:
```
git submodule init third-party/Processlib git submodule init camera/basler git submodule init applications/tango/python git submodule update
```
Run `cmake` in the build directory:
```
mkdir build cd build cmake ..
[-G "Visual Studio 15 2017 Win64" | -G "Visual Studio 15 2017" | -G "Unix Makefiles"]
[-DCMAKE_INSTALL_PREFIX=<desired installation path>]
[-DPYTHON_SITE_PACKAGES_DIR=<desired python installation path>]
-DLIMA_ENABLE_TIFF=true
-DLIMACAMERA_BASLER=true
-DLIMA_ENABLE_PYTANGO_SERVER=true
-DLIMA_ENABLE_PYTHON=true
```
Then compile and install:
```
cmake --build sudo cmake --build --target install
```
### Environment Setup[¶](#environment-setup)
Warning
If you are using [Conda](https://conda.io), we advice against setting any environment variables that might affect the Conda environment (e.g. `PATH`, `PYTHONPATH`)as this one of the most common source of troubles.
If the install path for libraries and python modules are not the default, you need to update your environment variables as follow:
For Linux:
```
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<my-custom-install-dir>/Lima/lib export PYTHONPATH=$PYTHONPATH:<my-custom-install-dir>
```
For Windows:
```
set PATH=%PATH%;<my-custom-install-dir>\Lima\lib set PYTHONPATH=%PYTHONPATH%;<my-custom-install-dir>
```
or update the system wide variables `PATH` for the libraries and `PYTHONPATH` for python.
PyTango Device Server[¶](#pytango-device-server)
---
### Server setup[¶](#server-setup)
As [PyTango](http://github.com/tango-cs/pytango) ([Tango](http://tango-controls.org) for python) server is provided as Python script, you just have to copy the `applications/tango/python` directory wherever you want.
* `camera` directory: contained all camera Tango device specifics so remove all none need script
* `doc` directory: contained plugins camera documentation (exhaustive list of properties, commands and attributes)
* `plugins` directory: contained all plugins device server like:
+ Roi counters
+ Mask…
* `scripts` directory: contained a script use at ESRF to start Lima device server (can also be removed)
* `LimaCCDs.py` file: python script to start Lima device server
* `LimaViewer.py` file: python script to start LimaViewer device server to get image from Lima device server
:: warning: Make sure your environment is properly set for python and library paths, see [Build and Install](index.html#build-installation) for more information.
### Example of plugin server setup : Basler detector[¶](#example-of-plugin-server-setup-basler-detector)
This procedure described the way to implement basler camera plugin. It is the same for whole the plugins, only properties may change.
You need to create a device server for Lima and another for the camera plugin. Lima device will use basler device thanks to “LimaCameraType” property. This property corresponds to the name of the camera plugin.
#### Lima device server[¶](#lima-device-server)
1. Run Jive and select “Tools->Server Wizard” menu. You must enter server and instance names
> Click Next…
2. Start the Lima device server. Open a terminal and execute the command “server_name instance_name”
> Click Next on the “Tango Device Installation Wizard” window
3. Declare a Lima device
> The Lima device server, contained several classes. For Basler camera you need to configure LimaCCDs and Basler classes.
> > > > Select “LimaCCDs” class and click “Declare device” button. You must enter the device name with a string as “Domain/Family/member”.
> > > > Click Next and configure all the properties. You can let the default property values except for “LimaCameraType”. This property must contain the name of the Camera Plugin “Basler”.
> > > > At the end of the configuration, click “New Class” button.
> > > > Select “Basler” class and click “Declare device” button. You must enter the device name with a string as “Domain/Family/member”.
> > > > Click Next and configure all the properties. You can let the default property values except for “cam_ip_adress”. This property must contain the IP adress of the Basler camera.
> > > > Configuration is now ended, click “Finish”
> > > #### Lima Viewer[¶](#lima-viewer)
To test the Lima device server, you can use the LimaViewer. This is a device server which periodically get the last image from the buffer. It allows the user to check that Lima device server is operational. The procedure below describe how to install and configure the LimaViewer device server.
1. Run Jive and select “Tools->Server Wizard” menu. You must enter server and instance names
> Click Next…
2. Start the LimaViewer device server. Open a terminal and execute the command “server_name instance_name”
> Click Next on the “Tango Device Installation Wizard” window
3. Declare a LimaViewer device
> Select “LimaViewer” class and click “Declare device” button.
> > > > Enter the device name with a string as “Domain/Family/Member”.
> > > > Click Next and configure the “Dev_Ccd_name” property. This property corresponds to the name of the Lima device created before.
> > > > Configuration is now finished, click on “Finish”
> > > #### Test LimaCCDs device server with Jive[¶](#test-limaccds-device-server-with-jive)
The LimaViewer device appears in the Device tab from Jive. Make a right click on the LimaViewer device server and select “Monitor Device”
AtkPanel is now launched. You can configure exposure time and the number of frames to acquire.
The camera image can be viewed by selecting the “image_ccd” tab
Overview[¶](#overview)
---
This section provides a big picture of LImA.
Fig. 1 LImA Architecture[¶](#id1)
Fig. 2 LImA Dataflow, Statuses and Events[¶](#id2)
Concepts[¶](#concepts)
---
Tutorial[¶](#tutorial)
---
In this tutorial, we are going to write a program that prepares the camera and run a simple acquisition. We will be using the simulator, but every cameras should work in the same way. The program is in C++, the python binding being similar or simpler.
First some headers needs to be included :
> * The `simulator/camera.h` that defines the `Camera` class for this specific cameras
> * The `lima/ctcontrol.h` that defines the `CtControl` class which is the main user interface of LImA
If the library and plugin were not installed in the default locations, make sure to adjust the include search paths of your compiler.
```
#include <simulator/camera.h>
#include <lima/ctcontrol.h>
```
Then, the camera object is instantiated and the corresponding interface is constructed:
```
// A camera instance simulator::Camera simu(/* some cameras have specific settings here, e.g. IP address */);
// A hardware interface simulator::Interface hw(simu);
```
At this point, the code specific to the camera code is over and we can instantiate the [`lima::CtControl`](index.html#_CPPv4N4lima9CtControlE) object:
```
// The main control object CtControl ct = lima::CtControl(&hw);
```
[`lima::CtControl`](index.html#_CPPv4N4lima9CtControlE) is a class that aggregates many aspects of the configuration and the control of the cameras. Here is a non exhaustive lists of controls:
| Control | Description |
| --- | --- |
| Acquisition | Controls exposure time, number of frames, trigger mode, etc… |
| Image | Controls cropping (ROI), binning, rotation and other processing applied either on hardware or by software… |
| Saving | Controls the file format, compression, metadata… |
| Shutter | Controls the shutter mode and open and closed times… |
| Buffer | Controls the number of buffer, the maximum memory to use… |
These specific controls are accessible form the main [`lima::CtControl`](index.html#_CPPv4N4lima9CtControlE) object.
```
// Get the acquisition, saving and image controls CtAcquisition *acq = ct.acquisition();
CtSaving *save = ct.saving();
CtImage *image = ct.image();
```
All these control objects have member functions to set their parameters, either directly or using a the `Parameter` object, such as `lima::CtSaving::Parameter` (nested class). Here is how we could set the saving properties of our acquisition:
```
save->setDirectory("./data");
save->setPrefix("test_");
save->setSuffix(".edf");
save->setNextNumber(100);
save->setFormat(CtSaving::EDF);
save->setSavingMode(CtSaving::AutoFrame);
save->setFramesPerFile(100);
```
In the same way, image processing can configured to use a 2 x 2 binning:
```
image->setBin(Bin(2, 2));
```
Or acquisition parameters to get 10 frames with a 0.1 sec exposure:
```
acq->setAcqMode(Single);
acq->setAcqExpoTime(0.1);
acq->setAcqNbFrames(10);
```
Once we are happy with our settings, it’s time to prepare the acquisition which perform multiple tasks such as buffer allocation, folder creation or applying the camera settings through the camera plugin and SDK.
```
// Prepare acquisition (transfer properties to the camera)
ct.prepareAcq();
```
If the preparation is successful, the acquisition can be started anytime with:
```
// Start acquisition ct.startAcq();
```
That’s all for now, have good fun with LImA!
Supported Cameras[¶](#supported-cameras)
---
### Conda packages[¶](#conda-packages)
The following Conda packages are available from the esrf-bcu channel. Some cameras may required to manually install the drivers for the given SDK version.
### Windows Only[¶](#windows-only)
#### Hamamatsu[¶](#hamamatsu)
##### Introduction[¶](#introduction)
The Hamamatsu Orca flash is digital CMOS camera.
It supports USB3 or direct camera link connectivity.
> * USB 3.0 -> 30fps
> * Cameralink -> 100fps
The Lima plugin controls an Orca camera (**ORCA-Flash4.0 V2, C11440-22CU V2**) under Windows. It is based on the Hamamatsu DCAM-API SDK.
##### Prerequisite[¶](#prerequisite)
Host OS is Windows (32 or 64 bits). The driver must be installed on the host system.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_HAMAMATSU=true
```
For the Tango server installation, refers to [PyTango Device Server](index.html#tango-installation).
##### Initialization and Capabilities[¶](#initialization-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialization[¶](#camera-initialization)
There is nothing specific.
The available cameras must first be enumerated. A selected camera can then be initialised.
(Note that at the moment only one camera will be handled by the plugin.)
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations according to some programmer’s choices.
We only provide here extra information for a better understanding of the capabilities of the Orca camera.
* HwDetInfo
> * Max image size is : 2048 * 2048
> * 16 bit unsigned type is supported
> * Pixel size: 6.5µm * 6.5µm
> * Detector type: Scientific CMOS sensor FL-400
* HwSync
> Supported trigger types are:
> * IntTrig
> * ExtTrigSingle
> * ExtGate (not yet implemented)
###### Optional capabilities[¶](#optional-capabilities)
* HwBin
> Possible binning values are:
> * 1 * 1
> * 2 * 2
> * 4 * 4
* HwRoi
> The Subarray mode allows defining a rectangle for ROI:
> * X: 0 to 2044
> * Width: 4 to 2048
> * Y: 0 to 2044
> * Height: 4 to 2048
* HwShutter
> * There is no shutter control available in the DCAM-API SDK.
* Cooling
> * There is no cooler sensor access or control to the cooling system via the DCAM-API SDK.
> * Cooling management is autonomous and can only be chosen between air or water cooling outside the sdk.
* Readout mode
> * Two readout modes are available: SLOW (30fps at full frame) or NORMAL (100fps at full frame).
##### Configuration[¶](#configuration)
##### How to use[¶](#how-to-use)
The following set of functions is used as a wrapper to the DCAM-API SDK.
Code can be found in the HamamatsuDCAMSDKHelper.cpp file.
```
dcam_init_open(); // initialize DCAM-API and get a camera handle.
dcamex_setsubarrayrect(); // Initialize the subarray mode (defines a ROI -rectangle-)
dcamex_getsubarrayrect(); // Get the current subarray parameters (get ROI settings)
dcamex_getimagewidth(); // Get the width of the image
dcamex_getimageheight(); // Get the height of the image
dcamex_getfeatureinq(); // Get the settings of a feature (ex: exposure time)
dcamex_getbitsperchannel(); // Get the number of bits per channel
```
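From the user’s side, a session would follow the same pattern as the other plugins. The following Python sketch is purely hypothetical: the `Hamamatsu.Camera` and `Hamamatsu.Interface` names are assumptions modelled on the other plugins, not confirmed API.

```
from Lima import Hamamatsu  # assumed module name, not confirmed
from lima import Core

cam = Hamamatsu.Camera()            # assumed: opens the first enumerated camera
hwint = Hamamatsu.Interface(cam)
ct = Core.CtControl(hwint)

acq = ct.acquisition()
acq.setAcqExpoTime(0.01)            # 10 ms exposure
acq.setAcqNbFrames(10)
ct.prepareAcq()
ct.startAcq()
```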
#### PCO camera[¶](#pco-camera)
##### Introduction[¶](#introduction)
* **PCO camera systems**
> * PCO develops specialized fast and sensitive video camera systems, mainly for scientific applications,
> covering digital camera systems with high dynamic range, high resolution, high speed and low noise.
> [PCO home page](http://www.pco.de/)
* **Product overview and technical data of the PCO cameras supported in LIMA**
> * **PCO.dimax:**
> High speed 12 bit CMOS camera with fast image rates of 1469 frames per second (fps) at full resolution of 1920 x 1080 pixel.
> [(tech data pcodimax)](http://www.pco.de/categories/high-speed-cameras/pcodimax-hd/)
> * **PCO.edge:**
> Extremely low noise sCMOS camera with fast frame rates (100 fps), wide dynamic range (1:27000), high quantum efficiency,
> high resolution (2560 x 2160) and large field of view.
> [(tech data pcoedge)](http://www.pco.de/categories/scmos-cameras/pcoedge-42/)
> * **PCO.2000:**
> High resolution (2048 x 2048 pixel) and low noise 14bit CCD cooled camera system with internal image memory (camRAM),
> allows fast image recording with 160 MB/s. The available exposure times range from 500 ns to 49 days.
> [(tech data pco2000)](http://www.pco.de/categories/sensitive-cameras/pco2000/)
> * **PCO.4000:**
> High resolution (4008 x 2672 pixel) and low noise 14bit CCD cooled camera system with internal image memory (camRAM),
> allows fast image recording with 128 MB/s. The available exposure times range from 5 us to 49 days.
> [(tech data)](http://www.pco.de/categories/sensitive-cameras/pco4000/)
* **Interface buses**
> * **Cameralink:** used by **PCO.dimax** and **PCO.edge**
> * **Cameralink HS:** used by **PCO.edge**
> * **USB3.0:** used by **PCO.edge**
> * **GigE:** used by **PCO.2000** and **PCO.4000**
* **Type of applications**
> * Mainly used in scientific applications.
* **OS supported**
> * **Win7 Professional** (english) 64 bits SP1.
##### Prerequisites[¶](#prerequisites)
* **Required software packages**
> * **download links**
> > * [PCO and Silicon Software download (login/pw required)](ftp://pcoag.biz/)
> > * [VC++ download](http://www.microsoft.com/visualstudio/en-us/products/2008-editions/express)
> > * [GSL download](http://sourceforge.net/projects/gnuwin32/files/gsl/1.8/gsl-1.8.exe/download)
> > * [python download](http://www.python.org/download/releases/2.6.6/)
> > * [numpy download](http://sourceforge.net/projects/numpy/files/NumPy/1.5.1/)
> > * [PyQt download](http://www.riverbankcomputing.co.uk/software/pyqt/download)
> > * [PyTango download](http://www.tango-controls.org/download)
> > * [GIT download](http://code.google.com/p/msysgit/downloads/list)
> * **md5 checksum and size of packages used (maybe not updated)**
```
Silicon Software Runtime 5.4.4
f8317c5145bac803f142c51b7c54ba27 RuntimeSetup_with_Applets_v5.4.4_Win64.exe
```
```
pco-sdk 1.20
eb73ab0495a66c068c408344e20c8ad9 read_me.txt
69a8f5667b71a8cf206d782e20f526ab SW_PCOSDKWIN_120.exe
```
```
CAMWARE v403_1
a9f8b2e465b7702ff727ba349ef327e8 SW_CAMWAREWIN64_403_1.exe
```
```
VC++ compiler
Microsoft Visual Studio 2008
Version 9.0.30729.1 SP
Microsoft .NET Framework
Version 3.5 SP1
Installed Edition: Professional
Microsoft Visual C++ 2008 91605-270-4441125-60040
Microsoft Visual C++ 2008
```
```
Python
8d10ff41492919ae93a989aba4963d14 numpy-MKL-1.8.1.win-amd64-py2.7.exe
5a38820953d38db6210a90e58f85548d PyTango-8.0.4.win-amd64-py2.7.exe
b73f8753c76924bc7b75afaa6d304645 python-2.7.6.amd64.msi
```
```
pco edge CLHS / for firmware upgrade to 1.19
9790828ce5265bab8b89585d8b8e83a9 pco.programmer_edgeHS.exe
b9266e03a04ac9a9ff835311f0e27d94 pco_clhs_info.exe
7e2f767684fb4ffaf5a5fac1af0c7679 sc2_clhs.dll
2ed778785489846fd141f968dca3735b README.txt
6bdb7a27b0d7738762c878a33983dada /FW_pco.edge_CLHS_020_V01_19.ehs
```
```
UTILS
38ba677d295b4b6c17368bb86b661103 FileZilla_3.22.1_win64-setup_bundled.exe
0377ccd0a3283617d161f24d080fb105 Git-1.9.0-preview20140217.exe
3cbd2488210b6e7b3e7fa1baf05022d4 MobaXterm_Setup_7.1.msi
```
* **Environment variables**
> * **system variables**
```
===> add manually the python path (it is not set by the installation program)
PATH -> C:\Python26;
===> used for some utility batch files
PATH -> C:\blissadm\bat;
```
> * **user variables**
```
TANGO_HOST -> <host>:20000
```
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_PCO=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Post installation actions[¶](#post-installation-actions)
* **enable/disable PCO logs**
```
===> rename file extensions (C:\ProgramData\pco):
.txt (disabled) / .log (enabled) ---+
camware.log <--- created by hand
PCO_CDlg.log
PCO_Conv.log
SC2_Cam.log
```
* **Command prompt console (Visual Studio)**
```
> All Programs
> Microsoft Visual C++ 2008 Express Edition
> Visual Studio Tools
> Visual Studio 2008 Command Prompt
```
* **TODO**
* After installing PCO modules [Installation](index.html#installation)
* And probably Tango server [PyTango Device Server](index.html#tango-installation)
##### Configuration[¶](#configuration)
* **TODO**
##### PCO EDGE notes[¶](#pco-edge-notes)
###### PC characteristics (used for PCO EDGE at ESRF)[¶](#pc-characteristics-used-for-pco-edge-at-esrf)
* **PROCESSOR**
```
2x Intel Xeon E5645 Six-Core CPU, 2,40GHz, 80W, Socket LGA1366, 12MB 5,86GT/sec
CPU's: 2x Xeon SixCore E5645 2,40Ghz 12MB 5,86GT/sec Intel Xeon E5645 Six-Core CPU, 2,40GHz, 80W, Socket LGA1366, 12MB external cache. 5,86GT/sec QPI speed. 1333MHz memory speed (DDR3 only).
Intel Technologies: Intel Turbo Boost , Intel Hyper-Threading Technology, Intel Virtualization (VT-x), Intel Trusted Execution,
Enhanced Intel SpeedStep, Intel Demand Based Switching, Execute Disable Bit.
```
* **RAM**
```
24 GB (6x DDR3-1333 Reg. ECC 4 GB module)
```
* **HD**
```
C:
WDC WD5003ABYX-01WERA1
Western Digital 500 GB, 7200 RPM, SATA 2, 300 Mbps
D:
Adaptec RAID 5405/5405Q with 2 HD of 450 Gb -> RAID0 837 GB
HUS156045VLS600
Hitachi 450GB, 15,000RPM SAS / Serial Attached SCSI, 6Gbps
```
* **graphic card**
```
Matrox G200eW
```
* **motherboard**
```
Motherboard Extended ATX format 13,68in x 13in, (34,7cm x 33cm) (W x H);
2 socket LGA 1366-pin. It supports processors Quad-Core Intel Xeon series 5500; QPI bus system (up to 6.4GT/s); *chipset Intel 5520*;
18 socket DIMM 240 pin, support for up to 288GB memory DDR3 1333/1066/800MHz Registered or 48GB memory DDR3 unbuffered ECC, the real operating ram speed depends on the processor's model and number of installed ram, best performances are achieved through a triple channel configuration;
```
* **PCI slots**
```
1x PCIe x4 (in x8 slot)
3x PCIe x8
1x PCIe x8 (in x16 slot)
2x PCIe x16
```
###### PCO EDGE - install instructions for Silicon Software Me4 Board[¶](#pco-edge-install-instructions-for-silicon-software-me4-board)
Check the document **camera/pco/doc/Me4_Installation_Test_e1.pdf** with the requirements and procedure to install the CameraLink grabber card. It is important in order to get the maximum transfer speed required by the PCO EDGE camera.
The boards tested by PCO are:
```
Supermicro X8ST3
GigaByte GA-X58A-UD3R
Intel S5520
Intel DX58SO2
Supermicro X8DTH-iF
```
With the PC described in [PCO EDGE notes](index.html#pco-esrf-pc), the speed of the CameraLink is about **570 MB/s** (66% of the theoretical max of 860 MB/s).
###### PCO EDGE - shutter mode (global/rolling)[¶](#pco-edge-shutter-mode-global-rolling)
```
cam.talk("rollingShutter 0") <--- set shutter mode to GLOBAL
cam.talk("rollingShutter 1") <--- set shutter mode to ROLLING
```
After the change of the shutter mode, the camera reboots and requires about 10 s to become ready; meanwhile the acquisition status is AcqConfig.
The validRanges (exposure and latency time) are updated after the mode change.
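A minimal Python sketch of switching the mode and waiting for the camera to come back, assuming `cam` and `ct` objects already exist as in the other plugin examples, and that the Python binding exposes the `talk()` call shown above:

```
import time
from lima import Core

# assuming cam is the PCO camera object and ct the Core.CtControl
cam.talk("rollingShutter 0")      # switch to GLOBAL shutter; the camera reboots
while ct.getStatus().AcquisitionStatus == Core.AcqConfig:
    time.sleep(1)                 # takes about 10 s
# at this point the validRanges (exposure/latency) are up to date again
```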
#### Perkin Elmer camera[¶](#perkin-elmer-camera)
##### Introduction[¶](#intoduction)
> “PerkinElmer is a world leader in the design, development, and manufacture of Amorphous Silicon (aSi) Flat Panel Detectors (FPD) designed to perform across a wide range of medical, veterinary, and industrial Non-Destructive Testing (NDT) applications. Our XRD family of detectors provides superior image resolution, high frame rates up to 30 frames per second (fps), energy levels from 20 keV to 15 MeV and easy information storage and retrieval.”
The detector model we tested (ESRF) is : XRD 1621 CN ES
##### Prerequisite Windows 7[¶](#prerequisite-windows-7)
First, you have to install the Perkinelmer Windows7 SDK to the default path.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_PERKINELMER=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized by creating the `PerkinElmer::Interface` object. The constructor will take care of your detector configuration according to the SDK installation setup done before.
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations which are due to the camera and SDK features. We provide here further information for a better understanding of the detector specific capabilities.
* HwDetInfo
getCurrImageType/getDefImageType(): Bpp16 only.
setCurrImageType(): this method does not change the image type, which is fixed to Bpp16.
* HwSync
get/setTrigMode(): the supported modes are IntTrig, ExtStartStop, ExtTrigReadout
###### Optional capabilities[¶](#optional-capabilities)
In addition to the standard capabilities, we made the choice to implement some optional capabilities supported by the SDK. A hardware binning is available.
* HwBin
Some camera models support binnings 1x1, 2x2, 4x2 and 4x4, while others support only 2x2.
The camera type is provided when initialising the SDK (_InitDetector()), and only cameras of type 15 support the full range of binnings.
##### Configuration[¶](#configuration)
> * Nothing special to do, but read the manual for proper installation.
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
import time
from Lima import PerkinElmer
from lima import Core

hwint = PerkinElmer.Interface()
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set offset and gain calibration, one image 1.0 second exposure
hwint.startAcqOffsetImage(1, 1.0)
hwint.startAcqGainImage(1, 1.0)

# set further hardware configuration
print(hwint.getGain())
hwint.setCorrectionMode(hwint.OffsetAndGain)  # or No or OffsetOnly
hwint.setKeepFirstImage(False)

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# set accumulation mode
acq_pars = acq.getPars()
# 0-normal, 1-concatenation, 2-accumulation
acq_pars.acqMode = 2
acq_pars.accMaxExpoTime = 0.05
acq_pars.acqExpoTime = 1
acq_pars.acqNbFrames = 1
acq.setPars(acq_pars)
# here we should have 21 accumulated images per frame
print(acq.getAccNbFrames())

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### PhotonicScience[¶](#photonicscience)
##### Introduction[¶](#introduction)
> “Photonic Science is a high technology independent manufacturer of scientific detector systems covering the range of visible to x-ray and neutron detection. The camera technology offered is wide ranging, from CCD, EMCCD, CMOS to image intensified systems.”
The CCD camera 4022 has been tested at ESRF on beamline ID11.
##### Prerequisite[¶](#prerequisite)
TODO
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_PHOTONICSCIENCE=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
TODO
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations which are due to the camera and SDK features. We only provide here extra information for a better understanding of the capabilities of this camera.
* HwDetInfo
> TODO
* HwSync
> TODO
###### Optional capabilities[¶](#optional-capabilities)
In addition to the standard capabilities, we made the choice to implement some optional capabilities supported by the SDK. A shutter control, a hardware ROI and a hardware binning are available.
* HwShutter
> TODO
* HwRoi
> TODO
* HwBin
> TODO
##### Configuration[¶](#configuration)
> TODO
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
import time
from Lima import PhotonicScience
from lima import Core

# camera library path
cam = PhotonicScience.Camera('ImageStar4022_v2.5\\imagestar4022control.dll')
hwint = PhotonicScience.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# configure some hw parameters
# set some low level configuration

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Princeton[¶](#princeton)
##### Introduction[¶](#introduction)
Princeton cameras are often used for spectroscopy.
This Lima module has been tested with:
> * Pixis 100 and 400.
This module was tested with **PiCam 5.x**. You can get the latest installer from <ftp://ftp.piacton.com/Public/Software/Official/PICam/PICam%20Install.exe>. Documentation can be found at <https://www.princetoninstruments.com/products/software-family/pi-cam>.
##### Installation & Module configuration[¶](#installation-module-configuration)
First, you need to install the SDK.
Then, follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_PRINCETON=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Interface initialisation[¶](#interface-initialisation)
The interface will be initialized within the `Interface` object.
The `Interface()` constructor takes an optional serial parameter.
If the serial parameter is left empty, the plugin will open the first camera found.
Small example showing possible ways to initialize:
```
from Lima import Princeton
from lima import Core
interface = Princeton.Interface()
```
###### Standard capabilities[¶](#standard-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations which are due to the camera and SDK features. Only restrictions on capabilities are documented here.
* HwDetInfo
Only support Bpp16 Images.
* HwSync
No restriction; the plugin should offer all trigger modes available for the camera.
###### Optional capabilites[¶](#optional-capabilites)
None
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
import time
from Lima import Princeton
from lima import Core

hwint = Princeton.Interface("")
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set and test an acquisition
#
# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.TIFF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
### Linux Only[¶](#linux-only)
#### ADSC camera[¶](#adsc-camera)
##### Introduction[¶](#introduction)
ADSC stands for Area Detector System Corporation.
Note
The Lima module has been tested only with the 315r model.
##### Prerequisite[¶](#prerequisite)
2 programs have to be running on the ADSC server:
> * `ccd_image_gather`
> * `det_api_workstation`
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
In order to help people understand how the camera plugin has been implemented in LImA, this section provides some important information about the developer’s choices.
###### Camera initialisation[¶](#camera-initialisation)
Here are the available functions:
* `SetHeaderParameters()`
* `UseStoredDarkImage()`
* `SetImageKind()`
* `SetLastImage()`
###### Std capabilites[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations according to some programmer’s choices. We only provide here extra information for a better understanding of the capabilities of the Adsc camera.
* HwDetInfo
> + Max image size is : 3072 * 3072
> + 16 bit unsigned type is supported
>
* HwSync
> + supported trigger types are: IntTrig
###### Optional capabilites[¶](#optional-capabilites)
* HwBin
> + 1 * 1
> + 2 * 2
##### Configuration[¶](#configuration)
No specific hardware configuration is needed.
##### How to use[¶](#how-to-use)
Here is the list of accessible functions to configure and use the ADSC detector:
```
void setHeaderParameters(const std::string& header);
void setStoredImageDark(bool value);
bool getStoredImageDark(void);
void setImageKind(int image_kind);
int getImageKind(void);
void setLastImage(int last_image);
int getLastImage(void);
void setFileName(const std::string& name);
const std::string& getFileName(void);
void setImagePath(const std::string& path);
const std::string& getImagePath(void);
```
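For illustration, a hypothetical Python session built on the accessors above might look as follows; the `Adsc.Camera`/`Adsc.Interface` names and the constructor signature are assumptions modelled on the other plugins, not confirmed API.

```
from Lima import Adsc  # assumed module name, not confirmed
from lima import Core

cam = Adsc.Camera()              # assumed constructor
hwint = Adsc.Interface(cam)
ct = Core.CtControl(hwint)

# calls modelled on the C++ accessors listed above
hwint.setImageKind(1)
hwint.setFileName('adsc_test')
hwint.setImagePath('/tmp')
```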
#### Andor SDK3[¶](#andor-sdk3)
##### Introduction[¶](#introduction)
Andor Technology, a camera manufacturer, offers a large catalogue of scientific cameras. Covered scientific applications are low light imaging, spectroscopy, microscopy, time-resolved and high energy detection.
Andor is providing a Software Development Tool (SDK) for both Windows and Linux, supporting different interface buses such as USB, CameraLink and also some specific acquisition PCI board. Unfortunately there was a significant API change between the v2 line of SDK and the brand new v3 of the SDK, and recent cameras are only supported by the v3 SDK, whilst this new SDK is not (yet ?) supporting previously built cameras.
The Lima module has been tested only with these camera models:
> * Neo (sCMOS 3-tap, full Camera Link, Linux OS)
> * Zyla (5.5 sCMOS, full Camera Link, Linux OS)
##### Installation & Module configuration[¶](#installation-module-configuration)
First, you have to install the Andor SDK in the default path (/usr/local).
For our test we used the SDK for Linux version **V3.3.30004.0** and ran the install script `install_andor`, for which option 2 (64b linux) was selected; the default installation is made under `/usr/local/` with:
> * `/usr/local/include`, header files
> * `/usr/local/lib`, library files
> * `/usr/local/andor/bitflow`, files for the frame-grabber driver (including camera firmware/frame grabber configuration)
The Linux SDK 3.3 ships shared libraries compiled against a recent Linux kernel; first check that you have the right kernel and libc available by compiling one of the example programs available under `examples/console`.
The Andor3 python module needs at least the lima core module.
Before using this Lima plugin with an Andor Neo camera, it is best to test the proper setting of the frame-grabber driver and system configuration by using the two test programs included in the SDK. Those are typically found in `/usr/local/andor/examples/` and are `listdevices` and `image`.
Then, follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_ANDOR3=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Configuration[¶](#configuration)
Connect the camera on both cameralink cables and power on.
##### How to use[¶](#how-to-use)
A simple python test program:
```
import time
from Lima import Andor3
from lima import Core

# camlink config path, camera index
cam = Andor3.Camera('/users/blissadm/local/Andor3/andor/bitflow', 0)
hwint = Andor3.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# configure some hw parameters
hwint.setTemperatureSP(-30)
hwint.setCooler(True)
# ... wait here for cooling

# set some low level configuration
hwint.setCooler(True)
hwint.setTemperatureSP(-55)
hwint.setFanSpeed(cam.Low)
hwint.setAdcGain(cam.b11_low_gain)
hwint.setAdcRate(cam.MHz100)
hwint.setElectronicShutterMode(cam.Rolling)
hwint.setOverlap(False)

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# set accumulation mode
acq_pars = acq.getPars()
# 0-normal, 1-concatenation, 2-accumulation
acq_pars.acqMode = 2
acq_pars.accMaxExpoTime = 0.05
acq_pars.acqExpoTime = 1
acq_pars.acqNbFrames = 1
acq.setPars(acq_pars)
# here we should have 21 accumulated images per frame
print(acq.getAccNbFrames())

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Aviex camera plugin[¶](#aviex-camera-plugin)
##### Introduction[¶](#intoduction)
The PCCD-170170 is a large area detector (4096 x 4096) designed for use in WAXS or SAXS experiments in a vacuum environment.
The detector supports full frame, multiframe time-sliced, and streak camera modes of operation.
Used at the SWING beamline of Synchrotron SOLEIL to make time-resolved SAXS measurements together with another WAXS detector.
This Lima plugin controls an Aviex camera under linux.
It is based on the [MX beamline control](http://mx.iit.edu) toolkit.
It has been tested at the Synchrotron SOLEIL facility, but has not been installed yet on a Beamline.
##### Module configuration[¶](#module-configuration)
First, compile the MX library/driver and install it in the default path (`/opt/mx/`).
Start the Mx driver with:
```
cd /opt/mx/sbin/
./mx start
```
Then, follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_AVIEX=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
There are 2 parameters to be filled in by your Lima client:
> * The detector friendly name: can be any string defined by user.
> * The detector database file name: this file must contain configuration parameters such as the IP address and port.
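A hypothetical initialisation sketch; the `Aviex.Camera` name and its argument order are assumptions based on the two parameters above, and both values are placeholders:

```
from Lima import Aviex  # assumed module name, not confirmed
from lima import Core

# friendly name (free string), then the MX database file (both hypothetical)
cam = Aviex.Camera('my_aviex', '/opt/mx/etc/aviex.dat')
hwint = Aviex.Interface(cam)
ct = Core.CtControl(hwint)
```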
###### Std capabilites[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations according to some programmer’s choices. We only provide here extra information for a better understanding of the capabilities of the Aviex camera.
* HwDetInfo
> * Max image size is : 4096 * 4096
> * 16 bit unsigned type is supported
* HwSync: supported trigger types are:
> + IntTrig
> + ExtTrigSingle
###### Optional capabilites[¶](#optional-capabilites)
* HwBin
> + 1 * 1
> + 2 * 2
> + 4 * 4
> + 8 * 8
> + The binnings above are typical values, but binning is not necessarily square.
>
* HwRoi
> Not yet implemented
##### Configuration[¶](#configuration)
No specific hardware configuration is needed.
##### How to use[¶](#how-to-use)
Here is the list of accessible functions to configure and use the Aviex detector:
```
//-- Related to Aviex specific features
void getExpMultiplier(double& exp_mult);
void setExpMultiplier(double exp_mult);
void getLatencyTime(double& period_time);
void setLatencyTime(double period_time);
void getGapMultiplier(double& gap_mult);
void setGapMultiplier(double gap_mult);
void getMxLibraryVersion(std::string& version);
void getInternalAcqMode(std::string& acq_mode);
//! Available mode : ONESHOT, MULTIFRAME, GEOMETRICAL, MEASURE_DARK, MEASURE_FLOOD_FIELD
void setInternalAcqMode(const std::string& mode);
void getReadoutDelayTime(double& readout_delay);
void setReadoutDelayTime(double readout_delay);
void getReadoutSpeed(bool& readout_speed);
void setReadoutSpeed(bool readout_speed);
void getInitialDelayTime(double& initial_delay);
void setInitialDelayTime(double initial_delay);
//! MASK_CORRECTION_BIT_POSITION = 0
//! BIAS_CORRECTION_BIT_POSITION = 1
//! DARK_CORRECTION_BIT_POSITION = 2
//! FLOOD_CORRECTION_BIT_POSITION = 3
//! GEOM_CORRECTION_BIT_POSITION = 12
void setCorrectionFlags(unsigned long);
```
#### Dexela camera plugin[¶](#dexela-camera-plugin)
##### Introduction[¶](#introduction)
The Dexela detector is a brand product of PerkinElmer. PerkinElmer has recently acquired Dexela Limited, a manufacturer of CMOS flat panels. Nevertheless, the Dexela detector SDK remains incompatible with the other PerkinElmer detector SDK (see the PerkinElmer plugin), and one needs to use this camera plugin instead.
##### Prerequisite[¶](#prerequisite)
Only the Dexela detector model sensor2923 has been tested at ESRF.
The detector is controlled via an acquisition board: PIXCI(R) E4 PCIExpress Camera Link board (EPIX,Inc.).
You need to install the acquisition card SDK. It was tested with version 3.8 (xclib). You can find it at <http://www.epixinc.com/support/files.php>.
You also need to install libdexela which is not yet GPL. See details with [<EMAIL>](mailto:mihael.koep%40softwareschneiderei.de).
###### BIOS configuration[¶](#bios-configuration)
You should disable all power-saving modes (e.g. C-states) and also disable the CPU's simultaneous multi-threading feature.
At ESRF, SuperMicro computers have to be configured like this:
> * Simultaneous Multi-threading has to be disabled
> * C1E support has to be disabled
> * Intel CSTATE Tech has to be disabled
###### Linux kernel configuration[¶](#linux-kernel-configuration)
As the PIXCI acquisition card needs a low-jitter configuration, you need to change some kernel parameters.
To do so, you have to change the `GRUB_CMDLINE_LINUX_DEFAULT` entry in the grub configuration file (`/etc/default/grub` on Debian)
by adding these options:
```
pcie_aspm=off intel_idle.max_cstate=0 processor.max_cstate=0 idle=poll mce=ignore_ce ipmi_si.force_kipmi=0 nmi_watchdog=0 noht
nosoftlockup isolcpus=0
```
the whole line should look something like this:
```
GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 quiet pcie_aspm=off intel_idle.max_cstate=0 processor.max_cstate=0 idle=poll mce=ignore_ce ipmi_si.force_kipmi=0 nmi_watchdog=0 noht nosoftlockup isolcpus=0"
```
You also have to uninstall or disable the irqbalance process.
On Debian you can simply type:
```
sudo apt-get purge irqbalance
```
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_DEXELA=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialization and Capabilities[¶](#initialization-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
##### Camera initialization[¶](#camera-initialization)
The camera will be initialized within the `DexelaInterface` object.
The parameter to pass to the `DexelaInterface()` constructor is the full path of the format file needed by the acquisition card.
This file is generated by the XCAP software provided by EPIX. You can find some examples in the config directory.
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the detector specific features and some programmer’s choices. We do not explain here the standard Lima capabilities, but you can find in this section the useful information on the Dexela specific features.
* HwDetInfo
> The Dexela detector has a pixel size of 74.8e-6 m (74.8 µm) and the image data type is fixed to 16 bpp (bits per pixel).
* HwSync
> The supported trigger modes are IntTrig, IntTrigMult, ExtTrigMult and ExtGate.
The exposure time range is 0.0116 (1/86) to 120 seconds.
The latency time is not managed.
###### Optional capabilities[¶](#optional-capabilities)
In addition to the standard capabilities, we make the choice to implement some optional capabilities.
* HwShutter
> There is no shutter capability.
* HwRoi
> There is no hardware capability, but Lima provides the software Roi as well.
* HwBin
> The supported hardware binning factors are 1x1, 2x2, and 4x4.
##### How to use[¶](#how-to-use)
The LimaCCDs tango server provides a complete interface to the dexela plugin so feel free to test.
For a quick test one can use python; here is a short code example:
```
import time
from Lima import Dexela
from lima import Core

hwint = Dexela.Interface('./sensor2923.fmt')
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/tmp/'
pars.prefix = 'testdexela_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### DECTRIS EIGER[¶](#dectris-eiger)
##### Introduction[¶](#introduction)
The EIGER 1M is a high performance X-Ray detector system.
It is made of two subsystems: a detector and a control server.
The control server is driven using an HTTP RESTful interface.
A C++ API for LImA has been developed at Synchrotron SOLEIL.
##### Prerequisite[¶](#prerequisite)
Some dependencies need to be installed:
> * libcurl
> * liblz4
> * libzmq
> * libjsoncpp
To install all dependencies on a Debian-like system, use this command:
```
$ sudo apt-get install libcurl4-gnutls-dev liblz4-dev libzmq3-dev libjsoncpp-dev
```
##### Installation and Module configuration[¶](#installation-and-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_EIGER=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialization[¶](#camera-initialization)
Initialization is performed automatically within the Eiger camera object. By default the stream interface is used to retrieve images, unless hardware saving is activated (CtSaving::setManagedMode(CtSaving::Hardware)).
###### Std capabilities[¶](#std-capabilities)
* HwDetInfo
| Capability | 1M Value | 4M Value | 9M Value | 16M Value |
| --- | --- | --- | --- | --- |
| Maximum image size | 1030 * 1065 | 2070 * 2167 | 3110 * 3269 | 4150 * 4371 |
| Pixel depth | 12 bits | 12 bits | 12 bits | 12 bits |
| Pixel size | 75µm * 75µm | 75µm * 75µm | 75µm * 75µm | 75µm * 75µm |
| Maximum frame rate | 3000Hz | 750Hz | 238Hz | 133Hz |
* HwSync
Supported trigger types are:
> * IntTrig
> * IntTrigMult
> * ExtTrigSingle
> * ExtTrigMult
> * ExtGate
* There is no hardware support for binning or roi.
* There is no shutter control.
##### Optional capabilities[¶](#optional-capabilities)
* **Cooling**
> * The detector uses liquid cooling.
> * The API allows accessing the temperature and humidity as read-only values.
At the moment, the specific device supports the control of the following features of the Eiger Dectris API.
(Extended description can be found in the Eiger API user manual from Dectris).
* **Photon energy**: This should be set to the incoming beam energy.
In practice it is a helper which sets the threshold accordingly.
* **Threshold energy**: This parameter will set the camera detection threshold.
This should be set between 50 and 60% of the incoming beam energy.
* **Auto Summation** (if activated, image depth is 32 bits; if not, 16 bits)
* **HwSaving**:
This detector can directly generate hdf5 files when this feature is used.
Internally, Lima controls the Eiger filewriter module.
This capability can be activated through the control part, with the CtSaving object’s setManagedMode method (see the sketch after this list).
* **Countrate correction**
* **Efficiency correction**
* **Flatfield correction**
* **LZ4 Compression**
* **Virtual pixel correction**
* **Pixelmask**
* **Retrigger**
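As mentioned for **HwSaving** above, here is a minimal sketch of switching the saving to the hardware (filewriter) mode, assuming `ct` is an existing `Core.CtControl`:

```
from lima import Core

saving = ct.saving()
# let the Eiger filewriter generate hdf5 files directly
saving.setManagedMode(Core.CtSaving.Hardware)
```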
##### Configuration[¶](#configuration)
* Device configuration
The default values of the following properties must be updated in the specific device to meet your system configuration.
| Property name | Description | Default value |
| --- | --- | --- |
| DetectorIP | Defines the IP address of the Eiger control server (ex: 192.168.10.1) | 127.0.0.1 |
##### How to use[¶](#how-to-use)
This is a python code of a simple acquisition:
```
import time
from Lima import Eiger
from lima import Core

# ip address or hostname of the Eiger control server
cam = Eiger.Camera('lid32eiger1')
hwint = Eiger.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set hardware configuration
# refer to the Dectris Eiger documentation for more information
cam.setCountrateCorrection(False)
cam.setFlatfieldCorrection(True)
cam.setAutoSummation(False)
cam.setEfficiencyCorrection(True)
cam.setVirtualPixelCorrection(True)
cam.setPixelMask(True)

# read some parameters
print(cam.getTemperature())
print(cam.getHumidity())

# set energy threshold in eV
cam.setThresholdEnergy(16000)
cam.setPhotonEnergy(8000)

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 10 msec exposure and 10 frames
acq.setAcqExpoTime(0.01)
acq.setAcqNbFrames(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Dectris Mythen camera[¶](#dectris-mythen-camera)
##### Introduction[¶](#introduction)
Server for the control of a Mythen detector.
##### Module configuration[¶](#module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_MYTHEN=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Installation[¶](#installation)
##### Configuration[¶](#configuration)
#### Dectris Mythen3[¶](#dectris-mythen3)
##### Introduction[¶](#intoduction)
Server for the control of a Mythen detector.
##### Module configuration[¶](#module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_MYTHEN=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Testing[¶](#testing)
Here is a simple python test program:
```
import time
from Lima import Mythen3
from Lima import Core

camera = Mythen3.Camera("160.103.146.190", 1031, False)
interface = Mythen3.Interface(camera)
control = Core.CtControl(interface)

# check it's OK
print(camera.getDetectorType())
print(camera.getDetectorModel())
print(camera.getVersion())

nframes = 10
acqtime = 2.0

# setting new file parameters and autosaving mode
saving = control.saving()
saving.setDirectory("/buffer/dubble281/mythen")
saving.setFramesPerFile(nframes)
saving.setFormat(Core.CtSaving.HDF5)
saving.setPrefix("mythen3_")
saving.setSuffix(".hdf")
saving.setSavingMode(Core.CtSaving.AutoFrame)
saving.setOverwritePolicy(Core.CtSaving.Overwrite)

# do acquisition
acq = control.acquisition()
acq.setAcqExpoTime(acqtime)
acq.setAcqNbFrames(nframes)

control.prepareAcq()
control.startAcq()
time.sleep(25)
```
#### Dectris Pilatus[¶](#dectris-pilatus)
##### Introduction[¶](#intoduction)
The PILATUS detector (pixel apparatus for the SLS) is a novel type of x-ray detector, which has been developed at the Paul Scherrer Institut (PSI) for the Swiss Light Source (SLS). PILATUS detectors are two-dimensional hybrid pixel array detectors, which operate in single-photon counting mode. A hybrid pixel that features single photon counting comprises a preamplifier, a comparator and a counter. The preamplifier amplifies the charge generated in the sensor by the incoming x-ray; the comparator produces a digital signal if the incoming charge exceeds a predefined threshold and thus, together with the counter, one obtains a complete digital storage and read-out of the number of detected x-rays per pixel without any read-out noise or dark current!
PILATUS detectors feature several advantages compared to current state-of-the-art CCD and imaging plate detectors. The main features include: no readout noise, superior signal-to-noise ratio, read-out time of 5 ms, a dynamic range of 20 bit, high detective quantum efficiency and the possibility to suppress fluorescence by an energy threshold that is set individually for each pixel. The short readout and fast framing time allow taking diffraction data in continuous mode without opening and closing the shutter for each frame.
Because of these properties, PILATUS detectors are superior to state-of-the-art CCD and imaging plate detectors for various x-ray detection experiments. Major improvements can be expected for time-resolved experiments, for the study of weak diffraction phenomena (e.g. diffuse scattering), for accurate measurements of Bragg intensities, for resonant scattering experiments,…
##### Module configuration[¶](#module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_PILATUS=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Installation[¶](#installation)
On the Pilatus PC, create **as root** a ramdisk of 8GB which will be used by the Lima device server as a temporary buffer:
> * edit file `/etc/fstab` and add the following line:
> ```
> none /lima_data tmpfs size=8g,mode=0777 0 0
> ```
> * make the directory:
> ```
> mkdir /lima_data
> ```
> * and finally mount the ramdisk:
> ```
> mount -a
> ```
* For Pilatus3, edit file `~det/p2_det/config/cam_data/camera.def` and add those two lines:
> + camera_wide = WIDTH_OF_THE_DETECTOR
> + camera_high = HEIGHT_OF_THE_DETECTOR
##### Start the system[¶](#start-the-system)
* Log on the detector PC as the *det* user and start tvx/camserver:
```
cd p2_det
./runtvx
```
* when tvx has finished initializing camserver, just type *quit* in the tvx window
* Log on the detector PC as *another user* or *det*:
```
cd WHERE_YOU_HAVE_INSTALL_PILATUS_TANGO_SERVER
TANGO_HOST=Host:Port python LimaCCD.py instance_name
```
If the camserver window reports a connection, it seems to work ;)
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
import time
from Lima import Pilatus
from Lima import Core

cam = Pilatus.Camera()
hwint = Pilatus.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set some low level configuration
cam.setThresholdGain(1)
cam.setFillMode(True)
cam.setEnergy(16.0)
cam.setHardwareTriggerDelay(0)
cam.setNbExposurePerFrame(1)

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setAcqNbFrames(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Finger Lakes Instrumentation Microline camera plugin[¶](#finger-lakes-instrumentation-microline-camera-plugin)
##### Introduction[¶](#introduction)
FLI supplies cameras to more than 50 countries for life science imaging, veterinary radiology, astronomy, forensics, transmission electron microscopy, and a wide range of other applications. Our on-site staff includes a talented group of mechanical, electrical, and software engineers.
FLI provides a Software Development Kit (SDK) for both Windows and Linux.
The Lima module has been tested only with these camera models:
* IKon-M and IKon-L (USB interface, Linux OS debian 6)
* IKon-L (USB interface, Windows XP - 32bits)
##### Prerequisites[¶](#prerequisites)
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_FLI=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `AndorCamera` object. The constructor sets the camera with default parameters for Preamplifier-Gain, VerticalShiftSpeed and the ADC/HorizontalSpeed.
These parameters are optimized for the fastest mode, which means the maximum gain, the “fastest recommended” VSSpeed (i.e. as returned by the GetFastestRecommendedVSSpeed() SDK function call) and the ADC with the fastest horizontal speed.
All the parameters can be set and got using the corresponding methods; the default values (max speeds and gain)
can be applied by passing -1 as the value:
> set/getPGain()
> set/getVsSpeed()
> set/getADCSpeed()
Some other methods are available but may not be supported depending on which camera model you are using:
> set/getHighCapacity()
> set/getFanMode()
> set/getBaselineClamp()
The above parameters only support enumerated values.
###### Std capabilites[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations which are due to the camera and SDK features. We only provide here extra information for a better understanding of the capabilities of Andor cameras.
* HwDetInfo
getCurrImageType/getDefImageType(): the methods call the SDK GetBitDepth() function to resolve the image data type. The bit-depth corresponds to the AD channel dynamic range, which depends on the selected ADC channel.
In our experience with IKon detectors we only have Bpp16 of dynamic range, but the methods can return Bpp8 and Bpp32 as well.
setCurrImageType(): this method does not change the image type, which is fixed to 16bpp.
* HwSync
get/setTrigMode(): the only supported modes are IntTrig, ExtTrigSingle, ExtGate and IntTrigMult
###### Optional capabilites[¶](#optional-capabilites)
In addition to the standard capabilities, we make the choice to implement some optional capabilities which are supported by the SDK and the I-Kon cameras. A Shutter control, a hardware ROI and a hardware Binning are available.
* HwShutter
setMode(): only ShutterAuto and ShutterManual modes are supported
* HwRoi
There is no restriction for the ROI setting
* HwBin
There is no restriction for the Binning but the maximum binning is given by the SDK function GetMaximumBinning() which depends on the camera model
##### Configuration[¶](#configuration)
> Plug your USB camera into any USB port of the computer; that's it!
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
import time
from Lima import FLI
from lima import Core

cam = FLI.Camera('/dev/fliusb0')
hwint = FLI.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# set accumulation mode
acq_pars = acq.getPars()
# 0-normal, 1-concatenation, 2-accumulation
acq_pars.acqMode = 2
acq_pars.accMaxExpoTime = 0.05
acq_pars.acqExpoTime = 1
acq_pars.acqNbFrames = 1
acq.setPars(acq_pars)
# here we should have 21 accumulated images per frame
print(acq.getAccNbFrames())

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Frelon camera[¶](#frelon-camera)
##### Introduction[¶](#introduction)
The FReLoN camera is a 14 bit dynamic CCD camera, with a 2048*2048 pixel chip. This camera has been developed by the awesome people of the ‘Analog and Transient Electronic’ ESRF group.
##### Prerequisite[¶](#prerequisite)
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_FRELON=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The Frelon plugin provides a helper class `FrelonAcq` which manages the initialisation sequence with the camera and interface object. An Espia board channel number should be set as the initialisation parameter (default is 0).
```
frelon = Frelon.FrelonAcq(int(espia_dev_nb))
return frelon.getGlobalControl()
```
###### Std capabilites[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the detector specific features and some programmer’s choices. We do not explain here the standard Lima capabilities, but you can find in this section the useful information on the Frelon specific features.
* HwDetInfo
> TODO
* HwSync
> TODO
###### Optional capabilites[¶](#optional-capabilites)
In addition to the standard capabilities, we make the choice to implement some optional capabilities.
* HwShutter
> TODO
* HwRoi
> TODO
* HwBin
> TODO
##### Configuration[¶](#configuration)
The main configuration will consist in providing the correct `DexelaConfig.cfg` file to the detector API.
The file has to be provided by the manufacturer with a second file like `sensor2923.fmt`. The `.fmt` file contains some calibration data.
##### How to use[¶](#how-to-use)
The LimaCCDs tango server provides a complete interface to the Frelon plugin, so feel free to test.
For a quick test one can use python; this is a short example code:
```
import time
from Lima import Frelon
from lima import Core

espia_dev_nb = 0  # Espia board channel number (default is 0)
frelon = Frelon.FrelonAcq(int(espia_dev_nb))
control = frelon.getGlobalControl()
acq = control.acquisition()

# setting new file parameters and autosaving mode
saving = control.saving()
pars = saving.getParameters()
pars.directory = '/tmp/'
pars.prefix = 'testfrelon_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

control.prepareAcq()
control.startAcq()

# wait for last image (#9) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = control.ReadImage(0)
```
#### imXPAD[¶](#imxpad)
##### Introduction[¶](#introduction)
The imXpad detectors benefit from hybrid pixel technology, which leads to major advantages compared to the other technologies. These advantages are mainly provided by direct photon conversion and real time electronic analysis of X-ray photons. This allows for direct photon counting and energy selection.
XPAD detectors key features compared to CCDs and CMOS pixels detectors are:
> * Noise suppression
> * Energy selection
> * Almost infinite dynamic range
> * High Quantum Efficiency (DQE(0) ~100%, dose reduction)
> * Ultra fast electronic shutter (10 ns)
> * Frame rate > 500 Hz
##### Prerequisite[¶](#prerequisite)
In order to operate the imXpad detector, the USB server or the PCI server must be running on the computer attached to the detector.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_IMXPAD=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
imXpad cameras must be initialised using 2 parameters:
> 1. The IP address where the USB or PCI server is running.
> 2. The port number used by the server to communicate.
###### Std capabilities[¶](#std-capabilities)
* HwDetInfo
getCurrImageType/getDefImageType():
* HwSync:
get/setTrigMode(): the only supported modes are IntTrig, ExtGate, ExtTrigMult, ExtTrigSingle.
Refer to: <http://imxpad.com/templates/SoftwareDocumentation/softwareDocumentation.html> for a whole description of detector capabilities.
###### Optional capabilities[¶](#optional-capabilities)
This plugin does not offer optional hardware capabilities.
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
import time
from Lima import imXpad
from Lima import Core

# Setting XPAD camera (IP, port)
cam = imXpad.Camera('localhost', 3456)
HWI = imXpad.Interface(cam)
CT = Core.CtControl(HWI)
CTa = CT.acquisition()
CTs = CT.saving()

# To specify where images will be stored, using the RAW format
CTs.setDirectory("./Images")
CTs.setPrefix("id24_")
CTs.setFormat(CTs.RAW)
CTs.setSuffix(".bin")
CTs.setSavingMode(CTs.AutoFrame)
CTs.setOverwritePolicy(CTs.Overwrite)

# To set acquisition parameters
CTa.setAcqExpoTime(0.001)  # 1 ms exposure time.
CTa.setAcqNbFrames(10)     # 10 images.
CTa.setLatencyTime(0.005)  # 5 ms latency time between images.

# To change acquisition mode
cam.setAcquisitionMode(cam.XpadAcquisitionMode.Standard)

# To set triggers. Possibilities: Core.IntTrig, Core.ExtGate, Core.ExtTrigMult, Core.ExtTrigSingle.
CTa.setTriggerMode(Core.IntTrig)

# To set outputs.
cam.setOutputSignalMode(cam.XpadOutputSignal.ExposureBusy)

# ASYNCHRONOUS acquisition
CT.prepareAcq()
CT.startAcq()

# SYNCHRONOUS acquisition
CT.prepareAcq()
CT.startAcq()
cam.waitAcqEnd()

# To abort the current process
# CT.stopAcq()

# Load calibration from file
# cam.loadCalibrationFromFile("./S70.cfg")

# Perform calibrations 0-SLOW, 1-MEDIUM, 2-FAST
# cam.calibrationOTN(0)
# cam.calibrationOTNPulse(0)
# cam.calibrationBEAM(1000000, 60, 0)  # 1s->exposure time, 60->ITHL_MAX, 0->SLOW
```
#### Lambda / Xspectrum[¶](#lambda-xspectrum)
##### Introduction[¶](#intoduction)
LAMBDA is a next-generation pixel detector for X-rays, based on Medipix3 technology. It is a photon-counting detector, making it effectively noise free, and it offers a high frame rate of up to 23,000 frames per second (with no readout deadtime) and a small pixel size of 55 µm. It is available in a wide variety of sizes and configurations for different applications, and can be equipped with different sensor materials to allow high detection efficiency even at high X-ray energies. The system also has “colour imaging” capabilities, where X-rays hitting the detector can be divided into two energy ranges. Developed by DESY for use at the PETRA-III synchrotron, the system is designed for high reliability, and has external triggering and gating capability for synchronisation with the rest of the experiment. It can be easily integrated into common beamline control systems.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_LAMBDA=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized by creating the `Lambda::Camera` object. The constructor will take care of your detector configuration according to the SDK installation setup done before.
The Camera::Camera() constructor requires the full path to the configuration directory installed on the control computer. The standard path should be /opt/xsp/config.
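For instance, a minimal construction sketch, reusing the standard path mentioned above:

```
from Lima import Lambda

# full path to the SDK configuration directory on the control computer
cam = Lambda.Camera('/opt/xsp/config')
```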
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations due to the camera and SDK features. We provide here further information for a better understanding of the detector-specific capabilities.
* HwDetInfo
getCurrImageType/getDefImageType(): Bpp16 only.
setCurrImageType(): this method does not change the image type, which is fixed to Bpp16.
* HwSync
get/setTrigMode(): the supported modes are IntTrig, ExtTrigSingle, ExtTrigMult and ExtGate.
###### Optional capabilities[¶](#optional-capabilites)
None of the optional hardware capabilities (e.g. HwRoi, HwBin) have been implemented.
##### Configuration[¶](#configuration)
No specific hardware configuration is needed. The detector is sold with a control computer equipped with all required hardware and software.
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
from Lima import Lambda
from Lima import Core
import time

cam = Lambda.Camera('/opt/xsp/config')
hwint = Lambda.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set the detector energy threshold
cam.setEnergyThreshold(6.0)

# setting new file parameters and autosaving mode
saving = ct.saving()

# set saving in HDF5 bitshuffle compression
pars = saving.getParameters()
pars.directory = '/data1/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.h5'
pars.fileFormat = Core.CtSaving.HDF5BS
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setAcqNbFrames(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Maxipix[¶](#maxipix)
##### Introduction[¶](#intoduction)
MAXIPIX is a high spatial resolution (small pixels), high frame rate, photon-counting pixel detector developed by ESRF. MAXIPIX is based on MEDIPIX2/TIMEPIX readout ASICs developed by CERN and the MEDIPIX2 collaboration. The active detector element consists of a hybrid pixel circuit glued on a chipboard and connected to it with microwire connections. The hybrid pixel circuit consists itself of a pixelated semiconductor sensor connected to one or several readout ASICs by individual micro solder bumps on each pixel. Various module formats are available and may implement either MEDIPIX2 or TIMEPIX ASICs. Both ASICs have identical pixel geometries but different characteristics as regards principally the lowest energy threshold, the discriminator range, and the available detection modes.
We provide today Maxipix 5x1, 4x1 and 1x1 formats based on both TIMEPIX and MEDIPIX2 ASICs.
Beamlines are equipped with the detector, an Espia card and a specific computer running CentOS 5 x86_64.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_MAXIPIX=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `Maxipix::Camera` class. The Camera constructor loads the configuration and calibration data into the detector backend electronics (Priam card).
There are many hardware parameters you can set; refer to the Maxipix documentation for good practice (see also the sketch after this list):
> set/getSignalLevel()
> set/getReadLevel()
> set/getTriggerLevel()
> set/getShutterLevel()
> set/getReadyMode()
> set/getGateMode()
> set/getFillMode()
> set/getEnergy()
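As an illustration only, a hedged sketch of tuning a couple of low-level settings through the hardware interface, reusing the constructor arguments of the full example below (the chosen values are placeholders):

```
from Lima.Maxipix import Maxipix

# constructor arguments as in the How to use example below (placeholders)
cam = Maxipix.Camera(0, '/users/blissadm/local/maxipix/calib/tpxatl25', 'tpxatl25X')
hwint = Maxipix.Interface(cam)

# low-level tuning; see the Maxipix documentation for meaningful values
hwint.setEnergyThreshold(10.0)   # energy threshold in keV (placeholder)
hwint.setFillMode(cam.DISPATCH)  # fill mode (placeholder)
```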
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations due to the camera. We only provide here extra information for a better understanding of the capabilities of Maxipix cameras.
* HwDetInfo
getCurrImageType/getDefImageType(): always 16bpp.
setCurrImageType(): this method does not change the image type, which is fixed to 16bpp.
* HwSync
get/setTrigMode(): supported modes are IntTrig, IntTrigMult,ExtTrigSingle, ExtTrigMult and ExtGate.
###### Optional capabilities[¶](#optional-capabilites)
In addition to the standard capabilities, we chose to implement an optional capability supported by this detector: shutter control.
* HwShutter
setMode(): only ShutterAuto and ShutterManual modes are supported.
##### Configuration[¶](#configuration)
Only the provided configuration files (`.cfg` and `.bpc`) must be used for your detector; you must not change these files. Each detector has its own set of files. Please contact the ESRF Detector group for help.
##### How to use[¶](#how-to-use)
This is a python code example of a simple acquisition:
```
from Lima.Maxipix import Maxipix
from Lima import Core
import time

# Camera(espia_channel, config_path, config_name)
#   espia channel: 0
#   config path:   /users/blissadm/local/maxipix/calib/tpxatl25
#   config name:   tpxatl25X (.cfg file)
cam = Maxipix.Camera(0, '/users/blissadm/local/maxipix/calib/tpxatl25', 'tpxatl25X')
hwint = Maxipix.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set some low level configuration
# see the Maxipix documentation for more information
hwint.setEnergyThreshold(10.0)
hwint.setFillMode(cam.DISPATCH)
hwint.setShutterLevel(cam.HIGH_RISE)

# setting new file parameters and autosaving mode
saving = ct.saving()

pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# set accumulation mode
acq_pars = acq.getPars()

# 0-normal, 1-concatenation, 2-accumulation
acq_pars.acqMode = 2
acq_pars.accMaxExpoTime = 0.05
acq_pars.acqExpoTime = 1
acq_pars.acqNbFrames = 1
acq.setPars(acq_pars)

# here we should have 21 accumulated images per frame
print acq.getAccNbFrames()

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Merlin camera[¶](#merlin-camera)
##### Introduction[¶](#introduction)
The Merlin Medipix3Rx Quad Readout detector system from Diamond Light Source Ltd is a photon counting soild state pixel detector with a silicon sensor.
The Lima module has only been tested in a 2 x 2 configuration, but the detector is also available in a 4 x 1 configuration.
There is extensive documentation: *Merlin_and_Medipix3_Documentation_v0.7.pdf*
##### Prerequisite[¶](#prerequisite)
The Merlin detector system is based on a National Instruments FlexRIO PXI FPGA system.
It incorporates an embedded PC running Windows with a LabView graphical user interface, incorporating a socket server, which this plugin communicates with.
This program must be running prior to starting Lima.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_MERLIN=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you good knowledge regarding camera features within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera has to be initialized using the `MerlinCamera` class. The constructor requires the hostname of the detector system.
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations due to the camera server implementation.
* HwDetInfo
> The detector is set to full image size at startup, which means a binning of 1x1. There is no hardware binning.
* HwSync
The supported trigger modes are (see the sketch after this list):
> + IntTrig
> + IntTrigMult
> + ExtTrigSingle
> + ExtTrigMult
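Selecting one of these modes goes through the standard Lima acquisition interface; a minimal hedged sketch, assuming the `control` object created in the test program below:

```
from Lima import Core

# assuming control = Core.CtControl(interface) as in the test program below
acq = control.acquisition()
acq.setTriggerMode(Core.ExtTrigMult)  # one external trigger per frame
```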
##### Testing[¶](#testing)
This is a simple python test program:
```
from Lima import Merlin
from Lima import Core
import time

camera = Merlin.Camera('<hostname>')
interface = Merlin.Interface(camera)
control = Core.CtControl(interface)
acq = control.acquisition()

# check it's OK
print camera.getDetectorType()
print camera.getDetectorModel()
print camera.getSoftwareVersion()

nframes = 5
acqtime = 3.0

# setting new file parameters and autosaving mode
saving = control.saving()
saving.setDirectory("/home/grm84/data")
saving.setFramesPerFile(nframes)
saving.setFormat(Core.CtSaving.HDF5)
saving.setPrefix("merlin_")
saving.setSuffix(".hdf")
saving.setSavingMode(Core.CtSaving.AutoFrame)
saving.setOverwritePolicy(Core.CtSaving.Append)

# do acquisition
acq = control.acquisition()
acq.setAcqExpoTime(acqtime)
acq.setAcqNbFrames(nframes)

control.prepareAcq()
control.startAcq()

# wait for last image (#4) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != nframes - 1:
    time.sleep(0.01)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = control.ReadImage(0)
```
#### Minipix camera[¶](#minipix-camera)
##### Introduction[¶](#introduction)
ADVACAM’s imaging cameras are direct conversion single photon counting pixel detectors that represent the cutting edge of current radiation imaging technology. The term “single photon counting” means that every single photon of X-ray radiation detected in individual pixel is processed and counted. The technology brings two major advantages in comparison to the conventional X-ray imaging - high contrast together with sharp high resolution images and spectral information of the radiation that allows material specific information to be displayed in colors.
The MiniPIX TPX3 camera is a miniaturized, low-power radiation camera with the state-of-the-art Timepix3 chip. The Timepix3 is CERN's latest pixel detector chip, which records position, energy and time for every detected quantum of radiation.
The Lima module has been tested with Pixet SDK **1.7.8**. A conda package **lima-camera-minipix** is available from the anaconda.org esrf-bcu channel.
Monochrome and color cameras are supported with this SDK version.
##### Installation & Module configuration[¶](#installation-module-configuration)
First, you have to install the ADVACAM SDK *Pixet* to the default path `/opt/pixet`. tgz and deb packages are available from <https://downloads.advacam.com/> .
Note: This camera Lima plugin is a pure Python module, since Advacam only provides a Python SDK (pixet).
Then, follow the generic instructions in [Build and Install](index.html#build-installation).
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized by creating a `Minipix.Interface` object.
A small example showing a possible way to initialize:
```
from Minipix.Interface import Interface
from Lima import Core
hwint = Interface(config_path='/opt/pixet/factory/MiniPIX-J06-W0105.xml')
cam = hwint.camera
```
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations due to the camera and SDK features. Only restrictions on capabilities are documented here.
* HwDetInfo
getCurrImageType(): It only supports Bpp16.
* HwSync
get/setTrigMode(): the supported modes are IntTrig and IntTrigMult.
###### Optional capabilities[¶](#optional-capabilites)
N/A
##### Configuration[¶](#configuration)
ADVACAM provides you with an XML configuration file specific to your detector. The default path can be /opt/pixet/factory, for instance /opt/pixet/factory/MiniPIX-J06-W0105.xml, where *J06-W0105* is the product identifier of the detector.
When creating the Minipix Interface object you must pass the path of your XML configuration file as argument:
```
hwint = Interface(config_path='/opt/pixet/factory/MiniPIX-J06-W0105.xml')
```
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
from Lima import Core
from Minipix.Interface import Interface
import time

hwint = Interface(config_path='/opt/pixet/factory/MiniPIX-J06-W0105.xml')
cam = hwint.camera
ct = Core.CtControl(hwint)
acq = ct.acquisition()

#
# set and test an acquisition
#

# set an energy threshold and bias voltage
cam.energy_threshold = 3.6  # in keV
cam.bias_voltage = 200      # in Volt

# setting new file parameters and autosaving mode
saving = ct.saving()

pars = saving.getParameters()
pars.directory = '/tmp/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.h5'
pars.fileFormat = Core.CtSaving.HDF5BS
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 0.1 sec. exposure and 100 frames
acq.setAcqExpoTime(0.1)
acq.setNbImages(100)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#99) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 99:
    time.sleep(0.1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### PIXIRAD (PX1 and PX8) camera plugin[¶](#pixirad-px1-and-px8-camera-plugin)
##### Introduction[¶](#introduction)
PIXIRAD Imaging Counters s.r.l. is an INFN Spin-off company introducing an innovative, high quality X-ray imaging sensor with intrinsic digital characteristics. It is based on Chromatic Photon Counting technology and represents a radical leap forward compared to the standard methods currently on the market.
The PIXIRAD imaging sensors are able to count individually the incident X-ray photons and to separate them in real time according to their energy (two color images per exposure).
* Global count rate > 200 GHz
* Energy range 1-100 keV
* Energy resolution better than 2 keV (FWHM) @20 keV
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_PIXIRAD=true
```
For the Tango server installation, refers to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera has to be initialized using the `Pixirad::Camera` class. The constructor takes the detector IP address, port number and model (PX1 or PX8), as used in the test examples below.
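A minimal hedged construction sketch, mirroring the values used in the test examples later in this section:

```
from Lima import Pixirad as PixiradAcq

# detector IP address, port and model (PX1 or PX8), as in the test examples below
camera = PixiradAcq.Camera("192.168.0.1", 2222, "PX8")
camera.init()
```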
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations due to the camera and SDK features. We only provide here extra information for a better understanding of the capabilities.
* HwDetInfo
TODO
* HwSync
> * The minimum latency time is 1 ms.
> * The supported trigger modes depend on the chosen frame mode:
> + IntTrig
> + ExtTrigMult
###### Optional capabilities[¶](#optional-capabilities)
* HwReconstruction
TODO
###### Specific control parameters[¶](#specific-control-parameters)
Some specific parameters are available within the camera hardware interface. Those parameters should be used carefully; please refer to the camera SDK (or user's guide) documentation for further information.
```
void autocalibration();
void setHighThreshold0(float t);
void getHighThreshold0(float& t) ;
void setLowThreshold0(float t);
void getLowThreshold0(float& t) ;
void setHighThreshold1(float t);
void getHighThreshold1(float& t) ;
void setLowThreshold1(float t);
void getLowThreshold1(float& t) ;
void setDeadTimeFreeMode(Camera::DeadTimeFreeMode dtf) ;
void getDeadTimeFreeMode(Camera::DeadTimeFreeMode &dtf) ;
void setNbiMode(Camera::SensorConfigNBI nbi) ;
void getNbiMode(Camera::SensorConfigNBI &nbi) ;
void setAsicMode(Camera::SensorConfigASIC asic);
void getAsicMode(Camera::SensorConfigASIC &asic);
void setHybridMode(Camera::SensorConfigHybrid hybrid);
void getHybridMode(Camera::SensorConfigHybrid &hybrid);
void setSensorConfigBuild(Camera::SensorConfigBuild build);
void getSensorConfigBuild(Camera::SensorConfigBuild &build);
void setRunConfigMode(Camera::RunConfigMode mode);
void getRunConfigMode(Camera::RunConfigMode &mode);
void setCoolingTemperatureSetpoint(float t);
void getCoolingTemperatureSetpoint(float& t) ;
void setCoolingMode(Camera::CoolingMode mode);
void getCoolingMode(Camera::CoolingMode &mode);
void setHighVoltageBiais(float hv);
void getHighVoltageBiais(float& hv) ;
void setHVBiasModePower(Camera::HVBiaisPower mode);
void getHVBiasModePower(Camera::HVBiaisPower &mode);
void setHVBiasMode(Camera::HVMode mode);
void getHVBiasMode(Camera::HVMode &mode);
void setHighVoltageDelayBeforeOn(float sec);
void getHighVoltageDelayBeforeOn(float& sec);
void setHVRefreshPeriod(int nbOfImages);
void getHVRefreshPeriod(int& nbOfImages);
void setDelayBetweenFrames(int delayms);
void getDelayBetweenFrames(int& delayms);
void setColorMode(Camera::ColorMode color);
void getColorMode(Camera::ColorMode &color);
void setTrsfMode(Camera::TrsfMode mode);
void getTrsfMode(Camera::TrsfMode &mode);
// UDP
void setNCyclesUdpDelay(int nbcycles);
void getNCyclesUdpDelay(int& nbcycles);
void setSyncOutFunction(Camera::SyncOutFunction mode);
void getSyncOutFunction(Camera::SyncOutFunction &mode);
void setSyncOutPol(Camera::Polarity mode);
void getSyncOutPol(Camera::Polarity &mode);
void setSyncInPol(Camera::Polarity mode);
void getSyncInPol(Camera::Polarity &mode);
// Weather variables extracted from UDP stream, need get/set
void getTemperaturePeltierCold(float& information);
void getTemperaturePeltierHot(float& information);
void getHighVoltageTension(float& information);
void getBoxHumidity(float& information);
void getBoxTemperature(float& information);
void getPeltierPower(float& information);
void getAlarmTempTooHot(bool& information);
void getAlarmTempTooHotEnabled(bool& information);
void getAlarmTempTooCold(bool& information);
void getAlarmTempTooColdEnabled(bool& information);
void getAlarmHumidity(bool& information);
void getAlarmHumidityEnabled(bool& information);
```
##### Basic network configuration[¶](#basic-network-configuration)
The camera has the 192.168.0.1/24 address. The detector PC has to be configured accordingly.
The recommended option is to have one good-quality network interface dedicated to the Pixirad, and one for the rest of the world.
* Case one (Recommended), dedicated interface:
> ```
> auto eth1
> iface eth1 inet static
> address 192.168.0.100
> netmask 255.255.255.0
> mtu 1500
> ```
* Case two, one interface, with a router handling two subnetworks:
> Configuration with an alias on interface eth0:
> ```
> auto eth0:1
> iface eth0:1 inet static
> address 192.168.0.100
> netmask 255.255.255.0
> mtu 1500
> ```
##### Test examples[¶](#test-examples)
###### With python[¶](#with-python)
* Test directly the camera within python:
> ```
> from Lima import Core
> from Lima import Pixirad as PixiradAcq
> ```
* Set the number of image treatment threads according to the number of CPU available on your mighty machine :
> ```
> Core.Processlib.PoolThreadMgr.get().setNumberOfThread(20)
> ```
* Create your camera with its network settings and model (PX8 or PX1)
> ```
> print "\n\n\n\n === INIT === \n"
> camera = PixiradAcq.Camera("192.168.0.1", 2222, "PX8")
> camera.init()
> ```
> ```
> print "\n\n\n\n === INTERFACE === \n"
> camera_interface = PixiradAcq.Interface(camera)
> # Set some feature (check manual)
> # color mode (only 1 col mode supported)
> camera_interface.setColorMode(camera.COLMODE_1COL0)
> # Set point (lower than achievable by the Peltier, to run it at full power):
> camera.setCoolingTemperatureSetpoint(-50)
> # Set some energy thresholds (check the manual, as they will fall in gain levels (ranges of energy)):
> camera.setLowThreshold0(10)
> camera.setHighThreshold0(60)
> camera.setLowThreshold1(10)
> camera.setHighThreshold1(60)
> # Some high tension management
> camera.setHighVoltageBiais(2100)
> camera.setHVBiasModePower(1)
> camera.setHighVoltageDelayBeforeOn(3)
> camera.setHVRefreshPeriod(1000);
> # some ethernet interface
> camera_interface.setTrsfMode(camera.UNMOD)
> ```
> ```
> # Get control over things:
> print "\n\n\n\n === CONTROL === \n"
> control = Core.CtControl(camera_interface)
> # set how much you want lima to buffer memory for treatment.
> control.buffer().setMaxMemory(70)
> ```
> ```
> # Get the object with whom you will play :
> print "\n\n\n\n === ACQUISITION OBJECT === \n"
> acq = control.acquisition()
> # Define trigger:
> acq.setTriggerMode(Core.IntTrig)
> #acq.setTriggerMode(Core.ExtTrigMult)
> ```
> ```
> # save somewhere
> saving = control.saving()
> pars=saving.getParameters()
> pars.directory='/tmp/test'
> pars.prefix=basename
> pars.suffix='.edf'
> pars.fileFormat=Core.CtSaving.EDF
> pars.savingMode=Core.CtSaving.AutoFrame
> saving.setParameters(pars)
> ```
> ```
> # Take images !
> # expo time for one frame :
> acq.setAcqExpoTime(0.01)
> # number of frames:
> acq.setAcqNbFrames(10)
> # get it !
> control.prepareAcq();
> control.startAcq()
> ```
> ```
> # pretty ones now !
> # Take many (100) images and accumulate them to have better stats and one image written:
> acq.setAcqMode(Core.Accumulation)
> # Max expo time per frame:
> acq.setAccMaxExpoTime(0.01)
> # Total time for the accumulation:
> acq.setAcqExpoTime(1);
> # how many accumulated images:
> acq.setAcqNbFrames(1)
> # get them all and keep one:
> control.prepareAcq();
> control.startAcq()
> ```
###### With Tango[¶](#with-tango)
* Properties
> ```
> initial_model = PX8 // or PX1
> ip_address = 192.168.0.1
> port_number = 2222
> ```
* PyTango client connection examples:
> ```
> import PyTango
> pixi = PyTango.DeviceProxy("d05/pixirad/pixirad")
> limaccd = PyTango.DeviceProxy("d05/pixirad/pixirad8")
> pixi.cooling_temperature_setpoint = -50
> pixi.high_voltage_biais = 2100
> pixi.dead_time_free_mode = 'DEAD_TIME_FREE_MODE_OFF'
> pixi.color_mode = 'COLMODE_1COL0'
> pixi.low_threshold0 = 1
> pixi.high_threshold0 = 99
> pixi.low_threshold1 = 1
> pixi.high_threshold1 = 99
> #pixi.sensor_config_build = 'PX8'
> pixi.h_v_bias_mode_power = 1
> pixi.trsf_mode = "UNMOD"
> limaccd.buffer_max_memory = 80
> limaccd.acq_nb_frames = 0
> limaccd.acq_expo_time = 0.01
> limaccd.prepareAcq()
> limaccd.startAcq()
> ```
##### Advanced configuration and optimization (**optional**)[¶](#advanced-configuration-and-optimization-optional)
The camera sends the images as small (1490-byte) UDP datagrams, as fast as it can, nearly saturating the bandwidth of the 1 Gb Ethernet link.
Bad network cards or high-latency systems will result in the loss of part of the image.
If this happens, several points need checking. The Ethernet card driver might drop packets (and as they are UDP, there is no chance to notice it).
The Linux kernel UDP buffer might saturate and willingly drop packets (but at least you know about it). In that case, your reading loop (reading from the Linux UDP buffer) is too slow.
Here are a couple of options:
* Using FIFO realtime mode can help.
* Tuning network buffers can help.
* Changing the Ethernet card can save your skin, and avoid losing weeks fine-tuning muddy cards.
###### Realtime mode[¶](#realtime-mode)
In `/etc/security/limits.conf` add:
```
username - rtprio 5
```
In the software:
```
pthread_t this_thread = pthread_self();
struct sched_param params;
params.sched_priority = 5;
int ret = pthread_setschedparam(this_thread, SCHED_FIFO, &params);
if (ret != 0) { std::cout << "Check /etc/security/limits.conf" << std::endl; }
```
###### Kernel tuning[¶](#kernel-tuning)
```
man udp
```
Change in `/etc/sysctl.conf` and validate with `sysctl -p`
```
net.core.rmem_max = 256217728
net.core.wmem_max = 256217728
net.ipv4.udp_mem = 131072 262144 524288
net.ipv4.udp_rmem_min = 65536
net.core.netdev_max_backlog = 65536
net.core.somaxconn = 1024
```
###### Network card driver tuning[¶](#network-card-driver-tuning)
```
ethtool -g eth1
Ring parameters for eth1:
Pre-set maximums:
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 4096
Current hardware settings:
RX: 512 <<<<<< ===
RX Mini: 0
RX Jumbo: 0
TX: 512
```
Increased with :
```
ethtool -G eth1 rx 4096
```
##### Troubleshootings[¶](#troubleshootings)
###### UDP debug tips[¶](#udp-debug-tips)
If you suspect UDP datagrams are being dropped because of a too-small kernel buffer (the plugin is too slow to drain the buffer, which fills up and drops frames), check the drop column of:

```
cat /proc/net/udp
```

The default kernel buffer size is given by:

```
cat /proc/sys/net/core/rmem_max
```

The default value (131071) is often too small. A value large enough for 100 images is, for instance:

```
net.core.rmem_max = 507217408
```
###### Possible problems with network adapters[¶](#possible-problems-with-network-adapters)
**List of known to work adapters**
Embedded motherboard card on optiplex 980:
> * Intel Corporation 82578DM Gigabit Network Connection (rev 05)
**List of non working adapters**
Intel pro 1000 on PCI card (82541GI) (debian 7 & 9):
> * Intel Corporation 82541GI Gigabit Ethernet Controller
> * Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
###### Possible problems with Chillers[¶](#possible-problems-with-chillers)
Symptoms: stripy images.
The goal is to set up your temperature settings so that the Peltier runs full time at maximum power.
If the Peltier is regulating the temperature, stripes appear in the images.
An easy way is to set an unreachable -50 °C set-point for the detector and let it stabilise at whatever temperature it can reach, given the chiller setting.
The chiller is supposed to be set at 16 °C. Going below requires well-controlled hutch humidity.
#### PointGrey[¶](#pointgrey)
##### Introduction[¶](#introduction)
> “Point Grey is a world-leading designer and manufacturer of innovative, high-performance digital cameras for industrial, life science, and traffic applications. We offer a unique and comprehensive portfolio of USB 3.0, GigE, FireWire, USB 2.0 and Camera Link products known for their outstanding quality, ease of use, and unbeatable price-performance.”
The Lima module has been tested only with this GigE camera model:
> * Blackfly 1024x768 (model BFLY-PGE-05S2M)
##### Prerequisite[¶](#prerequisite)
First, you have to install the PointGrey *FlyCapture* SDK. We only tested it on debian6, using SDK version 2.3.19 (the latest one compatible with the debian6 libc).
The PointGrey python module needs at least the lima core module.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_POINTGREY=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you good knowledge regarding camera features within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera has to be initialized using the `PointGreyCamera` class. The default constructor needs at least the serial number of your camera in order to set up the network connection.
In addition, one can provide both `packet_size` and `packet_delay` parameters. By default no value is passed.
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations due to the camera and SDK features. We only provide here extra information for a better understanding of the capabilities of PointGrey cameras.
* HwDetInfo
getPixelSize(): the method just returns -1; it has to be implemented in a future version.
get/setImageType(): the plugin only supports Bpp8 and Bpp16
* HwSync
get/setTriggerMode(): depending on the camera model, some cameras cannot support any trigger mode. Otherwise the only implemented modes are IntTrig and ExtTrigSingle. IntTrigMult is normally a mandatory mode (for any camera) and will be implemented in a future version.
###### Optional capabilities[¶](#optional-capabilities)
None has been implemented for this camera plugin.
###### Specific control parameters[¶](#specific-control-parameters)
Some specific parameters are available within the camera hardware interface. Those parameters should be used carefully and one should refer to the camera SDK (or user's guide) documentation for a better understanding (a short usage sketch follows these lists).
* get/setPacketSize()
* get/setPacketDelay()
* get/setGain()
* get/setAutoGain()
* getGainRange()
The following parameters can break the synchronisation with the LIMA HwSync layer by changing the camera internal exposure time.
* get/setAutoExpTime()
* get/setFrameRate()
* get/setAutoFrameRate()
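As an illustration, a hedged sketch of tuning the GigE packet parameters through the hardware interface (the numeric values are placeholders; suitable ones depend on your network and MTU):

```
from Lima import PointGrey

# camera serial number as in the How to use example below (placeholder)
cam = PointGrey.Camera(13125072)
hwint = PointGrey.Interface(cam)

# GigE tuning; placeholder values, e.g. jumbo frames with a small inter-packet delay
hwint.setPacketSize(9000)
hwint.setPacketDelay(400)
```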
##### Network Configuration[¶](#network-configuration)
* Depending on your network infrastructure, you will need to configure a fixed IP address for the camera or use a DHCP setup instead.
The Linux SDK provides a configuration tool called `GiGEConfigCmd`. The Windows SDK version provides a graphical tool, `GigEConfigurator.exe`.
* Then in the PointGrey Tango device set the property `camera_serial` using the camera serial number (printed on the camera sticker).
* If you are running the server with a Linux kernel >= 2.6.13, you should add the following line to `/etc/security/limits.conf`; with it, the acquisition thread will run in real-time mode:
```
USER_RUNNING_DEVICE_SERVER - rtprio 99
```
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
from Lima import PointGrey
from Lima import Core
import time

cam = PointGrey.Camera(13125072)
hwint = PointGrey.Interface(cam)
control = Core.CtControl(hwint)
acq = control.acquisition()

# configure some hw parameters
hwint.setAutoGain(True)

# setting new file parameters and autosaving mode
saving = control.saving()

pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 10 ms exposure and 100 frames
acq.setAcqExpoTime(0.01)
acq.setNbImages(100)

control.prepareAcq()
control.startAcq()

# wait for last image (#99) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != 99:
    time.sleep(.01)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = control.ReadImage(0)
```
#### Prosilica[¶](#prosilica)
##### Introduction[¶](#introduction)
AVT offers a large choice of FireWire and GigE cameras for machine vision, computer vision and other industrial or medical applications. Cameras by AVT and Prosilica include sensitive machine vision sensors (CCD and CMOS, VGA to 16 Megapixels) and fit a wide range of applications.
The Lima module has been tested with color and B/W GigE cameras.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_PROSILICA=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you good knowledge regarding camera features within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized by creating a `Prosilica::Camera` object. The constructor sets the camera with default parameters; only the IP address or hostname of the camera is mandatory.
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations due to the camera and SDK features. Only restrictions on capabilities are documented here.
* HwDetInfo
getCurrImageType/getDefImageType(): it can change if the video mode change (see HwVideo capability).
setCurrImageType(): It only supports Bpp8 and Bpp16.
* HwSync
get/setTrigMode(): the only supported modes are IntTrig, IntTrigMult and ExtTrigMult.
###### Optional capabilities[¶](#optional-capabilities)
In addition to the standard capabilities, we make the choice to implement some optional capabilities which are supported by the SDK. Video and Binning are available.
* HwVideo
The prosilica cameras are pure video devices, so only video formats are supported for images:
**Color cameras ONLY**
+ BAYER_RG8
+ BAYER_RG16
+ RGB24
+ BGR24
**Color and Monochrome cameras**
+ Y8
Use the get/setMode() methods of the `Video` object (i.e. CtControl::video()) to read or set the format.
* HwBin
There is no restriction for the binning up to the maximum size.
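For instance, binning can be set through the standard image interface; a hedged sketch following the same pattern as the Rayonix HS example later in this document:

```
from Lima import Core

# assuming ct = Core.CtControl(hwint) as in the example below
image = ct.image()
image.setBin(Core.Bin(2, 2))  # any binning factor up to the maximum size
```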
##### Configuration[¶](#configuration)
* First you have to setup ip address of the Prosilica Camera with `CLIpConfig` located in `camera/prosilica/sdk/CLIpConfig`
* list of all cameras available : `CLIpConfig -l` (If you do not see any camera, that’s bad news!)
* finally set the IP address: `CLIpConfig -u UNIQUE_NUMBER -s -i 169.254.X.X -n 255.255.255.0 -m FIXED` (It's an example!)
* Then in the Prosilica Tango device set the property `cam_ip_address` to the address previously set.
That’s all….
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
from Lima import Prosilica
from Lima import Core
import time

cam = Prosilica.Camera("192.169.1.1")
hwint = Prosilica.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set video and test video
video = ct.video()
video.setMode(Core.RGB24)
video.startLive()
video.stopLive()
video_img = video.getLastImage()

# set and test acquisition

# setting new file parameters and autosaving mode
saving = ct.saving()

pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

acq.setAcqExpoTime(0.1)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.01)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### MarCCD[¶](#marccd)
##### Introduction[¶](#introduction)
The SX165 features a round, 165 mm diameter active area, and a versatile, high resolution CCD chip. It is the ideal X-ray detector for research applications with both synchrotrons and rotating anode X-ray sources.
##### Prerequisite[¶](#prerequisite)
The MarCCD software server should be started on the MarCCD host computer, by running the command:
```
$ marccd -r
```
Then you can launch your lima/marccd client on another host, as the MarCCD server can be reached over the network.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_MARCCD=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you good knowledge regarding camera features within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
There are 4 parameters to be filled by your Lima client:
> * The IP address or hostname (ip_address tango property) of the host where the marccd server is running
> * The port (port_number tango property) of the marccd server process
> * The detector target path (image_path tango property): the path where the marccd image files will be saved
> * Reader timeout: in ms, the timeout after which the plugin will be in fault if no marccd image file is present
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations according to some programmer's choices. We only provide here extra information for a better understanding of the capabilities of the MarCCD camera.
* HwDetInfo
> + Max image size is: 4096 * 4096
> + 16-bit unsigned image type is supported
* HwSync
> + supported trigger types are:
> > - IntTrig
###### Optional capabilities[¶](#optional-capabilities)
* HwBin
> + 2 * 2
> + 4 * 4
> + 8 * 8
>
* HwRoi
TODO
##### Configuration[¶](#configuration)
No specific hardware configuration is needed.
##### How to use[¶](#how-to-use)
Here is the list of accessible functions to configure and use the MarCCD detector:
```
void getDetectorImageSize(Size& size);
void setImagePath(const std::string& path);
const std::string& getImagePath(void);
void setImageFileName(const std::string& imgName);
const std::string& getImageFileName();
void setImageIndex(int newImgIdx);
int getImageIndex();
int getFirstImage();
bool isStopSequenceFinished();
void saveBGFrame(bool);
void setBeamX(float);
float getBeamX();
void setBeamY(float);
float getBeamY();
void setDistance(float);
float getDistance();
void setWavelength(float);
float getWavelength();
```
#### Rayonix HS camera[¶](#rayonix-hs-camera)
##### Introduction[¶](#introduction)
The MX-HS series from Rayonix incorporates the new, exclusive HS frame-transfer technology for high speed X-ray data collection without compromising resolution or data quality. The result is a new type of high speed and ultra-low noise area detector that delivers the highest performance available for X-ray diffraction applications.
The Rayonix MX-HS detectors are ideal for taking advantage of high brilliance synchrotron sources, or for any other high frame rate application. Examples include: high throughput protein crystallography, Laue diffraction, time-resolved or static small-angle X-ray scattering (SAXS), wide-angle X-ray scattering (WAXS), powder diffraction, X-ray computed tomography (CT), X-ray imaging, and coherent diffraction imaging (CDI). With no count rate limitation, these detectors are also ideal for XFEL applications.
The Lima module has been tested only with the following model:
> * MX170-HS (2x2 modules)
##### Prerequisite[¶](#prerequisite)
The Rayonix HS detector is delivered with its own control computer, a powerful machine embedding at least 8 GB of RAM, a dual 4-core CPU (8 cores) and a GPU card for the online image correction (background, flatfield …).
The computer runs Red Hat Enterprise Linux 6 (64 bits).
The rayonix SDK is preinstalled on the detector node under the directory `/opt/rayonix`.
There is no special prerequisite, you can test that the device works properly by running the rayonix GUI, `caxpure`.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_RAYONIXHS=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera has to be initialized using the RayonixHsCamera class. The default constructor does not need any input parameter.
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, with some limitations due to the camera and SDK features. We only provide here extra information for a better understanding of the capabilities.
* HwDetInfo
The detector is set to full image size at startup which means a binning of 1x1.
Note
The recommended binning for most experiments is 2x2.
* HwSync
> * The minimum latency time is 1 ms.
> * The supported trigger modes depend on the chosen frame mode. They are:
> + IntTrig
> + IntTrigMult
> + ExtTrigSingle
> + ExtTrigMult (only for SINGLE frame mode)
> + ExtGate (only for SINGLE frame mode)
> + ExtTrigReadout (only for FAST_TRANSFER frame mode).
###### Optional capabilities[¶](#optional-capabilities)
* HwBin
The supported hardware binning are 2x2, 3x3, 4x4, 5x5, 6x6, 7x7, 8x8, 9x9 and 10x10.
By increasing the binning factor you can increase the readout speed from 2.6 fps to 140 fps, which corresponds respectively to pixel sizes of 44 µm and 440 µm.
* HwShutter
The Rayonix HS detectors provide 2 output channels; one can choose a different source for each (see the specific control parameters for more details about the output source control). For the SHUTTER source, both opening and closing delays can be set.
The Rayonix HS shutter capability only supports two modes (see the sketch after this list):
> + ShutterAutoFrame
> + ShutterManual
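A hedged sketch of selecting one of these modes through Lima's generic shutter interface (method and enum names assumed from the standard CtShutter API, not Rayonix-specific):

```
from Lima import Core

# assuming control = Core.CtControl(hwint) as in the How to use example below
shutter = control.shutter()
shutter.setMode(Core.ShutterAutoFrame)  # open/close automatically around each frame
```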
###### Specific control parameters[¶](#specific-control-parameters)
Some specific parameters are available within the camera hardware interface. Those parameters should be used carefully and one should refer to the camera SDK (or user's guide) documentation for a better understanding.
* get/setFrameTriggerType(type): signal type for the frame trigger input (channel #1)
* get/setSequenceGateSignalType(type): signal type for the gate input (channel #2), The supported signal types:
> * OPTO
> * OPTO_INVERTED
> * CMOS
> * CMOS_PULLDOWN
> * CMOS_PULLUP
> * CMOS_PULLDOWN_INVERTED
> * CMOS_PULLUP_INVERTED
> * SOFTWARE
* get/setOutputSignalType(channel, type): the signal type for the output channel (CHANNEL_1 or CHANNEL_2)
* get/setOutputSignalID(channel, id): the source id for the output channel, possible sources are:
+ ID_SHUTTER
+ ID_INTEGRATE
+ ID_FRAME
+ ID_LINE
+ ID_SHUTTER_OPENING
+ ID_SHUTTER_CLOSING
+ ID_SHUTTER_ACTIVE
+ ID_TRIGGER_RISE_WAIT
+ ID_TRIGGER_RISE_ACK
+ ID_TRIGGER_FALL_WAIT
+ ID_TRIGGER_FALL_ACK
+ ID_TRIGGER_2_RISE_WAIT
+ ID_TRIGGER_2_RISE_ACK
+ ID_INPUT_FRAME
+ ID_INPUT_GATE
* get/setElectronicShutterEnabled(): enable or disable the electronic shutter
* get/setCoolerTemperatureSetpoint(): the cooler temperature set-point
* get/setSensorTemperatureSetpoint(): the sensor temperature set-point
* get/setSensorTemperature(): the detector measured temperature
* get/setCooler(): stop or start the cooler controller
* get/setVacuumValve(): close or open the vacuum valve
* get/setFrameMode(): modes are SINGLE or FAST_TRANSFER.
Warning
in FAST_TRANSFER mode the latency time is disabled and has a fixed value of 1 ms, which corresponds to the readout time. In addition, the supported trigger modes depend on the frame mode. The list of supported trigger modes is given earlier in this section.
##### Configuration[¶](#configuration)
###### Cabling[¶](#cabling)
The detector head should be connected to the detector computer on the cameralink and USB links. You must connect the USB on the PCI board (not the motherboard ones) and the cameralink on the first channel, the top connector.
###### Cooling[¶](#cooling)
For optimal dark-current conditions the detector has to be cooled down: the sensor temperature set-point should be at -120 °C and the cooler temperature set-point at -90 °C. And of course the cooler controller should be started.
##### How to use[¶](#how-to-use)
This is a simple python test program:
```
from Lima import RayonixHs
from Lima import Core
import time

cam = RayonixHs.Camera()
hwint = RayonixHs.Interface(cam)
control = Core.CtControl(hwint)
acq = control.acquisition()

# configure some hw parameters
sens_temp = hwint.getSensorTemperature()
cool_temp = hwint.getCoolerTemperatureSetpoint()
if sens_temp > -50:
    print " Oops, detector is not cooled down, temp = ", sens_temp

# setting new file parameters and autosaving mode
saving = control.saving()

pars = saving.getParameters()
pars.directory = '/somewhere/'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# set a new binning to increase the frame rate
image = control.image()
image.setBin(Core.Bin(2, 2))

# now ask for 10 ms exposure and 100 frames
acq.setAcqExpoTime(0.01)
acq.setNbImages(100)

control.prepareAcq()
control.startAcq()

# wait for last image (#99) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != 99:
    time.sleep(1)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = control.ReadImage(0)
```
#### SlsDetector camera[¶](#slsdetector-camera)
##### Introduction[¶](#introduction)
The PSI/SLS Detector Group has developed a family of X-ray detectors: Mythen, Pilatus, Gotthard, Eiger, Moench, Jungfrau, among others. Most of them are controlled through Ethernet interfaces, with optional dedicated data link(s). A common protocol has been developed to control these detectors, based on the *slsDetector* class. A separate software entity receives and dispatches the data: *slsReceiver*.
The SlsDetector LIMA plugin instantiates the necessary software objects to perform data aquisitions with the detectors supported by the slsDetectorsPackage.
The current implementation only works with the PSI/Eiger detectors.
##### Prerequisite[¶](#prerequisite)
The *slsDetectorPackage-v2.3.x* is needed by the SlsDetector LIMA plugin. As explained in
[Installation of Eiger computer at ESRF](index.html#document-camera/slsdetector/doc/installation), the *slsDetectorPackage* is included as a submodule in the SlsDetector camera plugin. It will be automatically compiled and installed during the LIMA build procedure.
In addition to that, a *configuration file*, containing the commands necessary to initialise both the *slsDetector* and *slsReceiver* instances, is required.
The library protocol uses Unix System-V IPC shared memory blocks to exchange information between processes.
The segments, referred to by keys matching hex *000016xx*, must be owned by the user running the plugin,
if it is not *root*. The following command, which removes the existing segments, must be run by the segments’ owner (or *root*) so they can be deleted/created by another user:
```
ipcs -m | \
grep -E '^0x000016[0-9a-z]{2}' | \
awk '{print $2}' | while read m; do \
ipcrm -m $m; \
done
```
###### High-performance Acquisitions[¶](#high-performance-acquisitions)
High-performance acquisitions require a specific backend computer setup.
Please refer to the [Installation of Eiger computer at ESRF](index.html#document-camera/slsdetector/doc/installation).
##### Installation & Module configuration[¶](#installation-module-configuration)
* Follow the steps indicated in [Installation of Eiger computer at ESRF](index.html#document-camera/slsdetector/doc/installation)
As a reference, see:
* linux_installation
* linux_compilation
* [PyTango Device Server](index.html#tango-installation)
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
In order to help people understand how the camera plugin has been implemented in LImA, this section provides some important information about the developer's choices.
###### Camera initialisation[¶](#camera-initialisation)
The SlsDetector plugin exports two kinds of classes: a generic *SlsDetector::Camera* class, with the common interface to the *slsDetector* and *slsReceiver* classes, and detector-specific classes, like *SlsDetector::Eiger*, which manage the particularities of each model.
First, the *SlsDetector::Camera* must be instantiated with the configuration file. Once the connection to the detector is established, a specific class is created depending on the detected type:
```
cam = SlsDetector.Camera(config_fname)
if cam.getType() == SlsDetector.Camera.EigerDet:
eiger = SlsDetector.Eiger(cam)
else:
raise RuntimeError("Non-supported type: %s" % cam.getType())
hw_inter = SlsDetector.Interface(cam)
ct = Core.CtControl(hw_inter)
```
The raw images returned by the *slsReceiver* class might need to be reconstructed, like in the case of the PSI/Eiger detector. A LImA software reconstruction task must be then created from the LImA plugin and registered to the *Core::CtControl* layer:
```
if cam.getType() == SlsDetector.Camera.EigerDet:
    corr = eiger.createCorrectionTask()
    ct.setReconstructionTask(corr)
```
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, with limitations due to the detector-specific features and some programmer's choices. We do not explain here the standard Lima capabilities, but you can find in this section useful information on the SlsDetector-specific features.
* HwDetInfo
TODO
* HwSync
The following trigger modes are currently implemented:
> * IntTrig
> * ExtTrigSingle
> * ExtTrigMult
> * ExtGate
The minimum *latency_time* and the *max_frame_rate* are automatically updated depending on the *PixelDepth* (4, 8, 16, 32), the *ClockDiv* (Full-, Half-, Quarter-, SuperSlow-Speed),
and the *ReadoutFlags* (Parallel, Non-Parallel).
###### Optional capabilities[¶](#optional-capabilites)
In addition to the standard capabilities, the following optional capabilities are not currently implemented:
* HwShutter
*Not implemented*
* HwRoi
*Not implemented*
* HwBin
*Not implemented*
##### Configuration[¶](#configuration)
The main configuration consists in providing the correct *config file* to the *slsDetector API*.
As mentioned before, the file is a list of commands accepted by *sls_detector_put*, and it should also work with the *slsDetectorGui* application.
Two important parameters define the image frame dimension:
* PixelDepth:
+ 4 bit (not implemented yet)
+ 8 bit
+ 16 bit
+ 32 bit
* RawMode:
If set to *True*, the image is exported to LiMA as given by the Receiver(s), without any software reconstruction.
##### How to use[¶](#how-to-use)
The LimaCCDs Tango server provides a complete interface to the SlsDetector plugin so feel free to test.
For a quick test one can use Python; this is a short code example working with the PSI/Eiger detector:
```
from Lima import SlsDetector
from Lima import Core
import time
import sys

config_fname = sys.argv[1]

cam = SlsDetector.Camera(config_fname)
if cam.getType() != SlsDetector.Camera.EigerDet:
    raise RuntimeError("Non-supported type: %s" % cam.getType())

eiger = SlsDetector.Eiger(cam)
hw_inter = SlsDetector.Interface(cam)
ct = Core.CtControl(hw_inter)
corr = eiger.createCorrectionTask()
ct.setReconstructionTask(corr)
acq = ct.acquisition()

# setting new file parameters and autosaving mode
saving = ct.saving()

pars = saving.getParameters()
pars.directory = '/tmp'
pars.prefix = 'test_slsdetector_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 0.2 sec. exposure and 10 frames
acq.setAcqExpoTime(0.2)
acq.setAcqNbFrames(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)

# cleanup in good order
import gc
del acq; gc.collect()
del ct; gc.collect()
del corr; gc.collect()
del eiger; gc.collect()
del hw_inter; gc.collect()
del cam; gc.collect()
```
A more complete **test_slsdetector_control.py** Python script can be found under the *camera/slsdetector/test* directory.
#### Ueye[¶](#ueye)
##### Introduction[¶](#introduction)
Industrial Cameras for digital imaging and visualization (USB,GigE).
home site: <http://www.ids-imaging.com/>
##### Installation & Module configuration[¶](#installation-module-configuration)
First, you have to install the Ueye SDK. See the SDK `README` provided in the ueye module.
Then, follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_UEYE=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for new detector is driven by the LIMA framework but the developer has some freedoms to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized by creating a `Ueye::Camera` object. The constructor sets the camera with default parameters; only the video address (e.g. 0) of the camera is mandatory.
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the camera and SDK features. Only the restrictions on capabilities are documented here.
* HwDetInfo
getCurrImageType/getDefImageType(): it can change if the video mode change (see HwVideo capability).
setCurrImageType(): It only supports Bpp8 and Bpp16.
* HwSync
get/setTrigMode(): the only supported modes are IntTrig, IntTrigMult, ExtTrigSingle and ExtTrigMult.
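As a generic reminder (this is the standard Lima API rather than anything plugin specific), one of the supported trigger modes listed above can be selected through the acquisition control; a minimal sketch:

```
from Lima import Core

# acq is the CtAcquisition object returned by ct.acquisition()
# (see the full example in the How to use section below)
acq.setTriggerMode(Core.IntTrigMult)
```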
###### Optional capabilities[¶](#optional-capabilites)
In addition to the standard capabilities, we make the choice to implement some optional capabilities which are supported by the SDK. **Video** and Binning are available.
* HwVideo
The ueye cameras are pure video devices, so the following video formats are supported:
**For color cameras ONLY**
+ BAYER_RG8
+ BAYER_RG16
+ BAYER_BG8
+ BAYER_BG16
+ RGB24
+ YUV422
**Color and Monochrome cameras**
+ Y8
+ Y16
Use get/setMode() methods of the *video* object (i.e. CtControl::video()) to read or set the format.
* HwBin
There is no restriction for the binning up to the maximum size.
##### Configuration[¶](#configuration)
See the SDK `README` in `camera/ueye/sdk/` directory.
##### How to use[¶](#how-to-use)
A python code example for testing your camera:
```
from Lima import Ueye
from Lima import Core
import time

# the video address of the camera
cam = Ueye.Camera(0)
hwint = Ueye.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set and test video, supposing we have a color camera !!
video = ct.video()
video.setMode(Core.YUV422)
video.setExposure(0.1)
video.startLive()
video.stopLive()
video_img = video.getLastImage()

# set and test acquisition
# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.TIFF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

acq.setAcqExpoTime(0.1)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Ultra[¶](#ultra)
##### Introduction[¶](#introduction)
> “The ULTRA Detector System enables capture of one dimensional spectra at extremely high rates. Where CCDs were used to capture a line of data at a time, the ULTRA Detector System offers many orders of magnitude faster time framing. ULTRA is a compact turnkey system. The data acquisition system is attached in a compact form factor unit with gigabit Ethernet out and multiple I/O options onboard.”
Ultra Specification[¶](#id1)

| Parameter | Value |
| --- | --- |
| Sustained Spectral Rate | 20 kHz (spectra per second) maximum |
| Frame Period | <500 ns minimum |
| Spectral Sensitivity | 5 – 17 keV, 300 µm thickness (500 µm also available) |
| Output | Gigabit Ethernet |
| Pixel configuration | Si 512 linear strips @ 50 µm pitch |
| ADC Dynamic Range | 16 bit |
| Synchronisation Inputs | TTL or fibre optic |
| Integration Time | <1 µs – 650 µs frames |
| Triggering | External (TTL or fibre) or internal (10 kHz fixed) |
##### Prerequisite[¶](#prerequisite)
The default network setup is (excluding the site network connection):
1 GBit copper network for control communication between the PC and the Ultra box.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_ULTRA=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `Ultra::Camera` object. TCP and UDP socket connections on the 1 GBit port are established.
The Ultra requires the following parameters with the recommended settings:
```
headname = 192.168.1.100
hostname = 192.168.1.103
tcpPort = 7
udpPort = 5005
npixels = 512
```
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the camera. We only provide here extra information for a better understanding of the capabilities of Ultra cameras.
* HwDetInfo
getCurrImageType/getDefImageType(): is set to Bpp16
* HwSync
get/setTrigMode(): the only supported modes are IntTrig, ExtTrigMult and IntTrigMult
###### Optional capabilities[¶](#optional-capabilities)
TODO
#### V4l2 camera[¶](#v4l2-camera)
##### Introduction[¶](#introduction)
V4L2 stands for Video for Linux 2. This plugin aims to interface any v4l2 camera device to the LIMA framework. Some USB webcams have been tested successfully. Video for Linux 2 supports most of the market products; however, you may encounter some limitations using Lima. Please report your problem and/or your patch to [<EMAIL>](mailto:<EMAIL>), and we will be happy to improve this code for you.
Useful links:
> * <http://linuxtv.org>
> * <http://en.wikipedia.org/wiki/Video4Linux>
##### Installation & Module configuration[¶](#installation-module-configuration)
Depending on your Linux flavor, you may need to install/update the v4l2 packages.
The package libv4l-dev is mandatory to compile the Lima v4l2 plugin.
We recommend installing `qv4l2`, a useful Qt GUI tool. With it you can test your device, check the supported video formats and, for instance, whether the camera supports fixed exposure.
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_V4L2=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized by creating a `V4l2::Camera` object. The constructor sets the camera with default parameters; a device path is required, e.g. `/dev/video0`.
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations.
It is mainly a video controller, see `HwVideoCtrlObj`, with a minimum set of features for standard acquisition. For instance, exposure control may not be available if the camera only supports the auto-exposure mode.
* HwDetInfo
getCurrImageType/getDefImageType(): it can change if the video mode change (see HwVideo capability).
setCurrImageType(): It only supports Bpp8 and Bpp16.
* HwSync
get/setTrigMode(): Only IntTrig mode is supported.
###### Optional capabilities[¶](#optional-capabilites)
The V4L2 camera plugin is mostly a **Video** device which provides a limited interface for the acquisition (i.e. exposure, latency, ...).
* HwVideo
The v4l2 cameras are pure video devices; we support the commonly used formats:
**Bayer formats**
+ BAYER_BG8
+ BAYER_BG16
**Luminance + chrominance formats**
+ YUV422
+ UYV411
+ YUV444
+ I420
**RGB formats**
+ RGB555
+ RGB565
+ BGR24
+ RGB24
+ BGR32
+ RGB32
**Monochrome formats**
+ Y8
+ Y16
+ Y32
+ Y64
Use the get/setMode() methods of the *video* object (i.e. CtControl::video()) for accessing the video format.
The Lima plugin will initialise the camera to a *preferred* video format by choosing one of the formats the camera supports, following the ordered list above.
##### Configuration[¶](#configuration)
Simply plug your camera (USB device or other interface) into your computer; it should be automatically detected and a new device file is created, like `/dev/video0`. The new device may be owned by `root:video`, so another user cannot access it. In that case you should update `/etc/group` to add that user to the video group.
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
from Lima import v4l2
from Lima import Core
import time

# the V4l2 device path
cam = v4l2.Camera('/dev/video0')
hwint = v4l2.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set and test video
video = ct.video()
# print which preferred format lima has selected
print(video.getMode())
video.startLive()
video.stopLive()
video_img = video.getLastImage()

# set and test an acquisition
# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.TIFF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 10 frames
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Ximea[¶](#ximea)
##### Introduction[¶](#intoduction)
Ximea is a manufacturer of an extremely diversified and highly modular camera family. It offers multiple choices of combining sensors and interfaces. Together with minimal latencies and CPU load, the cameras are a perfect fit for embedded vision and multi-camera applications. Thanks to flat flex cabling, the board-level and semi-housed variants allow integration in tight spaces and close proximity between cameras.
The plugin described here aims to provide full Ximea camera functionality for Lima.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_XIMEA=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Prerequisites[¶](#prerequisites)
In order to use the Ximea plugin, Ximea Linux Software Package needs to be installed on the target machine. The package can be downloaded from the Ximea website (<https://www.ximea.com/support/documents/4>).
The installation process is pretty straightforward:
```
tar xvzf XIMEA_Linux_SP.tgz
cd package
./install -pcie
```
Use the `-pcie` option to install with support for PCI Express cameras, if this is not needed you can just run `./install`.
**Important notes**:
* Software Package versions `4.21.24`, `4.21.26` and `4.21.30` have compatibility problems when running on Ubuntu 20.04. It is nevertheless advised to always use the newest version of the Software Package, which at the time of writing is `4.23.02`.
* The XIMEA kernel module is not DKMS-enabled, which means it needs to be recompiled if a new kernel is installed. In general, running the `./install` command again should be sufficient. In case of problems, try to recompile the module manually (example for PCIe cameras):
```
cd /opt/XIMEA/src/ximea_cam_pcie
make clean
make PWD=$(pwd)
make install PWD=$(pwd)
insmod ximea_cam_pcie.ko
```
If further clarification is needed please refer to the Software Package’s `README` file.
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `Ximea::Camera` class. The Camera constructor starts up the camera and loads the default startup configuration.
There are many hardware parameters you can set; refer to the Ximea documentation for good practice.
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the camera. We only provide here extra information for a better understanding of the capabilities of Ximea cameras.
* HwDetInfo
getPixelSize(): will always return a 10 µm x 10 µm pixel size for unknown cameras (the only camera known at the moment is the MX377MR, whose pixel size is also 10 µm x 10 µm).
* HwSync
get/setTrigMode(): supported modes are IntTrig, IntTrigMult and ExtTrigMult.
###### Optional capabilities[¶](#optional-capabilites)
In addition to the standard capabilities, we make the choice to implement some optional capabilities which are supported by this detector:
* HwBin
Supported modes: 1x1, 2x2, 4x4
* HwRoi
* HwEvent
##### How to use[¶](#how-to-use)
This is a python code example of a simple acquisition:
```
import time
from Lima import Core, Ximea

cam = Ximea.Camera(0)
hw = Ximea.Interface(cam)
ct = Core.CtControl(hw)

# configure saving
sav = ct.saving()
sav.setSavingMode(Core.CtSaving.AutoFrame)
sav.setFormat(Core.CtSaving.EDF)
sav.setPrefix('test')
sav.setOverwritePolicy(Core.CtSaving.Overwrite)
sav.setDirectory('/tmp')

# set configuration, see documentation for details
ct.image().setBin(Core.Bin(2, 2))

ct.prepareAcq()
ct.startAcq()

while ct.getStatus().AcquisitionStatus != Core.AcqReady:
    time.sleep(0.1)

img = ct.ReadBaseImage(0)
```
#### Xpad[¶](#xpad)
##### Introduction[¶](#introduction)
The XPAD detector is based on the photon counting technology providing a quasi noiseless imaging as well as a very high dynamic range and a fast frame rate (500 images/s).
This is a detector stemming from the collaboration of Soleil, CPPM and ESRF(D2AM). It is now supported by the ImXPAD company.
This plugin supports the following models:
> * S70,
> * S140,
> * S340,
> * S540
The XPAD runs under Linux, with the help of a PCI express board from PLDA.
##### Prerequisite[¶](#prerequisite)
The host where the PCI Express board is installed should have the PLDA driver installed.
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `Xpad::Camera` object. One should pass the Xpad type as a string to the constructor. Possible values are:
> * “IMXPAD_S70”,
> * “IMXPAD_S140”,
> * “IMXPAD_S340”,
> * “IMXPAD_S540”
Synchronous or asynchronous acquisition should be selected with a call to `setAcquisitionType()`; a short sketch follows.
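A minimal sketch of this selection (the numeric values for the acquisition type are an assumption; check the plugin header for the real constants):

```
from Lima import Xpad

# the constructor takes the Xpad type as a string (see the list above)
cam = Xpad.Camera("IMXPAD_S70")

# hypothetical values: 0 = synchronous, 1 = asynchronous (assumption)
cam.setAcquisitionType(0)
```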
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations according to some programmer's choices. We only provide here extra information for a better understanding of the capabilities of the Xpad camera.
####### HwDetInfo[¶](#hwdetinfo)
> * 16 or 32 bit unsigned type are supported
> * the size of the image will depend on the type of Xpad
####### HwSync[¶](#hwsync)
Trigger type supported are:
> * IntTrig
> * ExtTrigSingle
> * ExtGate: one external trigger starts N internal gates (gates being configured by software)
> * ExtTrigMult: N external triggers start N internal gates (gates being configured by software)
###### Optional capabilities[¶](#optional-capabilities)
There are no optional capabilities.
##### Configuration[¶](#configuration)
No specific hardware configuration is needed.
##### How to use[¶](#how-to-use)
Here is a list of accessible functions to configure and use the Xpad detector:
```
//! Set all the config G
void setAllConfigG(const std::vector<long>& allConfigG);
//! Set the Acquisition type between synchrone and asynchrone
void setAcquisitionType(short acq_type);
//! Load of flat config of value: flat_value (on each pixel)
void loadFlatConfig(unsigned flat_value);
//! Load all the config G
void loadAllConfigG(unsigned long modNum, unsigned long chipId, unsigned long* config_values);
//! Load a wanted config G with a wanted value
void loadConfigG(const std::vector<unsigned long>& reg_and_value);
//! Load a known value to the pixel counters
void loadAutoTest(unsigned known_value);
//! Save the config L (DACL) to XPAD RAM
void saveConfigL(unsigned long modMask, unsigned long calibId, unsigned long chipId, unsigned long curRow, unsigned long* values);
//! Save the config G to XPAD RAM
void saveConfigG(unsigned long modMask, unsigned long calibId, unsigned long reg, unsigned long* values);
//! Load the config to detector chips
void loadConfig(unsigned long modMask, unsigned long calibId);
//! Get the modules config (Local aka DACL)
unsigned short*& getModConfig();
//! Reset the detector
void reset();
//! Set the exposure parameters
void setExposureParameters(unsigned Texp, unsigned Twait, unsigned Tinit,
                           unsigned Tshutter, unsigned Tovf, unsigned mode, unsigned n, unsigned p,
                           unsigned nbImages, unsigned BusyOutSel, unsigned formatIMG, unsigned postProc,
                           unsigned GP1, unsigned GP2, unsigned GP3, unsigned GP4);
//! Calibrate over the noise Slow and save dacl and configg files in path
void calibrateOTNSlow(const std::string& path);
//! Calibrate over the noise Medium and save dacl and configg files in path
void calibrateOTNMedium(const std::string& path);
//! Calibrate over the noise High and save dacl and configg files in path
void calibrateOTNHigh(const std::string& path);
//! Upload the calibration (dacl + config) that is stored in path
void uploadCalibration(const std::string& path);
//! Upload the wait times between each image in case of a sequence of images
//! (Twait from setExposureParameters should be 0)
void uploadExpWaitTimes(unsigned long* pWaitTime, unsigned size);
//! Increment the ITHL
void incrementITHL();
//! Decrement the ITHL
void decrementITHL();
//! Set the specific parameters (dead time, init time, shutter, ...)
void setSpecificParameters(unsigned deadtime, unsigned init,
                           unsigned shutter, unsigned ovf,
                           unsigned n, unsigned p,
                           unsigned busy_out_sel,
                           bool geom_corr,
                           unsigned GP1, unsigned GP2, unsigned GP3, unsigned GP4);
//! Set the Calibration Adjusting number of iteration
void setCalibrationAdjustingNumber(unsigned calibration_adjusting_number);
```
#### Xspress3[¶](#xspress3)
##### Introduction[¶](#introduction)
Many solid state detectors are not limited by their intrinsic rate capability, but by the readout system connected to them.
The Quantum Detectors Xspress 3 was developed to maximise the throughput and resolution of such detectors and remove the bottleneck at the readout stage. With output count rates of over 3 Mcps, this detector is easily 10X faster than the systems many users have on their beamlines. Xspress 3 can open up the beamline to much faster data collection, its dynamic range can reduce the number of scans required and save large amounts of time with attenuation selection.
The XSPRESS3 system contains a Xilinx Virtex-5 FPGA with two embedded PowerPC processors. PPC1 manages the DMA engines.
PPC2 runs the Xilinx microkernel and communicates with the Intel 64-bit Linux server PC over 1 GBit Ethernet, TCP sockets.
Bulk data and event lists to be histogrammed are sent from the firmware to the Server PC by 10G Ethernet, UDP.
The Software Development Toolkit (SDK) is provided for Linux only.
##### Prerequisite[¶](#prerequisite)
Unpack the SDK distribution into either the `camera/xspress3/sdk` directory or `/usr/local/lib`. Then ensure the libraries are in the `LD_LIBRARY_PATH`.
The SDK has shared libraries which have been compiled on a recent Linux kernel with g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50); check first that you have the right kernel and libc available by compiling the test program.
The default network setup is (excluding the site network connection):
1 GBit copper network for control communication between the PC and the XSPRESS3 box.
With more than one XSPRESS3 box connected, this network uses an Ethernet switch. A private network with 64 addresses is allocated:
```
$ ifconfig eth1
eth1 Link encap:Ethernet HWaddr d4:ae:52:7d:5f:84
inet addr:192.168.0.1 Bcast:192.168.0.63 Mask:255.255.255.192
inet6 addr: fe80::d6ae:52ff:fe7d:5f84/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:1567 errors:0 dropped:5766 overruns:0 frame:0
TX packets:158 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:173937 (169.8 KiB) TX bytes:37252 (36.3 KiB)
Interrupt:48 Memory:da000000-da012800
```
A 10 GBit fibre network for data transfer, point to point, with 4 addresses allocated.
With more than one XSPRESS3 box there would be multiple 10G ports on the PC, with multiple 4-address-range subnets:
```
$ ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:07:43:05:7c:65
inet addr:192.168.0.65 Bcast:192.168.0.67 Mask:255.255.255.252
inet6 addr: fe80::207:43ff:fe05:7c65/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:702 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:154963 (151.3 KiB)
Interrupt:41 Memory:dd7fe000-dd7fefff
```
Note the carefully picked subnet masks and the MTU of 9000. We then have a script that should be executed automatically at boot:
```
$ cat /etc/init.d/xspress3.sh
#!/bin/bash
#
# static-arp This is to register a static ARP address in the arp table at boot
#
# Kept as simple as possible hopefully this will auto register the associated
# MAC with the private network address to allow the machine to communicate with the
# test boards for xspress3
# Derived from work by <NAME>, by <NAME>
PATH=/sbin:/bin:/usr/bin:/usr/sbin
arp -i eth2 -s 192.168.0.66 02:00:00:00:00:00
#route -v add -host 192.168.0.66 eth2
# Setting default and max buffer sizes for networking.
sysctl -w net.core.rmem_max=1073741824
sysctl -w net.core.rmem_default=1073741824
```
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_XSPRESS3=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
In order to help people understand how the camera plugin has been implemented in LImA, this section provides some important information about the developer's choices.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `Xspress3::Camera` object. A TCP socket connection on the 1GBit port is established and optionally a UDP connection on the 10Gbit port (depends on boolean constructor flag noUDP). The ROI’s are reset, the first card in a multicard system or the single card, is set to be the master and the run flags are set to initiate Scaler and Histogram modes. The register and configuration settings (as optimised by QD on delivery) are uploaded to the Xspress3.
The Xspress3 requires the following parameters with the recommended settings:
```
nbCards = 1 (number of Xspress3 boxes)
maxFrames = 16384
baseIPaddress = "192.168.0.1"
basePort = 30123
baseMACaddress = "02.00.00.00.00.00"
nbChans = 4/6/8 (depends on the firmware)
createScopeModule = true/false
scopeModuleName = "a-name-of-your-choice"
debug = 0 is off, 1 is on, 2 is verbose
cardIndex = 0 (for a 1 xspress system)
noUDP = true/false
directoryName = "directory containing xspress3 configuration settings"
```
The `Xspress3::Camera` constructor sets the camera with default parameters for the number of pixels (4096), the image type (Bpp32), the number of frames (1) and the trigger mode (IntTrig).
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the camera and SDK features. We only provide here extra information for a better understanding of the capabilities of Xspress3 cameras.
* HwDetInfo
+ getCurrImageType/getDefImageType(): is set to Bpp32
+ setCurrImageType(): will not change the image type.
+ getMaxImageSize/getDetectorImageSize(): is defined as number of pixels + number of scalers x number of channels, i.e. (4096+8) x 4 for a 4 channel xspress3 system
+ getPixelSize(): is hardcoded to be 1x1
+ getDetectorModel(): reads and reports the xspress3 firmware version.
* HwSync
get/setTrigMode(): the only supported modes are IntTrig, ExtGate and IntTrigMult
###### Optional capabilities[¶](#optional-capabilities)
None
##### Data Format[¶](#data-format)
The raw data is saved in .edf file format. Each frame is saved as it completes. To allow Lima to save both histogram and scaler data, the latter is appended to the histogram data.
```
histogram scaler
[0] [0 ... 4095, 4096 ... 5003] channel 0
[1] [0 ... 4095, 4096 ... 5003] channel 1
[2] [0 ... 4095, 4096 ... 5003] channel 2
[3] [0 ... 4095, 4096 ... 5003] channel 3
```
* `Camera::readScalers()`: returns the raw scaler data from the Lima buffers from the specified frame and channel
* `Camera::readHistogram()`: returns the raw histogram data from the Lima buffers from the specified frame and channel
* `Camera::setUseDtc()` and `Camera::getUseDtc()`: set to true will dead time correct the data returned from the Lima buffers (default is false)
* `Camera::setUseHW()` and `Camera::getUseHw()`: set to true will return raw histogram data from the H/W data buffers, including the current frame.
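A minimal sketch of splitting this appended layout back into histogram and scaler parts, assuming the frame has been read with `ct.ReadImage()` and that its `.buffer` attribute exposes a `(nb_channels, 4096 + nb_scalers)` numpy array:

```
import numpy as np

NB_BINS = 4096  # histogram bins per channel (see the layout above)

# im is the object returned by ct.ReadImage(frame_nb)
frame = np.asarray(im.buffer)
histogram = frame[:, :NB_BINS]  # per-channel MCA spectra
scalers = frame[:, NB_BINS:]    # per-channel scaler values appended at the end
```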
##### How to use[¶](#how-to-use)
See example in the test directory. Playback data should be extracted from the tarball.
#### XH camera[¶](#xh-camera)
##### Introduction[¶](#introduction)
> “XH is the worlds first 50μm pitch Ge Strip detector which has been designed specifically for Energy Dispersive EXAFS (EDE). Carrying on from the CLRC development of XSTRIP1, a Si based detector system, XH makes use of amorphous germanium (a-Ge) contact technology produced by LBNL2 and readout ASICs developed by CLRC. XH is designed to address the issues of detection efficiency and radiation damage that limit the effectiveness of the original XSTRIP system.”
The system is controlled from its own PC or via a TCP/IP connection from a beamline computer system.
The Lima plugin has been tested only at ESRF for a unique XH detector on BM23 and ID24 beamlines.
##### Prerequisite Linux OS[¶](#prerequisite-linux-os)
The plugin only works on Linux and has been tested on RedHat E4 i386 and Debian 6 x86_64.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_XH=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
TODO
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the camera and SDK features. We only provide here extra information for a better understanding of the capabilities of XH cameras.
* HwDetInfo
> TODO
* HwSync
> TODO
###### Optional capabilities[¶](#optional-capabilities)
In addition to the standard capabilities, we make the choice to implement some optional capabilities supported by the SDK. A shutter control, a hardware ROI and a hardware binning are available.
* HwShutter
> TODO
* HwRoi
> TODO
* HwBin
> TODO
##### Configuration[¶](#configuration)
> TODO
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
from Lima import Xh
from Lima import Core
import time

# args: hostname, port, config name
cam = Xh.Camera('xh-detector', 1972, 'config_xhx3')
hwint = Xh.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# configure some hw parameters
# set some low level configuration

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Zwo (Zhen Wang Optical)[¶](#zwo-zhen-wang-optical)
##### Introduction[¶](#introduction)
ZWO offers a large choice of cameras for astronomical applications. The cameras are connected via USB. The delivered driver library is available for Linux,
Mac, and Windows.
The LIMA module has been tested with the following models on Linux:
```
- ASI 120MM Mini
- ASI 178MM-Cool
- ASI 294MM Pro
- ASI 2600MM Pro
```
##### Prerequisite[¶](#prerequisite)
##### Installation & Module configuration[¶](#installation-module-configuration)
* first follow the steps for the Linux installation (see [Build and Install](index.html#build-installation))
* or follow the steps for the Windows installation (see [Build and Install](index.html#build-installation))
The minimum configuration file is *config.inc* :
```
COMPILE_CORE=1 COMPILE_SIMULATOR=0 COMPILE_SPS_IMAGE=1 COMPILE_ESPIA=0
COMPILE_FRELON=0 COMPILE_MAXIPIX=0 COMPILE_PILATUS=0 COMPILE_BASLER=0
COMPILE_PROSILICA=0 COMPILE_ROPERSCIENTIFIC=0 COMPILE_MYTHEN=0 COMPILE_ADSC=0
COMPILE_UEYE=0 COMPILE_XH=0 COMPILE_XSPRESS3=0 COMPILE_XPAD=0
COMPILE_PERKINELMER=0 COMPILE_ANDOR=0 COMPILE_PHOTONICSCIENCE=0 COMPILE_PCO=0
COMPILE_MARCCD=0 COMPILE_POINTGREY=0 COMPILE_IMXPAD=0 COMPILE_DEXELA=0
COMPILE_ZWO=1 COMPILE_RAYONIXHS=0 COMPILE_CBF_SAVING=0 COMPILE_NXS_SAVING=0
COMPILE_FITS_SAVING=0 COMPILE_EDFGZ_SAVING=0 COMPILE_TIFF_SAVING=0 COMPILE_CONFIG=1
LINK_STRICT_VERSION=0
export COMPILE_CORE COMPILE_SPS_IMAGE COMPILE_SIMULATOR \
       COMPILE_ESPIA COMPILE_FRELON COMPILE_MAXIPIX COMPILE_PILATUS \
       COMPILE_BASLER COMPILE_PROSILICA COMPILE_ROPERSCIENTIFIC COMPILE_ADSC \
       COMPILE_MYTHEN COMPILE_UEYE COMPILE_XH COMPILE_XSPRESS3 COMPILE_XPAD COMPILE_PERKINELMER \
       COMPILE_ANDOR COMPILE_PHOTONICSCIENCE COMPILE_PCO COMPILE_MARCCD COMPILE_DEXELA COMPILE_ZWO \
       COMPILE_POINTGREY COMPILE_IMXPAD COMPILE_RAYONIXHS COMPILE_CBF_SAVING COMPILE_NXS_SAVING \
       COMPILE_FITS_SAVING COMPILE_EDFGZ_SAVING COMPILE_TIFF_SAVING COMPILE_CONFIG \
       LINK_STRICT_VERSION
```
* start the compilation (see [Build and Install](index.html#build-installation))
* finally, for the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation)
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
In order to help people understand how the camera plugin has been implemented in LImA, this section provides some important information about the developer's choices.
###### Camera initialisation[¶](#camera-initialisation)
There is nothing specific.
The available cameras must be enumerated. A selected camera can then be initialized.
(Note that at the moment only one camera is handled by the plugin.)
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations according to some programmer's choices. We only provide here extra information for a better understanding of the capabilities of the Zwo camera.
* HwDetInfo
TODO
* HwSync
TODO
###### Optional capabilities[¶](#optional-capabilites)
In addition to the standard capabilities, we make the choice to implement some optional capabilities:
TODO
* BinCtrl
TODO
* BufferCtrl
TODO
* FlipCtrl
TODO
* RoiCtrl
TODO
* ShutterCtrl
TODO
* SavingCtrl
TODO
* VideoCtrl
TODO
##### Configuration[¶](#configuration)
TODO
##### How to use[¶](#how-to-use)
The LimaCCDs tango server provides a complete interface to the zwo plugin so feel free to test.
For a quick test one can use Python; here is a short code example:
```
from Lima import Zwo
from Lima import Core
import time

cam = Zwo.Camera(0)
hwint = Zwo.Interface(cam)
control = Core.CtControl(hwint)
acq = control.acquisition()

# setting new file parameters and autosaving mode
saving = control.saving()
pars = saving.getParameters()
pars.directory = '/tmp/'
pars.prefix = 'testsimul_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

control.prepareAcq()
control.startAcq()

# wait for last image (#9) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = control.ReadImage(0)
```
### Windows and Linux[¶](#windows-and-linux)
#### Andor SDK2 camera plugin[¶](#andor-sdk2-camera-plugin)
##### Introduction[¶](#introduction)
Andor Technology offers a large catalogue of scientific cameras. Covered scientific applications are low light imaging, spectroscopy, microscopy, time-resolved and high energy detection.
Andor is providing a unique Software Development Tool (SDK) for both Windows and Linux, supporting different interface buses such as USB, CameraLink and also some specific acquisition PCI board.
The Lima module has been tested only with these camera models:
* IKon-M and IKon-L (USB interface, Linux OS Debian 6)
* IKon-L (USB interface, Windows XP - 32 bits)
##### Prerequisites[¶](#prerequisites)
###### Linux[¶](#linux)
First, you have to install the Andor Software Development Kit (SDK) in the default path (/usr/local). For our tests, we used the SDK for Linux version **V2.91.30001.0** and ran the install script `install_andor`, for which option 5 (All USB Cameras) was selected; the default installation is made under `/usr/local/` with:
> * `/usr/local/include`, header files
> * `/usr/local/lib`, library files
> * `/usr/local/etc/andor`, configuration files
The Linux SDK 2.91 has shared libraries which have been compiled on a recent Linux kernel; check first that you have the right kernel and libc available by compiling one of the example programs available under examples/console.
The Andor python module needs at least the Lima core module.
For the USB camera the SDK uses libusb under Linux; check first that your system is equipped with the libusb package, otherwise you will not be able to compile the Andor Lima plugin.
###### Windows XP - 32 bits[¶](#windows-xp-32-bits)
First, you have to install the Andor Software Development Kit (SDK) in the default path (`C:\\Program Files (x86)\\Andor iKon\\Drivers`).
Add the location of the file `\\Lima\\camera\\andor\\sdk\\msvc\\bin\\ATMCD32D.DLL` to your `PATH` environment variable.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_ANDOR=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `AndorCamera` object. The `AndorCamera()` constructor sets the camera with default parameters for Preamplifier Gain, Vertical Shift Speed and the ADC/Horizontal Speed.
These parameters are optimized for the fastest mode, which means the maximum gain, the "fastest recommended" VSSpeed (i.e. as returned by the GetFastestRecommendedVSSpeed() SDK function call) and the ADC with the fastest horizontal speed.
All the parameters can be set and read using the corresponding methods; the default values (max speeds and gain) can be applied by passing -1:
> set/getPGain()
> set/getVsSpeed()
> set/getADCSpeed()
Some other methods are available but may not be supported, depending on which camera model you are using:
> set/getHighCapacity()
> set/getFanMode()
> set/getBaselineClamp()
The above parameters only support enumerated values.
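For instance, a minimal sketch restoring the defaults through the interface object used in the example further below (assuming the setters follow the accessor names listed above):

```
# pass -1 to apply the default values (max gain, fastest speeds)
hwint.setPGain(-1)
hwint.setVsSpeed(-1)
hwint.setADCSpeed(-1)
```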
###### Std capabilities[¶](#std-capabilities)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the camera and SDK features. We only provide here extra information for a better understanding of the capabilities of Andor cameras.
* HwDetInfo
getCurrImageType/getDefImageType(): the methods call the SDK GetBitDepth() function to resolve the image data type. The bit depth corresponds to the AD channel dynamic range, which depends on the selected ADC channel.
In our experience with IKon detectors we only get a Bpp16 dynamic range, but the methods can return Bpp8 and Bpp32 as well.
setCurrImageType(): this method does not change the image type, which is fixed to 16 bpp.
* HwSync
get/setTrigMode(): the only supported modes are IntTrig, ExtTrigSingle, ExtGate and IntTrigMult
###### Optional capabilities[¶](#optional-capabilities)
In addition to the standard capabilities, we make the choice to implement some optional capabilities which are supported by the SDK and the I-Kon cameras. A Shutter control, a hardware ROI and a hardware Binning are available.
* HwShutter
setMode(): only ShutterAuto and ShutterManual modes are supported
* HwRoi
There is no restriction for the ROI setting
* HwBin
There is no restriction for the Binning but the maximum binning is given by the SDK function GetMaximumBinning() which depends on the camera model
##### Configuration[¶](#configuration)
Plug your USB camera into any USB port of the computer, that's all!
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
from Lima import Andor
from Lima import Core
import time

cam = Andor.Camera("/usr/local/etc/andor", 0)
hwint = Andor.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# configure some hw parameters
hwint.setTemperatureSP(-30)
hwint.setCooler(True)
# ... wait here for cooling

# set some low level configuration
hwint.setPGain(2)
hwint.setCooler(True)
hwint.setFanMode(cam.FAN_ON_FULL)
hwint.setHighCapacity(cam.HIGH_SENSITIVITY)
hwint.setBaselineClamp(cam.BLCLAMP_ENABLED)
hwint.setFastExtTrigger(False)
hwint.setShutterLevel(1)

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# set accumulation mode
acq_pars = acq.getPars()
# acqMode: 0-normal, 1-concatenation, 2-accumulation
acq_pars.acqMode = 2
acq_pars.accMaxExpoTime = 0.05
acq_pars.acqExpoTime = 1
acq_pars.acqNbFrames = 1
acq.setPars(acq_pars)
# here we should have 21 accumulated images per frame
print(acq.getAccNbFrames())

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Basler camera[¶](#basler-camera)
##### Introduction[¶](#introduction)
Basler’s area scan cameras are designed for industrial users who demand superior image quality and an excellent price/performance ratio. You can choose from an area scan portfolio that includes monochrome or color models with various resolutions, frame rates, and sensor technologies.
The Lima module has been tested only with these **GigE** and **Usb3** camera models:
> * Scout
> * Pilot
> * Ace
> * Ace 2
The Lima module has been tested with Pylon SDK versions **5.0.1** and now **6.X** since conda packages **1.10.X**.
Monochrome and color cameras are supported with these SDK versions.
##### Installation & Module configuration[¶](#installation-module-configuration)
First, you have to install the Basler SDK *Pylon* to the default path `/opt/pylon`.
Then, follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_BASLER=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized by creating a `Basler::Camera` object. The Basler camera can be identified either by:
* IP/hostname (examples: `ip://192.168.5.2`, `ip://white_beam_viewer1.esrf.fr`) or
* Basler serial number (example: `sn://12345678`) or
* Basler user name (example: `uname://white_beam_viewer1`)
In case an IP is given, the `ip://` scheme prefix is optional.
Only the camera ID is mandatory.
Small example showing possible ways to initialize:
```
from Lima import Basler
from Lima import Core

# From an IP (notice the ip:// prefix is optional)
cam = Basler.Camera('192.168.5.2')
# From a basler serial number
cam = Basler.Camera('sn://12345678')
# From a basler user name
cam = Basler.Camera('uname://white_beam_viewer1')
```
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations due to the camera and SDK features. Only the restrictions on capabilities are documented here.
* HwDetInfo
getCurrImageType/getDefImageType(): it can change if the video mode change (see HwVideo capability).
setCurrImageType(): It only supports Bpp8 and Bpp16.
* HwSync
get/setTrigMode(): the supported modes are IntTrig, IntTrigMult, ExtTrigMult and ExtGate.
###### Optional capabilities[¶](#optional-capabilites)
In addition to the standard capabilities, we make the choice to implement some optional capabilities which are supported by the SDK. **Video**, Roi and Binning are available.
* HwVideo
The basler cameras are pure video devices, so the following video formats are supported:
**Color cameras ONLY**
+ BAYER_RG8
+ BAYER_BG8
+ BAYER_RG16
+ BAYER_BG16
+ RGB24
+ BGR24
+ RGB32
+ BGR32
+ YUV411
+ YUV422
+ YUV444
**Color and Monochrome cameras**
+ Y8
+ Y16
Use get/setMode() methods of the *video* object (i.e. CtControl::video()) to read or set the format.
* HwBin
There is no restriction for the binning up to the maximum size.
* HwRoi
There is no restriction for the Roi up to the maximum size.
##### Configuration[¶](#configuration)
* First you need to decide how you want to reference your camera (by IP/hostname, serial number or user name)
* Second, you have to set up the IP address of the Basler camera by using *IpConfigurator* (`/opt/pylon/bin/IpConfigurator`)
or by matching the MAC address with a chosen IP in the DHCP configuration. If you plan to reference the camera by user name you should also set it in *IpConfigurator*. If you plan to reference the camera by serial number you should note down the serial number that appears on the label of your camera.
* Then in the Basler Tango device, set the property *camera_id* according to the type of ID you choose
(see [Basler Tango device](index.html#lima-tango-basler) for more details)
* If you are running the server with linux kernel >= 2.6.13, you should add this line into */etc/security/limits.conf*. With this line, the acquisition thread will be in real time mode.
```
USER_RUNNING_DEVICE_SERVER - rtprio 99
```
##### How to use[¶](#how-to-use)
This is a python code example for a simple test:
```
from Lima import Basler
from Lima import Core
import time

# args: camera ip or hostname, frame-transmission delay,
#       inter-packet delay, packet-size
cam = Basler.Camera('192.168.1.1', 0, 0, 8000)
hwint = Basler.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set and test video
video = ct.video()
video.setMode(Core.RGB24)
video.startLive()
video.stopLive()
video_img = video.getLastImage()

# set and test an acquisition
# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.TIFF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setNbImages(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Tucsen / Dhyana[¶](#tucsen-dhyana)
The Dhyana95 uses backside-illuminated sCMOS thinned chip technology to avoid light interference from the wiring layer,
thereby increasing the pixel area and improving the photoelectric conversion rate.
##### Introduction[¶](#intoduction)
This plugin controls a TUCSEN Dhyana (95) camera under Windows, using the TUCam (32 bits) SDK 1.0.0.9 library.
Linux is supported as well, using the TUCam SDK (x86_64) for Linux, release 1.0.0.0.
To get the SDK please contact your camera seller.
##### Prerequisite[¶](#prerequisite)
The Dhyana 95 only supports the USB3 interface. On Linux, USB devices can only be accessed by the root user.
To allow any user to control the camera you should manually change the udev settings for this particular USB device.
As root, create a new file /etc/udev/rules.d/99-tucsen.rules and add the following udev rule:
```
ATTR{idVendor}=="5453", MODE="0666"
```
Then you can simply unplug your camera, restart the computer and plug the camera back in.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_DHYANA=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
There is no initialisation to perform; just be sure your camera is switched on and connected to the computer via the USB cable.
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities.
* HwDetInfo
> It only supports Bpp16.
* HwSync
Supported trigger mode are:
+ IntTrig
+ ExtTrigSingle
+ ExtTrigMult
+ ExtGate
###### Optional capabilities[¶](#optional-capabilites)
* Rolling (standard) vs. Global shutter
The camera can support different trigger modes; please refer to the documentation for more details.
The camera plugin provides commands to change the trigger (shutter) mode from standard (rolling) to global. Another mode called "synchronous" is also available.
* Cooling
+ Cooling method : Peltier cooling
+ Cooling temperature : Forced air (Ambient at +25 Celsius): -10 Celsius
+ The TUCam SDK allows accessing the temperature target (R/W).
* HwRoi
Roi parameters (x, y, width, height): thanks to Lima you can set any Roi, but to activate a real hardware Roi the camera only supports an x offset that is a multiple of 4 and a width that is a multiple of 8 (see the sketch after this list).
* HwBin
There is no hardware support for binning.
* HwShutter
There is no shutter control.
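A minimal sketch of aligning a requested Roi to these hardware constraints (only the multiples of 4 and 8 come from the constraint above; the rounding policy is an assumption):

```
from Lima import Core

def aligned_roi(x, y, width, height):
    # x offset must be a multiple of 4 and width a multiple of 8 (see above);
    # round x down and width up, which slightly enlarges the Roi
    x_al = (x // 4) * 4
    w_al = ((x - x_al + width + 7) // 8) * 8
    return Core.Roi(x_al, y, w_al, height)

# control is the Core.CtControl built on the Dhyana interface (see below)
# control.image().setRoi(aligned_roi(10, 0, 100, 256))
```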
##### Configuration[¶](#configuration)
No specific hardware configuration is needed.
##### Getting started[¶](#getting-started)
For a quick test one can use the python binding, here is a short code example:
```
from Lima import Dhyana
from Lima import Core
import time

cam = Dhyana.Camera()

# set the cooling temperature target
cam.setTemperatureTarget(-10)

# get the hardware and control interfaces
hwint = Dhyana.Interface(cam)
control = Core.CtControl(hwint)

# get the acquisition control
acq = control.acquisition()

# set new file parameters and autosaving mode
saving = control.saving()
pars = saving.getParameters()
pars.directory = '/tmp/'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setAcqNbFrames(10)

control.prepareAcq()
control.startAcq()

# wait for last image (#9) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = control.ReadImage(0)
```
#### Iris[¶](#iris)
##### Introduction[¶](#introduction)
This is the official Lima iris camera. It has been made to help you get started with Lima and to test/play with Lima without any hardware.
##### Prerequisite[¶](#prerequisite)
There is no special prerequisite; the iris plugin can be compiled and tested on both Linux and Windows platforms.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_IRIS=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `Camera` object. The `Camera()` constructor takes an optional mode parameter.
###### Standard capabilities[¶](#standard-capabilities)
This section describes the standard capabilities offered by the camera.
###### Optional capabilities[¶](#optional-capabilities)
This section describes the optional capabilities offered by the camera.
##### Configuration[¶](#configuration)
This section describes the configuration steps, if any.
##### Getting started[¶](#getting-started)
For a quick test one can use the python binding, here is a short code example:
```
from Lima import Simulator
from Lima import Core
import time

def test_mode_generator(cam, nb_frames_prefetched=0):
    if nb_frames_prefetched:
        cam.setMode(Simulator.Camera.MODE_GENERATOR_PREFETCH)
        fb = cam.getFrameGetter()
        fb.setNbPrefetchedFrames(nb_frames_prefetched)
    else:
        cam.setMode(Simulator.Camera.MODE_GENERATOR)
        fb = cam.getFrameGetter()
    # add a peak at 10,10 with fwhm=23 and max=1000
    p1 = Simulator.GaussPeak(10, 10, 23, 1000)
    fb.setPeaks([p1])

def test_mode_loader(cam, nb_frames_prefetched=0):
    if nb_frames_prefetched:
        cam.setMode(Simulator.Camera.MODE_LOADER_PREFETCH)
        fb = cam.getFrameGetter()
        test = fb.getNbPrefetchedFrames()
    else:
        cam.setMode(Simulator.Camera.MODE_LOADER)
        fb = cam.getFrameGetter()
    # set the file pattern
    fb.setFilePattern(b'input\\test_*.edf')

cam = Simulator.Camera()

# select one of the modes to test
#test_mode_generator(cam)
#test_mode_generator(cam, 10)
#test_mode_loader(cam)
test_mode_loader(cam, 100)

# get the hardware and control interfaces
hwint = Simulator.Interface(cam)
control = Core.CtControl(hwint)

# get the acquisition control
acq = control.acquisition()

# set new file parameters and autosaving mode
saving = control.saving()
pars = saving.getParameters()
pars.directory = '/tmp/'
pars.prefix = 'testsimul_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setAcqNbFrames(10)

control.prepareAcq()
control.startAcq()

# wait for last image (#9) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = control.ReadImage(0)
```
#### RoperScientific / Princeton[¶](#roperscientific-princeton)
##### Introduction[¶](#introduction)
This plugin controls a RoperScientific/Princeton camera under Windows and Linux, using the PVCAM (Photometrics Virtual Camera Access Method) libraries.
It is in production at SOLEIL under Windows and has been tested at DESY under Linux.
Model used at SOLEIL: PI-MTE:2048B
##### Prerequisite[¶](#prerequisite)
The RoperScientific is connected to a specific computer with a PCI board. The Lima/RoperScientific client must run on this PC.
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section gives you the relevant information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `RoperScientific::Camera` object. The camera number (as an integer, e.g. 0) should be given to the constructor.
###### Std capabilities[¶](#std-capabilites)
This plugin has been implemented with respect to the mandatory capabilities, but with some limitations according to some programmer's choices. We only provide here extra information for a better understanding of the capabilities of the RoperScientific camera.
* HwDetInfo
> * Max image size is : 2048 * 2048
> * 16 bit unsigned type is supported
* HwSync
Trigger type supported are:
> + IntTrig
> + ExtTrigSingle
> + ExtTrigMult
> + ExtGate
###### Optional capabilities[¶](#optional-capabilites)
* HwBin:
> + all values are accepted
>
* HwRoi
###### Specific control parameters[¶](#specific-control-parameters)
Some specific parameters are available within the camera hardware interface. Those parameters should be used carefully; one should refer to the camera SDK (or user's guide) documentation for a better understanding.
* getTemperature()
* set/getTemperatureSetPoint()
* set/getGain()
* set/getInternalAcqMode()
> * “FOCUS”
> * “STANDARD”
* set/getSpeedTableIndex()
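A minimal usage sketch of these specific parameters (values are illustrative only):

```
# cam is a RoperScientific.Camera instance (see the example below)
cam.setInternalAcqMode("FOCUS")  # or "STANDARD"
cam.setSpeedTableIndex(0)
cam.setTemperatureSetPoint(-20)
print(cam.getTemperature())
```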
##### Configuration[¶](#configuration)
No specific hardware configuration is needed.
##### How to use[¶](#how-to-use)
Here is the list of accessible functions to configure and use the RoperScientific detector:
```
void setGain(long);
long getGain();
void setFullFrame(rgn_type* roi);
void setBinRoiParameters(rgn_type* roi);
void setSpeedTableIndex(unsigned);
unsigned getSpeedTableIndex(void);
const std::string& getADCRate(void);
double getTemperature();
double getTemperatureSetPoint();
void setTemperatureSetPoint(double temperature);
```
Code example in python:
```
from Lima import RoperScientific
from Lima import Core
import time

cam = RoperScientific.Camera(0)
hwint = RoperScientific.Interface(cam)
ct = Core.CtControl(hwint)
acq = ct.acquisition()

# set some configuration
cam.setTemperatureSetPoint(0)
cam.setAdcRate(0) # 0-1MHz, 1-100KHz

# setting new file parameters and autosaving mode
saving = ct.saving()
pars = saving.getParameters()
pars.directory = '/buffer/lcb18012/opisg/test_lima'
pars.prefix = 'test1_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setAcqNbFrames(10)

ct.prepareAcq()
ct.startAcq()

# wait for last image (#9) ready
lastimg = ct.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = ct.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = ct.ReadImage(0)
```
#### Simulator[¶](#simulator)
##### Introduction[¶](#introduction)
This is the official Lima camera simulator. It has been made to help you get started with Lima and to test/play with Lima without any hardware.
The simulator provides two modes of operations:
> * **Frame Builder** generates frames with diffraction patterns; a set of parameters can be tuned to change those patterns, for instance the number and position of Gaussian peaks;
> * **Frame Loader** loads frames from files.
Both modes have a prefetched variant, where the frames are prefetched in memory before the acquisition is started. This feature allows simulating high frame rate detectors.
##### Prerequisite[¶](#prerequisite)
There is no special prerequisite, the simulator can be compiled and tested on both Linux and Windows platforms.
##### Installation & Module configuration[¶](#installation-module-configuration)
Follow the generic instructions in [Build and Install](index.html#build-installation). If using CMake directly, add the following flag:
```
-DLIMACAMERA_SIMULATOR=true
```
For the Tango server installation, refer to [PyTango Device Server](index.html#tango-installation).
##### Initialisation and Capabilities[¶](#initialisation-and-capabilities)
Implementing a new plugin for a new detector is driven by the LIMA framework, but the developer has some freedom to choose which standard and specific features will be made available. This section is supposed to give you the correct information regarding how the camera is exported within the LIMA framework.
###### Camera initialisation[¶](#camera-initialisation)
The camera will be initialized within the `Camera` object. The `Camera()` constructor takes an optional mode parameter.
This simulator plugin architecture is based on the `FrameGetter` interface, which has multiple implementations.
The `SimulatorCamera` class provides a specific member function `SimulatorCamera::getFrameGetter()` that returns the `FrameGetter` instance.
Depending on the current mode, `FrameGetter` can be dynamically casted to either:
> * `FrameBuilder`
> * `FrameLoader`
> * `FramePrefetcher<FrameBuilder>`
> * `FramePrefetcher<FrameLoader>`
The class `FrameBuilder` can be parametrized with:
> * `setFrameDim()`: set a new frame dimension (max. is 1024x1024)
> * `setPeaks()`: set a list of GaussPeak positions (GaussPeak struct -> x, y, fwhm, max)
> * `setPeakAngles()`: set a list of GaussPeak angles
> * `setFillType()`: set the image fill type Gauss or Diffraction or Empty (default is Gauss)
> * `setRotationAxis()`: set the rotation axis policy Static, RotationX or RotationY (default is RotationY)
> * `setRotationAngle()`: set a peak rotation angle in deg (default is 0)
> * `setRotationSpeed()`: set a peak rotation speed in deg/frame (default is 0)
> * `setGrowFactor()`: set a growing factor (default is 1.0)
> * `setDiffractionPos()`: set the source displacement position x and y (default is center)
> * `setDiffractionSpeed()`: set the source displacement speed sx and sy (default is 0,0)
The class `FrameLoader` can be parametrized with:
> * `setFilePattern()`: set the file pattern used to load the frames; it may include a globbing pattern, i.e. `input/test_*.edf`
The `template <typename FrameGetterImpl> FramePrefetcher` variants have an additional parameter:
> * `setNbPrefetchedFrames()`: set the number of frames to prefetch in memory
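For illustration, here is a minimal Python sketch of tuning the `FrameBuilder` through the setters listed above; the `FrameDim` constructor signature and the fill-type enum path are assumptions to be checked against the binding, not verified API:
```
from Lima import Core, Simulator

cam = Simulator.Camera()
cam.setMode(Simulator.Camera.MODE_GENERATOR)

fb = cam.getFrameGetter()
# Setter names come from the list above; the FrameDim constructor and the
# Diffraction enum path are assumptions.
fb.setFrameDim(Core.FrameDim(512, 512, Core.Bpp32))  # max. is 1024x1024
fb.setFillType(Simulator.FrameBuilder.Diffraction)
fb.setRotationSpeed(2)  # peaks rotate by 2 deg/frame
```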
###### Standard capabilities[¶](#standard-capabilities)
This plugin has been implemented with respect to the standard capabilities of a camera plugin, but with some limitations according to some programmer’s choices. We only provide here extra information for a better understanding of the capabilities of the simulator camera.
> * `HwDetInfo`: The default (and max.) frame size is about 1024x1024-Bpp32, but one can only change the image type by calling `DetInfoCtrlObj::setCurrImageType()`.
> * `HwSync`: IntTrig and IntTrigMult trigger modes are supported. For both exposure time and latency time, min. is 10e-9 and max. is 10e6. ExtTrigSingle and ExtTrigMult are also supported. The camera and the Tango simulator provide an API to manually trigger it.
###### Optional capabilities[¶](#optional-capabilities)
In addition to the standard capabilities, some optional capabilities are implemented:
* `HwShutter`: The simulator only supports ShutterAutoFrame and ShutterManual modes.
* `HwRoi`: There is no restriction for the ROI.
* `HwBin`: Bin 1x1 or 2x2 only.
##### Configuration[¶](#configuration)
No hardware configuration of course!
##### How to use[¶](#how-to-use)
The LimaCCDs tango server provides a complete interface to the simulator plugin so feel free to test.
For a quick test one can use the python binding, here is a short code example:
```
from Lima import Simulator
from Lima import Core
import time

def test_mode_generator(cam, nb_frames_prefetched=0):
    if nb_frames_prefetched:
        cam.setMode(Simulator.Camera.MODE_GENERATOR_PREFETCH)
        fb = cam.getFrameGetter()
        fb.setNbPrefetchedFrames(nb_frames_prefetched)
    else:
        cam.setMode(Simulator.Camera.MODE_GENERATOR)
        fb = cam.getFrameGetter()

    # Add a peak
    p1 = Simulator.GaussPeak(10, 10, 23, 1000)  # peak at 10,10 fwhm=23 and max=1000
    fb.setPeaks([p1])

def test_mode_loader(cam, nb_frames_prefetched=0):
    if nb_frames_prefetched:
        cam.setMode(Simulator.Camera.MODE_LOADER_PREFETCH)
        fb = cam.getFrameGetter()
        test = fb.getNbPrefetchedFrames()
    else:
        cam.setMode(Simulator.Camera.MODE_LOADER)
        fb = cam.getFrameGetter()

    # Set file pattern
    fb.setFilePattern(b'input\\test_*.edf')

cam = Simulator.Camera()

# Select one of the modes to test
#test_mode_generator(cam)
#test_mode_generator(cam, 10)
#test_mode_loader(cam)
test_mode_loader(cam, 100)

# Get the hardware interface
hwint = Simulator.Interface(cam)

# Get the control interface
control = Core.CtControl(hwint)

# Get the acquisition control
acq = control.acquisition()

# Set new file parameters and autosaving mode
saving = control.saving()
pars = saving.getParameters()
pars.directory = '/tmp/'
pars.prefix = 'testsimul_'
pars.suffix = '.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# Now ask for 2 sec. exposure and 10 frames
acq.setAcqExpoTime(2)
acq.setAcqNbFrames(10)

control.prepareAcq()
control.startAcq()

# Wait for last image (#9) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# Read the first image
im0 = control.ReadImage(0)
```
Future Cameras[¶](#future-cameras)
---
### Acknowledgement[¶](#acknowledgement)
Many contributors from the following institutes have added new camera plugins:
> * [ESRF](https://www.esrf.eu/),
> * [SOLEIL](https://www.synchrotron-soleil.fr/),
> * [DESY](http://www.desy.de/),
> * [ALBA](https://www.cells.es/en),
> * [FRMII](https://www.frm2.tum.de),
> * [ANKA](https://www.anka.kit.edu/).
Thank you for your support.
### Under development[¶](#under-development)
During the coming year, several new detector plugins should be released:
* Arinax Bi-zoom (Arinax ltd.)
* Basler SDK Pylon 6.1.X (ESRF)
### Foreseen[¶](#foreseen)
* QHYCCD model Q178-Cool (FRMII)
Python TANGO server[¶](#python-tango-server)
---
This is the python Tango devices server by the ESRF team.
This server provides a main device for the standard camera control, a camera specific device for the camera configuration and a set of “plugin” devices for extra operations or just to provide some specific API for clients.
Thanks to the Lima framework, the control can be achieved through a common server and a set of software operations (Mask, Flatfield, Background, RoiCounter, PeakFinder…) on images as well. The configuration of the detector is done by the specific detector device.
At ESRF we decided to develop the Tango devices only in Python, which implies that all the detector C++ interfaces have been wrapped in Python.
### Main device: LimaCCDs[¶](#main-device-limaccds)
**LimaCCDs** is the generic device and it provides a unique interface to control any supported cameras. One can find below the commands, the attributes and the properties.
To run a LimaCCDs server you will need at least to configure the **LimaCameraType** property. This property is used by the LimaCCDs server to create the proper camera device. Please refer to any camera (e.g. Basler) section for further information.
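For example, a new server instance and its mandatory property can be declared from Python (a sketch; the server and device names are illustrative, and this registration is more commonly done with Jive):
```
import PyTango

# Illustrative names: server instance "simulator", device "id00/limaccds/simulator"
dev_info = PyTango.DbDevInfo()
dev_info.server = "LimaCCDs/simulator"
dev_info._class = "LimaCCDs"
dev_info.name = "id00/limaccds/simulator"

db = PyTango.Database()
db.add_device(dev_info)
db.put_device_property(dev_info.name, {"LimaCameraType": ["Simulator"]})
```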
#### Property[¶](#property)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| AccThresholdCallbackModule | No | “” | Plugin file name which manages threshold, see acc_saturated_* attributes and the *AccSaturated* commands to activate and use this feature |
| BufferMaxMemory | No | 70 | The maximum amount of memory, in percent of the available RAM, that Lima is using to allocate frame buffers. |
| ConfigurationFilePath | No | ~/lima_<serv-name>.cfg | The default configuration file path |
| ConfigurationDefaultName | No | “default” | Your default configuration name |
| InstrumentName | No | “” | The instrument name, e.g. ESRF-ID02 **(*)** |
| LimaCameraType | Yes | N/A | The camera type: e.g. Maxipix |
| MaxVideoFPS | No | 30 | Maximum value for frame-per-second |
| NbProcessingThread | No | 1 | The max number of threads for processing. Can be used to improve performance when more than one task (plugin device) is activated |
| TangoEvent | No | False | Activate Tango Event for counters and new images |
| UserDetectorName | No | “” | A user detector identifier, e.g. frelon-saxs **(*)** |
| ImageOpMode | No | “HardAndSoft” | Configure the image op mode. One of ‘HardOnly’, ‘SoftOnly’, ‘HardAndSoft’ |
**(*)** Properties only used to set meta-data in HDF5 saving format.
#### Commands[¶](#commands)
| **Command name** | **Arg. in** | **Arg. out** | **Description** |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| prepareAcq | DevVoid | DevVoid | Prepare the camera for a new acquisition, has to be called each time a parameter is set. |
| startAcq | DevVoid | DevVoid | Start the acquisition |
| stopAcq | DevVoid | DevVoid | Stop the acquisition after current frame is acquired, and wait for all tasks to finish |
| abortAcq | DevVoid | DevVoid | Abort the acquisition, the current frame is lost |
| setImageHeader | DevVarStringArray: Array of string headers | DevVoid | Set the image header: [0] = “ImageId0 delimiter imageHeader0”, [1] = “ImageId1 delimiter imageHeader1”, … |
| resetCommonHeader | DevVoid | DevVoid | Reset the common header |
| resetFrameHeaders | DevVoid | DevVoid | Reset the frame headers |
| getImage | DevLong: Image number(0-N) | DevVarCharArray: Image data | Return the image data in raw format (char array) |
| getBaseImage | DevLong: Image number(0-N) | DevVarCharArray: Image data | Return the base image data in raw format (char array). Base image is the raw image before processing |
| readImage | DevLong: Image number(0-N) | DevEncoded: Encoded image | Return the image in encoded format of type “**DATA_ARRAY**” (see [DevEncoded](#data-array-encoded)) |
| readImageSeq | DevLongArray: Image number(0-N) list | DevEncoded: Encoded image(S) | Return a stack of images in encoded format of type “**DATA_ARRAY**” (see [DevEncoded](#data-array-encoded)) |
| writeImage | DevLong: Image number(0-N) | DevVoid | Save manually an image |
| readAccSaturatedImageCounter | DevLong: Image number | DevVarUShortArray: Image counter | The image counter |
| readAccSaturatedSumCounter | DevLong: from image id | DevVarLongArray: result | Number of results for each image, sum counter of raw image #0 of image #0, sum counter of raw image #1 of image #0, … |
| setAccSaturatedMask | DevString | DevVoid | Full path of mask file, use empty string (“”) to unset the mask |
| closeShutterManual | DevVoid | DevVoid | Only if the camera has this capability |
| openShutterManual | DevVoid | DevVoid | Only if the camera has this capability |
| reset | DevVoid | DevVoid | Reset the camera to factory setting |
| getPluginDeviceNameFromType | DevString | DevString | Return the device name corresponding to the passed plugin name (e.g. FlatField) |
| configStore | DevVarStringArray: config name, module1, module2, …, moduleN | DevVoid | Store (in memory) a current config with name and for the listed modules (e.g. **Acquisition**, **Image**, **RoiCounters**, **Saving** …). See the *config_available_name* and *config_available_module* attributes for the full list. |
| configApply | DevString: config name | DevVoid | Apply the named config |
| configPop | DevVoid | DevVoid | Pop the named config from the list |
| configDelete | DevVoid | DevVoid | Delete the named config |
| configFileSave | DevVoid | DevVoid | Save all the config into file (see properties for config file name) |
| configFileLoad | DevVoid | DevVoid | Load the configs from file |
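As an illustration, a typical acquisition driven through these commands from a Tango client could look like the following sketch (the device name is hypothetical; the acq_* and last_image_ready attributes are described in the next section):
```
import time
import PyTango

ccd = PyTango.DeviceProxy("id00/limaccds/simulator")  # hypothetical name

ccd.acq_expo_time = 0.1
ccd.acq_nb_frames = 10
ccd.prepareAcq()   # has to be called each time a parameter is set
ccd.startAcq()

# Poll until the last image (#9) is ready, then read it back
while ccd.last_image_ready < 9:
    time.sleep(0.1)
data = ccd.readImage(9)  # DevEncoded "DATA_ARRAY", see the DevEncoded section
```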
#### Attributes[¶](#attributes)
You will find here a long list of attributes; this reflects the richness of the LIMA library. We organized them in modules which correspond to specific functions. A function module is identified by an attribute name prefix (except for informational attributes);
for instance the **Acquisition** module attributes are always named **acq_<attr-name>**. The available modules are:
> * General Information
> * Status (prefix *last_* and *ready_*)
> * Acquisition (prefix *acq_* for most of them sorry)
> * Accumulation (prefix *acc_*)
> * Saving (prefix *saving_*)
> * Image (prefix *image_*)
> * Shutter (prefix *shutter_*)
> * Debug (prefix *debug_*)
> * Video (prefix *video_*)
> * Shared Memory (prefix *shared_memory_*)
> * Configuration (prefix *config_*)
> * Buffer (prefix *buffer_*)
> * Plugin (prefix *plugin_*)
Many attributes are of type DevString and they have a fixed list of possible values. You can get the list by calling the special command
**getAttrStringValueList**. Because a camera cannot support some attribute values, the command getAttrStringValueList will give you the value list for the camera. For instance the attribute *video_mode* supports up to 14 different video formats, but a camera may only support a few of them.
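For instance, to discover which video modes a given camera actually supports (the device name is again hypothetical):
```
import PyTango

ccd = PyTango.DeviceProxy("id00/limaccds/simulator")  # hypothetical name
print(ccd.getAttrStringValueList("video_mode"))
# e.g. ['Y8', 'Y16', ...] depending on the camera
```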
##### General Information[¶](#general-information)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| lima_version | ro | DevString | The lima core library version number |
| lima_type | ro | DevString | LImA camera type: Maxipix, Pilatus, Frelon, Pco, Basler … |
| camera_type | ro | DevString | Like lima_type but in upper-case !! |
| camera_pixelsize | ro | DevDouble[x,y] | The camera pixel size in x and y dimension |
| camera_model | ro | DevString | Camera model returned by the detector layer, e.g. 5x1-TPX1 |
##### Status[¶](#status)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| last_base_image_ready | ro | DevLong | The last base (before treatment) ready |
| last_image_ready | ro | DevLong | The last acquired image number, ready for reading |
| last_image_saved | ro | DevLong | The last saved image number |
| last_image_acquired | ro | DevLong | The last acquired image number |
| last_counter_ready | ro | DevLong | Tell which image counter is last ready |
| ready_for_next_image | ro | DevBoolean | True after a camera readout, otherwise false. Can be used for fast synchronisation with trigger mode (internal or external). |
| ready_for_next_acq | ro | DevBoolean | True after end of acquisition, otherwise false. |
| user_detector_name | rw | DevString | User detector name |
| instrument_name | rw | DevString | Instrument/beamline name |
##### Acquisition[¶](#acquisition)
LImA acquisition time[¶](#id1)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| acq_status | ro | DevString | Acquisition status: Ready, Running, Fault or Configuration |
| acq_status_fault_error | ro | DevString | In case of Fault state, return the error message |
| acq_mode | rw | DevString |
Acquisition mode:* **Single**, default mode one frame per image
* **Concatenation**, frames are concatenated in image
* **Accumulation**, powerful mode to avoid saturation of the pixel, the exposure is shared by multiple frames, see acc_ attributes for more
|
| acq_nb_frames | rw | DevLong | Number of frames to be acquired, Default is 1 frame |
| acq_trigger_mode | rw | DevString |
Trigger mode:* **Internal_trigger**, the software trigger,
start the acquisition immediately after an acqStart() call;
all the acq_nb_frames are acquired in a sequence.
* **External_trigger**, wait for an external trigger signal to start an acquisition for the acq_nb_frames number of frames.
* **External_trigger_multi**, as the previous mode except that each frame needs a new trigger input
(e.g. for 4 frames, 4 pulses are expected)
* **Internal_trigger_multi**, as for internal_trigger except that for each frame startAcq() has to be called once.
* **External_gate**, wait for a gate signal for each frame,
the gate period is the exposure time.
* **External_start_stop**
|
| latency_time | rw | DevDouble | Latency time in seconds between two frame acquisitions; cannot be zero, the minimum time corresponds to the readout time of the detector. |
| valid_ranges | ro | DevDouble[4] | min exposure, max exposure, min latency, max latency |
| concat_nb_frames | rw | DevLong | The nb of frames to concatenate in one image |
| acq_expo_time | rw | DevDouble | The exposure time of the image, Default is 1 second |
##### Accumulation[¶](#accumulation)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| acc_expotime | ro | DevDouble | The effective accumulation total exposure time. |
| acc_nb_frames | ro | DevLong | The calculated accumulation number of frames per image. |
| acc_max_expotime | rw | DevDouble | The maximum exposure time per frame for accumulation |
| acc_time_mode | rw | DevString |
Accumulation time mode:* **Live**, acq_expo_time = acc_live_time
* **Real**, acq_expo_time = acc_dead_time + acc_live_time
|
| acc_dead_time | ro | DevDouble | Total accumulation dead time |
| acc_live_time | ro | DevDouble | Total accumulation live time which corresponds to the detector total counting time. |
| acc_offset_before | rw | DevLong | Set an offset value to be added to each pixel value |
| acc_saturated_active | rw | DevBoolean | To activate the saturation counters (i.e. readAccSaturated commands) |
| acc_saturated_cblevel | rw | DevLong | Set at which level of total saturated pixels the callback plugin (if set with the AccThresholdCallbackModule property) will be called |
| acc_saturated_threshold | rw | DevLong | The threshold for counting saturated pixels |
| acc_threshold_before | rw | DevLong | Set a threshold value to be subtracted from each pixel value |
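To make the relationship between these attributes concrete, here is a small worked example of the accumulation arithmetic as documented above (a sketch of the relationship, not Lima source code):
```
import math

# With acq_expo_time = 1.0 s and acc_max_expotime = 0.1 s, each image
# accumulates 10 frames of 0.1 s each (assuming acc_nb_frames is the
# smallest count keeping the per-frame exposure under the maximum).
acq_expo_time = 1.0
acc_max_expotime = 0.1
acc_nb_frames = int(math.ceil(acq_expo_time / acc_max_expotime))  # -> 10
frame_expo_time = acq_expo_time / acc_nb_frames                   # -> 0.1 s
```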
##### Saving[¶](#saving)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| saving_mode | rw | DevString |
Saving mode:* **Manual**, no automatic saving; a command will be implemented in a future release to be able to save an acquired image.
* **Auto_Frame**, frames are automatically saved according to the saving parameters (see below).
* **Auto_header**, frames are only saved when setImageHeader() is called in order to set header information with image data.
|
| saving_directory | rw | DevString | The directory where to save the image files |
| saving_prefix | rw | DevString | The image file prefix |
| saving_suffix | rw | DevString | The image file suffix |
| saving_next_number | rw | DevLong | The image next number. The full image file name is: /saving_directory/saving_prefix+sprintf(“%04d”,saving_next_number)+saving_suffix |
| saving_format | rw | DevString |
The data format for saving:* `RAW`, save in binary format
* `EDF`, save in ESRF Data Format
* `EDFGZ` (or edf.gz), EDF with Deflate filter compression
* `EDFLZ4` (or edf.lz4), EDF with BS/LZ4 filter compression
* `TIFF`, The famous TIFF format
* `CBF`, save in CBF format (a compressed format for crystallography)
* `HDF5` save in Nexus HDF5 format
* `HDF5GZ` save in Nexus HDF5 format with Deflate filter compression
* `HDF5BS` save in Nexus HDF5 format with BS/LZ4 filter compression
|
| saving_overwrite_policy | rw | DevString |
In case of existing files an overwrite policy is mandatory:* **Abort**, if the file exists the saving is aborted
* **Overwrite**, if the file exists it is overwritten
* **Append**, if the file exists the image is appended to the file
|
| saving_frame_per_file | rw | DevLong | Number of frames saved in each file |
| saving_common_header | rw | DevString[] | Common header with multiple entries |
| saving_header_delimiter | rw | DevString[] | The header delimiters, [0] = key header delimiter, [1] = entry header delimiter, [2] = image number header delimiter. Default: [0] = “=”, [1] = “\n”, [2] = “;” |
| saving_max_writing_task | rw | DevShort | Set the max. tasks for saving file, default is 1 |
| saving_statistics | ro | DevDouble[] | Return stats: saving speed, compression ratio, compression speed and incoming speed (speeds in byte/s) |
| saving_statistics_history | rw | DevLong | Set size of history for stats calculation, default is 16 frames |
| saving_managed_mode | rw | DevString | On some detectors, saving can be managed by the hardware (SDK); you can switch the mode using these attribute values:
* HARDWARE, Lima will not manage the saving but sets up the camera to do the job
* SOFTWARE (default), Lima is managing the saving
|
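As a concrete illustration of the file-name rule given for *saving_next_number* above, the full path is assembled like this (values are illustrative):
```
# Sketch of the documented file-name construction; values are illustrative.
directory, prefix, suffix, next_number = "/tmp", "testsimul_", ".edf", 7
filename = "%s/%s%04d%s" % (directory, prefix, next_number, suffix)
# -> /tmp/testsimul_0007.edf
```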
##### Image[¶](#image)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| image_type | ro | DevString |
Return the current image data type, bit per pixel signed or unsigned:* Bpp8, Bpp8S, Bpp10, Bpp10S, Bpp12, Bpp12S, Bpp14,
* Bpp14S, Bpp16, Bpp16S, Bpp32, Bpp32S , Bpp32F.
|
| image_width | ro | DevLong | Width size of the detector in pixel |
| image_height | ro | DevLong | Height size of the detector in pixel |
| image_sizes | ro | DevULong[4] | Signed(0-unsigned,1-signed), depth(nb bytes), width and height |
| image_max_dim | ro | DevULong[2] | Maximum image dimension, width and height in pixel |
| image_roi | rw | DevLong[4] | Region Of Interest on image, [0] = Begin X, [1] = End X, [2] = Begin Y, [3] = End Y; default ROI is [0,0,0,0] (no ROI) |
| image_bin | rw | DevLong[2] | Binning on image, [0] = binning factor on X, [1] = binning factor on Y. Default binning is 1 x 1 |
| image_flip | rw | DevBoolean[2] | Flip on the image, [0] = flip over X axis, [1] flip over Y axis. Default flip is False x False |
| image_rotation | rw | DevString | Rotate the image: “0”, “90”, “180” or “270” |
##### Shutter[¶](#shutter)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| shutter_ctrl_is_available | ro | DevBoolean | Return true if the camera has a shutter control |
| shutter_mode | rw | DevString |
Synchronization for shutter, modes are available:* **Manual**
* **Auto_frame**, the output signal is activated for each individual frame of a sequence
* **Auto_sequence**, the output signal is activated during the whole sequence
|
| shutter_open_time | rw | DevDouble | Delay (sec.) between the output shutter trigger and the beginning of the acquisition, if not null the shutter signal is set on before the acquisition is started. |
| shutter_close_time | rw | DevDouble | Delay (sec.) between the shutter trigger and the end of the acquisition, if not null the shutter signal is set on before the end of the acquisition. |
| shutter_manual_state | rw | DevString | To open/close manually the shutter (if Manual mode is supported, see shutter_mode) |
##### Debug[¶](#debug)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| debug_module_possible | ro | DevString[] | Return the list of possible debug modules |
| debug_modules | rw | DevString[] |
Set the debug module level of LImA:* “None”
* “Common”
* “Hardware”
* “HardwareSerial”
* “Control”
* “Espia”
* “EspiaSerial”
* “Focla”
* “Camera”
* “CameraCom”
* “Test”
* “Application”
|
| debug_types_possible | ro | DevString[] | Return the list of the possible debug types |
| debug_types | rw | DevString[] |
Set the debug type level of LImA:* “Fatal”
* “Error”
* “Warning”
* “Trace”
* “Funct”
* “Param”
* “Return”
* “Always”
|
##### Video[¶](#video)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| video_active | rw | DevBoolean | Start the video mode (or not) |
| video_live | rw | DevBoolean | Start the video streaming (or not) |
| video_exposure | rw | DevDouble | The video exposure time (can be different to the acq_expo_time) |
| video_gain | rw | DevDouble | The video gain (if supported by the hardware) |
| video_mode | rw | DevString |
The video mode is the video format supported by the camera; it can be:* Y8, grey image 8bits
* Y16, grey image 16bits
* Y32, grey image 32bits
* RGB555, color image RGB 555 encoding
* RGB565, color image RGB 565 encoding
* RGB24, color image RGB 24bits encoding
* RGB32, color image RGB 32bits encoding
* BGR24, color image BGR 24bits encoding
* BGR32, color image BGR 32bits encoding
* BAYER_RG8, color image BAYER RG 8bits encoding
* BAYER_RG16, color image BAYER RG 16bits encoding
* I420, color image I420 (or YUV420) planar encoding
* YUV411, color image YUV411 planar encoding
* YUV422PACKED, color image YUV422 planar encoding packed
* YUV422, color image YUV422 planar encoding
* YUV444, color image YUV444 planar encoding
Depending on your camera, the supported formats can be retrieved using the command **getAttrStringValueList** |
| video_roi | rw | DevLong[4] | A ROI on the video image (independent of the image_roi attribute) |
| video_bin | rw | DevULong[2] | A binning on the video image (independent of the image_bin attribute) |
| video_last_image | rw | DevEncoded | The last video image, in DevEncoded “**VIDEO_IMAGE**” format, and using the video_mode set, see the DevEncoded definition [VIDEO_IMAGE](#video-image-encoded) |
| video_source | rw | DevString | The source for video image, BASE_IMAGE (raw image) or LAST_IMAGE (after soft operation)
Only valid with monochrome or scientific cameras |
| video_last_image_counter | rw | DevLong64 | The image counter |
##### Shared Memory[¶](#shared-memory)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| shared_memory_names | rw | DevString[2] | Firstname and surname of the SPS typed shared memory (default is LimaCCDs,<camera_type>) |
| shared_memory_active | rw | | Activate or not the shared memory. The shared memory is for image display |
##### Config[¶](#config)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| config_available_module | ro | DevString[] | List of possible config modules |
| config_available_name | ro | DevString[] | List of existing config names |
##### Buffers[¶](#buffers)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| buffer_max_memory | rw | DevShort | The maximum amount of memory, in percent of the available RAM, that Lima is using to allocate frame buffers. |
##### Plugin[¶](#plugin)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| plugin_type_list | ro | DevString[] | List of the available plugin types; to get a device name use the **getPluginDeviceNameFromType** command |
| plugin_list | ro | DevString[] | List of the available plugins as couples of (type, device name) |
#### DevEncoded[¶](#devencoded)
##### DATA_ARRAY[¶](#data-array)
The DATA_ARRAY DevEncoded has been invented for special Tango clients like SPEC. It is used by the **readImage** command.
It can only embed raw data (no video data). The supported image formats can be retrieved with the **image_type** attribute (Bpp8, Bpp8S, …, Bpp16, …).
This encoded format is very generic and it supports many different types of data, from scalar to image stack (see the DataArrayCategory enumerate C-type).
The readImage command only supports the *Image* data array category.
The DATA_ARRAY format is composed of a fixed header followed by the raw data. The header is a C-like structure,
with **little-endian** byte order and no alignment:
```
// The DATA_ARRAY definition
struct {
    unsigned int magic = 0x44544159;  // magic key
    unsigned short version;           // version, only 2 supported (since v1.9.5 - 2014)
    unsigned short header_size;       // size of the header
    DataArrayCategory category;       // data array category, see DataArrayCategory enumerate
    DataArrayType data_type;          // data type, see DataArrayType enumerate
    unsigned short endianness;        // 0-little-endian, 1-big-endian
    unsigned short nb_dim;            // number of dimensions (0 to 5 max), e.g. 2 for image
    unsigned short dim[6];            // size for each dimension, e.g. [width,height]
    unsigned int dim_step[6];         // step size in pixel for each dimension, e.g. [1,height]
    unsigned int padding[2];          // 8 bytes of padding (for alignment)
} DATA_ARRAY_STRUCT;
enum DataArrayCategory {
ScalarStack = 0;
Spectrum;
Image;
SpectrumStack;
ImageStack;
};
enum DataArrayType{
DARRAY_UINT8 = 0;
DARRAY_UINT16;
DARRAY_UINT32;
DARRAY_UINT64;
DARRAY_INT8;
DARRAY_INT16;
DARRAY_INT32;
DARRAY_INT64;
DARRAY_FLOAT32;
DARRAY_FLOAT64;
};
```
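As an illustration, the header can be decoded on the client side with Python’s `struct` module. This is a minimal sketch assuming the two enums are serialized as 4-byte integers; the other field sizes follow the struct above:
```
import struct

HEADER_FMT = "<IHHiiHH6H6I2I"  # little-endian, no alignment; enums assumed 4-byte
def parse_data_array(buf):
    fields = struct.unpack_from(HEADER_FMT, buf)
    magic, version, header_size, category, data_type, endianness, nb_dim = fields[:7]
    dim = fields[7:13]
    assert magic == 0x44544159, "not a DATA_ARRAY payload"
    # Raw pixel data follows the header
    return dict(version=version, category=category, data_type=data_type,
                shape=dim[:nb_dim], data=buf[header_size:])
```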
##### VIDEO_IMAGE[¶](#video-image)
The VIDEO_IMAGE DevEncoded has been implemented for the **video_last_image** attribute to return the last image. It can embed any of the supported video formats, depending on the **video_mode** attribute value.
The VIDEO_IMAGE format is composed of a fixed header followed by the data. The header is a C-like structure,
with **big-endian** byte order and no alignment:
```
struct {
    unsigned int magic_number = 0x5644454f;
    unsigned short version;     // only version 1 is supported
    unsigned short image_mode;  // Y8, Y16, ...
    long long frame_number;     // the frame number (counter)
    int width;                  // the frame width in pixel (horizontal size)
    int height;                 // the frame height in pixel (vertical size)
    unsigned short endianness;  // 0-little-endian, 1-big-endian
    unsigned short header_size; // this header size in byte
    unsigned short padding[2];  // 4 bytes of padding (for alignment)
} VIDEO_IMAGE_STRUCT;
```
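A similar client-side sketch for this header, this time with big-endian byte order as stated above:
```
import struct

VIDEO_FMT = ">IHHqiiHH2H"  # big-endian, no alignment
def parse_video_image(buf):
    (magic, version, image_mode, frame_number, width, height,
     endianness, header_size, _pad0, _pad1) = struct.unpack_from(VIDEO_FMT, buf)
    assert magic == 0x5644454F, "not a VIDEO_IMAGE payload"
    return frame_number, width, height, buf[header_size:]
```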
### Camera devices[¶](#camera-devices)
Each camera has a configuration device with its own property/attribute/command lists.
The camera configuration device is supposed to give you access to the “private” parameters of the detector that LIMA does not need but you may want to set. For instance some detectors provide a temperature control with set-points and/or start/stop commands for an auxiliary cooling system.
For more details about the camera device interface, please have a look on the following sections:
#### Andor Tango device[¶](#andor-tango-device)
This is the reference documentation of the Andor Tango device.
You can also find some useful information about prerequisite/installation/configuration/compilation in the [Andor camera plugin](index.html#camera-andor) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| adc_speed | No | max. | The adc/Horiz. speed pair |
| baseline_clamp | No | Off | Clamping for baseline threshold, ON or OFF |
| camera_number | No | N/A | The camera number, default is 0 |
| cooler | No | Off | Start/stop the cooling system of the camera mode |
| config_path | No | N/A | The configuration path, for linux default is /usr/local/etc/andor |
| fast_ext_trigger | No | Off | Fast external trigger mode, see Andor documentation for usage |
| fan_mode | No | N/A | FAN mode, FAN_ON_FULL/FAN_ON_LOW/FAN_OFF |
| high_capacity | No | High_capacity | Camera can run in two modes, HIGH_CAPACITY or HIGH_SENSITIVITY |
| p_gain | No | max. | The preamplifier gain [X1-Xn] (see detector spec.) |
| shutter_level | No | High | The shutter output level mode |
| temperature_sp | No | N/A | The temperature setpoint in Celsius |
| vs_speed | No | fastest | The vertical shift speed (see detector spec.) |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| adc_speed | rw | DevString | The ADC and Horizontal shift speed, in ADCchannel/Freq.MHz; check the documentation for more help **(*)** |
| baseline_clamp | rw | DevString |
The baseline clamping for threshold (**):* **ON**
* **OFF**
|
| cooler | rw | DevString |
Start/stop the cooling system of the camera mode:* **ON**, the cooler is started
* **OFF**, the cooler is stopped
|
| cooling_status | ro | DevString | The status of the cooling system, tells if the setpoint temperature is reached |
| fan_mode | rw | DevString |
The FAN mode for extra-cooling (**):* **FAN_OFF**
* **FAN_ON_FULL**
* **FAN_ON_LOW**
|
| fast_ext_trigger | rw | DevString |
Fast external trigger mode, see Andor documentation for usage. Modes are:* **ON**, fast mode, the camera will not wait until a keep-clean cycle has been completed before accepting the next trigger
* **OFF**, slow mode
|
| high_capacity | rw | DevString |
Off/On the High Capacity mode (**):* **HIGH_CAPACITY**
* **HIGH_SENSITIVITY**
|
| p_gain | rw | DevString | The preamplifier gain from X1 to Xn (see detector spec.) **(*)** |
| shutter_level | rw | DevString |
The shutter output level mode:* **LOW**, output TTL low signal to open shutter
* **HIGH**, output TTL high signal to open shutter
|
| temperature | ro | DevShort | The current sensor temperature in Celsius |
| temperature_sp | rw | DevShort | The temperature setpoint in Celsius |
| timing | ro | Spectrum | The exposure and latency times |
| vs_speed | rw | DevString | The vertical shift speed, in us/pixel **(*)** |
**(*)** Use the command getAttrStringValueList to get the list of the supported values for these attributes.
(**) These attributes may not be supported by some camera models, in which case the returned value is **UNSUPPORTED**.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Andor3 Tango device[¶](#andor3-tango-device)
This is the reference documentation of the Andor3 Tango device.
You can also find some useful information about prerequisite/installation/configuration/compilation in the [Andor3 camera plugin](index.html#camera-andor3) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| config_path | Yes | N/A | The configuration path, for linux default is /usr/local/etc/andor/bitflow |
| camera_number | Yes | N/A | The camera number range is [0-N] |
| adc_gain | No | N/A | The ADC gain setting, see attribute for possible values |
| adc_rate | No | N/A | The ADC readout rate, see the attribute for possible values |
| temperature_sp | No | N/A | The sensor temperature set-point |
| cooler | No | N/A | To start/stop the cooler, values ON or OFF |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| adc_gain | rw | DevString |
ADC/Gain pair settings :* **B11_HI-GAIN**
* **B11_LOW_GAIN**
* **B16_LH_GAIN**
|
| adc_rate | rw | DevString |
The ADC rate:* **MHZ10**
* **MHZ100**
* **MHZ200**
* **MHZ280**
|
| electronic_shutter_mode | rw | DevString |
The electronic shutter:* **GLOBAL**
* **ROLLING**
|
| cooler | rw | DevString |
Start/stop the cooling system of the camera mode:* **ON**, the cooler is started
* **OFF**, the cooler is stopped
|
| cooling_status | ro | DevString | The status of the cooling system, tells if the setpoint temperature is reached |
| fan_mode | rw | DevString |
The FAN mode for extra-cooling:* **OFF**
* **LOW**
* **HIGH**
|
| frame_rate | ro | DevDouble | The current frame rate, depends on exposure and latency times |
| max_frame_rate_transfer | ro | DevDouble | The transfer rate of the camera interface card, can be lower than the camera frame rate |
| readout_time | ro | DevDouble | The readout time of the camera sensor |
| overlap | rw | DevString |
To enable or disable the overlap mode:* **ON**
* **OFF**
|
| spurious_noise_filter | rw | DevString |
To enable or disable the spurious noise filter mode:* **ON**
* **OFF**
|
| temperature | ro | DevShort | The current sensor temperature in Celsius |
| temperature_sp | rw | DevShort | The temperature setpoint in Celsius |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Basler Tango device[¶](#basler-tango-device)
This is the reference documentation of the Basler Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Basler camera plugin](index.html#camera-basler) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| camera_id | No | uname://*<server instance name>* | The camera ID (see details below) |
| packet_size | No | 8000 | The packet size |
| inter_packet_delay | No | 0 | The inter packet delay |
| frame_transmission_delay | No | 0 | The frame transmission delay |
| force_video_mode | No | False | To force a B/W camera to generate video format |
The *camera_id* property identifies the camera in the network. Several types of ID might be given:
* IP/hostname (examples: ip://192.168.5.2, ip://white_beam_viewer1.esrf.fr)
* Basler serial number (example: sn://12345678)
* Basler user name (example: uname://white_beam_viewer1)
If no *camera_id* is given, it uses the server instance name as the camera user name (example, if your server is called LimaCCDs/white_beam_viewer1, the default value for *camera_id* will be uname://white_beam_viewer1).
To maintain backward compatibility, the old *cam_ip_address* is still supported but is considered deprecated and might disappear in the future.
Both the inter_packet_delay and frame_transmission_delay properties can be used to tune the GigE performance; for more information on how to configure a GigE Basler camera please refer to the Basler documentation.
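For example, the *camera_id* and the tuning properties could be set from Python (a sketch; the device name and values are illustrative):
```
import PyTango

# Illustrative device name and values
db = PyTango.Database()
db.put_device_property("id00/basler/white_beam_viewer1",
                       {"camera_id": ["ip://192.168.5.2"],
                        "inter_packet_delay": ["1000"],
                        "frame_transmission_delay": ["0"]})
```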
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| statistics_total_buffer_count | ro | DevLong | Total number of requested frames |
| statistics_failed_buffer_count | ro | DevLong | Total number of failed frames |
| test_image_selector | rw | DevString | Select a test image: image_off/image_1/…/image_7 **(*)** |
| output1_line_source | rw | DevString | Select a source for I/O output1 line **(*)** |
| user_output_lin1 | rw | DevBoolean | Switch on/off UserOutput on output1 line **(*)** |
| temperature | ro | DevFloat | Temperature of the camera core |
**(*)** Use the command getAttrStringValueList to get the list of the supported value for these attributes.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Dexela Tango device[¶](#dexela-tango-device)
This is the reference documentation of the Dexela Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Dexela camera plugin](index.html#camera-dexela) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| database_path | Yes | DexelaConfig.cfg | The database file path, e.g. C:\DexelaConfig.cfg |
| sensor_format | Yes | sensor2923 | The detector model |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| full_well_mode | ro | DevString | The well-mode, can be set to HIGH or LOW |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Dhyana Tango device[¶](#dhyana-tango-device)
This is the reference documentation of the Dhyana Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Dhyana camera plugin](index.html#camera-dhyana) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| internal_trigger_timer | No | 999 | Soft timer to generate software triggers, in milliseconds. |
| temperature_target | No | n/a | To start cooling the detector (°C) |
| trigger_mode | No | STANDARD |
Tucam trigger mode:* STANDARD
* GLOBAL
* SYNCHRONOUS
|
| trigger_edge | No | RISING |
To set the trigger level:* RISING
* FALLING
|
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| global_gain | rw | DevString | Global gain setting on the sensor: HDR, HIGH or LOW |
| fan_speed | rw | DevUShort | FAN speed for cooling, from 0 to 10 |
| temperature | ro | DevDouble | Temperature of the sensor |
| temperature_target | rw | DevDouble | Temperature target |
| firmware_version | ro | DevString | Firmware version |
| tucam_version | ro | DevString | TUCAM SDK version |
| trigger_mode | rw | DevString | Tucam trigger mode: STANDARD, GLOBAL or SYNCHRONOUS |
| trigger_edge | rw | DevString | To set the input trigger level: RISING or FALLING |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Frelon Tango device[¶](#frelon-tango-device)
This is the reference documentation of the Frelon Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Frelon camera plugin](index.html#camera-frelon) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| espia_dev_nb | No | 0 | The acquisition Espia board number |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| espia_dev_nb | ro | DevString | The Espia board number. |
| image_mode | rw | DevString |
The acquisition image mode:* **Frame transfer**
* **Full frame**
|
| input_channel | rw | DevString |
The Inputs ADC channels:* **1**
* **2**
* **3**
* **4**
* **1-2**
* **3-4**
* **1-3**
* **2-4**
* **1-2-3-4**
|
| e2v_correction | rw | DevString |
Activate/Deactivate the correction for e2v cameras:* **On**
* **Off**
|
| roi_mode | rw | DevString |
The roi mode:* **None**
* **Slow**
* **Fast**
* **Kinetic**
|
| roi_bin_offset | rw | DevLong | The roi offset in line |
| spb2_config | rw | DevString | The internal config for pixel rate, **precision** or **speed**. Depending on your camera model, the pixel rates are factory defined |
| seq_status | ro | DevLong | |
Please refer to the *Frelon User’s Guide* for more information about the above specific configuration parameters.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| execSerialCommand | DevString command | DevString command result | Send a command through the serial line |
| resetLink | DevVoid | DevVoid | Reset the Espia link |
#### ImXPAD Tango device[¶](#imxpad-tango-device)
This is the reference documentation of the ImXPAD Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [ImXPAD camera plugin](index.html#camera-imxpad) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| camera_ip_address | Yes | N/A | IP address |
| port | No | 3456 | socket port number |
| model | No | XPAD_S70 | detector model |
| usb_device_id | No | N/A | reserved, do not use |
| config_path | Yes | N/A | The configuration directory path (see loadConfig command) |
##### Attributes[¶](#attributes)
This camera device has no attribute.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| loadConfig | DevString | DevVoid | the config file prefix, the property config_path is mandatory |
#### Marccd Tango device[¶](#marccd-tango-device)
This is the reference documentation of the Marccd Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Marccd camera plugin](index.html#camera-marccd) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| camera_ip | Yes | marccd1 | The camera hostname or ip address |
| port_number | Yes | 2222 | The Socket port number |
| image_path | Yes | /buffer/rayonix | The temporary image storage directory |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| source_beam_x | rw | DevFloat | Information saved with the image (.mccd format) |
| source_beam_y | rw | DevFloat | “ |
| source_distance | rw | DevFloat | “ |
| source_wavelength | rw | DevFloat | “ |
| header_beam_x | ro | DevFloat | “ |
| header_beam_y | ro | DevFloat | “ |
| header_distance | ro | DevFloat | “ |
| header_pixelsize_x | ro | DevFloat | “ |
| header_pixelsize_y | ro | DevFloat | “ |
| header_integration_time | ro | DevFloat | “ |
| header_exposure_time | ro | DevFloat | “ |
| header_readout_time | ro | DevFloat | “ |
| header_wavelength | ro | DevFloat | “ |
| header_acquire_timestamp | ro | DevFloat | “ |
| header_header_timestamp | ro | DevFloat | “ |
| header_save_timestamp | ro | DevFloat | “ |
| header_mean_bias | ro | DevFloat | “ |
| header_photons_per_100adu | ro | DevFloat | “ |
| header_mean | ro | DevFloat | “ |
| header_rms | ro | DevFloat | “ |
| header_temperature | ro | DevFloat[9] | “ |
| header_pressure | ro | DevFloat[9] | “ |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| takeBackgroundFrame | DevVoid | DevVoid | Take a new background image for the correction |
| saveBG | DevVoid | DevVoid | Save the last background image |
#### Maxipix Tango device[¶](#maxipix-tango-device)
This is the reference documentation of the Maxipix Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Maxipix camera plugin](index.html#camera-maxipix) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| config_name | Yes | N/A | The configuration name |
| config_path | Yes | N/A | The configuration directory path where the files are available |
| espia_dev_nb | No | 0 | The acquisition Espia board number |
| reconstruction_active | No | True | Activate the reconstruction or not |
| fill_mode | No | Raw | The chip-gap filling mode, **Raw**, **Zero**, **Dispatch** or **Mean** |
| gate_level | No | High_Rise | The input gate level, **High_rise** or **Low_Fall** |
| gate_mode | No | Inactive | The gate mode, **Inactive** or **Active** |
| ready_level | No | High_Rise | The output Ready level, **High_rise** or **Low_Fall** |
| ready_mode | No | Exposure | The output Ready mode, **Exposure** or **Exposure_Readout** |
| shutter_level | No | High_Rise | The output Shutter level, **High_rise** or **Low_Fall** |
| trigger_level | No | High_Rise | The output Trigger level, **High_rise** or **Low_Fall** |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| config_name | rw | DevString | The configuration name. If changed, the detector is reconfigured and reset. |
| config_path | rw | DevString | The configuration directory path where the files are available |
| energy_calibration | rw | Spectrum DevDouble | The energy calibration, [0] = threshold setpoint, [1] = threshold step-size (keV) |
| energy_threshold | rw | DevDouble | The threshold in energy (keV) |
| threshold | rw | DevDouble | The detector threshold |
| threshold_noise | rw | Spectrum DevDouble | The threshold noise of each chip, [0] = chip0 thl, [1] = chip1 thl, … |
| espia_dev_nb | rw | DevString | The Espia board number. |
| fill_mode | rw | DevString |
The chip-gap filling mode:* **Raw**, the border pixel values are copied
* **Zero**, border and gap pixel are set to zero
* **Dispatch**, the border pixel values are interpolated over the full gap
* **Mean**, the gap pixels are filled with the border pixels average value.
|
| gate_level | rw | DevString |
The Input gate level:* **High_rise**
* **Low_Fall**
|
| gate_mode | rw | DevString |
The gate mode:* **Inactive**
* **Active**
|
| ready_mode | rw | DevString |
The output Ready mode:* **Exposure**
* **Exposure_Readout**
|
| shutter_level | rw | DevString |
The output Shutter level:* **High_rise**
* **Low_Fall**
|
| trigger_level | rw | DevString |
The output Trigger level:* **High_rise**
* **Low_Fall**
|
| dac_possible | ro | DevString[] | Return the list of the possible DAC names |
| dac_name | rw | DevString | The dac name to be write/read (dac_value) |
| dac_value | rw | DevLong | The dac value of the given dac_name dac register |
**Warning**: we recommend not changing the DAC register values (dac_name and dac_value attributes) unless you know exactly what you are doing. If you have any trouble with the detector, please contact the ESRF Detector Unit first.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Lambda Tango device[¶](#lambda-tango-device)
This is the reference documentation of the Lambda Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Lambda camera plugin](index.html#camera-lambda) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| config_path | Yes | None | Path to the manufacturer configuration file of the detector; should be something like /opt/xsp/config |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| distorsion_correction | ro | DevBoolean | Return **True** if the distortion correction is active |
| temperature | ro | DevDouble | The detector temperature in C |
| humidity | ro | DevDouble | The detector humidity in % |
| energy_threshold | rw | DevDouble | The energy threshold in KeV |
| high_voltage | rw | DevDouble | The high voltage, relevant only for CdTe model |
Distortion correction, temperature and humidity are only relevant for detectors equipped with the latest hardware and firmware (since mid-2020).
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Merlin Tango device[¶](#merlin-tango-device)
This is the reference documentation of the Merlin Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Merlin camera plugin](index.html#camera-merlin) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| HostName | Yes | none | The detector IP address |
| CmdPort | No | 6431 | The tcp command port |
| DataPort | No | 6432 | The tcp data port |
| ImageWidth | No | 512 | The number of detector pixels |
| ImageHeight | No | 512 | The number of detector rasters |
| Chips | No | 4 | The number of detector medipix3 chips |
| Simulate | No | 0 | Command simulation mode |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| acqRunning | ro | DevBoolean | Is acquisition active |
| chargeSumming | rw | DevString | Charge Summing mode (**ON/OFF**) |
| colourMode | rw | DevString | Colour mode (**MONOCHROME/COLOUR**) |
| continuousRW | rw | DevString | Continuous Collection (**ON/OFF**) |
| counter | rw | DevString | Counter (**COUNTER0/COUNTER1/BOTH**) |
| depth | rw | DevString | Counter depth (**BPP1/BPP6/BPP12/BPP24**) |
| fileDirectory | rw | DevString | Directory name if saving on Merlin PC |
| fileEnable | rw | DevString | Enable file saving to Merlin PC (**ON/OFF**) |
| fileName | rw | DevString | Filename if saving on Merlin PC |
| gain | rw | DevString | Gain Settings (**SHGM/HGM/LGM/SLGM**) |
| operatingEnergy | rw | DevFloat | Energy keV (0 < e < 999.99) |
| softwareVersion | ro | DevFloat | Software version number |
| temperature | ro | DevFloat | Temperature degrees C |
| threshold0 | rw | DevFloat | Threshold 0 keV (0 < th < 999.99) |
| threshold1 | rw | DevFloat | Threshold 1 keV (0 < th < 999.99) |
| threshold2 | rw | DevFloat | Threshold 2 keV (0 < th < 999.99) |
| threshold3 | rw | DevFloat | Threshold 3 keV (0 < th < 999.99) |
| threshold4 | rw | DevFloat | Threshold 4 keV (0 < th < 999.99) |
| threshold5 | rw | DevFloat | Threshold 5 keV (0 < th < 999.99) |
| threshold6 | rw | DevFloat | Threshold 6 keV (0 < th < 999.99) |
| threshold7 | rw | DevFloat | Threshold 7 keV (0 < th < 999.99) |
| triggerStartType | rw | DevString | Trigger start mode (**INTERNAL/RISING_EDGE_TTL/FALLING_EDGE_TTL/RISING_EDGE_LVDS/FALLING_EDGE_LVDS/SOFT**) |
| triggerStopType | rw | DevString | Trigger stop mode (**INTERNAL/RISING_EDGE_TTL/FALLING_EDGE_TTL/RISING_EDGE_LVDS/FALLING_EDGE_LVDS/SOFT**) |
| triggerOutTTL | rw | DevString | TTL Trigger stop mode (**TTL/LVDS/TTL_DELAYED/LVDS_DELAYED/FOLLOW_SHUTTER/ONE_PER_ACQ_BURST/SHUTTER_AND_SENSOR_READ/OUTPUT_BUSY**) |
| triggerOutLVDS | rw | DevString | LVDS Trigger stop mode (**TTL/LVDS/TTL_DELAYED/LVDS_DELAYED/FOLLOW_SHUTTER/ONE_PER_ACQ_BURST/SHUTTER_AND_SENSOR_READ/OUTPUT_BUSY**) |
| triggerOutTTLInvert | rw | DevString | TTL Trigger invert mode (**NORMAL/INVERTED**) |
| triggerOutLVDSInvert | rw | DevString | LVDS Trigger invert mode (**NORMAL/INVERTED**) |
| triggerOutTTLDelay | rw | DevLong64 | TTL Trigger delay ns (0 < del < 68719476720) |
| triggerOutLVDSDelay | rw | DevLong64 | LVDS Trigger delay ns (0 < del < 68719476720) |
| triggerUseDelay | rw | DevString | Use Trigger delay (**ON/OFF**) |
| thScanNum | rw | DevLong | Threshold number to scan (0 < n < 7) |
| thStart | rw | DevFloat | Threshold scan start energy keV (0 < e < 999.99) |
| thStep | rw | DevFloat | Threshold scan step energy keV (0 < e < 999.99) |
| thStop | rw | DevFloat | Threshold scan stop energy keV (0 < e < 999.99) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| SoftTrigger | DevVoid | DevVoid | Perform soft trigger |
| Abort | DevVoid | DevVoid | Abort |
| THScan | DevVoid | DevVoid | Perform threshold scan |
| ResetHW | DevVoid | DevVoid | Reset |
#### Eiger Tango device[¶](#eiger-tango-device)
This is the reference documentation of the Dectris Eiger Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Dectris Eiger camera plugin](index.html#camera-eiger) section.
##### Properties[¶](#properties)
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| api_version | ro | DevString | The detected API version, e.g ‘1.8.0’ |
| auto_summation | rw | DevString | If enabled, image depth is bpp32; otherwise image depth is bpp16 **(*)** |
| cam_status | ro | DevString | The internal camera status |
| compression_type | rw | DevString | For the data stream, supported compressions are: NONE, LZ4, BSLZ4 |
| countrate_correction | rw | DevString | Enable or disable the countrate correction **(*)** |
| detector_ip | ro | DevString | The IP address of the detector DCU, useful to run curl commands |
| efficency_correction | rw | DevString | Enable or disable the efficiency correction |
| flatfield_correction | rw | DevString | Enable or disable the internal (vs. lima) flatfield correction **(*)** |
| has_hwroi_support | ro | DevBoolean | Return True if the camera supports hardware ROI |
| humidity | ro | DevFloat | Return the humidity percentage |
| hw_roi_supported_list | ro | DevString[] | List of supported HW ROIs, ["roi1", "x", "y", "width", "height", "roi2", ...]. The 9M supports 4M-R and 4M-L ROIs; the 16M only supports the 4M ROI. |
| hw_roi_pattern | ro | DevString | “disabled”, “4M-R”, “4M-L” or “4M” |
| model_size | ro | DevString | 500K, 1M, 2M, 4M, 9M or 16M |
| pixel_mask | rw | DevString | Enable or disable the pixel mask correction **(*)** |
| photon_energy | rw | DevFloat | The photon energy; it should be set to the incoming beam energy. Actually it is a helper which sets the threshold |
| plugin_status | ro | DevString | The camera plugin status |
| retrigger | rw | DevString | Enable or disable the retrigger mode **(*)** |
| serie_id | ro | DevLong | The current acquisition serie identifier |
| stream_last_info | ro | DevString[] | Information on data stream, encoding, frame_dim and packed_size |
| stream_stats | ro | DevDouble[] | ave_size, ave_time, ave_speed |
| threshold_energy | rw | DevFloat | The threshold energy (eV); it sets the camera detection threshold, which should be between 50 and 60% of the incoming beam energy |
| threshold_energy2 | rw | DevFloat | The 2nd threshold energy (eV), useful only if you need to activate the threshold differential mode |
| threshold_diff_mode | rw | DevString | Enable or disable the threshold diff mode, can be used to mask gamma X-rays (i.e. cosmics) **(*)** |
| temperature | ro | DevFloat | The sensor temperature |
| virtual_pixel_correction | rw | DevString | Enable or disable the virtual-pixel correction **(*)** |
**(*)** These attributes can take the value **ON** or **OFF**. Please refer to the Dectris documentation for more information regarding the online corrections.
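As an example, a PyTango client might configure the energy-related attributes as below; a minimal sketch, where the device name is a hypothetical placeholder, energies are assumed in eV as for threshold_energy, and the 55% factor follows the threshold guideline given above:

```
import tango

# Hypothetical device name; replace with your Eiger Tango device.
eiger = tango.DeviceProxy("id00/eiger/cam")

beam_energy = 12000.0                        # incoming beam energy (eV), example value
eiger.photon_energy = beam_energy
eiger.threshold_energy = 0.55 * beam_energy  # 50-60% of the beam energy

# ON/OFF string attributes marked (*) in the table above.
eiger.countrate_correction = "ON"
eiger.flatfield_correction = "ON"
```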
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| deleteMemoryFiles | DevVoid | DevVoid | To remove the temporary mem. files |
| initialize | DevVoid | DevVoid | To initialize the detector |
| latchStreamStatistics | DevBoolean | DevVarDoubleArray: ave_size, ave_time, ave_speed | If True, reset the statistics |
| resetHighVoltage | DevVoid | DevVoid | For CdTe sensors only, switch off/on the high-voltage |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Mythen3 Tango device[¶](#mythen3-tango-device)
This is the reference documentation of the Mythen3 Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Mythen3 camera plugin](index.html#camera-mythen3) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| HostName | Yes | | The Mythen detector socket server IP address |
| TcpPort | No | 1031 | The tcp communication port. |
| Simulate | No | 0 | Command simulation mode. |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| acqRunning | ro | DevBoolean | Is acquisition active |
| assemblyDate | ro | DevString | Assembly date of the Mythen system |
| badChannelInterpolation | rw | DevString | Enable/Disable Bad Channel Interpolation Mode (**ON/OFF**) |
| badChannels | ro | DevLong[1280*Nb] | Display state of each channel for each active module [Nb = nbModules] |
| commandID | ro | DevLong | Command identifier (increases by 1) |
| continuousTrigger | rw | DevString | Enable/Disable continuous trigger mode (**ON/OFF**) |
| cutoff | ro | DevLong | Count value before flatfield correction |
| delayBeforeFrame | rw | DevLong64 | Time delay between trigger & start (100ns increments) |
| energy | rw | DevFloat[Nb] | X-ray Energy (4.09 < e keV < 40) [Nb = nbModules] |
| energyMax | ro | DevFloat | Maximum X-ray Energy keV |
| energyMin | ro | DevFloat | Minimum X-ray Energy keV |
| flatField | ro | DevLong[1280*Nb] | Flat field correction values |
| flatFieldCorrection | rw | DevString | Enable/Disable Flat Field Correction Mode (**ON/OFF**) |
| gateMode | rw | DevString | Enable/Disable gate mode (**ON/OFF**) |
| gates | rw | DevLong | Number of gates per frame |
| hwStatus | ro | DevString | The hardware status |
| inputSignalPolarity | rw | DevString | Input Signal Polarity (**RISING_EDGE/FALLING_EDGE**) |
| kthresh | ro | DevFloat[Nb] | Threshold Energy (4.0 < e keV < 20) [Nb = nbModules] |
| kthreshEnergy | w | DevFloat[2] | Threshold & Energy keV |
| kthreshMax | ro | DevFloat | Maximum Threshold Energy keV |
| kthreshMin | ro | DevFloat | Minimum Threshold Energy keV |
| maxNbModules | ro | DevLong | Maximum nos. of Mythen modules |
| module | rw | DevLong | Number of selected module (-1 = all) |
| nbits | rw | DevString | Number of bits to readout (**BPP24/BPP16/BPP8/BPP4**) |
| nbModules | rw | DevLong | Number of modules in the system |
| outputSignalPolarity | rw | DevString | Output Signal Polarity (**RISING_EDGE/FALLING_EDGE**) |
| predefinedSettings | w | DevString | Load predefined energy/kthresh settings (**Cu/Ag/Mo/Cr**) |
| rateCorrection | rw | DevString | Enable/Disable rate correction mode (**ON/OFF**) |
| sensorMaterial | ro | DevLong | The sensor material (0=silicon) |
| sensorThickness | ro | DevLong | The sensor thickness um |
| serialNumbers | ro | DevLong[Nb] | Serial nos. of Mythen modules [Nb = nbModules] |
| systemNum | ro | DevLong | The serial number of the Mythen |
| tau | rw | DevFloat[Nb] | Dead time constants for rate correction [Nb = nbModules] |
| testPattern | ro | DevLong[1280*Nb] | Read back a test pattern |
| triggered | rw | DevString | Enable/Disable triggered mode (**ON/OFF**) |
| useRawReadout | rw | DevString | Raw readout packed Mode (**ON/OFF**) |
| version | ro | DevString | The software version of the socket server |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| LogStart | DevVoid | DevVoid | Start logging server activity (use sparingly) |
| LogStop | DevVoid | DevVoid | Stop logging server activity |
| LogRead | DevVoid | DevVoid | Print logging file to terminal |
| ReadFrame | DevLong | DevVarULongArray | [in] frame number [out] a frame of mythen data |
| ReadData | DevVoid | DevVarULongArray | [out] all frames of mythen data |
| ResetMythen | DevVoid | DevVoid | Reset |
#### Pilatus Tango device[¶](#pilatus-tango-device)
This is the reference documentation of the Pilatus Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Pilatus camera plugin](index.html#camera-pilatus) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| host_name | No | localhost | Pilatus computer hostname |
| host_port | No | 41234 | Pilatus camserver port number |
| config_file | No | /home/det/p2_det/config/cam_data/camera.def | Configuration file path, read to get the pilatus version (2 or 3) and the camera size (height and width) |
| tmpfs_path | No | /lima_data | Path to the temporary file-system where camserver will store the images |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| threshold_gain | rw | DevString | The detector threshold gain (**LOW,MID,HIGH,ULTRA HIGH**) |
| fill_mode | rw | DevString | The gap fill mode (**ON,OFF**) |
| threshold | rw | DevLong | The threshold level of detector in eV |
| energy_threshold | rw | DevFloat | The energy threshold in keV (sets the gain and the threshold) |
| trigger_delay | rw | DevDouble | The start exposure delay after the hard trigger |
| nb_exposure_per_frame | rw | DevLong | The number of exposure/frame to set an accumulation of frames |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### PCO Tango device[¶](#pco-tango-device)
This is the reference documentation of the PCO Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [PCO camera plugin](index.html#camera-pco) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| debug_control | No | 0 | Enable/Disable the debug (0/1) |
| debug_module | No | 0 | The debug module list (hex mask 0x....): None = 0x001, Common = 0x002, Hardware = 0x004, HardwareSerial = 0x008, Control = 0x010, Espia = 0x020, EspiaSerial = 0x040, Focla = 0x080, Camera = 0x100, CameraCom = 0x200, Test = 0x400, Application = 0x800 |
| debug_format | No | 0 | The debug format (hex mask 0x....): DateTime = 0x001, Thread = 0x002, Module = 0x004, Obj = 0x008, Funct = 0x010, FileLine = 0x020, Type = 0x040, Indent = 0x080, Color = 0x100 |
| debug_type | No | 0 | The debug type (hex mask 0x....): Fatal = 0x001, Error = 0x002, Warning = 0x004, Trace = 0x008, Funct = 0x010, Param = 0x020, Return = 0x040, Always = 0x080 |
| params | No | empty | List of parameters/options (one per line): sn = <camera serial number> (if 0 or absent, the first camera found is opened; if the serial number is not found, OpenCam fails); trigSingleMulti = 1 (enable TriggerSingleMulti as TriggerMulti, for compatibility with SPEC START); xMinSize = 1 (enable correction of the X minimum size for the CLHS firmware bug); bitAligment = <MSB | LSB> (bit alignment of the image data, e.g. for 12b: MSB = xxxx xxxx xxxx 0000, LSB = 0000 xxxx xxxx xxxx) |
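For instance, the params property could contain something like the following; the serial number is a made-up example:

```
sn = 12345
trigSingleMulti = 1
bitAligment = MSB
```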
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| acqTimeoutRetry | rw | DevLong | Maximum Timeout retries during acq (0 - infinite) |
| adc | rw | DevLong | Number of working ADCs |
| adcMax | ro | DevLong | Maximum number of ADCs |
| binInfo | ro | DevLong | PCO hw binning info |
| bitAlignment | rw | DevString | Bit alignment: MSB (0) or LSB (1) |
| bytesPerPixel | ro | DevLong | Bytes per Pixel |
| camerasFound | ro | DevString | List of cameras found during the Open search |
| camInfo | ro | DevString | General camera parameters information |
| camName | ro | DevString | Camera Name |
| camNameBase | ro | DevString | Camera Name (Pco) |
| camNameEx | ro | DevString | Camera Name, Interface, Sensor |
| camType | ro | DevString | Camera Type |
| cdiMode | rw | DevLong | Correlated Double Imaging Mode: enabled/disabled = 1/0 (rw); not allowed = -1 (ro) |
| clXferPar | ro | DevString | General CameraLink parameters |
| cocRunTime | ro | DevDouble | cocRunTime (s) - only valid after the camera is armed |
| coolingTemperature | ro | DevDouble | Cooling Temperature |
| debugInt | rw | DevString | PCO plugin internal debug level (hex format: 0x….) |
| debugIntTypes | ro | DevString | PCO plugin internal debug types |
| doubleImageMode | rw | DevLong | Double Image Mode: enabled/disabled = 1/0 (rw); not allowed = -1 (ro) |
| firmwareInfo | ro | DevString | Firmware info |
| frameRate | ro | DevDouble | Framerate, calculated as: 1/cocRunTime (1/s) |
| generalCAPS1 | ro | DevString | General PCO CAPS1 value (hex and bin) |
| info | ro | DevString | General camera parameters information |
| lastError | ro | DevString | The last PCO error message |
| lastImgAcquired | ro | DevLong | Last image acquired (during recording) |
| lastImgRecorded | ro | DevLong | Last image recorded (during recording) |
| logMsg | ro | DevString | Last Log msgs |
| logPcoEnabled | ro | DevLong | PCO logs are enabled |
| maxNbImages | ro | DevLong | The maximum number of images which can be acquired by the camera (recording mode) |
| paramsInfo | ro | DevString | Values of the PCO properties **params** |
| pixelRate | ro | DevLong | Actual Pixel Rate (Hz) |
| pixelRateInfo | ro | DevString | Pixel Rate information |
| pixelRateValidValues | ro | DevString | Allowed Pixel Rates |
| recorderForcedFifo | rw | DevLong | Forced Fifo Mode (**only for recording cams**) |
| roiInfo | ro | DevString | PCO ROI info |
| roiLastFixed | ro | DevString | Last fixed ROI info |
| rollingShutter | rw | DevLong | Rolling Shutter Mode as int (**only for some types of EDGE**): 1 = ROLLING, 2 = GLOBAL, 4 = GLOBAL RESET |
| rollingShutterInfo | ro | DevString | Rolling Shutter info |
| rollingShutterStr | rw | DevString | Rolling Shutter Mode as str (**only for some types of EDGE**) |
| temperatureInfo | ro | DevString | Temperature info |
| test | rw | DevString | Debug test function (**do not use it**) |
| timestampMode | rw | DevLong | Timestamp mode: 0 = none, 1 = BCD coded stamp in the first 14 pixels, 2 = BCD coded stamp in the first 14 pixels + ASCII text, 3 = ASCII text (**only for some cameras**) |
| traceAcq | ro | DevString | Debug information for some types of acq |
| version | ro | DevString | Version information of the plugin |
| versionAtt | ro | DevString | Version of att file |
| versionSdk | ro | DevString | PCO SDK Release |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do NOT use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| talk | DevString | DevString | **WARNING**: use this command for tests only. This is a backdoor command and it can disturb Lima |
#### PerkinElmer Tango device[¶](#perkinelmer-tango-device)
This is the reference documentation of the PerkinElmer Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [PerkinElmer camera plugin](index.html#camera-perkinelmer) section.
##### Properties[¶](#properties)
This device has no property.
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| correction_mode | rw | DevString | ‘NO’, ‘OFFSET ONLY’ or ‘OFFSET AND GAIN’ |
| gain | rw | DevLong | The gain value, from 0 to 63 |
| keep_first_image | rw | DevString | ‘YES’ or ‘NO’, you can decide to trash the 1st image |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| startAcqOffsetImage | DevVarDoubleArray: nb_frames, exposure_time | DevVoid | Start an acquisition for an offset calibration |
| startAcqGainImage | DevVarDoubleArray: nb_frames, exposure_time | DevVoid | Start an acquisition for a gain calibration |
#### Pixirad Tango device[¶](#pixirad-tango-device)
This is the reference documentation of the Pixirad Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Pixirad camera plugin](index.html#camera-pixirad) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| ip_address | Yes | N/A | The ip address or the hostname of the detector computer interface |
| port_number | No | 6666 | The port number for the detector (DAQ commands) |
| initial_model | No | PX8 | Model type PX1, PX2 or PX8 |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| high_threshold0 | rw | DevDouble | High energy threshold 0 (keV) |
| low_threshold0 | rw | DevDouble | Low energy threshold 0 (keV) |
| high_threshold1 | rw | DevDouble | High energy threshold 1 (keV) |
| low_threshold1 | rw | DevDouble | Low energy threshold 1 (keV) |
| dead_time_free_mode | rw | DevString | Enable or disable the dead-time free mode: **DEAD_TIME_FREE_MODE_OFF** or **DEAD_TIME_FREE_MODE_ON** |
| cooling_temperature_setpoint | rw | DevDouble | Cooling temperature setpoint for the peltier module of the detector |
| high_voltage_biais | rw | DevDouble | Bias voltage for the high voltage in manual mode |
| high_voltage_delay_before_on | rw | DevDouble | Delay for the HV before acquisition |
| h_v_refresh_period | rw | DevShort | Number of images before the HV is refreshed |
| delay_between_frames | rw | DevShort | Delay between frames in loop acquisition (milliseconds) |
| color_mode | rw | DevString | Color mode: **COLMODE_1COL0**, **COLMODE_2COL**, **COLMODE_1COL1**, **COLMODE_DTF**, **COLMODE_4COL** |
| sensor_config_build | rw | DevString | The configuration build: **PX1**, **PX2** or **PX8** |
| trsf_mode | rw | DevString | Moderated or unmoderated UDP transport; modes are: **UMOD**, **UNMODH**, **MOD** |
| h_v_bias_mode_power | rw | DevBoolean | Enable (True) or disable (False) the high voltage |
| hybrid_mode | rw | DevString | **CDTE** or **GAAS** |
| temperature_peltier_cold | rw | DevDouble | Temperature of the peltier (live) cold surface in Celsius |
| temperature_peltier_hot | rw | DevDouble | Temperature of the peltier (live) hot surface in Celsius |
| high_voltage_tension | rw | DevDouble | The high voltage in Volts |
| box_humidity | ro | DevDouble | The moisture level in the detector box |
| box_temperature | ro | DevDouble | The temperature in the detector box in Celsius |
| peltier_power | ro | DevDouble | The percentage of peltier power |
| alarm_temp_too_hot | ro | DevBoolean | The temperature is too hot alarm |
| alarm_temp_too_hot_enabled | ro | DevBoolean | The Alarm <<Temperature is too hot>> is enabled or not (is watched or not) |
| alarm_temp_too_cold | ro | DevBoolean | The temperature is too cold alarm |
| alarm_temp_too_cold_enabled | ro | DevBoolean | The Alarm <<Temperature is too cold>> is enabled or not (is watched or not) |
| alarm_humidity | ro | DevBoolean | The humidity is too high |
| alarm_humidity_enabled | ro | DevBoolean | The Alarm <<Humidity>> is enabled or not (is watched or not) |
Please refer to the Pixirad documentation for more information on the parameter meanings.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString:
Attribute name | DevVarStringArray:
String value list | Return the authorized string value list for a given attribute name |
#### PhotonicScience Tango device[¶](#photonicscience-tango-device)
This is the reference documentation of the PhotonicScience Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [PhotonicScience camera plugin](index.html#camera-photonicscience) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| camera_library_path | Yes | N/A | The path to the camera DLL library file, e.g. ImageStar4022_v2.5\imagestar4022control.dll |
##### Attributes[¶](#attributes)
This camera device has no attribute.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString:
Attribute name | DevVarStringArray:
String value list | Return the authorized string value list for a given attribute name |
#### PointGrey Tango device[¶](#pointgrey-tango-device)
This is the reference documentation of the PointGrey Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [PointGrey camera plugin](index.html#camera-pointgrey) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| camera_serial | Yes | N/A | The serial number of the camera, used to get the connection |
| packet_size | No | -1 | The packet size, in bytes |
| packet_delay | No | -1 | The packet inter-delay, in us. Both parameters can be used to tune the camera GigE bandwidth; please refer to the camera documentation for more information |
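Both tuning parameters are also exposed as attributes (see below), so they can be adjusted at runtime from a PyTango client; a minimal sketch with a hypothetical device name and illustrative values:

```
import tango

# Hypothetical device name; replace with your PointGrey Tango device.
cam = tango.DeviceProxy("id00/pointgrey/cam")

# Illustrative GigE tuning values; refer to the camera documentation
# for figures appropriate to your network setup.
cam.packet_size = 9000    # bytes
cam.packet_delay = 1000   # us
print(cam.frame_rate_range)
```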
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| gain | rw | DevDouble | The camera gain factor, in dB |
| auto_gain | rw | DevBoolean | Auto gain mode can be switched on or off |
| auto_exp_time | rw | DevBoolean | The camera can be set to auto-exposure mode |
| auto_frame_mode | rw | DevBoolean | The camera can be set to auto frame rate mode |
| frame_rate | rw | DevDouble | The frame rate, in fps |
| packet_size | rw | DevLong | See the corresponding property |
| packet_delay | rw | DevLong | See the corresponding property |
| exp_time_range | ro | DevDouble[] | Return the exposure time range (min,max) in ms |
| gain_range | ro | DevDouble[] | Return the gain range (min,max) in dB |
| frame_rate_range | ro | DevDouble[] | Return the frame rate range (min,max) in fps |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString:
Attribute name | DevVarStringArray:
String value list | Return the authorized string value list for a given attribute name |
#### Prosilica Tango device[¶](#prosilica-tango-device)
This is the reference documentation of the Prosilica Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Prosilica camera plugin](index.html#camera-prosilica) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| cam_ip_address | Yes | N/A | The camera’s ip or hostname |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| gain | rw | DevFloat | Normalized video gain, value between 0 (=pvmin, no gain) and 1 (=pvmax) |
| pv_gain_range | ro | DevULong[pvmin, pvmax] | Min and max allowed values of the PvApi gain |
| pv_gain | rw | DevULong | Video gain, value in the interval [pvmin, pvmax] |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString:
Attribute name | DevVarStringArray:
String value list | Return the authorized string value list for a given attribute name |
#### RayonixHs Tango device[¶](#rayonixhs-tango-device)
This is the reference documentation of the RayonixHs Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [RayonixHs camera plugin](index.html#camera-rayonixhs) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| frame_mode | No | single | The frame mode, **single** or **fast_transfer** |
| frame_trigger_signal_type | No | opto | The frame trigger signal type (input #1) |
| sequence_gate_signal_type | No | opto | The gate signal type (input #2) |
| electronic_shutter_enabled | No | false | The electronic shutter **true** or **false** to activate or not |
| cooler_temperature_setpoint | No | -120 | The cooling system temperature setpoint in Celsius |
| sensor_temperature_setpoint | No | -80 | The detector (sensor) temperature setpoint in Celsius |
| output1_signal_type | No | cmos | The output #1 signal type |
| output2_signal_type | No | cmos | The output #2 signal type |
| output1_id | No | shutter | The output #1 signal source |
| output2_id | No | frame | The output #2 signal source |
The Rayonix HS input/output system supports different types of signals:
* OPTO/OPTO_INVERTED/CMOS/CMOS_PULLDOWN/CMOS_PULLUP/CMOS_PULLDOWN_INVERTED/CMOS_PULLUP_INVERTED
It also provides an output multiplexer for both outputs within the following list of sources:
* SHUTTER/INTEGRATE/FRAME/LINE/SHUTTER_OPENING/SHUTTER_CLOSING/SHUTTER_ACTIVE/TRIGGER_RISE_WAIT/TRIGGER_RISE_ACK/TRIGGER_FALL_WAIT/TRIGGER_FALL_ACK/TRIGGER_2_RISE_WAIT/TRIGGER_2_RISE_ACK/INPUT_FRAME/INPUT_GATE
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| frame_mode | rw | DevString | The frame mode, **single** or **fast_transfer** |
| frame_trigger_signal_type | rw | DevString | The frame trigger signal type (input #1) |
| sequence_gate_signal_type | rw | DevString | The gate signal type (input #2) |
| electronic_shutter_enabled | rw | DevString | The electronic shutter **true** or **false** to activate or not |
| cooler_temperature_setpoint | rw | DevDouble | The cooling system temperature setpoint in Celsius |
| sensor_temperature_setpoint | rw | DevDouble | The detector (sensor) temperature setpoint in Celsius |
| output1_signal_type | rw | DevString | The output #1 signal type |
| output2_signal_type | rw | DevString | The output #2 signal type |
| output1_id | rw | DevString | The output #1 signal source |
| output2_id | rw | DevString | The output #2 signal source |
| vacuum_valve | rw | DevString | The vacuum valve command **true** or **false** to open or close |
**Warning**: be careful with the temperature setting (and the vacuum valve); the operating temperature is factory-determined and should never be changed. There is no reason to run the detector at a warmer temperature.
For the signal type and source the possible values are listed above in the *Properties* section.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString:
Attribute name | DevVarStringArray:
String value list | Return the authorized string value list for a given attribute name |
#### Simulator Tango device[¶](#simulator-tango-device)
This is the reference documentation of the Simulator Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Simulator camera plugin](index.html#camera-simulator) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| peaks | No | N/A | A gauss peak list [x0,y0,w0,A0,x1,y1,w1,A1…] |
| peak_angles | No | N/A | The base rotation angle for each peak |
| fill_type | No | Gauss | The image fill type: Gauss or Diffraction |
| rotation_axis | No | rotationy | Peak move policy: STATIC, ROTATIONX, ROTATIONY |
| frame_dim | No | 1024, 1024, 4 | Size of the frame. Width, height, depth. The depth is one of 1, 2, 4 |
| pixel_size | No | 1e-6, 1e-6 | Pixel size metadata in meter. Default is 1um pixel size |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| peaks | rw | Spectrum,DevDouble | The gauss peak list [x0,y0,w0,A0,x1,y1,w1,A1…] |
| peak_angles | rw | Spectrum,DevDouble | The base rotation angle for each peak |
| grow_factor | rw | DevDouble | The Grow factor for gauss peaks |
| fill_type | rw | DevString | The image fill type: Gauss or Diffraction |
| rotation_axis | rw | DevString | The rotation axis policy: Static, RotationX or RotationY |
| diffraction_pos | rw | Spectrum,DevDouble | The source displacement position: x and y |
| diffraction_speed | rw | Spectrum,DevDouble | The source displacement speed: sx and sy |
| rotation_angle | rw | DevDouble | The peak rotation angle in deg |
| rotation_speed | rw | DevDouble | The peak rotation speed in deg/frame |
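For example, the peak list and rotation can be driven from a PyTango client; a minimal sketch, assuming a hypothetical device name and one Gauss peak at (512, 512) with width 100 and amplitude 1:

```
import tango

# Hypothetical device name; replace with your simulator Tango device.
sim = tango.DeviceProxy("id00/simulator/cam")

sim.peaks = [512.0, 512.0, 100.0, 1.0]   # [x0, y0, w0, A0]
sim.peak_angles = [0.0]
sim.rotation_axis = "RotationY"
sim.rotation_speed = 1.0                 # degrees per frame
```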
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString:
Attribute name | DevVarStringArray:
String value list | Return the authorized string value list for a given attribute name |
##### Custom LimaCCDs camera simulator[¶](#custom-limaccds-camera-simulator)
A custom camera simulator can be created following this recipe.
* Create a custom tango camera simulator
* Register this new module as a Lima camera entry point
* Set/update the tango database
###### Tango camera simulator[¶](#tango-camera-simulator)
```
# module myproject.MySimulator.py
from Lima import Core
from Lima import Simulator
import Lima.Server.camera.Simulator as TangoSimuMod

class MyCamera(Simulator.Camera):
    """Derive the camera in order to customize the way the frame is rendered"""
    def fillData(self, data):
        # Increment the first pixel every frame
        data.buffer[0, 0] = data.buffer[0, 0] + 1

class MySimulator(TangoSimuMod.Simulator):
    """Derive the tango device in order to handle extra attributes/properties/commands implementation"""

class MySimulatorClass(TangoSimuMod.SimulatorClass):
    """Derive the tango device class in order to describe extra attributes/properties/commands"""

# Plugin
def get_control(**kwargs):
    return TangoSimuMod.get_control(
        _Camera=MyCamera,
        _Simulator=MySimulator,
        **kwargs)

def get_tango_specific_class_n_device():
    return MySimulatorClass, MySimulator
```
###### Lima camera entry point[¶](#lima-camera-entry-point)
Lima provides entry points for plugins and cameras.
This can be used to register our new camera.
This allows the LimaCCDs launcher to find your camera from your project.
```
# setup.py
setup(
name=__name__,
version=__version__,
...
entry_points={
"Lima_tango_camera": ["MySimulator = myproject.MySimulator"],
},
)
```
###### Database description[¶](#database-description)
This is a representation of the Tango database content.
```
personal_name: my_simulator
server: LimaCCDs
device:
- class: MySimulator
  tango_name: id00/mysimulator/my_simulator
  properties:
    mode: GENERATOR_PREFETCH
    nb_prefetched_frames: 1  # Alloc a single frame in memory
    fill_type: EMPTY         # Let python fill the full frame
- class: LimaCCDs
  properties:
    LimaCameraType: MySimulator  # Ask to use your custom camera
```
###### Start the tango device[¶](#start-the-tango-device)
```
LimaCCDs my_simulator
```
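Once the server is up, a quick sanity check from a PyTango client, using the tango_name declared in the database description above, could look like this minimal sketch:

```
import tango

# tango_name taken from the database description above
sim = tango.DeviceProxy("id00/mysimulator/my_simulator")
print(sim.state(), sim.status())
```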
#### SlsDetector Tango device[¶](#slsdetector-tango-device)
This is the reference documentation of the PSI SlsDetector Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [SlsDetector camera plugin](index.html#camera-slsdetector) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| config_fname | Yes | * | Path to the SlsDetector config file |
| apply_corrections | No | True | Perform corrections on each frame |
| high_voltage | No | 0 | Initial detector high voltage (V) (set to 150 if already tested) |
| fixed_clock_div | No | 0 | Initial detector fixed-clock-div |
| threshold_energy | No | 0 | Initial detector threshold energy (eV) |
| tolerate_lost_packets | No | True | Initial tolerance to lost packets |
| pixel_depth_cpu_affinity_map | No | [] | Default PixelDepthCPUAffinityMap as Python string(s) defining a dict: {<pixel_depth>: <global_affinity>}, where global_affinity is a tuple: (<recv_list>, <lima>, <other>, <netdev_grp_list>). recv_list is a list of tuples in the form (<listeners>, <port_threads>), where listeners and port_threads are tuples of affinities; lima and other are affinities; and netdev_grp_list is a list of tuples in the form (<comma_separated_netdev_name_list>, <rx_queue_affinity_map>), the latter in the form {<queue>: (<irq>, <processing>)}. Each affinity can be expressed by one of the functions Mask(<mask>) or CPU(<cpu1>[, ..., <cpuN>]) for independent CPU enumeration |
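As an illustration, a property value following this structure might look like the sketch below for a 16-bit pixel depth: one receiver with listener CPUs 1-2 and port-thread CPUs 3-4, Lima on CPU 5, all other processes on CPUs 6-7 (Mask(0xc0)), and one network device group whose queue 0 uses CPUs 6 and 7 for IRQ and processing. All CPU numbers and interface names are made-up examples:

```
{16: ([((CPU(1), CPU(2)), (CPU(3), CPU(4)))],
      CPU(5),
      Mask(0xc0),
      [("eth0,eth1", {0: (CPU(6), CPU(7))})])}
```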
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| config_fname | ro | DevString | Path to the SlsDetector config file |
| hostname_list | ro | DevVarStringArray | The list of the Eiger half-modules’ hostnames |
| apply_corrections | ro | DevBoolean | Pixel software corrections are applied on each frame |
| dac_name_list | ro | DevVarStringArray | The list of the DAC signals’ names |
| dac_<signal_name> | rw | DevVarLongArray | Array with the DAC <signal_name> value for each half-module, in A/D units |
| dac_name_list_mv | ro | DevVarStringArray | The list of the DAC signals’ names supporting milli-volt units |
| dac_<signal_name>_mv | rw | DevVarLongArray | Array with the DAC <signal_name> value for each half-module, in milli-volt units |
| adc_name_list | ro | DevVarStringArray | The list of the ADC signals’ names |
| adc_<signal_name> | rw | DevVarDoubleArray | Array with the ADC <signal_name> value for each half-module, in user units (deg C, etc.) |
| pixel_depth | rw | DevString | The image pixel bit-depth: **4** (not implemented in LImA yet), **8**, **16** or **32** |
| raw_mode | rw | DevBoolean | Publish image as given by the Receivers (no SW reconstruction) |
| threshold_energy | rw | DevLong | The energy (in eV) the pixel discriminator thresholds (Vcmp & Trim bits) are set at |
| high_voltage | rw | DevShort | The detector high voltage (in V) |
| tx_frame_delay | rw | DevLong | Frame Tx delay (6.2 ns units) |
| all_trim_bits | rw | DevVarLongArray | Array with the pixel trimming value [0-63] for each half-module, if all the pixels in the half-module have the same trimming value, -1 otherwise |
| clock_div | rw | DevString | The readout clock divider: **FULL_SPEED**, **HALF_SPEED**, **QUARTER_SPEED**, **SUPER_SLOW_SPEED** |
| fixed_clock_div | rw | DevBoolean | If active, will try to keep the same clock_div when changing pixel_depth |
| readout_flags | rw | DevString | The flags affecting the readout mode (Parallel|NonParallel|Safe + StoreInRAM|Continuous): **PARALLEL + STORE_IN_RAM**, **PARALLEL + CONTINUOUS**, **NON_PARALLEL + STORE_IN_RAM**, **NON_PARALLEL + CONTINUOUS**, **SAFE + STORE_IN_RAM**, **SAFE + CONTINUOUS** |
| max_frame_rate | ro | DevDouble | Maximum frame rate (kHz) |
| tolerate_lost_packets | rw | DevBoolean | Allow acquisitions with incomplete frames due to overrun |
| pixel_depth_cpu_affinity_map | rw | DevString | PixelDepth -> CPUAffinity map as a Python string (see the description of the corresponding device property) |
Please refer to the *PSI/SLS Eiger User's Manual* for more information about the above specific configuration parameters.
Note: CPU-affinity control now acts, on a per-pixel_depth basis, on the following execution elements:
* Receiver listener threads
* Receiver writer threads
* Lima control & processing threads
* Other processes in the OS
* Network devices’ processing tasks (kernel space)
Network devices can be grouped, each group will have the same CPU-affinity for the processing tasks.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| putCmd | DevString | DevVoid | Command setting a SlsDetector parameter (no response) |
| getCmd | DevString: get command | DevString: command result | Command getting a SlsDetector parameter (with response) |
| getNbBadFrames | DevLong: port_idx | DevLong: nb_bad_frames | Get the number of bad frames in the current (or last) acquisition for the given receiver port (-1=all) |
| getBadFrameList | DevLong: port_idx | DevVarLongArray: bad_frame_list | Get the list of bad frames in the current (or last) acquisition for the given receiver port (-1=all) |
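For example, low-level detector parameters can be read through getCmd from a PyTango client; a minimal sketch, where the device name and the command string are illustrative placeholders (refer to the PSI slsDetector documentation for valid command names):

```
import tango

# Hypothetical device name; replace with your SlsDetector Tango device.
det = tango.DeviceProxy("id00/slsdetector/eiger")

# Read a low-level parameter (the command string is only an example).
print(det.command_inout("getCmd", "settings"))

# Check for incomplete frames on all receiver ports (-1 = all).
print(det.command_inout("getNbBadFrames", -1))
```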
#### Ueye Tango device[¶](#ueye-tango-device)
This is the reference documentation of the Ueye Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Ueye camera plugin](index.html#camera-ueye) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| address | No | 0 | The video address |
##### Attributes[¶](#attributes)
This device has no attribute.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Ultra Tango device[¶](#ultra-tango-device)
This is the reference documentation of the Ultra Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Ultra camera plugin](index.html#camera-ultra) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| headIpaddress | No | 192.168.1.100 | The detector head IP address |
| hostIpaddress | No | 192.168.1.103 | The host IP address |
| tcpPort | No | 7 | The tcp echo port |
| udpPort | No | 5005 | The udp port |
| nPixels | No | 512 | The number of detector pixels |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| headColdTemp | ro | DevFloat | The head cold temperature in K |
| heatHotTemp | ro | DevFloat | The head hot temperature in K |
| tecColdTemp | ro | DevFloat | |
| tecSupplyVolts | ro | DevFloat | |
| adcPosSupplyVolts | ro | DevFloat | |
| adcNegSupplyVolts | ro | DevFloat | |
| vinPosSupplyVolts | ro | DevFloat | |
| vinNegSupplyVlots | ro | DevFloat | |
| headADCVdd | ro | DevFloat | |
| headVdd | rw | DevFloat | |
| headVref | rw | DevFloat | |
| headVrefc | rw | DevFloat | |
| headVpupref | rw | DevFloat | |
| headVclamp | rw | DevFloat | |
| headVres1 | rw | DevFloat | |
| headVres2 | rw | DevFloat | |
| headVTrip | rw | DevFloat | |
| fpgaXchipReg | rw | DevULong | |
| fpgaPwrReg | rw | DevULong | |
| fpgaSyncReg | rw | DevULong | |
| fpgaAdcReg | rw | DevULong | |
| frameCount | ro | DevULong | |
| frameError | ro | DevULong | |
| headPowerEnabled | rw | DevBoolean | |
| tecPowerEnabled | rw | DevBoolean | |
| biasEnabled | rw | DevBoolean | |
| syncEnabled | rw | DevBoolean | |
| calibEnabled | rw | DevBoolean | |
| 8pCEnabled | ro | DevBoolean | |
| tecOverTemp | ro | DevBoolean | |
| adcOffset | rw | DevFloat[16] | |
| adcGain | rw | DevFloat[16] | |
| aux1 | rw | DevULong[2] | |
| aux2 | rw | DevULong[2] | |
| xchipTiming | rw | DevULong[9] | |
Please refer to the manufacturer’s documentation for more information about the above listed parameters and how to use them.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| SaveConfiguration | DevVoid | DevVoid | Save the current configuration |
| RestoreConfiguration | DevVoid | DevVoid | Restore the latest configuration |
#### V4l2 Tango device[¶](#v4l2-tango-device)
This is the reference documentation of the V4l2 Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [V4l2 camera plugin](index.html#camera-v4l2) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| video_device | No | /dev/video0 | The video device path |
##### Attributes[¶](#attributes)
This device has no attribute.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Ximea Tango device[¶](#ximea-tango-device)
This is the reference documentation of the Ximea Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Ximea camera plugin](index.html#camera-ximea) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| camera_id | Yes | N/A | Camera ID |
| trigger_gpi_port | No | PORT_2 | GPI port used by default for trigger input |
| gpo_port | No | PORT_2 | GPO port used for output when camera active |
| gpo_mode | No | FRAME_ACTIVE | GPO mode used for output when camera active |
| timeout | No | 200 | Timeout for internal loop (on top of exposure time) |
| startup_temp_control_mode | No | AUTO | Startup temperature control mode |
| startup_target_temp | No | 25.0 | Startup target temperature |
| startup_mode | No | 2_12_HDR_HL | Startup camera mode |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| trigger_polarity | rw | DevString | Select trigger polarity |
| software_trigger | w | DevBoolean | Software trigger; write to generate trigger, reads always false |
| gpi_selector | rw | DevString | Select GPI to configure |
| gpi_mode | rw | DevString | Select GPI mode |
| gpi_level | r | DevLong | Read GPI level |
| gpi_level_at_exp_start | r | DevLong | Read GPI level at exposure start |
| gpi_level_at_exp_end | r | DevLong | Read GPI level at exposure end |
| gpi_debounce | rw | DevBoolean | Enable GPI debounce |
| gpo_selector | rw | DevString | Select GPO to configure |
| gpo_mode | rw | DevString | Select GPO mode |
| led_selector | rw | DevString | Select LED to configure |
| led_mode | rw | DevString | Select LED mode |
| mode | rw | DevString | Select configuration preset |
| gain_selector | rw | DevString | Select gain type |
| gain | rw | DevLong | Gain value |
| is_cooled | r | DevLong | Is the camera cooled |
| temp_control_mode | rw | DevString | Temperature control mode |
| temp_target | rw | DevDouble | Target temperature |
| thermometer | rw | DevString | Select thermometer |
| temperature | r | DevDouble | Thermometer temperature |
| temp_chip | r | DevDouble | Camera sensor temperature |
| temp_housing | r | DevDouble | Camera housing temperature |
| temp_back | r | DevDouble | Camera housing back side temperature |
| temp_sensor | r | DevDouble | Sensor board temperature |
| thermal_element | rw | DevString | Thermal control element |
| thermal_element_value | rw | DevDouble | Thermal element control value |
| exposure_selector | rw | DevString | Exposure mode selector |
| burst_count | rw | DevLong | Burst count |
| downsampling | rw | DevString | Downsampling value |
| downsampling_type | rw | DevString | Downsampling type |
| test_pattern_generator | rw | DevString | Test pattern generator |
| test_pattern | rw | DevString | Test pattern |
| image_format | rw | DevString | Image format |
| shutter | rw | DevString | Shutter mode |
| taps | rw | DevString | Sensor taps |
| auto_exposure_gain | rw | DevBoolean | Auto exposure and gain control |
| auto_white_balance | rw | DevBoolean | Auto white balance control |
| horizontal_flip | rw | DevBoolean | Horizontal flip |
| vertical_flip | rw | DevBoolean | Vertical flip |
| interline_exp_mode | rw | DevString | Interline exposure mode |
| binning_engine | rw | DevString | Binning engine selector |
| horizontal_binning_pattern | rw | DevString | Binning horizontal pattern |
| vertical_binning_pattern | rw | DevString | Binning vertical pattern |
| decimation_engine | rw | DevString | Decimation engine selector |
| horizontal_decimation | rw | DevLong | Horizontal decimation value |
| vertical_decimation | rw | DevLong | Vertical decimation value |
| horizontal_decimation_pattern | rw | DevString | Decimation horizontal pattern |
| vertical_decimation_pattern | rw | DevString | Decimation vertical pattern |
| exposure_priority | rw | DevDouble | Exposure priority (e.g. 0.8 - exposure 80%, gain 20%) |
| auto_gain_limit | rw | DevLong | Gain limit for AEAG procedure |
| auto_exposure_limit | rw | DevLong | Exposure limit for AEAG procedure |
| auto_intensity_level | rw | DevDouble | Target average intensity for AEAG procedure |
| bandwidth_limit | rw | DevDouble | Bandwidth limit |
| bandwidth_limit_enabled | rw | DevBoolean | Enable bandwidth limiting |
| available_bandwidth | r | DevDouble | Measured available bandwidth |
| frame_rate | rw | DevDouble | Frame rate (or limit) |
| counter_selector | rw | DevString | Counter selector |
| counter_value | r | DevLong | Selected counter value |
| acq_timing_mode | rw | DevString | Acquisition timing mode |
| trigger_delay | rw | DevLong | Trigger delay |
| acq_status | r | DevBoolean | Acquisition status |
| feature_selector | rw | DevString | Sensor additional features |
| feature_value | rw | DevLong | Selected feature value |
| plugin_version | r | DevString | Plugin version number |
| timeout | rw | DevLong | Timeout for internal loop (on top of exposure time) |
| camera_serial_number | r | DevString | Camera serial number |
| readout_time | r | DevDouble | Mean readout time in seconds |
| readout_time_last_frame | r | DevDouble | Readout time of last frame in seconds |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Xh Tango device[¶](#xh-tango-device)
This is the reference documentation of the Xh Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Xh camera plugin](index.html#camera-xh) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| cam_ip_address | Yes | N/A | The detector IP address |
| port | No | 1972 | The port number |
| config_name | No | “config” | The default configuration filename |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| clockmode | wo | DevString | The clockmode, **XhInternalClock**, **XhESRF5468Mhz** or **XhESRF1136Mhz** |
| nbscans | rw | DevLong | The number of scans for accumulation |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| reset | DevVoid | DevVoid | Perform a hardware reset of the detector |
| setHeadCaps | DevVarULongArray | DevVoid | Caps for AB, Caps for CD |
| sendCommand | DevString | DevVoid | Backdoor command to send direct command to the *da.server* server |
#### Xpad Tango device[¶](#xpad-tango-device)
This is the reference documentation of the Xpad Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Xpad camera plugin](index.html#camera-xpad) section.
##### Properties[¶](#properties)
None.
##### Attributes[¶](#attributes)
None.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
#### Xspress3 Tango device[¶](#xspress3-tango-device)
This is the reference documentation of the Xspress3 Tango device.
You can also find some useful information about the camera models/prerequisite/installation/configuration/compilation in the [Xspress3 camera plugin](index.html#camera-xspress3) section.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| basIpaddress | No | none | Override the base IP address (e.g. 192.168.0.1) from which all other addresses are calculated, or NULL to use the default |
| basMacAddress | No | none | Override the base MAC address (e.g. 02.00.00.00.00) from which all other card MAC addresses are calculated, or NULL to use the default |
| basePort | No | none | Override the base IP port number or 0 to use the default |
| createScopeModule | No | False | true = do not create a scope data module |
| nbFrames | No | 1 | Number of 4096 energy bin spectra timeframes |
| scopeModName | No | NULL | The scope data module filename or NULL to use the default |
| nbCards | No | 1 | The number of xspress3 cards that constitute the xspress3 system, between 1 and XSP3_MAX_CARDS |
| nbChans | No | -1 | Limit the number of channels |
| debug | No | 0 | Debug message level (0 = off, 1 = normal, 2 = verbose) |
| noUDP | No | False | True = do not do UDP connection |
| cardIndex | No | none | Starting card index |
| directoryName | No | none | The directory name to save and restore configurations |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| card | rw | DevLong | |
| numChan | ro | DevLong | |
| numCards | ro | DevLong | |
| chansPerCard | ro | DevLong | |
| maxNumChan | ro | DevLong | |
| binsPerMca | ro | DevLong | |
| windows | rw | DevLong[32] | |
| runMode | rw | DevBoolean[4] | |
| clocks | rw | DevBoolean[3] | |
| goodsThreshold | rw | DevLong[16] | |
| dtcEnergy | rw | DevDouble | |
| dtcParameters | rw | DevDouble[48] | |
| scaling | rw | DevDouble[8] | |
| fanTemperatures | rw | DevDouble[50] | |
| fanController | rw | DevDouble[2] | |
| setPoint | wo | DevDouble | |
| roi | wo | DevLong[25] | |
| useDtc | rw | DevBoolean | |
| setTiming | wo | DevLong | |
| adcTempLimit | wo | DevLong | |
| setPlayback | wo | DevBoolean | |
| playbackfilename | wo | DevString | |
| dataSource | rw | DevLong[8] | |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| getAttrStringValueList | DevString: Attribute name | DevVarStringArray: String value list | Return the authorized string value list for a given attribute name |
| Reset | DevVoid | DevVoid | |
| InitBrams | DevLong: channel | DevVoid | |
| Pause | DevVoid | DevVoid | |
| Restart | DevVoid | DevVoid | |
| Arm | DevVoid | DevVoid | |
| Clear | DevVoid | DevVoid | |
| SaveSettings | DevVoid | DevVoid | |
| RestoreSettings | DevBoolean | DevVoid | Force restore if major revision of saved file does not match the firmware revision |
| InitRois | DevLong: channel | DevVoid | |
| ReadHistogram | DevVarLongArray: frame, channel | DevVarULongArray | Return the histogram data |
| ReadScalers | DevVarLongArray: frame, channel | DevVarULongArray | Return the scaler data |
| StartScope | DevVoid | DevVoid | |
| LoadPlayback | DevVarLongArray: src0, src1, [num_streams, digital] | DevVoid | |
| FormatRun | DevVarLongArray: chan, [nbits_eng, aux1_mode, adc_bits, min_samples, aux2_mode, pileup_reject] | DevVoid | |
### Plugin devices: software operation and extra interfaces[¶](#plugin-devices-software-operation-and-extra-interfaces)
User-defined software plugins can be used to execute arbitrary image-based operations. An entry point in the control layer completely exports the ProcessLib functionality, allowing an external code to be called on every frame. The software operation can be implemented in C++ or Python.
The software operations on image are embedded into individual Tango devices and are available in the **plugins/** directory. They are automatically exported by the LimaCCDs server.
The software operations are of two types, *Sink* or *Link* :* **Link** operation is supposed to modify the frame data, so it gets the frame data as input parameter and it will return a “corrected” image (e.g. Mask/Flatfield/BackgroundSubstraction).
* **Sink** operation is taken the frame data as input parameter to apply some software operation in order to return new data like statistics, peak positions, alarm on saturation … etc.
In addition to sink/link plugin device, a plugin can just be implemented to provide/export a subset of the Lima interface or a legacy interface for some specific client applications (e.g SPEC, LimaTacoCCD plugin).
Today there are about 8 standard plugin devices:
* BackgroundSubstraction : link operation, to correct the frames with a background image (substraction)
* FlatField: link operation to correct the frames with a flatfield image (divide + option normalisation)
* Mask: link operation to mask pixels. Very useful if some pixel are not working properly and if you want to set then to a fix value or to zero.
* MemCached: sink operation to publish images to a memcached server.
* PeakFinder: thanks to <NAME> from DESY, a sink operation which can detect diffraction peaks.
* Roi2Spectrum: sink operation to apply ROI spectrum on the frames. You can define more than one spectra with ROI coordinates and by specifying in which direction you need to bin the values, vertical or horizontal.
* RoiCounter: sink operation to get calculating statistics on image regions.
* RoiCollection: sink operation to generate a spectrum of Roi integration counters.
* LimaTacoCCD: extra interface for TACO clients, it only provides commands (TACO does not have attribute !), it is still used at ESRF for SPEC.
* LiveViewer: extra interface to provide a live view of the last acquired image, can be used from atkpanel.
If you need to implement your own plugin device we can provide you some example codes, use the mailing-list [<EMAIL>](mailto:<EMAIL>) to get help.
#### Background Substraction[¶](#background-substraction)
The Background substraction correction is a simple operation you can active when a detector has some dark-current noise independent of the dose of photons it will receive.
To set the correction you must provide to the device a background image file (**setBackgroundImage** command) and then start the correction (**start** command). Instead of providing an external image file you can simply ask the device to use an image taken. Call the command **takeNextAcquistionAsBackground** to set the internal background image from an acquisition image.
One can apply an extra offset correction using the **offset** attribute value.
##### Properties[¶](#properties)
This device has no property.
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| delete_dark_after_read | rw | DevBoolean | If true the device will delete the file after reading Can be useful to not keep obsolete dark image file after use |
| offset | rw | DevLong | Set a offset level to be applied in addition to the background correction |
| RunLevel | rw | DevLong | Run level in the processing chain, from 0 to N |
| State | ro | State | OFF or ON (stopped or started) |
| Status | ro | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| setBackgroundImage | DevString | DevVoid | Full path of background image file |
| Start | DevVoid | DevVoid | Start the correction for next image |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| Stop | DevVoid | DevVoid | Stop the correction after the next image |
| takeNextAcquisitionAsBackground | DevVoid | DevVoid | next taken image will replace the background |
#### Bpm[¶](#bpm)
This is the BPM (Beam Position Monitoring) device. It aims to detect an X-ray beam spot and returns statistics (x,y positions, FWHM, …).
It takes images and calculates the beam position using the builtin task BPM of the processlib library.
It can also push Tango event containing jpeg view of the image and several statistics and information (listed bellow) in a DevEncoded attribute name bvdata.
##### Properties[¶](#properties)
| Propertie name | RW | Type | Description |
| --- | --- | --- | --- |
| enable_bpm_calc | RW | DevBoolean | Enable or disable the bpm calculation algorithm. |
| enable_tango_event | RW | DevBoolean | if set to false, Bpm won’t push bvdata or other attributes through Tango. |
| calibration | RW | DevVarDoubleArray | Contains the calibration in X and Y ([X,Y]), value in unit/pixel. | |
| beammark | RW | DevVarLongArray | Contains coordinates (X,Y) in pixels of a beam mark set by the user. |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| buffersize | RW | DevLong | Size of the buffer where a certain amount of images will be store before re-writing on the first one. |
| x | RO | DevDouble | coordinate on the x axis of the beam return by the BPM task. If the algorithm couldn’t find a X value then it is set at -1. |
| y | RO | DevDouble | Same as x but for Y axis. |
| txy | RO | DevDouble | Return an array [timestamp,x,y] of the last acquisition. |
| automatic_aoi | RW | DevBoolean | true or false for the AOI mode. |
| intensity | RO | DevDouble | Intensity of the area around beam. |
| max_intensity | RO | DevDouble | Maximum intensity on the image. |
| proj_x | RO | DevLong | Array containing sum of all pixel´s intensity on axis x |
| proj_y | RO | DevLong | Same as proj_x but on y axis. |
| fwhm_x | RO | DevDouble | Full width at half of maximum on the profil X. |
| fwhm_y | RO | DevDouble | same as fwhm_x but on y axis profil. |
| autoscale | RW | DevBoolean | Activate autoscale transformation on the image. (use min and max intensity on it in order to scale). |
| lut_method | RW | DevString | Method used in the transformation of image. can be “LOG” or “LINEAR”. |
| color_map | RW | DevBoolean | Image in black and white(color_map=false), or use a color map to display colors based on intensity. |
| bvdata | RO | DevEncoded | Attribute regrouping the image (jpeg format) and numerous information on it, such as timestamp,
number of the frame, x, y, txy, …
Everything is pack throught struck module and is either send in a Tango event or directly read.
WARNING : You need to have the decode function in order to read (can be found in the webserver Bpm, currently here : <https://gitlab.esrf.fr/limagroup/bpm-web> ) |
| calibration | RW | DevDouble | Attribute version of the calibration property. |
| beammark | RW | DevLong | Attribute version of the beammark property. |
| enable_bpm_calc | RW | DevBoolean | Enable or disable the bpm calculation algorithm. |
##### Commands[¶](#commands)
| Commands name | Arg.IN | Arg.OUT | Description |
| --- | --- | --- | --- |
| Start | DevVoid | DevVoid | Start Bpm device. |
| Stop | DevVoid | DevVoid | Stop Bpm device. |
| getResults | DevLong | DevVarDoubleArray | Take a number as parameter and return an array containing (framenb,x,y) values, starting to the frame number ask until there is no more image. |
| GetPixelIntensity | DevVarLongArray | DevLong | Return the intensity of pixel (x,y) passed as parameters |
| HasBackground | DevVoid | DevBoolean | Is there a background already in place ? |
| TakeBackground | DevVoid | DevVoid | Take the current image and set it as Background, using the Core.BACKGROUNDSUBSTRACTION module. |
| ResetBackground | DevVoid | DevVoid | Reset the Background. |
##### NOTE[¶](#note)
This plugin is supposed to replace the old BeamViewer plugin but with limited functionalities for the moment.
Some other plugins will be created in the future.
This plugin is mainly used in conjunction with the [bpm webserver application](https://gitlab.esrf.fr/limagroup/bpm-web)
#### FlatField[¶](#flatfield)
The flat fied correction can be used to remove artifacts from the images that are caused by variations in the pixel-to-pixel sensitivity of the detector and/or by the distortions in the optical path. Here the correction consists in providing a reference image taken using a uniform photon exposure. Then each raw image will be corrected by dividing the pixel values by their corresponding reference values (flatfield image pixels).
To set the correction you must provide to the device a flatfield image file (**setFlatFieldImage** command) and then start the correction (**start** command).
##### Properties[¶](#properties)
This device has no property.
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| RunLevel | rw | DevShort | Run level in the processing chain, from 0 to N |
| normalize | rw | DevBoolean | If true the flatfield image will be normalized first (using avg signal) |
| State | ro | State | OFF or ON (stopped or started) |
| Status | ro | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| setFlatFieldImage | DevString | DevVoid | Full path to flatfield image file |
| Start | DevVoid | DevVoid | Start the correction for next image |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| Stop | DevVoid | DevVoid | Stop the correction after the next image |
#### Mask[¶](#mask)
The mask correction is very useful when you have some defective pixels on your detector sensor. Then you can provide a mask image file which can either applies a fixed value for those defective pixel (mask type == **DUMMY**) or sets those pixels to zero count (mask type = **STANDARD**).
To set the correction you must provide to the device a flatfield image file (**setFlatMaskImage** command) and then start the correction (**start** command).
##### Properties[¶](#properties)
This device has no property.
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| RunLevel | rw | DevShort | Run level in the processing chain, from 0 to N |
| type | rw | DevString |
Set the type of mask correction:* **DUMMY**, replace the pixel value with the mask image pixel value
* **STANDARD**, if the mask pixel value is equal to zero set the image pixel value to zero otherwise keep the image pixel value unchanged
|
| State | ro | State | OFF or ON (stopped or started) |
| Status | ro | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| getAttrStringValueList | DevString:
Attribute name | DevVarStringArray:
String value list | Return the authorized string value list for a given attribute name |
| Init | DevVoid | DevVoid | Do not use |
| setMaskImage | DevString | DevVoid | full path for the mask image file |
| Start | DevVoid | DevVoid | set the correction active |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| Stop | DevVoid | DevVoid | set the correction inactive |
#### Memcached[¶](#memcached)
This plugins aims to publish the frames to a [memcached storage](<https://memcached.org/about>), a high performance multithreaded event-based key/value cache store intended to be used in a distributed system.
Once configured you can start the task using **Start** command and stop the task calling the **Stop** command.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| ServerIP | Yes | 127.0.0.1 | The server IP |
| ServerPort | Yes | 11211 | The server Port |
| Default AcquisitionID | Yes | default | The default acquisition ID set a startup |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| AcquisitionID | RW | DevString | Unique identifier of the acquisition (basename for the key) |
| Stats | RO | DevString | Memcached server statistics encoded as JSON |
| RunLevel | RW | DevLong | Run level in the processing chain, from 0 to N |
| State | RO | State | OFF or ON (stopped or started) |
| Status | RO | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| Start | DevVoid | DevVoid | Start the operation on image |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| Stop | DevVoid | DevVoid | Stop the operation on image |
| FlushAll | DevVoid | DevVoid | Invalidate all existing cache items |
#### PeakFinder[¶](#peakfinder)
This is a nice plugin developed at DESY which can find peaks on an image and returns the positions of the peaks.
Once the configuration is ok you can start the task using **Start** command and stop the task calling the **Stop** command.
##### Properties[¶](#properties)
This device has no property.
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| BufferSize | rw | DevLong | Circular buffer size in image, default is 128 |
| ComputingMode | rw | DevString |
The computing algorithm :* **MAXIMUM**, find peak at maximum
* **CM**, find peak at center of mass
|
| CounterStatus | ro | DevLong | Counter related to the current number of proceeded images |
| RunLevel | rw | DevLong | Run level in the processing chain, from 0 to N |
| State | ro | State | OFF or ON (stopped or started) |
| Status | ro | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| readPeaks | DevVoid | DevVarDoubleArray frame0,x,y,frame1,.. | Return the peaks positions |
| setMaskFile | DevVarStringArray | DevVoid | Full path of mask file |
| Start | DevVoid | DevVoid | Start the operation on image |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| Stop | DevVoid | DevVoid | Stop the operation on image |
#### Roi2Spectrum[¶](#roi2spectrum)
The Region-of-Interest to Spectrum operation is very useful to provide online integration of some areas of your detector.
The integration of the pixel values can set along the Y direction or the X direction.
You must create first the Rois by providing unique names (**addNames** command) and then set the Roi position using the index and the x,y, width, height
(**setRois** command). The direction for integration (so-called mode) can be set using te **setRoiModes** command.
Once the configuration is ok you can start the task using **Start** command and stop the task calling the **Stop** command.
The spectrum data can be retrieved by calling the **readImage** command, the command returns the spectrums as a stack stored into an image.
In addition to the statistics calculation you can provide a mask file (**setMask** command or **MaskFile** property/attribute)
where null pixel will not be taken into account.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| BufferSize | No | 128 | Circular buffer size in image |
| MaskFile | No | “” | A mask file |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| BufferSize | rw | DevLong | Circular buffer size in image, default is 128 |
| CounterStatus | ro | DevLong | Counter related to the current number of proceeded images |
| MaskFile | rw | DevString | The mask file |
| RunLevel | rw | DevLong | Run level in the processing chain, from 0 to N |
| State | ro | State | OFF or ON (stopped or started) |
| Status | ro | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| addNames | DevVarStringArray list of Roi names | DevVarStringArray list of Roi indexes | Set the names and return the corresponding indexes |
| clearAllRois | DevVoid | DevVoid | Remove the Rois |
| getNames | DevVoid | DevVarStringArray | Return the list of Roi names |
| getRoiModes | DevVarStringArray | DevVarStringArray | Return the Roi modes |
| getRois | DevVarStringArray list of Roi names | DevVarStringArray list of Roi position
(roi_id,x,y,width,heigth,…) | Return the Roi positions |
| Init | DevVoid | DevVoid | Do not use |
| readImage | DevVarLongArray | DevVarLongArray | |
| removeRois | roi_id,first image | spectrum stack | Return the stack of spectrum from the specified image index until the last image acquired |
| setRois | DevArLongArray
(roi_id,x,y,w,h,…) | DevVoid | Set roi positions |
| Start | DevVoid | DevVoid | Start the operation on image |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| Stop | DevVoid | DevVoid | Stop the operation on image |
#### RoiCounter[¶](#roicounter)
The Region-of-Interest to Counter operation is very useful to provide online statistics on some detector areas. The operation will calculate for each image acquired the **average**, the **standard deviation**, the **sum**, the **minimum** and the **maximum pixel** values.
The Roi can be defined either with rectangle coordinates (x begin,y begin, width, height) or with arc coordinates (center x, center y, radius1, radius2, angle start, angle end). Different commands are provided for that purpose: **setRois** and **setArcRois**.
You must create first the Rois by providing unique names (**addNames** command) and then set the Roi position using the Roi index and the position (rectangle or arc position).
The statistics can be retrieved by calling the **readCounters** command, the command returns a list of statistics per Roi and frame.
In addition to the statistics calculation you can provide a mask file (**setMask** command or **MaskFile** property/attribute)
where null pixel will not be taken into account.
If you have a detector with pixels which randomly return wrong high count rate, you can use the **OverflowThreshold**
attribute to cut off those defective pixels.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| BufferSize | No | 128 | Circular buffer size in image |
| MaskFile | No | “” | A mask file |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| BufferSize | rw | DevLong | Circular buffer size in image, default is 128 |
| CounterStatus | ro | DevLong | Counter related to the current number of proceeded images |
| MaskFile | rw | DevString | The mask file |
| OverflowThreshold | rw | DevLong | cut off pixels above the threshold value |
| RunLevel | rw | DevLong | Run level in the processing chain, from 0 to N |
| State | ro | State | OFF or ON (stopped or started) |
| Status | ro | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| addNames | DevVarStringArray list of Roi names | DevVarStringArray list of Roi indexes | Set the names and return the corresponding indexes |
| clearAllRois | DevVoid | DevVoid | Remove the Rois |
| getNames | DevVoid | DevVarStringArray | Return the list of Roi names |
| getRoiModes | DevVarStringArray | DevVarStringArray | Return the Roi modes |
| getRois | DevVarStringArray list of Roi names | DevVarStringArray list of Roi position
(roi_id,x,y,width,heigth,…) | Return the Roi positions |
| getArcRois | DevVarStringArray list of ArcRoi names |
DevVarStringArraylist of ArcRoi position
(roi_id,x,y,width,heigth,…) | Return the ArcRoi positions |
| Init | DevVoid | DevVoid | Do not use |
| readCounters | DevVarLongArray | DevVarLongArray | |
| removeRois | roi_id,first image | spectrum stack | Return the stack of spectrum from the specified image index until the last image acquired |
| setArcRois | DevVarDoublArray
(roi_id0,centerx,centery,
radius1,raduis2,start_angle,
end_angle,roi_id1,…) | DevVoid | Set the Arc Rois |
| setMaskFile | DevVarStringArray full path file | DevVoid | Set the mask file |
| setRois | DevArLongArray
(roi_id0,x,y,w,h,roi_id1..) | DevVoid | Set roi positions |
| Start | DevVoid | DevVoid | Start the operation on image |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| Stop | DevVoid | DevVoid | Stop the operation on image |
#### RoiCollection[¶](#roicollection)
The Roi collection plugin can be used to do data reduction on the image by providing a large number of Roi. The result will a spectrum of data.
The spectrum (command **readSpectrum**) is containing the ROI integration value of the pixels.
In addition to the statistics calculation you can provide a mask file (**setMask** command or **MaskFile** property/attribute)
where null pixel will not be taken into account.
If you have a detector with pixels which randomly return wrong high count rate, you can use the **OverflowThreshold**
attribute to cut off those defective pixels.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| BufferSize | No | 128 | Circular buffer size in image |
| MaskFile | No | “” | A mask file |
##### Attributes[¶](#attributes)
| Attribute name | RW | Type | Description |
| --- | --- | --- | --- |
| BufferSize | rw | DevLong | Circular buffer size in image, default is 128 |
| CounterStatus | ro | DevLong | Counter related to the current number of proceeded images |
| OverflowThreshold | rw | DevLong | cut off pixels above the threshold value |
| MaskFile | rw | DevString | The mask file |
| RunLevel | rw | DevLong | Run level in the processing chain, from 0 to N |
| State | ro | State | OFF or ON (stopped or started) |
| Status | ro | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
#### LimaTacoCCD[¶](#limatacoccd)
This device has been created by legacy and it provides the only interface that SPEC software is supporting for “ESRF General CCD Dev” CCD-like controller.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| ManualAsynchronousWrite | No | False | Flag for manual writting, can improve the performance of data saving |
##### Attributes[¶](#attributes)
This device has no attributes.
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| TacoState | DevVoid | DevLong | Return the device taco-like state |
| DevCcdStart | DevVoid | DevVoid | Start the acquisition |
| DevCcdStop | DevVoid | DevVoid | Stop the acquisition |
| DevCcdRead | DevVarLongArray[2]:
frame_nb,frame_size | DevVarCharArray:
the raw image | Return the image as a string |
| DevCcdReadAll | DevLong:
frame_size | DevEncoded | Return the concatenated frames in a DevEncoded format DATA_ARRAY (see [DevEncoded](index.html#data-array-encoded)) |
| DevCcdReadJPeg | DevShort:
jpeg compression | DevVarCharArray:
Jpeg image | Return a jpeg image |
| DevCcdWrite | DevVoid | DevVoid | Save the last image |
| DevCcdSetExposure | DevFloat | DevVoid | Set the exposure time in second |
| DevCcdGetExposure | DevVoid | DevFloat | Return the exposure time in second |
| DevCcdSetRoI | DevVarLongArray[4]:
startx,endx,starty,
endy | DevVoid | Set the new Region-of-Interest |
| DevCcdGetRoi | DevVoid | DevVarLongArray[4]:
startx,endx,starty,
endy | Return the last Region-of-Interest |
| DevCcdSetFilePar | DevStringArray[5] | | |
| DevCcdHeader | | | |
| DevCcdImageHeader | | | |
| DevCcdHeaderDelimiter | | | |
| DevCcdGetFilePar | | | |
| DevCcdDepth | | | |
| DevCcdYSize | | | |
| DevCcdXSize | | | |
| DevCcdReset | | | |
| DevCcdSetMode | | | |
| DevCcdGetMode | | | |
| DevCcdWriteFile | | | |
| DevCcdGetBin | | | |
| DevCcdSetBin | | | |
| DevCcdSetFrames | | | |
| DevCcdGetFrames | | | |
| DevCcdSetTrigger | | | |
| DevCcdGetTrigger | | | |
| DevCcdReadValues | | | |
| DevCcdSigValues | | | |
| DevCcdGetLstErrMsg | | | |
| DevCcdGetCurrent | | | |
| DevGetDebugFlags | | | |
| DevSetDebugFlags | | | |
#### LiveViewer[¶](#liveviewer)
This device was create for backward compatibility with former graphical applications used at ESRF by the diagnostic group for the monitoring of the electron beam. It is no longer maintain. Instead we recommend to use the video API provided via the main device LimaCCDs.
Nevertheless you will find here the of the available properties, attributes and commands.
##### Properties[¶](#properties)
| Property name | Mandatory | Default value | Description |
| --- | --- | --- | --- |
| AcquisitionAutoStart | No | False | If true start the acquistion at device startup |
##### Attributes[¶](#attributes)
| Attribute name | rw | Type | Description |
| --- | --- | --- | --- |
| Depth | ro | DevShort | Image depth in byte |
| Exposure | rw | DevDouble | Exposure time in second |
| ExternalTrigger | rw | DevBoolean | External trigger active if true |
| FrameRate | rw | DevDouble | Frame rate in fps |
| Frames | rw | DevLong | Number of frames to acquire |
| Gain | rw | DevDouble | Gain, support depends on the camera model |
| Image | ro | Image, DevUShort | The last image taken |
| ImageCounter | ro | DevLong | The image counter |
| JpegImage | ro | DevEncoded | The last image in JPEG format, only supported for B/W cameras. |
| JpegQuality | rw | DevLong | JPEG quality factor from 0 to 10 |
| Roi | rw | DevLong,Spectrum | The Roi position, start x, start y, width, height |
| State | ro | State | OFF or ON (stopped or started) |
| Status | ro | DevString | “OFF” “ON” (stopped or started) |
##### Commands[¶](#commands)
| Command name | Arg. in | Arg. out | Description |
| --- | --- | --- | --- |
| Init | DevVoid | DevVoid | Do not use |
| Reset | DevVoid | DevVoid | Reset the camera, factory setting is apply |
| ResetRoi | DevVoid | DevVoid | Remove the Roi, camera set to full size |
| Start | DevVoid | DevVoid | Start the camera for live acquisition |
| State | DevVoid | DevLong | Return the device state |
| Status | DevVoid | DevString | Return the device state as a string |
| Stop | DevVoid | DevVoid | Stop the camera live |
Understand the plugin architecture[¶](#understand-the-plugin-architecture)
---
### Library structure[¶](#library-structure)
The library structure is divided into two main layers: the control, containing the common control and processing code, and the hardware which is implementing the detector-specific part.
The control layer provides the library interface to the high level application. User requests to configure and control the acquisition are gathered by the control layer,
so the hardware layer functionality is limited to the generation the image frames in a best-effort basis.
The control layer is responsible of:
> * Adapting the received image geometry if it does not match the user requests,
> * Executing the frame processing chain.
### Generic Interface[¶](#generic-interface)
The Hardware Layer defines the interface between the Control Layer and the controller library. It provides the minimal functionality needed for the Control Layer to satisfy the user requests.
The main class in the Hardware Layer is the [`lima::HwInterface`](index.html#_CPPv4N4lima11HwInterfaceE), providing the interface to the Control Layer. In order to provide a flexible and evolvable interface, the configuration of this layer is implemented as a set of features (capabilities) that may or may not be implemented by the hardware.
The capabilities can be grouped in three categories:
1. **Standard.** Includes the synchronization parameters (exposure time, ext. trigger, etc), the detector information (Detector model, Max size, etc..) is considered standard and must be implemented for all detectors.
2. **Extended.** Optional common features like image transformations (binning, RoI, flip), advanced acquisition modes (kinetics, frame transfer), and extended mechanisms (camera serial line)
3. **Specific.** These are detector-specific features that can not be treated in a generic interface
As a camera plugin developer, your mission, should you choose to accept it, will consist in writing the code for the [`lima::HwInterface`](index.html#_CPPv4N4lima11HwInterfaceE) class and its depending classes (.e.g the capabilities classes).
Figure 1. Class diagram of a camera plugin.[¶](#id1)
### Hardware Interface[¶](#hardware-interface)
[`lima::HwInterface`](index.html#_CPPv4N4lima11HwInterfaceE) is the glue layer between the Control Layer and the camera plugin implementation. It informs LImA about the capabilities provided by the hardware.
class HwInterface
As an interface to the Control Layer, this class exports the capabilities provided by the hardware.
It is implemented by every camera plugins.
Public Functions
virtual void getCapList(CapList&) const = 0
Returns a list of capabilities.
virtual void reset(ResetLevel reset_level) = 0
Reset the hardware interface.
virtual void prepareAcq() = 0
Prepare the acquisition and make sure the camera is properly configured.
This member function is always called before the acquisition is started.
virtual void startAcq() = 0
Start the acquisition.
virtual void stopAcq() = 0
Stop the acquisition.
virtual void getStatus([StatusType](index.html#_CPPv4N4lima11HwInterface10StatusTypeE) &status) = 0
Returns the current state of the hardware.
virtual int getNbAcquiredFrames()
Returns the number of acquired frames.
virtual int getNbHwAcquiredFrames() = 0
Returns the number of acquired frames returned by the hardware (may differ from getNbAcquiredFrames if accumulation is on)
The [`lima::HwInterface::getStatus()`](index.html#_CPPv4N4lima11HwInterface9getStatusER10StatusType) member function should return the following information:
struct Status
A tuple of status with acquisition and detector status / mask.
Public Types
enum Basic
Basic detector states (some detectors may have additional states)
*Values:*
enumerator Fault
Fault.
enumerator Ready
Ready for acquisition.
enumerator Exposure
Counting photons.
enumerator Readout
Reading data from the chip.
enumerator Latency
Latency between exposures.
enumerator Config
Fault.
Public Members
[AcqStatus](index.html#_CPPv4N4lima9AcqStatusE) acq
Global acquisition status.
[DetStatus](index.html#_CPPv4N4lima9DetStatusE) det
Compound bit flags specifying the current detector status.
[DetStatus](index.html#_CPPv4N4lima9DetStatusE) det_mask
A mask specifying the detector status bits that are supported by the hardware.
Figure 2. Hardware capabilites block diagram[¶](#id2)
### Standard Capabilities[¶](#standard-capabilities)
These capabilities are mandatory for all the detectors. They define the minimum functionality necessary for image acquisition.
Three capability classes (DetInfo, Sync and BuffCtrl) are listed below with their set/get methods which have to be provided within the new camera plugin code.
#### Detector Information[¶](#detector-information)
The interface [`lima::HwDetInfoCtrlObj`](index.html#_CPPv4N4lima16HwDetInfoCtrlObjE) returns static information about the detector and the current image dimension.
class HwDetInfoCtrlObj
Provides static information about the detector and the current image dimension.
Public Functions
virtual void getMaxImageSize(Size &max_image_size) = 0
Return the maximum size of the image.
virtual void getDetectorImageSize(Size &det_image_size) = 0
Return the size of the detector image, it is always equal or greater than the MaxImageSize.
virtual void getDefImageType(ImageType &def_image_type) = 0
Returns the default data type of image (ushort, ulong, …)
virtual void getCurrImageType(ImageType &curr_image_type) = 0
Returns the current data type of image (ushort, ulong, …).
virtual void getPixelSize(double &x_size, double &y_size) = 0
Physical size of pixels (in meter)
virtual void getDetectorType(std::string &det_type) = 0
Returns the type of the detector (Frelon, Maxipix, …)
virtual void getDetectorModel(std::string &det_model) = 0
Returns the model of the detector.
virtual void registerMaxImageSizeCallback(HwMaxImageSizeCallback &cb) = 0
Register a callback called when the detector is reconfigured with a different geometry.
virtual void unregisterMaxImageSizeCallback(HwMaxImageSizeCallback &cb) = 0
Unregister a callback previsouly registered with registerMaxImageSizeCallback.
inline virtual void setUserDetectorName(const std::string &username)
Set a detector user name.
inline virtual void getUserDetectorName(std::string &username)
Get a detector user name.
Note
The `HwMaxImageSizeCallback` callback functions let the hardware inform the Lima library of a change of the detector maximum image size. This change can happen with some detectors which can be reconfigured with a different geometry. This camera capability is *NOT* a Roi *nor* a Bin capability. For instance, the maxipix detector is a mosaic of several individual sensor chips and it can be configured and reconfigured with different geometries according to user needs. A 2x2 maxipix detector can be configured in a 1x1 geometry.
#### Synchronization[¶](#synchronization)
The interface [`lima::HwSyncCtrlObj`](index.html#_CPPv4N4lima13HwSyncCtrlObjE) controls the acquisition parameters related to synchronization.
| Parameters | Description |
| --- | --- |
| set/getExpTime | Frame exposure time |
| set/getLatTime | Latency time between frames |
| checkTrigMode | A check method which returns True/False for the supported trigger modes |
| set/getTrigMode |
Triggering mode:* Internal: software triggering
* ExtStart: one external signal to start the whole sequence acquisition (one or more frames per sequence)
* MultExtStart: one external signal for each frame in the acquisition sequence
* Gate: controls start and stop of each frame
* ExtStartStop: one start signal to start acquisition of one frame and one signal to stop it
|
#### Buffer Management[¶](#buffer-management)
The interface [`lima::HwBufferCtrlObj`](index.html#_CPPv4N4lima15HwBufferCtrlObjE) controls the image memory buffer allocation and management. They are used:
* As temporary frame storage before saving, allowing disk/network speed fluctuations.
* To permanently hold images that can be read by the user after the acquisition is finished.
These buffer functionalities may be implemented by the hardware layer (kernel driver in the case of the Espia).
If not, an auxiliary buffer manager class will be provided to facilitate (and unify) its software implementation.
The buffer management parameters are:
| Parameters | Description |
| --- | --- |
| NbBuffers | Number of image buffers in memory. |
| NbConcatFrames | The number of concatenated frames per buffer. |
| NbAccFrames | The number of detector frames to accumulate into a single buffer. |
| MaxNbBuffers | This Read-Only parameter indicates the maximum number of buffers that can be allocated,
given the size of the frame and the number of (concatenated) frames per buffer. |
| BufferMode | Buffer filling mode (linear or circular) |
The buffer manager must also provide the following member functions:
* [`lima::HwBufferCtrlObj::getBufferPtr()`](index.html#_CPPv4N4lima15HwBufferCtrlObj12getBufferPtrEii)
* [`lima::HwBufferCtrlObj::getFramePtr()`](index.html#_CPPv4N4lima15HwBufferCtrlObj11getFramePtrEi)
* [`lima::HwBufferCtrlObj::getFrameInfo()`](index.html#_CPPv4N4lima15HwBufferCtrlObj12getFrameInfoEiR15HwFrameInfoType)
In most of simple cases, one just need to create a [`lima::SoftBufferCtrlObj`](index.html#_CPPv4N4lima17SoftBufferCtrlObjE) class instance within the Camera class instance to store the frames. A good example of a simple implementation is available in the Andor camera plugin code.
#### Frame callback[¶](#frame-callback)
The hardware must provide callbacks after each acquired frame. The callback function should receive the following information:
| Parameters | Description |
| --- | --- |
| AcqFrameNb | Index of the frame since the start of the acquisition |
| FramePtr | Pointer to the frame memory |
| FrameDim | Structure holding the width, height and type of the frame |
| TimeStamp | Time (in sec.) since the start of the acquisition |
The frame callbacks are implemented by means of an auxiliary class [`lima::HwFrameCallback`](index.html#_CPPv4N4lima15HwFrameCallbackE), which will be used by the Control Layer.
From the Hardware Layer point of view, the standard capability control object must implement two functions:
* setFrameCallbackActive(bool cb_active)
* frameReady(<callback_frame_info>)
Setting up a development environment[¶](#setting-up-a-development-environment)
---
LImA build dependency were updated with the latest version of LImA and that may be an issue on older distro where the tools are not available, namely:
* [CMake](https://cmake.org/) >= 3.1
* GCC with C++11 support >= 4.8.1
The first option is to build these packages from source but it is a PITA. One other option is to build with packages managed by [Conda](https://conda.io/docs) and the following instruction should get you started.
### Install Conda[¶](#install-conda)
If you don’t have Conda installed, get [Miniconda](https://conda.io/miniconda.html) and follow the [install instruction](https://conda.io/docs/user-guide/install/index.html).
### Create a build environment[¶](#create-a-build-environment)
A good practice would be not to pollute the base environment and work in a dedicated `lima` environment.
Prefer to use mamba tool for package installation rather than the default conda installer, mamba is faster and works better to solve dependencies:
```
conda create -n lima mamba conda activate lima
```
Conda channels must be defined in the proper order with conda-forge first and prepend to the default anaconda channel:
```
conda config --env --add channels conda-forge conda config --env --append channels esrf-bcu
```
Then install the build tools:
For linux
```
mamba install cmake gxx_linux-64
```
For windows, just be sure you have visual studio 2017 x64 installed
You might need to leave the Conda environment and enter it again so that the environment variables (CXX) needed by CMake are set:
```
conda deactivate conda activate lima
```
Finally, install the `lima-core` package (and dependencies) with Conda:
```
mamba install lima-core
```
If you want to run the LimaCCDs device server on top of your camera plugin we recommend to install the simulator tango package, then you will get installed all the packages by dependencies:
```
mamba install lima-camera-simulator-tango
```
And you are good to code! A good way to start is to use our seed project at:
```
git clone --bare https://github.com/esrf-bliss/Lima-camera-template.git cd Lima-camera-template.git git push --mirror https://github.com/esrf-bliss/Lima-camera-mycamera.git
```
Once you have your new repo ready, clone it and happy coding!
```
git clone https://github.com/esrf-bliss/Lima-camera-mycamera.git cd Lima-camera-mycamera git checkout develop
```
Once you are ready to build, here are the typical [CMake](https://cmake.org/) commands for an out of source build (in the build folder) and for installing in the current Conda environment (`$CONDA_PREFIX`)
For linux:
```
cmake -Bbuild -H. -DLIMA_ENABLE_PYTHON=1 -DCAMERA_ENABLE_TESTS=1 -DCMAKE_FIND_ROOT_PATH=$CONDA_PREFIX -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX cmake --build build --target install
```
For windows:
```
cmake -Bbuild -H. -DLIMA_ENABLE_PYTHON=1 -DCAMERA_ENABLE_TESTS=1 -DCMAKE_FIND_ROOT_PATH=%CONDA_PREFIX% -DCMAKE_INSTALL_PREFIX=%CONDA_PREFIX%
cmake --build build --target install --config Release
```
Source code organization[¶](#source-code-organization)
---
This chapter provides general guidelines to follow, to share a plugin with the community.
### Source code[¶](#source-code)
#### Plug-ins submodules[¶](#plug-ins-submodules)
The source files and documentation of each new plug-in must be located under Lima/Camera as shown figure below.
```
├───camera
└───mycamera
├───cmake
├───conda
│ ├───camera
│ └───tango
├───doc
├───include
├───python
├───sip
├───src
├───tango
└───test
```
To maintain homogeneity between the different plug-ins, each plug-in must have at minimum the following folders:
> * `/src` : contains the source files. Plug-ins must be developed in C++. The “src” folder must contain the following files :
> + `DetectorNameInterface.cpp` : interface class between detector capabilities from the hardware interface and the control layer **(mandatory)**
> + `DetectorNameDetInfoCtrObj.cpp` : capabilities to get static informations about the detector **(mandatory)**
> + `DetectorNameBufferCtrlObj.cpp` : capabilities to control the image memory buffer allocation **(mandatory)**
> + `DetectorNameSyncCtrlObj.cpp` : capabilities to control the image memory buffer allocation **(mandatory)**
> + `DetectorNameRoiCtrlObj.cpp` : capabilities to get a ROI **(optional)**
> + `DetectorNameBinCtrlObj.cpp` : capabilities to make pixel binning **(optional)**
> + `DetectorNameVideoCtrlObj.cpp` : capabilities to make video mode only for non-scientific detectors **(optional)**
> + `DetectorNameShutterCtrlObj.cpp` : capabilities to control shutter **(optional)**
> + `DetectorNameFlipCtrlObj.cpp` : capabilities to flip image **(optional)**
> + `DetectorNameEventCtrlObj.cpp` : capabilities to generate event **(optional)**
> + `DetectorNameSavingCtrlObj.cpp` : capabilities to save images in different formats **(optional)**
> * `/include` : contains the header files relative to the sources files described before.
> * `/doc` : contains at least `index.rst` for plug-in documentation. Other files such as image can be added. The minimum content of the index file is detailed in the documentation section.
> * Other folders can be added based on need. The contents of this file must be described in the documentation.
Note
If optional capabilities are not defined, they are emulated by the Lima Core.
#### Camera device[¶](#camera-device)
Once the plug-in was developed, you must create a camera device to execute all commands on the camera. This device can be developed in Python or C++. Python devices must be located on “Lima/applications/tango/camera”, C++ devices on “Lima/applications/tango/LimaDetector”
In order to enhance the general software quality of Device Servers developed by the various institutes using Tango, a Design and Implementation Guidelines document has been written by SOLEIL. This document can be downloaded [here](https://tango-controls.readthedocs.io/en/latest/development/device-api/ds-guideline).
It is recommended that the camera device comply with these design guidelines.
### Class names[¶](#class-names)
Again, to maintain homogeneity, it is recommended to follow this nomenclature for the class names:
* **DetectorName**::Camera
* **DetectorName**::Interface
* **DetectorName**::SyncCtrlObj
* **DetectorName**::DetInfoCtrlObj
As an example, one can look at the Prosilica plugin for a real implementation or at the simulator plugin for a mock implementation.
### How to test the new plugin with python[¶](#how-to-test-the-new-plugin-with-python)
In order to communicate with the underlying detector hardware, the lima client must instantiate the main object of the LImA framework [`lima::CtControl`](index.html#_CPPv4N4lima9CtControlE).
To be instantiated, [`lima::CtControl`](index.html#_CPPv4N4lima9CtControlE) requires an interface inherited from common [`lima::HwInterface`](index.html#_CPPv4N4lima11HwInterfaceE).
This interface requires the Camera object that encapsulates dependency with detector and its SDK.
For instance if you are using the python binding for the Prosilica camera, a client application initialization should do:
```
from Lima import Prosilica as ProsilicaAcq from Lima import Core
my_prosilica_ip_address = 192.168.1.2
# we need the camera object first camera = ProsilicaAcq.Camera(my_prosilica_ip_address)
# create the HwInterface which needs the camera as unique parameter camera_interface = ProsilicaAcq.Interface(camera)
# Now create the :cpp:class:`lima::CtControl` and passed to Lima the new HwInterface control = Core.CtControl(camera_interface)
```
The camera is now under control and it can be used to acquire images !
First get the sub-objects for the parameter setting of the detector, acquisition, saving and more if necessary.
```
acq = control.acquisition()
saving = control.saving()
acq.setAcqExpoTime(0.1)
acq.setAcqNbFrames(10)
pars=saving.getParameters()
pars.directory='/buffer/test_lima'
pars.prefix='test1_'
pars.suffix='.edf'
pars.fileFormat=Core.CtSaving.EDF pars.savingMode=Core.CtSaving.AutoFrame saving.setParameters(pars)
# pass parameters to camera hw interface control.prepareAcq()
# start the acquisition control.startAcq()
```
Note
Camera object is only used to enhance the separation between the generic interface and the API driver of the detector. It is similar to a proxy.
The camera class is also supposed to provide an access to the specific configuration of the detector. For instance if your detector has a threshold setting or a built-in background correction available you should implement these features in the Camera class. The [`lima::HwInterface`](index.html#_CPPv4N4lima11HwInterfaceE) will not know about the specific configuration and a client application should explicitly implement the configuration. A good example is the Andor camera, where there are few extra features like the temperature set-point (set/getTemperatureST()) or the cooler control (set/getCooler(bool)).
With the Andor camera one can set the cooling as:
```
camera.setTemperatureSP(-50)
camera.setCooler(True)
current_temp = camera.getTemperature()
```
The Lima project code provides some client application based on TANGO protocol for the remote access.
One can find a python implementation under applications/tango and a C++ version in applications/tango/LimaDetector.
The python server has been developed at ESRF and being used on lot of beamlines and the C++ server is the SOLEIL version which is also used on beamlines.
The `LimaCCDs` python server has its own documentation here.
Implementation Recommendations[¶](#implementation-recommendations)
---
Use the [pImpl idiom](https://en.cppreference.com/w/cpp/language/pimpl) to implement the Camera class, breaking compile-time dependency between the vendor SDK and the rest of LImA and downstream applications.
The C++ ABI is sadly [known to be not stable](<https://isocpp.org/files/papers/n4028.pdf>) between versions of compilers and even between build compiled with the same toolset but different switches. Most vendor SDKs are closed source and cannot be recompiled at will which is the reason why we recommend to use their C version if it exists. Wrapping the C++ API in a C API is a possible workaround.
Write a documentation[¶](#write-a-documentation)
---
Plugin documentation must be located in “Lima/camera/detector/name/doc”. It is composed of at least an “index.rst” file which contains information to install, configure and implement a camera plugin. The presence of this documentation is required to share a plugin with Lima community.
Plugins documentation is available in the section “Supported Cameras”.
The table below describes information that must be present in the index file :
C++ API[¶](#c-api)
---
Unfortunately very limited documentation is available from the source but that should improve over time.
### User API[¶](#user-api)
In this section we cover the classes that defines the user interface.
#### Hello, Lima![¶](#hello-lima)
Let’s get started with a simple example of an image acquisition function using the simulator camera.
```
// A camera instance and its hardware interface Simulator::Camera simu;
Simulator::Interface hw(simu);
// The control object CtControl ct = CtControl(&hw);
// Get the saving control and set some properties CtSaving *save = ct.saving();
save->setDirectory("./data");
save->setPrefix("test_");
save->setSuffix(".edf");
save->setNextNumber(100);
save->setFormat(CtSaving::EDF);
save->setSavingMode(CtSaving::AutoFrame);
save->setFramesPerFile(100);
// Set the binning or any other processing Bin bin(2, 2);
CtImage *image = ct.image();
image->setBin(bin);
// Get the acquisition control and set some properties CtAcquisition *acq = ct.acquisition();
acq->setAcqMode(Single);
acq->setAcqExpoTime(expo);
acq->setAcqNbFrames(nframe);
// Prepare acquisition (transfer properties to the camera)
ct.prepareAcq();
// Start acquisition ct.startAcq();
std::cout << "SIMUTEST: acq started" << std::endl;
//
long frame = -1;
while (frame < (nframe - 1))
{
using namespace std::chrono;
high_resolution_clock::time_point begin = high_resolution_clock::now();
usleep(100000);
CtControl::ImageStatus img_status;
ct.getImageStatus(img_status);
high_resolution_clock::time_point end = high_resolution_clock::now();
auto duration = duration_cast<microseconds>(end - begin).count();
std::cout << "SIMUTEST: acq frame nr " << img_status.LastImageAcquired
<< " - saving frame nr " << img_status.LastImageSaved << std::endl;
if (frame != img_status.LastImageAcquired) {
unsigned int nb_frames = img_status.LastImageAcquired - frame;
std::cout << " " << duration << " usec for " << nb_frames << " frames\n";
std::cout << " " << 1e6 * nb_frames / duration << " fps" << std::endl;
frame = img_status.LastImageAcquired;
}
}
std::cout << "SIMUTEST: acq finished" << std::endl;
// Stop acquisition ( not really necessary since all frames where acquired)
ct.stopAcq();
std::cout << "SIMUTEST: acq stopped" << std::endl;
```
#### Control Interfaces[¶](#control-interfaces)
The control interface is the high level interface that controls an acquisition.
class CtControl[¶](#_CPPv4N4lima9CtControlE)
Main client class which should be instantiated by the users in their acquisition software.
Advanced control accessors
inline [CtAcquisition](index.html#_CPPv4N4lima13CtAcquisitionE) *acquisition()[¶](#_CPPv4N4lima9CtControl11acquisitionEv)
Returns a pointer to the acquisition control.
inline [CtSaving](index.html#_CPPv4N4lima8CtSavingE) *saving()[¶](#_CPPv4N4lima9CtControl6savingEv)
Returns a pointer to the saving control.
inline [CtImage](index.html#_CPPv4N4lima7CtImageE) *image()[¶](#_CPPv4N4lima9CtControl5imageEv)
Returns a pointer to the image control.
inline [CtBuffer](index.html#_CPPv4N4lima8CtBufferE) *buffer()[¶](#_CPPv4N4lima9CtControl6bufferEv)
Returns a pointer to the buffer control.
inline CtAccumulation *accumulation()[¶](#_CPPv4N4lima9CtControl12accumulationEv)
Returns a pointer to the accumulation control.
inline CtVideo *video()[¶](#_CPPv4N4lima9CtControl5videoEv)
Returns a pointer to the video control.
inline [CtShutter](index.html#_CPPv4N4lima9CtShutterE) *shutter()[¶](#_CPPv4N4lima9CtControl7shutterEv)
Returns a pointer to the shutter control.
inline CtEvent *event()[¶](#_CPPv4N4lima9CtControl5eventEv)
Returns a pointer to the event control.
Public Functions
void abortAcq()[¶](#_CPPv4N4lima9CtControl8abortAcqEv)
stop an acquisition and purge all pending tasks.
void stopAcqAsync([AcqStatus](index.html#_CPPv4N4lima9AcqStatusE) acq_status, ErrorCode error_code, Data &data)[¶](#_CPPv4N4lima9CtControl12stopAcqAsyncE9AcqStatus9ErrorCodeR4Data)
aborts an acquisiton from a callback thread: it’s safe to call from a HW thread.
Creates a dummy task that calls stopAcq() and waits for all buffers to be released
void abortAcq([AcqStatus](index.html#_CPPv4N4lima9AcqStatusE) acq_status, ErrorCode error_code, Data &data, bool ctrl_mutex_locked = false)[¶](#_CPPv4N4lima9CtControl8abortAcqE9AcqStatus9ErrorCodeR4Datab)
This function is DEPRECATED.
Use stopAcqAsync instead
void registerImageStatusCallback([ImageStatusCallback](index.html#_CPPv4N4lima9CtControl19ImageStatusCallbackE) &cb)[¶](#_CPPv4N4lima9CtControl27registerImageStatusCallbackER19ImageStatusCallback)
registerImageStatusCallback is not thread safe!!!
void unregisterImageStatusCallback([ImageStatusCallback](index.html#_CPPv4N4lima9CtControl19ImageStatusCallbackE) &cb)[¶](#_CPPv4N4lima9CtControl29unregisterImageStatusCallbackER19ImageStatusCallback)
unregisterImageStatusCallback is not thread safe!!!
class _AbortAcqCallback : public TaskEventCallback[¶](#_CPPv4N4lima9CtControl17_AbortAcqCallbackE)
class _LastBaseImageReadyCallback : public TaskEventCallback[¶](#_CPPv4N4lima9CtControl27_LastBaseImageReadyCallbackE)
class _LastCounterReadyCallback : public TaskEventCallback[¶](#_CPPv4N4lima9CtControl25_LastCounterReadyCallbackE)
class _LastImageReadyCallback : public TaskEventCallback[¶](#_CPPv4N4lima9CtControl23_LastImageReadyCallbackE)
class _LastImageSavedCallback : public TaskEventCallback[¶](#_CPPv4N4lima9CtControl23_LastImageSavedCallbackE)
class _ReconstructionChangeCallback : public Callback[¶](#_CPPv4N4lima9CtControl29_ReconstructionChangeCallbackE)
struct ImageStatus[¶](#_CPPv4N4lima9CtControl11ImageStatusE)
class ImageStatusCallback[¶](#_CPPv4N4lima9CtControl19ImageStatusCallbackE)
Subclassed by lima::CtTestApp::ImageStatusCallback
class ImageStatusThread : public Thread[¶](#_CPPv4N4lima9CtControl17ImageStatusThreadE)
class SoftOpErrorHandler : public EventCallback[¶](#_CPPv4N4lima9CtControl18SoftOpErrorHandlerE)
struct Status[¶](#_CPPv4N4lima9CtControl6StatusE)
##### Acquisition Interface[¶](#acquisition-interface)
class CtAcquisition[¶](#_CPPv4N4lima13CtAcquisitionE)
This class control the acquisition of images given a hardware interface.
class _ValidRangesCallback : public ValidRangesCallback[¶](#_CPPv4N4lima13CtAcquisition20_ValidRangesCallbackE)
struct Parameters[¶](#_CPPv4N4lima13CtAcquisition10ParametersE)
##### Saving Interface[¶](#saving-interface)
class CtSaving[¶](#_CPPv4N4lima8CtSavingE)
Control saving settings such as file format and mode.
Saving modes
{
void setSavingMode(SavingMode mode)[¶](#_CPPv4N4lima8CtSaving13setSavingModeE10SavingMode)
set the saving mode for a saving stream
void getSavingMode(SavingMode &mode) const[¶](#_CPPv4NK4lima8CtSaving13getSavingModeER10SavingMode)
get the saving mode for a saving stream
void setOverwritePolicy(OverwritePolicy policy, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving18setOverwritePolicyE15OverwritePolicyi)
set the overwrite policy for a saving stream
void getOverwritePolicy(OverwritePolicy &policy, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving18getOverwritePolicyER15OverwritePolicyi)
get the overwrite policy for a saving stream
void setFramesPerFile(unsigned long frames_per_file, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving16setFramesPerFileEmi)
set the number of frame saved per file for a saving stream
void getFramesPerFile(unsigned long &frames_per_file, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving16getFramesPerFileERmi)
get the number of frame saved per file for a saving stream
void setManagedMode(ManagedMode mode)[¶](#_CPPv4N4lima8CtSaving14setManagedModeE11ManagedMode)
set who will manage the saving.
with this methode you can choose who will do the saving* if mode is set to Software, the saving will be managed by Lima core
* if mode is set to Hardware then it’s the sdk or the hardware of the camera that will manage the saving.
Parameters
**mode** – can be either Software or Hardware
void resetCommonHeader()[¶](#_CPPv4N4lima8CtSaving17resetCommonHeaderEv)
}
clear the common header
void setCommonHeader(const HeaderMap &header)[¶](#_CPPv4N4lima8CtSaving15setCommonHeaderERK9HeaderMap)
set the common header.
This is the header which will be write for all frame for this acquisition
void updateCommonHeader(const HeaderMap &header)[¶](#_CPPv4N4lima8CtSaving18updateCommonHeaderERK9HeaderMap)
replace/add field in the common header
void getCommonHeader(HeaderMap &header) const[¶](#_CPPv4NK4lima8CtSaving15getCommonHeaderER9HeaderMap)
get the current common header
void addToCommonHeader(const HeaderValue &value)[¶](#_CPPv4N4lima8CtSaving17addToCommonHeaderERK11HeaderValue)
add/replace a header value in the current common header
void updateFrameHeader(long frame_nr, const HeaderMap &header)[¶](#_CPPv4N4lima8CtSaving17updateFrameHeaderElRK9HeaderMap)
add/replace several values in the current frame header
void addToFrameHeader(long frame_nr, const HeaderValue &value)[¶](#_CPPv4N4lima8CtSaving16addToFrameHeaderElRK11HeaderValue)
add/replace a header value in the current frame header
void validateFrameHeader(long frame_nr)[¶](#_CPPv4N4lima8CtSaving19validateFrameHeaderEl)
validate a header for a frame.
this means that the header is ready and can now be saved. In AutoHeader mode this triggers the saving if the data frame is available
void getFrameHeader(long frame_nr, HeaderMap &header) const[¶](#_CPPv4NK4lima8CtSaving14getFrameHeaderElR9HeaderMap)
get the frame header.
Parameters
* **frame_nr** – the frame id
* **header** – the current frame header
void takeFrameHeader(long frame_nr, HeaderMap &header)[¶](#_CPPv4N4lima8CtSaving15takeFrameHeaderElR9HeaderMap)
get the frame header and remove it from the container
void removeFrameHeader(long frame_nr)[¶](#_CPPv4N4lima8CtSaving17removeFrameHeaderEl)
remove a frame header
Parameters
**frame_nr** – the frame id
void removeAllFrameHeaders()[¶](#_CPPv4N4lima8CtSaving21removeAllFrameHeadersEv)
remove all frame headers
void getStatistic(std::list<double>&, std::list<double>&, std::list<double>&, std::list<double>&, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving12getStatisticERNSt4listIdEERNSt4listIdEERNSt4listIdEERNSt4listIdEEi)
get write statistics
void setStatisticHistorySize(int aSize, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving23setStatisticHistorySizeEii)
set the size of the write time statistics list
int getStatisticHistorySize(int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving23getStatisticHistorySizeEi)
get the size of the write time statistics list
void clear()[¶](#_CPPv4N4lima8CtSaving5clearEv)
clear everything:
* discard all data waiting to be saved
* close all streams
void writeFrame(int frame_nr = -1, int nb_frames = 1, bool synchronous = true)[¶](#_CPPv4N4lima8CtSaving10writeFrameEiib)
write manually a frame
Parameters
* **frame_nr** – the frame id you want to save
* **nb_frames** – the number of frames you want to concatenate
void setStreamActive(int stream_idx, bool active)[¶](#_CPPv4N4lima8CtSaving15setStreamActiveEib)
activate/deactivate a stream
void getStreamActive(int stream_idx, bool &active) const[¶](#_CPPv4NK4lima8CtSaving15getStreamActiveEiRb)
get if stream is active
void getMaxConcurrentWritingTask(int&, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving27getMaxConcurrentWritingTaskERii)
get the maximum number of parallel writing tasks
void setMaxConcurrentWritingTask(int, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving27setMaxConcurrentWritingTaskEii)
set the maximum number of parallel writing tasks
Public Functions
void setParameters(const [Parameters](index.html#_CPPv4N4lima8CtSaving10ParametersE) &pars, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving13setParametersERK10Parametersi)
set saving parameter for a saving stream
Parameters
* **pars** – parameters for the saving stream
* **stream_idx** – the id of the saving stream
void getParameters([Parameters](index.html#_CPPv4N4lima8CtSaving10ParametersE) &pars, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving13getParametersER10Parametersi)
get the saving stream parameters
Parameters
* **pars** – the return parameters
* **stream_idx** – the stream id
void setDirectory(const std::string &directory, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving12setDirectoryERKNSt6stringEi)
set the saving directory for a saving stream
void getDirectory(std::string &directory, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving12getDirectoryERNSt6stringEi)
get the saving directory for a saving stream
void setPrefix(const std::string &prefix, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving9setPrefixERKNSt6stringEi)
set the filename prefix for a saving stream
void getPrefix(std::string &prefix, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving9getPrefixERNSt6stringEi)
get the filename prefix for a saving stream
void setSuffix(const std::string &suffix, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving9setSuffixERKNSt6stringEi)
set the filename suffix for a saving stream
void getSuffix(std::string &suffix, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving9getSuffixERNSt6stringEi)
get the filename suffix for a saving stream
void setOptions(const std::string &options, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving10setOptionsERKNSt6stringEi)
set the additional options for a saving stream
void getOptions(std::string &options, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving10getOptionsERNSt6stringEi)
get the additional options for a saving stream
void setNextNumber(long number, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving13setNextNumberEli)
set the next number for the filename for a saving stream
void getNextNumber(long &number, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving13getNextNumberERli)
get the next number for the filename for a saving stream
void setFormat(FileFormat format, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving9setFormatE10FileFormati)
set the saving format for a saving stream
void getFormat(FileFormat &format, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving9getFormatER10FileFormati)
get the saving format for a saving stream
void setFormatAsString(const std::string &format, int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving17setFormatAsStringERKNSt6stringEi)
set the saving format as string for a saving stream
void getFormatAsString(std::string &format, int stream_idx = 0) const[¶](#_CPPv4NK4lima8CtSaving17getFormatAsStringERNSt6stringEi)
get the saving format as string for a saving stream
void getFormatList(std::list<FileFormat> &format_list) const[¶](#_CPPv4NK4lima8CtSaving13getFormatListERNSt4listI10FileFormatEE)
get supported format list
void getFormatListAsString(std::list<std::string> &format_list) const[¶](#_CPPv4NK4lima8CtSaving21getFormatListAsStringERNSt4listINSt6stringEEE)
get supported format list as string
void setFormatSuffix(int stream_idx = 0)[¶](#_CPPv4N4lima8CtSaving15setFormatSuffixEi)
force saving suffix to be the default format extension
void getHardwareFormatList(std::list<std::string> &format_list) const[¶](#_CPPv4NK4lima8CtSaving21getHardwareFormatListERNSt4listINSt6stringEEE)
return the list of possible hardware saving formats
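To make the flow concrete, here is a hedged C++ sketch (ours, not from the LIMA manual) that chains the setters above to configure stream 0 and attach metadata headers; the include path is an assumption, and HeaderMap/HeaderValue are the string map/pair types used in the signatures above.

```cpp
#include <string>
#include "CtSaving.h"   // assumed include path

void configureSaving(lima::CtSaving& saving)
{
    lima::CtSaving::Parameters pars;
    saving.getParameters(pars, 0);        // start from the current settings
    pars.directory  = "/tmp/scan42";
    pars.prefix     = "img_";
    pars.suffix     = ".edf";
    pars.fileFormat = lima::CtSaving::EDF;
    pars.savingMode = lima::CtSaving::AutoFrame;
    saving.setParameters(pars, 0);        // apply to stream 0

    // Metadata written to every frame of the acquisition
    lima::CtSaving::HeaderMap common;
    common["beamline"] = "BM00";          // illustrative key/value
    saving.setCommonHeader(common);

    // A frame-specific value, then mark the header as ready to be saved
    lima::CtSaving::HeaderValue hv("temperature", "291.3");
    saving.addToFrameHeader(0, hv);
    saving.validateFrameHeader(0);
}
```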
class _ManualBackgroundSaveTask : public SinkTaskBase[¶](#_CPPv4N4lima8CtSaving25_ManualBackgroundSaveTaskE)
manual background saving
class _NewFrameSaveCBK : public Callback[¶](#_CPPv4N4lima8CtSaving16_NewFrameSaveCBKE)
class _SavingErrorHandler : public EventCallback[¶](#_CPPv4N4lima8CtSaving19_SavingErrorHandlerE)
struct Parameters[¶](#_CPPv4N4lima8CtSaving10ParametersE)
Public Functions
Parameters()[¶](#_CPPv4N4lima8CtSaving10Parameters10ParametersEv)
[Parameters](index.html#structlima_1_1_ct_saving_1_1_parameters) default constructor.
Public Members
std::string directory[¶](#_CPPv4N4lima8CtSaving10Parameters9directoryE)
base path where the files will be saved
std::string prefix[¶](#_CPPv4N4lima8CtSaving10Parameters6prefixE)
prefix of the filename
std::string suffix[¶](#_CPPv4N4lima8CtSaving10Parameters6suffixE)
suffix of the filename
long nextNumber[¶](#_CPPv4N4lima8CtSaving10Parameters10nextNumberE)
next file number
FileFormat fileFormat[¶](#_CPPv4N4lima8CtSaving10Parameters10fileFormatE)
the saving format (EDF,CBF…)
SavingMode savingMode[¶](#_CPPv4N4lima8CtSaving10Parameters10savingModeE)
saving mode (automatic,manual…)
OverwritePolicy overwritePolicy[¶](#_CPPv4N4lima8CtSaving10Parameters15overwritePolicyE)
how the saving reacts when it finds an existing filename
std::string indexFormat[¶](#_CPPv4N4lima8CtSaving10Parameters11indexFormatE)
i.e., %.4d if you want 4 digits
long framesPerFile[¶](#_CPPv4N4lima8CtSaving10Parameters13framesPerFileE)
the number of images saved in one file
class SaveContainer[¶](#_CPPv4N4lima8CtSaving13SaveContainerE)
Subclassed by lima::SaveContainerCbf, lima::SaveContainerEdf, lima::SaveContainerFits, lima::SaveContainerHdf5, lima::SaveContainerNxs, lima::SaveContainerTiff
Public Functions
inline virtual bool needParallelCompression() const[¶](#_CPPv4NK4lima8CtSaving13SaveContainer23needParallelCompressionEv)
should return true if the container has compression or other heavy tasks to do before saving; if it returns true, getCompressionTask should return a task
See also
[getCompressionTask](index.html#classlima_1_1_ct_saving_1_1_save_container_1af7f7a36862e87fa0b020c5ed74a44c6c)
inline virtual SinkTaskBase *getCompressionTask(const [CtSaving](index.html#_CPPv4N4lima8CtSavingE)::HeaderMap&)[¶](#_CPPv4N4lima8CtSaving13SaveContainer18getCompressionTaskERKN8CtSaving9HeaderMapE)
get a new compression task at each call.
this method is not called if needParallelCompression returns false
See also
[needParallelCompression](index.html#classlima_1_1_ct_saving_1_1_save_container_1af60bf641385db939f74725758ec25f03)
struct Stat[¶](#_CPPv4N4lima8CtSaving13SaveContainer4StatE)
class Stream[¶](#_CPPv4N4lima8CtSaving6StreamE)
class _CompressionCBK : public TaskEventCallback[¶](#_CPPv4N4lima8CtSaving6Stream15_CompressionCBKE)
compression callback
class _SaveCBK : public TaskEventCallback[¶](#_CPPv4N4lima8CtSaving6Stream8_SaveCBKE)
save callback
class _SaveTask : public SinkTaskBase[¶](#_CPPv4N4lima8CtSaving6Stream9_SaveTaskE)
save task class
##### Image Interface[¶](#image-interface)
class CtImage[¶](#_CPPv4N4lima7CtImageE)
Control image processing settings such as ROI, binning and rotation.
##### Shutter Interface[¶](#shutter-interface)
class CtShutter[¶](#_CPPv4N4lima9CtShutterE)
Control shutter settings such as open and close time.
struct Parameters[¶](#_CPPv4N4lima9CtShutter10ParametersE)
##### Buffer Interface[¶](#buffer-interface)
class CtBuffer[¶](#_CPPv4N4lima8CtBufferE)
Controls buffer settings such as number of buffers, binning and rotation.
class _DataDestroyCallback : public Callback[¶](#_CPPv4N4lima8CtBuffer20_DataDestroyCallbackE)
struct Parameters[¶](#_CPPv4N4lima8CtBuffer10ParametersE)
#### Statuses[¶](#statuses)
enum lima::AcqStatus[¶](#_CPPv4N4lima9AcqStatusE)
The global acquisition status.
*Values:*
enumerator AcqReady[¶](#_CPPv4N4lima9AcqStatus8AcqReadyE)
Acquisition is Ready.
enumerator AcqRunning[¶](#_CPPv4N4lima9AcqStatus10AcqRunningE)
Acquisition is Running.
enumerator AcqFault[¶](#_CPPv4N4lima9AcqStatus8AcqFaultE)
An error occured.
enumerator AcqConfig[¶](#_CPPv4N4lima9AcqStatus9AcqConfigE)
Configuring the camera.
enum lima::DetStatus[¶](#_CPPv4N4lima9DetStatusE)
Compound bit flags specifying the current detector status.
*Values:*
enumerator DetIdle[¶](#_CPPv4N4lima9DetStatus7DetIdleE)
enumerator DetFault[¶](#_CPPv4N4lima9DetStatus8DetFaultE)
enumerator DetWaitForTrigger[¶](#_CPPv4N4lima9DetStatus17DetWaitForTriggerE)
enumerator DetShutterOpen[¶](#_CPPv4N4lima9DetStatus14DetShutterOpenE)
enumerator DetExposure[¶](#_CPPv4N4lima9DetStatus11DetExposureE)
enumerator DetShutterClose[¶](#_CPPv4N4lima9DetStatus15DetShutterCloseE)
enumerator DetChargeShift[¶](#_CPPv4N4lima9DetStatus14DetChargeShiftE)
enumerator DetReadout[¶](#_CPPv4N4lima9DetStatus10DetReadoutE)
enumerator DetLatency[¶](#_CPPv4N4lima9DetStatus10DetLatencyE)
### Camera Plugin API[¶](#camera-plugin-api)
#### Hardware Interface[¶](#hardware-interface)
The Hardware Interface is the low level interface that must be implemented by detector plugins.
class HwInterface[¶](#_CPPv4N4lima11HwInterfaceE)
As an interface to the Control Layer, this class exports the capabilities provided by the hardware.
It is implemented by every camera plugin.
Public Types
typedef struct lima::[HwInterface](index.html#_CPPv4N4lima11HwInterfaceE)::[Status](index.html#_CPPv4N4lima11HwInterface6StatusE) StatusType[¶](#_CPPv4N4lima11HwInterface10StatusTypeE)
A tuple of status with acquisition and detector status / mask.
Public Functions
virtual void getCapList(CapList&) const = 0[¶](#_CPPv4NK4lima11HwInterface10getCapListER7CapList)
Returns a list of capabilities.
virtual void reset(ResetLevel reset_level) = 0[¶](#_CPPv4N4lima11HwInterface5resetE10ResetLevel)
Reset the hardware interface.
virtual void prepareAcq() = 0[¶](#_CPPv4N4lima11HwInterface10prepareAcqEv)
Prepare the acquisition and make sure the camera is properly configured.
This member function is always called before the acquisition is started.
virtual void startAcq() = 0[¶](#_CPPv4N4lima11HwInterface8startAcqEv)
Start the acquisition.
virtual void stopAcq() = 0[¶](#_CPPv4N4lima11HwInterface7stopAcqEv)
Stop the acquisition.
virtual void getStatus([StatusType](index.html#_CPPv4N4lima11HwInterface10StatusTypeE) &status) = 0[¶](#_CPPv4N4lima11HwInterface9getStatusER10StatusType)
Returns the current state of the hardware.
virtual int getNbAcquiredFrames()[¶](#_CPPv4N4lima11HwInterface19getNbAcquiredFramesEv)
Returns the number of acquired frames.
virtual int getNbHwAcquiredFrames() = 0[¶](#_CPPv4N4lima11HwInterface21getNbHwAcquiredFramesEv)
Returns the number of acquired frames returned by the hardware (may differ from getNbAcquiredFrames if accumulation is on)
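A hedged sketch (ours) of the canonical sequence a client of HwInterface follows, polling the status tuple described below; error handling is omitted and the include path is an assumption.

```cpp
#include "HwInterface.h"   // assumed include path

void runAcquisition(lima::HwInterface& hw)
{
    hw.prepareAcq();                    // always called before starting
    hw.startAcq();

    lima::HwInterface::StatusType status;
    do {
        hw.getStatus(status);           // poll the combined acq/det status
    } while (status.acq == lima::AcqRunning);

    hw.stopAcq();
    int n = hw.getNbHwAcquiredFrames(); // frames reported by the hardware
    (void)n;
}
```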
struct Status[¶](#_CPPv4N4lima11HwInterface6StatusE)
A tuple of status with acquisition and detector status / mask.
Public Types
enum Basic[¶](#_CPPv4N4lima11HwInterface6Status5BasicE)
Basic detector states (some detectors may have additional states)
*Values:*
enumerator Fault[¶](#_CPPv4N4lima11HwInterface6Status5Basic5FaultE)
Fault.
enumerator Ready[¶](#_CPPv4N4lima11HwInterface6Status5Basic5ReadyE)
Ready for acquisition.
enumerator Exposure[¶](#_CPPv4N4lima11HwInterface6Status5Basic8ExposureE)
Counting photons.
enumerator Readout[¶](#_CPPv4N4lima11HwInterface6Status5Basic7ReadoutE)
Reading data from the chip.
enumerator Latency[¶](#_CPPv4N4lima11HwInterface6Status5Basic7LatencyE)
Latency between exposures.
enumerator Config[¶](#_CPPv4N4lima11HwInterface6Status5Basic6ConfigE)
Configuring the camera.
Public Members
[AcqStatus](index.html#_CPPv4N4lima9AcqStatusE) acq[¶](#_CPPv4N4lima11HwInterface6Status3acqE)
Global acquisition status.
[DetStatus](index.html#_CPPv4N4lima9DetStatusE) det[¶](#_CPPv4N4lima11HwInterface6Status3detE)
Compound bit flags specifying the current detector status.
[DetStatus](index.html#_CPPv4N4lima9DetStatusE) det_mask[¶](#_CPPv4N4lima11HwInterface6Status8det_maskE)
A mask specifying the detector status bits that are supported by the hardware.
#### Capabilities interfaces[¶](#capabilities-interfaces)
class HwDetInfoCtrlObj[¶](#_CPPv4N4lima16HwDetInfoCtrlObjE)
Provides static information about the detector and the current image dimension.
Public Functions
virtual void getMaxImageSize(Size &max_image_size) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj15getMaxImageSizeER4Size)
Return the maximum size of the image.
virtual void getDetectorImageSize(Size &det_image_size) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj20getDetectorImageSizeER4Size)
Return the size of the detector image; it is always equal to or greater than the max image size.
virtual void getDefImageType(ImageType &def_image_type) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj15getDefImageTypeER9ImageType)
Returns the default data type of image (ushort, ulong, …)
virtual void getCurrImageType(ImageType &curr_image_type) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj16getCurrImageTypeER9ImageType)
Returns the current data type of image (ushort, ulong, …).
virtual void getPixelSize(double &x_size, double &y_size) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj12getPixelSizeERdRd)
Physical size of pixels (in meters)
virtual void getDetectorType(std::string &det_type) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj15getDetectorTypeERNSt6stringE)
Returns the type of the detector (Frelon, Maxipix, …)
virtual void getDetectorModel(std::string &det_model) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj16getDetectorModelERNSt6stringE)
Returns the model of the detector.
virtual void registerMaxImageSizeCallback(HwMaxImageSizeCallback &cb) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj28registerMaxImageSizeCallbackER22HwMaxImageSizeCallback)
Register a callback called when the detector is reconfigured with a different geometry.
virtual void unregisterMaxImageSizeCallback(HwMaxImageSizeCallback &cb) = 0[¶](#_CPPv4N4lima16HwDetInfoCtrlObj30unregisterMaxImageSizeCallbackER22HwMaxImageSizeCallback)
Unregister a callback previously registered with registerMaxImageSizeCallback.
inline virtual void setUserDetectorName(const std::string &username)[¶](#_CPPv4N4lima16HwDetInfoCtrlObj19setUserDetectorNameERKNSt6stringE)
Set a detector user name.
inline virtual void getUserDetectorName(std::string &username)[¶](#_CPPv4N4lima16HwDetInfoCtrlObj19getUserDetectorNameERNSt6stringE)
Get a detector user name.
class HwBufferCtrlObj[¶](#_CPPv4N4lima15HwBufferCtrlObjE)
This interface controls the image memory buffer allocation and management.
Buffers are used:
* as temporary frame storage before saving, absorbing disk / network speed fluctuations;
* to permanently hold images that can be read by the user after the acquisition is finished.
These buffer functionalities may be implemented by the hardware layer (kernel driver in the case of the Espia). If not, an auxiliary buffer manager class is provided to facilitate (and unify) the software implementation.
Subclassed by [lima::SoftBufferCtrlObj](index.html#classlima_1_1_soft_buffer_ctrl_obj)
Public Functions
virtual void *getBufferPtr(int buffer_nb, int concat_frame_nb = 0) = 0[¶](#_CPPv4N4lima15HwBufferCtrlObj12getBufferPtrEii)
Returns a pointer to the buffer at the specified location.
virtual void *getFramePtr(int acq_frame_nb) = 0[¶](#_CPPv4N4lima15HwBufferCtrlObj11getFramePtrEi)
Returns a pointer to the frame at the specified location.
virtual void getStartTimestamp(Timestamp &start_ts) = 0[¶](#_CPPv4N4lima15HwBufferCtrlObj17getStartTimestampER9Timestamp)
Returns the start timestamp.
virtual void getFrameInfo(int acq_frame_nb, HwFrameInfoType &info) = 0[¶](#_CPPv4N4lima15HwBufferCtrlObj12getFrameInfoEiR15HwFrameInfoType)
Returns some information for the specified frame number such as timestamp.
class Callback[¶](#_CPPv4N4lima15HwBufferCtrlObj8CallbackE)
class HwSyncCtrlObj[¶](#_CPPv4N4lima13HwSyncCtrlObjE)
Public Functions
virtual bool checkTrigMode(TrigMode trig_mode) = 0[¶](#_CPPv4N4lima13HwSyncCtrlObj13checkTrigModeE8TrigMode)
Check whether a given trigger mode is supported.
virtual void setTrigMode(TrigMode trig_mode) = 0[¶](#_CPPv4N4lima13HwSyncCtrlObj11setTrigModeE8TrigMode)
Set the triggering mode.
virtual void getTrigMode(TrigMode &trig_mode) = 0[¶](#_CPPv4N4lima13HwSyncCtrlObj11getTrigModeER8TrigMode)
Get the current triggering mode.
virtual void setExpTime(double exp_time) = 0[¶](#_CPPv4N4lima13HwSyncCtrlObj10setExpTimeEd)
Set the frame exposure time.
virtual void getExpTime(double &exp_time) = 0[¶](#_CPPv4N4lima13HwSyncCtrlObj10getExpTimeERd)
Get the current frame exposure time.
virtual bool checkAutoExposureMode(AutoExposureMode mode) const[¶](#_CPPv4NK4lima13HwSyncCtrlObj21checkAutoExposureModeE16AutoExposureMode)
Check whether a given auto exposure mode is supported.
virtual void setHwAutoExposureMode(AutoExposureMode mode)[¶](#_CPPv4N4lima13HwSyncCtrlObj21setHwAutoExposureModeE16AutoExposureMode)
this method should be redefined in the subclass if the camera can manage auto exposure
virtual void setLatTime(double lat_time) = 0[¶](#_CPPv4N4lima13HwSyncCtrlObj10setLatTimeEd)
Set the latency time between frames.
virtual void getLatTime(double &lat_time) = 0[¶](#_CPPv4N4lima13HwSyncCtrlObj10getLatTimeERd)
Get the current latency time between frames.
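A hedged sketch (ours) combining these calls to configure triggering and timing; lima::IntTrig is LIMA's internal trigger mode, and the include path is an assumption.

```cpp
#include "HwSyncCtrlObj.h"   // assumed include path

void configureSync(lima::HwSyncCtrlObj& sync)
{
    if (sync.checkTrigMode(lima::IntTrig))  // only set modes the camera supports
        sync.setTrigMode(lima::IntTrig);

    sync.setExpTime(0.1);    // 100 ms exposure per frame
    sync.setLatTime(0.01);   // 10 ms latency between frames
}
```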
class ValidRangesCallback[¶](#_CPPv4N4lima13HwSyncCtrlObj19ValidRangesCallbackE)
struct ValidRangesType[¶](#_CPPv4N4lima13HwSyncCtrlObj15ValidRangesTypeE)
#### Callbacks[¶](#callbacks)
class HwFrameCallback[¶](#_CPPv4N4lima15HwFrameCallbackE)
Subclassed by lima::HwTestApp::FrameCallback
#### Implementations Helpers[¶](#implementations-helpers)
class SoftBufferCtrlObj : public lima::[HwBufferCtrlObj](index.html#_CPPv4N4lima15HwBufferCtrlObjE)[¶](#_CPPv4N4lima17SoftBufferCtrlObjE)
This class is a basic [HwBufferCtrlObj](index.html#classlima_1_1_hw_buffer_ctrl_obj) software allocation implementation. It can be directly provided to the control layer as a [HwBufferCtrlObj](index.html#classlima_1_1_hw_buffer_ctrl_obj).
Public Functions
virtual void *getBufferPtr(int buffer_nb, int concat_frame_nb = 0)[¶](#_CPPv4N4lima17SoftBufferCtrlObj12getBufferPtrEii)
Returns a pointer to the buffer at the specified location.
virtual void *getFramePtr(int acq_frame_nb)[¶](#_CPPv4N4lima17SoftBufferCtrlObj11getFramePtrEi)
Returns a pointer to the frame at the specified location.
virtual void getStartTimestamp(Timestamp &start_ts)[¶](#_CPPv4N4lima17SoftBufferCtrlObj17getStartTimestampER9Timestamp)
Returns the start timestamp.
virtual void getFrameInfo(int acq_frame_nb, HwFrameInfoType &info)[¶](#_CPPv4N4lima17SoftBufferCtrlObj12getFrameInfoEiR15HwFrameInfoType)
Returns some information for the specified frame number such as timestamp.
class Sync : public Callback[¶](#_CPPv4N4lima17SoftBufferCtrlObj4SyncE)
Python API[¶](#python-api)
---
Most of the previous sections about the user interface routines applies to the Python binding. Naturally, some specifics concerning Python come into play.
This documentation is very much a work in progress. Stay tuned!
### Hello, pyLima![¶](#hello-pylima)
Let’s start with a simple example of an image acquisition function using the simulator camera.
```
from Lima import Core
from Lima import Simulator
import time

def test_mode_generator(cam, nb_frames_prefetched=0):
    if nb_frames_prefetched:
        cam.setMode(Simulator.Camera.MODE_GENERATOR_PREFETCH)
        fb = cam.getFrameGetter()
        fb.setNbPrefetchedFrames(nb_frames_prefetched)
        test = fb.getNbPrefetchedFrames()
    else:
        cam.setMode(Simulator.Camera.MODE_GENERATOR)
        fb = cam.getFrameGetter()
    # Add a peak
    p1 = Simulator.GaussPeak(10, 10, 23, 1000)  # peak at (10, 10), fwhm=23 and max=1000
    fb.setPeaks([p1])

def test_mode_loader(cam, nb_frames_prefetched=0):
    if nb_frames_prefetched:
        cam.setMode(Simulator.Camera.MODE_LOADER_PREFETCH)
        fb = cam.getFrameGetter()
        fb.setNbPrefetchedFrames(nb_frames_prefetched)
        test = fb.getNbPrefetchedFrames()
    else:
        cam.setMode(Simulator.Camera.MODE_LOADER)
        fb = cam.getFrameGetter()
    # Set file pattern
    fb.setFilePattern(b'input\\test_*.edf')

cam = Simulator.Camera()
#test_mode_generator(cam)
#test_mode_generator(cam, 10)
#test_mode_loader(cam)
test_mode_loader(cam, 100)

# Get the hardware interface
hwint = Simulator.Interface(cam)

# Get the control interface
control = Core.CtControl(hwint)

# Get the acquisition control
acq = control.acquisition()

# Set new file parameters and autosaving mode
saving = control.saving()
pars = saving.getParameters()
pars.directory = b'output'
pars.prefix = b'testsimul_'
pars.suffix = b'.edf'
pars.fileFormat = Core.CtSaving.EDF
pars.savingMode = Core.CtSaving.AutoFrame
saving.setParameters(pars)

# now ask for 0.1 s exposure and 10 frames
acq.setAcqExpoTime(0.1)
acq.setAcqNbFrames(10)

control.prepareAcq()
control.startAcq()

# wait for last image (#9) ready
lastimg = control.getStatus().ImageCounters.LastImageReady
while lastimg != 9:
    time.sleep(0.1)
    lastimg = control.getStatus().ImageCounters.LastImageReady

# read the first image
im0 = control.ReadImage(0)
```
Prerequisite[¶](#prerequisite)
---
For collaborative development, we use the "Fork & Pull" model from Github. So anyone who wants to contribute needs an account on Github. Then you need to fork the project you want to contribute to.
Note
If you want to contribute with a new camera plug-in you should first request us (by email at [lima@esrf.fr](mailto:lima%40esrf.fr)) to get the new plug-in camera sub-module created. We will provide:
* a default structure of directories (<mycamera>/ src/ include/ sip/ doc/ python/ test/)
* the build system file (<mycamera>/CMakeLists.txt)
* templates files (src and include) for the mandatory classes:
> * <MyCamera>Interface
> * <MyCamera>DetInfoCtrlObj
> * <MyCamera>SyncCtrlObj
* a standard .gitignore file
* a template index.rst for the documentation
As above, do not forget to fork the new sub-module project.
### Create a github account[¶](#create-a-github-account)
This is an easy task, you just have to [Sign up](https://github.com/signup/free), it’s free!
### Fork a project[¶](#fork-a-project)
Check out the [Github doc](https://help.github.com/articles/fork-a-repo), it is far better explained than we could do ;)
Contribute guideline[¶](#contribute-guideline)
---
It is very simple to contribute, you should follow the steps below.
1. Branch
> First of all you have to create a branch for a new feature or for a bug fix; use an explicit
> branch name, for instance "soleil_video_patch".
2. Code/patch
> If it's a patch to an existing module, respect and keep the coding style of the previous programmer (indentation, variable naming, end-of-line…).
> If you're starting a new camera project, you just have to respect a few rules:
> * Class member must start with ‘**m_**’
> * Class method must be in **CamelCase**
> * You must define the camera’s namespace
3. Commit
> Do as many commits as you need, with clear comments.
> Prefer an atomic commit with a single change rather than a huge commit with too many (unrelated) changes.
4. Pull Request
> Then submit a [Pull Request](https://help.github.com/articles/using-pull-requests)
At this stage you have to wait; we need some time to accept or reject your request. There are two possible outcomes:
1. The pull request is accepted, congrats!
> We merge your branch with the main project master branch; then everything is fine and you can now synchronize your forked project with the main project and go on with your next contribution.
2. The pull-request is rejected:
> The pull request could be rejected if:
> * the new code doesn’t compile
> * it breaks backward compatibility
> * the python wrapping is missing or not updated
> * the commit log message doesn't describe what you actually did
> In case of a new camera plug-in sub-module the first pull request will be rejected if:
> * as above
> * the documentation is missing or does not fit the guidelines (i.e., [Understand the plugin architecture](index.html#guidelines))
> We will tell you (via code review on Github and/or email) the reason, and we will give some advice to improve your next attempt at a pull request.
> So at this point you have to loop to item 2 (Code/Patch) again.
> Good luck !
github.com/liangdas/mqant | go | Go | README
[¶](#section-readme)
---
### mqant
mqant is a concise, efficient, and high-performance distributed game server framework written in Golang. It was created to provide a game server framework that supports high concurrency, high performance, and low latency; we also hope that mqant can be applied to instant messaging and IoT in the future.
### mqant 2.x adds distributed service discovery
[Be sure to review the differences between the 2.x and 1.x versions first](https://github.com/liangdas/mqant/wiki/mqant%E6%9C%8D%E5%8A%A1%E5%8F%91%E7%8E%B0%E6%A6%82%E8%BF%B0)
### Why Golang
[Server-side I/O performance comparison: Node, PHP, Java, and Go](http://blog.csdn.net/listen2you/article/details/72935679)
### Features
1. High-performance and distributed
2. Distributed service discovery
3. Built on goroutines, with no callbacks anywhere in development, making the code more readable
4. Remote RPC uses nats as its transport channel
5. The gateway uses the MQTT protocol, so there is no need to develop a low-level client library; existing MQTT client libraries can be used as-is, supporting iOS, Android, websocket, PC, and other platforms
6. MQTT is supported by default, and the gateway also supports developer-defined framing protocols
### Community
QQ group: 463735103
Technical community: [www.mqant.com](http://www.mqant.com)
### Modules
> More modules will be added over time
[mqant component library](https://github.com/liangdas/mqant-modules)
```
SMS verification codes
Room module
```
[Load-testing tool: armyant](https://github.com/liangdas/armyant)
### Community-contributed libraries
[mqant-docker](https://github.com/bjfumac/mqant-docker)
[MQTT-Laya](https://github.com/bjfumac/MQTT-Laya)
### Dependencies
```
go get github.com/gorilla/mux
go get github.com/gorilla/websocket
go get go.etcd.io/etcd/clientv3
go get go.etcd.io/etcd/etcdserver/api/v3rpc/rpctypes
go get github.com/hashicorp/consul
go get github.com/golang/protobuf
go get github.com/golang/net/context
go get github.com/gomodule/redigo
go get github.com/nats-io/go-nats
```
### Documentation
Getting started:
[mqant wiki](https://github.com/liangdas/mqant/wiki)
### Overview
1. [The design motivation behind mqant](https://github.com/liangdas/mqant/wiki/mqant%E7%9A%84%E8%AE%BE%E8%AE%A1%E5%8A%A8%E6%9C%BA)
2. Introduction to the mqant framework
3. [Framework architecture overview](https://github.com/liangdas/mqant/wiki/mqant%E6%A1%86%E6%9E%B6%E6%A6%82%E8%BF%B0)
4. [Communication protocols and client support](https://github.com/liangdas/mqant/wiki/%E9%80%9A%E4%BF%A1%E5%8D%8F%E8%AE%AE%E4%B8%8E%E5%AE%A2%E6%88%B7%E7%AB%AF%E6%94%AF%E6%8C%81%E4%BB%8B%E7%BB%8D)
5.
...
### Demos
```
The mqant repository contains only the mqant source files.
The mqantserver repository includes the complete demo/test code plus the libraries mqant depends on.
If you are new to mqant, start with the mqantserver project for experimentation.
```
[Online demo](http://www.mqant.com/mqant/chat/) [[source code](https://github.com/liangdas/mqantserver)]
[Multiplayer ball-eating game (green balls are online players; click anywhere on the screen to move your ball; you can open two browsers at once to test; mobile supported)](http://www.mqant.com/mqant/hitball/) [[source code](https://github.com/liangdas/mqantserver)]
### Architecture
[The design motivation behind mqant](https://github.com/liangdas/mqant/wiki/mqant%E7%9A%84%E8%AE%BE%E8%AE%A1%E5%8A%A8%E6%9C%BA)
[Framework architecture](https://github.com/liangdas/mqant/wiki/mqant%E6%A1%86%E6%9E%B6%E6%A6%82%E8%BF%B0)
#### Roadmap
1. Distributed architecture management module (Master)
    1. Module discovery
    2. Module management
        1. Dynamic addition and removal of modules
        2. Module status monitoring
2. English documentation: contributors with good English are welcome to help write the English version of the docs
3. [Done] Exception log monitoring and reporting
    1. Categorized summaries of exception logs
    2. Periodically send exception logs by email
    3. Periodically send exception logs via webhook to team collaboration tools (DingTalk, Worktile, etc.)
4. [Done] Added a distributed-tracing hook to the RPC layer: [Appdash, a distributed systems tracing tool implemented in Go](http://tonybai.com/2015/06/17/appdash-distributed-systems-tracing-in-go/)
#### Contributors
Pull requests against the dev branch are welcome.
Please report bugs directly via issues.
Everyone who submits code, suggestions, or bug reports will appear in the contributor list below.
1. [xlionet](https://github.com/xlionet)
2. [lulucas](https://github.com/lulucas/mqant-UnityExample)
3. [c2matrix](https://github.com/c2matrix)
4. [bjfumac (mqant-docker, MQTT-Laya)](https://github.com/bjfumac)
5. [jarekzha (jarekzha-master)](https://github.com/jarekzha)
#### Support the author
![mqant author donation QR code](https://github.com/liangdas/mqant/wiki/images/donation.png)
#### Changelog
##### [v1.7.0 new features](https://github.com/liangdas/mqant/wiki/v1.7.0)
##### [v1.6.6 new features](https://github.com/liangdas/mqant/wiki/v1.6.6)
##### [v1.6.5 new features](https://github.com/liangdas/mqant/wiki/v1.6.5)
##### [v1.6.4 new features](https://github.com/liangdas/mqant/wiki/v1.6.4)
##### [v1.6.3 new features](https://github.com/liangdas/mqant/wiki/v1.6.3)
##### [v1.6.2 new features](https://github.com/liangdas/mqant/wiki/v1.6.2)
##### [v1.6.1 new features](https://github.com/liangdas/mqant/wiki/v1.6.1)
##### [v1.6.0 new features](https://github.com/liangdas/mqant/wiki/v1.6.0)
##### [v1.5.0 new features](https://github.com/liangdas/mqant/wiki/v1.5.0)
##### [v1.4.0 new features](https://github.com/liangdas/mqant/wiki/v1.4.0)
##### [v1.3.0 new features](https://github.com/liangdas/mqant/wiki/v1.3.0)
##### [v1.2.0 new features](https://github.com/liangdas/mqant/wiki/v1.2.0)
##### [v1.1.0 new features](https://github.com/liangdas/mqant/wiki/v1.1.0)
##### v1.0.0
```
The first release of mqant
```
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Copyright 2014 mqant Author. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
```
http://www.apache.org/licenses/LICENSE-2.0
```
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
### Index [¶](#pkg-index)
* [func CreateApp(opts ...module.Option) module.App](#CreateApp)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [CreateApp](https://github.com/liangdas/mqant/blob/v2.0.0/mqant.go#L21) [¶](#CreateApp)
```
func CreateApp(opts ...[module](/github.com/liangdas/[email protected]+incompatible/module).[Option](/github.com/liangdas/[email protected]+incompatible/module#Option)) [module](/github.com/liangdas/[email protected]+incompatible/module).[App](/github.com/liangdas/[email protected]+incompatible/module#App)
```
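For orientation, here is a minimal sketch (ours, not from the package docs) of bootstrapping an application with CreateApp; the Run call and its exact signature are assumptions to verify against the mqant wiki:

```go
package main

import (
	"github.com/liangdas/mqant"
)

func main() {
	// CreateApp builds a module.App; module.Option values (configuration,
	// debug settings, ...) may be passed in.
	app := mqant.CreateApp()

	// Assumption: module.App exposes a Run method that starts the given
	// modules and blocks until shutdown -- consult the mqant wiki for the
	// authoritative API.
	if err := app.Run( /* your modules here */ ); err != nil {
		panic(err)
	}
}
```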
### Types [¶](#pkg-types)
This section is empty. |
userguide | ctan | TeX | National Research Council
Research Press
**LaTeX User Guide for Journals1**
Footnote 1: This document is available from your local CTAN site in macros/latex/contrib/supported/nrc/ in both .ps and .pdf formats.
## 1 Basic coding
All LaTeX documents must include these three commands:
\documentclass{...}
...                 <-- 'preamble'
\begin{document}
...                 <-- 'body'
\end{document}
1. The document class must be specified, either with a generic package, such as article, or a specific package, such as nrc1.
2. Between the first two commands comes the material known as the 'preamble', which includes additional packages (and their options), as well as any file-specific macros.
3. Between the 2nd and 3rd commands comes the actual contents of the article -- known as the 'body' of the file.
Commands are usually of the following types:
1. **control sequences** begin with a backslash (\).
2. **environments** use matching \begin{...} and \end{...} commands (i.e., \begin{document} must eventually be matched with \end{document}).
3. **optional** arguments are within square brackets [...]
## 2 Document classes and options
NRC journals are set either in full-width, using the nrc1 class, or in 2-column format, using the nrc2 document class.
The following represent the options for both document classes. Note that most articles will **not** require all of them; as well, some options are only for nrc2.
\documentclass[<options here>]{<class here>}

Options for both classes: author, usecmfonts OR type1rest, french, nonumbib
nrc1 only: leqno
nrc2 only: reqno

To combine options, insert a comma between each option:
\documentclass[type1rest,genTeX,nonumbib]{nrc2}
The following sections describe the options available to both classes, and then those options which are specific to nrc1 or nrc2 only. Options specific to in-house production work are described separately; see section 6.
### Options for both classes
**Note:** Do not load options or packages which are never accessed; their presence implies they are required and may cause unnecessary searches for coded material which is not, in fact, present.
For the convenience of authors, the publicly available class files for authors have a number of options already automated:
author, genTeX, type1rest, usecmfonts
Additionally, a number of in-house diagnostic messages are turned off, so as not to interfere with processing. Where a default selection is not appropriate, the following provides information on these options.
author: This option selects a configuration appropriate for author use of the class; it is enabled by default when a publicly distributed copy of the class is loaded. The author option automatically invokes the genTeX option, as well as one of the two font options: type1rest or usecmfonts. See below for details.
genTeX: By use of this option, you declare that you are using a generic/public domain version of TeX; it must be used in conjunction with one of usecmfonts and type1rest (see below). The author option automatically selects the genTeX option.
type1rest: By use of this option, you declare that you have access to a PostScript printer or other interpreter, and may use "basic" PostScript fonts to make a rough approximation to those that will be used in the published paper. Do not use this option in combination with the usecmfonts option.
If the author option has been selected, the class will automatically decide whether to use usecmfonts or type1rest by default. The automatic selection may be incorrect, but you may over-ride it by specifying the type1rest option.
\usepackage{bm}
This package simplifies the use of bold symbols and other objects in math mode. It defines a single command, \bm{...}, which is used in math mode and causes its argument to be typeset in the appropriate math bold font. **Note:** If a warning message about too many math alphabets arises, insert the following code **above** the \usepackage{bm} line:
\newcommand\hmmax{0} % default 3
\usepackage{bm}
See the bm.sty documentation for details.
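A minimal usage sketch (our illustration, not from the guide), bolding symbols inside an equation:

```latex
% assumes \usepackage{bm} in the preamble
$ \bm{A}\bm{x} = \lambda\bm{x} $
```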
\usepackage{cite}
Authors should use this package, which enhances the default options available in LaTeX (e.g., \cite prints cross-references without brackets). See the cite.sty documentation for additional features.
\usepackage[...,...]{babel}
The babel package is used to manage the typesetting requirements of multilingual documents. Different cultures have different typesetting conventions, and babel enables the LaTeX user to apply the appropriate conventions to the different parts of a multilingual document. Since NRC publications are typically multilingual, babel has an important role to play in their preparation.
* English-language articles have a French-language 'Résumé', which requires French hyphenation and punctuation. Insert the following line -- and notice the order of the language options:
\usepackage[french,english]{babel}
* French-language articles have an English-language 'Abstract', which requires English hyphenation and punctuation. Insert the following line, and again notice the order of the language options:
\usepackage[english,french]{babel}
**In addition to the babel** package, French-language articles must **also** include the french option to the document class, as mentioned earlier in section 2.1. The babel package invokes French hyphenation patterns as well as some of the (European) French typesetting conventions (e.g., space before some punctuation).
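Putting the two requirements together, a minimal sketch (ours, under the class options described in section 2.1) of the preamble for a French-language article:

```latex
\documentclass[french]{nrc1}
\usepackage[english,french]{babel}
```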
These packages and others can be found either on your machine or can be acquired from CTAN, the Comprehensive TeX Archive Network; use the search facilities at www.ctan.org/search. The NRC document classes and this documentation can be found on CTAN in macros/latex/contrib/supported/nrc/.
### Additional macros
1. Avoid creating too many personal macros, which may conflict with the NRC document classes (and possibly with other packages) and thereby slow in-house processing of files. Where these are used, macros not actively invoked in the file should be pruned out.
2. Whenever possible, define your macros using the LaTeX \newcommand mechanism, rather than the TeX primitive \def; this way, LaTeX itself will detect any name clashes you may innocently introduce.
3. Move all non-NRC preamble material, including author macros, to the end of the file (after \end{document}), rather than deleting it. Reintroduce only what is needed.
4. Where author macros are needed, they should all be gathered at the top of the file in the preamble area, after all packages have been loaded, and clearly marked as being author macros:
%%%%%%%%%% Author macros begin
.........
%%%%%%%%%% Author macros end
5. Similarly, move all \let statements to the preamble area, where they are immediately visible to the editor.
## 4 The body
All articles have the following elements:
1. titleblock and author information
2. abstract and resume
3. headings and subheadings
4. text
5. bibliography
Most articles also include some or all of the following elements:
1. in-line and display mathematics
2. enumerated lists
3. tables
4. figures and illustrations (e.g., PostScript)
5. footnotes
6. offset quoted passages
7. acknowledgements
\maketitle
This command activates the titleblock commands.
The nrc1 class requires this command to appear **before** the abstract/resume block of text.
The nrc2 class requires this command to appear **after** the abstract/resume block of text.
\maketitle* [an NRC macro]
With the nrc2 class only, when Abstracts/Resumes spill over to a second page, a horizontal rule may be needed before the regular article text begins. To generate this rule use \maketitle* instead of \maketitle.
### Abstracts/Resumes
The syntax is the normal one expected for environments: a matched set of either {abstract} or {resume}:
\begin{abstract}... \end{abstract} (English)
\begin{resume}... \end{resume} (French)
Some journals may require the following, which should appear **inside** the abstract environments:
\keywords{...} [an NRC macro]
Automatically prints '_Keywords:_', followed by whatever text is input inside the argument (the curly braces).
\motscles{...} [an NRC macro]
Automatically prints '_Mots cles:_', followed by whatever text is input inside the argument (the curly braces).
\PACS{...} [an NRC macro]
Automatically prints 'PACS Nos.:' (Fr. 'PACS N^os :'), followed by whatever material is input inside the argument.
\PACS*{...} [an NRC macro]
Automatically prints 'PACS No.:' (Fr. 'PACS N^o :'), followed by whatever material is input inside the argument.
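As an illustration (placeholder text and a hypothetical PACS number, not taken from the guide), the abstract macros combine like this:

```latex
\begin{abstract}
Abstract text goes here.
\keywords{first keyword, second keyword}
\PACS{07.85.Qe} % hypothetical PACS number
\end{abstract}
```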
### Headings and subheadings
Five levels, numbered automatically.5 Line breaks can be forced by using \\. To suppress numbering (e.g., for 'Acknowledgements'), use an asterisk before the opening curly brace: \section*{Acknowledgements}.
Footnote 5: An alternate set of headings commands also exists: \Asection, \Section, \Section, \Section, and \Section, for levels 1 through 5, respectively.
For sub- and superscripts in section titles, use the macros \textsubscript and \textsuperscript, respectively.
1. \section{...} Level-1 heading
2. \subsection{...} Level-2 heading
3. \subsubsection{...} Level-3 heading
4. \subsubsubsection{...} Level-4 heading
5. \paragraph{...} Level-5 heading
### Text
Same as the default LaTeX commands:
\begin{quote}... \end{quote}
\begin{enumerate}... \end{enumerate}
\begin{itemize}... \end{itemize}
\begin{description}... \end{description}
\footnote{...}
Where lists must be flushed to the left margin, there are two NRC-specific environments to use:
\begin{filename}... \end{filename} [NRC]
Generates a numbered list (first level only) with labels flushed to the left margin. No nesting possible.
\begin{filename}... \end{filename} [NRC]
Generates a bulletted list (first level only) with labels flushed to the left margin. No nesting possible.
#### 4.4.1 Column switching in nrc2
On occasion, material for 2-column journals is best set full-width, interrupting the two text columns. For equations, the following customized code will achieve this effect:
\begin{FullWidth}[0.5]
\LeftColumnBar
<equation to span both columns>
\RightColumnBar
\end{FullWidth}
The following description provides details of each step:
\begin{FullWidth}... \end{FullWidth} This environment encloses the material which is to span the two columns. The text for the two columns immediately above this environment will be balanced. Text immediately below the environment will resume the 2-column layout.
The optional [0.5] argument ('one half' in this example) is an adjustment factor, affecting the split between left and right columns. Default is '1.0', the units are nominal line depths in the default font size; increasing the factor tends to increase the number of lines in the left column.
\LeftColumnBar
This draws a rule below the left column of the 2-column text which is above the full-width material.
\RightColumnBar
This draws a rule above the right column of 2-column text which is below the full-width material.
### Appendices
Where only the word 'Appendix' is needed, use the command \section*{Appendix} (note that the asterisk suppresses any section numbering, either by digit or letter). If equations within the Appendix are to restart at '1', insert
\setcounter{equation}{0}
If more than the word 'Appendix' is to appear, then the \section command must be augmented by either \appendix or \appendix*.
The \appendix command (unmodified) behaves as in the standard LaTeX classes; so, for 'A. Title of First Appendix', the following code is used:
\appendix
\section{Title of First Appendix}
For 'Appendix A:' + a heading (and then 'Appendix B:' + its heading, etc.), the following code will do the job (notice that the word 'Appendix' is **not** input):
\appendix*
\section{A subheading}
...
\section{Next subheading}
Both \appendix and \appendix* preserve \numberwithin commands, as one might expect: equations in appendix A are numbered 'A.1', 'A.2', etc. To ensure numbering is correctly applied throughout all appendices, insert \numberwithin **before** the \section command.
## 5 Resources
The following documentation, newsgroups, and web pages are useful sources to consult for help, news, and updates. Keep in mind, however, that conflicts may arise when
### Books and articles
The LaTeX Companion: by <NAME>, <NAME>, and <NAME> (Addison-Wesley, 1994).
Contains many details to assist users. Caveats:
Chapter 8 is no longer valid -- a revised version is available in both .ps and .pdf formats from CTAN.9 As well, the sections on graphics and colour have been superseded by material in The LaTeX Graphics Companion.
The LaTeX Graphics Companion: Illustrating Documents with LaTeX and PostScript, by <NAME>, <NAME>, and <NAME> (Addison-Wesley, 1997).
Footnote 9: CTAN = Comprehensive TeX Archive Network; a list of site addresses can be found on the TUG home page www.tug.org. Follow the links to /tex-archive/info/companion-rev.
Math into LaTeX: An Introduction to LaTeX and AMS-LaTeX, by <NAME> (Birkhauser, Boston and Springer Verlag, New York, 1996).
First Steps in LaTeX: by <NAME> (Birkhauser, Boston, 1999).
The TeXbook: by <NAME> (Addison-Wesley, 1986).
LaTeX: A Document Preparation System -- User's Guide and Reference Manual, by <NAME> (Addison-Wesley, 1994, 2nd ed).
A Guide to LaTeX2e: by <NAME> and <NAME> (Addison-Wesley, 1998, 3rd ed).
<NAME>: "Breaking equations," TUGbout 18,3 (Sept 1997): 182-194.
<NAME>: "Using EPS graphics in LaTeX2\({}_{\mathcal{E}}\) documents," TUGbout 17,1 (March 1996): 43-53.
<NAME>: "Using EPS graphics in LaTeX2\({}_{\mathcal{E}}\) documents, Part 2: Floating figures, boxed figures, captions, and math in figures," TUGbout 17,3 (Sept. 1996): 288-310.
The latest version of the Reckdahl material can be found on CTAN in info/epslatex in both .ps and .pdf formats.
### Electronic resources
**www.tug.org:**: the most complete stepping-stone to the world-wide TeX community, including the CTAN archives, user groups, news, and so on.
**comp.text.tex:**: a general all-purpose newsgroup for LaTeX users. Consult your local technical support group to see if newsgroup access is available via your browser.
**FAQ:**: put together by the UK TeX Users Group; available via the TUG web page.
**Listserv lists:**: there are a great number of specialised lists. Consult the TUG web pages for details.
**[http://groups.google.com/](http://groups.google.com/)**: holds an archive of usenet discussions, and may be used to review current topics of concern, or to search for answers to specific questions. Unfortunately, the service does **not** offer facilities for posting to usenet, at present.
## 6 In-house Coding for Articles
A template file, with all the main preamble lines of code already input, is available (see Appendix B). At the top of the new file, insert the contents of:
nrc-opening.tex
and begin to **un**comment those lines which are pertinent for the file. There are brief notes in the template, indicating the purpose of each macro line, along with cross-references to pages in these guidelines. Delete or leave commented those lines which are not relevant to the file.
**Note:** Only invoke those packages and/or macros which are present in the file; for example, it is misleading to load a graphics package if there are no figures in the file.
### Changing class option choices
Included in the main class options are some which are intended for authors only; remove any of the following options before processing author files in-house:
author, genTeX, type1rest, usecmfonts
On the other hand, there are a number of class options related to various stages of in-house production and thus intended only for NRC editorial staff. Below is a list of these options, followed by a brief description of their purpose:
\documentclass[<options here>]{<class here>}

Options for both classes: breakaddress, preprint, proof, pagnf, trimmarks, finalverso
For nrc2 only: twocolid OR twocolid*

breakaddress This option affects the author IDbox at the bottom of the titlepage. It inserts a linebreak between the author name and address; the default setting has them print on the same line.
twocolid For nrc2 only. This option affects author information (the IDbox at page bottom): the text spans both columns.10
twocolid* For nrc2 only. This variation for the IDbox also spans both columns, but the material inside is itself set up in two columns.
Footnote 10: The default is to set all IDbox material into the bottom of the left column.
preprint This affects headers and footers, omitting such items as dates, page numbers, and so on. For any additional text in running heads (e.g., 'Rapid Communication'), use \shortauthor.
proof Prints a centred footer on every page with the following text: 'Proof/\Epreuve'.
**Note:** Comment out when DOI line must appear at bottom centre of opening page. See section 6.5.
pagnf Prints a centred footer on every page with the following text: 'Pagination not final/Pagination non finale'.
**Note:** Comment out when DOI line must appear at bottom centre of opening page. See section 6.5.
trimmarks Prints cropmarks at all four corners. Note that trimmarks for nrc2 are off the regular \(8.5\times 11\)-inch paper, but will be visible if oversized paper is used.
finalverso Specifies that the paper should end on a recto page (creating a blank, unnumbered page if the text doesn't end there by itself); the blank page does **not** appear in the paper's page count.
### Additional packages
A number of additional packages are included in the template file nrc-opening.tex. Uncomment those packages which will be needed for each specific article.
\usepackage{color} The color package is used for in-house production of reversed out text (white on black). Ensure that no driver option is specified here, as it would over-ride the in-house printer set-up. See section 6.6 for details.
\usepackage{dcolumn} At present, this remains commented out, as the NRC's need for left-justified decimal alignment is not possible via dcolumn. Left-justified alignment at present is achieved by using the \llap, \rlap, and \phantom commands.
\usepackage{url} Inserts line breaks into e-mail and website addresses. The package **and** its additional line of code must be uncommented.
\usepackage{array} Allows for raggedright columns in tables. The package **and** its additional line of code must be uncommented.
\usepackage{cases} Makes it possible for a left curly brace to span several lines of equations. The package **and** its additional line of code must be uncommented.
**personal macros** Over time, it may become apparent that some small modifications or shorthands are used in almost all papers. Until such changes are incorporated into the document classes, these should not be inserted into each article file but rather stored in a separate file, loaded via the \usepackage command and inserted after all other packages.
### Package to remove
If the user has specified
\usepackage[T1]{fontenc}
so as to enable French-language hyphenation to work when using CM or restricted Type 1 fonts, the package invocation should be deleted (the NRC classes supply their own fontenc invocation).
### Additional macros
Following the loading of all packages and their options, files may contain additional macros from the author (see page 3 for instructions provided to authors). These should be clearly marked off with, for example, a row of %% signs both above and below. Keep in mind the potential for author definitions to interfere or over-ride journal macros and specifications; for example, authors may have commands to specify page dimensions, or fonts for sections, or numbering schemes. Where these do not collide with journal requirements, they can probably be safely retained. However, where there is interference, journal definitions take precedence. Ideally, authors will increasingly switch to using the NRC's document classes and reduce the chances of such problems.
### Other additions in the preamble area
Some if not all of the following macros are used by the NRC's in-house production team, and not by the author. They are input in the file after all packages have been loaded, and before the \begin{document} statement:
\setcounter{page}{<number>}
\journal{<abbrev.>}
\journalcode{<acro>}
\volyear{<vol no.>}[<copyright year>]{<year>}
\filename{<file no.>}
\received{<complete date>}
\rereceived{<complete date>}
\accepted{<complete date>}
\reaccepted{<complete date>}
\IDdate or \IDdates{<Addit'nal text + date info>}
\webpub{<complete date>}
\commdate{<complete date>}
\assoced{<name of assoc. ed.>}
\correct{<name of correspond. ed.>}
\setcounter{page}{...}
Insert starting page number for article. The information will be printed on the titlepage (bottom left) and in the running head; the complete page range will be calculated and inserted automatically when the file is run a second time.
\journal{...}
Specific journal abbreviations must be entered via this macro (e.g., Can. J. Civ. Eng.). The \journal command records the web address that will be used for this paper when it is published on the web; note that the \journalcode command may be used as an alternative to \journal.
See Appendix A for complete list of journal abbreviations.
\journalcode{...}
The argument is the "journal acronym" (see table in Appendix A for a list). This acronym identifies the journal, and the \journalcode command uses it to set the journal abbreviation and the web site addresses; note that the \journal command may be used as an alternative to \journalcode.
\volyear{...}[...]{...}
First argument is for the volume number. The second (optional) argument specifies the copyright year; if the argument is not present, the copyright year is assumed to be the same as the production year. The third argument specifies the publication year, which is used in the titlepage footer and in the left running head.
\filename{...}
Insert the NRC's file number here. The number will be appended to the canned text 10-1139/, which appears bottom centre of the opening page. If \filename{...}
\filename{...}
Add an asterisk to the \filename{...}
necessary for the filename by be prefixed to the page numbers, in addition to appearing in the DOI line.
All page numbers in the headers, and on the opening page at the bottom left will have the filenumber prefixed to them.
**Note:** The filenumber will **not** be prefixed to any page cross-references (via the \pageref macro).
\received{...}
Insert date as per journal style -- e.g., June 6, 2001 -- but without a final period (it is automatically inserted). The word 'Received' (Fr. 'Reçu le') will be automatically generated; however, for French-language articles the date must be input in French (e.g., 6 juin 2001). This text appears in the author IDbox area.
\rereceived{...}
Same instructions as for \received. The text 'Revision received' (Fr. 'Révision reçue le') will be automatically generated. This text appears in the author IDbox area.
\accepted{...}
Same instructions as for \received. The word 'Accepted' (Fr. 'Accepté le') will be automatically generated. This text appears in the author IDbox area.
\reaccepted{...}
Same instructions as for \received. The text 'Revision accepted' (Fr. 'Révision acceptée le') will be automatically generated. This text appears in the author IDbox area.
\IDdates{...}
Unlike \received and \accepted, no canned text or final punctuation is included, allowing the user to insert customised text and/or date information, which appears in the author IDbox area. An alias, \IDdate, is also available.
\webpub{...}
Insert the date of publication on the NRC website as per journal style -- e.g., June 6, 2001 -- but without a final period (it is automatically inserted). The text will appear in the author IDbox area. For English-language articles, the text 'Published on the NRC Research Press Web site at <web address> on <date>' will be automatically generated. The website address is generated by using either the \journal or \journalcode macros. For French-language articles, the text 'Publié sur le site <web address>, le <date>' will be automatically generated. Note that the date must be input in French (e.g., 6 juin 2001).
\commdate{...}
Insert date as per journal style -- e.g., June 6, 2001 -- but without a final period (it is automatically inserted). The text 'Written discussion of this article is welcomed and will be received by the Editor until' (Fr. 'Les commentaires sur le contenu de cet article doivent être envoyés au directeur scientifique de la revue avant le') will be automatically generated; however, for French-language articles the date must be input in French (e.g., 6 juin 2001). This text appears in the author IDbox area.
\assoced{...}
Insert the name of the associate editor, without a final period. The text 'Paper handled by Associate Editor' (Fr. 'Production de l'article coordonnée par le directeur scientifique associé') will be automatically generated. This text appears in the author IDbox area.
\correct{...}
Insert the name of the corresponding editor, without a final period. The text 'Corresponding Editor:' (Fr. 'Directeur scientifique correspondant :') will be automatically generated. This text appears in the author IDbox area.
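By way of illustration, a hedged sketch (all values are placeholders of ours) of an in-house preamble block built from these macros:

```latex
\setcounter{page}{101}
\journalcode{cjc}          % hypothetical journal acronym (see Appendix A)
\volyear{79}{2001}
\filename{V01-042}         % hypothetical NRC file number
\received{June 6, 2001}
\accepted{August 1, 2001}
\webpub{September 10, 2001}
```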
### Special titleblocks
Some journal material requires a special heading: a solid black stripe with reversed-out white lettering. The white-on-black effect requires the presence of a special package in the preamble area, immediately below the graphics package, in addition to the special title coding:
\usepackage{color}

Since both the graphicx package and color share the same option, it is possible to merge them into one line:

\usepackage{graphicx,color}

Having added the color package, the actual special title command will now work. There are two versions of the command:
\specialtitle This allows the regular titleblock (\title, etc.) to be included with the special title; for example, a review article with its own title.
```
\begin{document}
\specialtitle{REVIEW/SYNTH\'ESE}
\title{Regular article title}
\author{Someone's name here}
\address{Someplace nice and warm}
\correspond
\shortauthor{Review/Synth\'ese}
\maketitle
```
\specialtitle* The regular article titleblock cannot be used with this variant; for example, an editorial or other non-article material.
```
\begin{document}
\specialtitle*{EDITORIAL/\'EDITORIAL}
\shortauthor{Editorial/\'Editorial}
\maketitle
```
For non-articles, the headers and footers are changed by using the \pagestyle{nrcplain} command. The page numbers will appear at bottom centre, the NRC Canada copyright footer is suppressed, and the running heads are suppressed entirely. For further adjustments to pagination, see 'Miscellaneous adjustments'.
### Translations of abstracts/resumes
The following lines are inserted at the end of each abstract or résumé, before the \end{...} statement:

\translation generates the text: '[Journal translation]'.

\traduit generates the text: '[Traduit par la rédaction]'.

\Traduit generates the text: '[Traduit par la rédaction]'.

Note that author files will only have one: an abstract or a résumé. It is useful to insert a suitable \vspace to represent the approximate space the translation would require, so that page breaks will not be unduly affected by the additional text.
### Miscellaneous adjustments
1. Journals requiring more space between lines will need the following command inserted into the preamble area: \easebaselines This command will also adjust the inter-row spacing within tables (the value of \arraystretch increases to 1.05).
2. For **roman numerals**, with only page numbers in the footers, insert the following lines at the end of the preamble, just above the \begin{document} line (notice that, in this example, pagination will begin with roman iii): \pagestyle{nrcplain} \pagenumbering{roman} \setcounter{page}{3}
3. To add parentheses (or any other design element) to (roman) page numbers, insert the following just before the \setcounter{page}{...} command: \renewcommand{\thepage}{(\roman{page})}
4. For full-width text spanning two columns, the default left and right margins can be altered by using the following optional arguments to the {WideText} environment (recall that the default values are 0em on the left, 3em on the right): \begin{WideText}[<l.margin>][<r.margin>] <text here> \end{WideText}
### Two-column bilingual texts
Special coding at both the top of the file and around the bilingual paragraphs is required.
#### 6.9.1 In the preamble
First, load the appropriate package and options. These are added after the \documentclass line in the preamble.
1. if main (left column) language is English: \usepackage[french,english]{babel} As English is the default, there is no need to specify it as an option to the document class.
2. if main (left column) language is French, there is an additional option to add to the document class line: \documentclass[french]{nrcl} \usepackage[english,french]{babel} See sections 2.1 and 3.1, which also discuss the babel package.
#### 6.9.2 In the bilingual text
The next step is to code the English and French texts so that the tops of matching paragraphs align horizontally. One set of codes surrounds the entire bilingual set of paragraphs; another set of codes is put around each matched set of English-French paragraphs.
```
\begin{par-text}[<language>]
\begin{par-para}
... <English paragraph> ...
\othercol
... <French paragraph> ...
\end{par-para}

\begin{par-para}
... <English paragraph> ...
\othercol
... <French paragraph> ...
\end{par-para}
\end{par-text}
```
## Appendix A Journal reference grid
| Journal name | Journal abbreviation | Journal acronym | English website | French website |
| --- | --- | --- | --- | --- |
| Biochemistry and Cell Biology | Biochem. Cell Biol. | bcb | http://bcb.nrc.ca | http://bbc.cnrc.ca |
| Canadian Geotechnical Journal | Can. Geotech. J. | cgj | http://cgi.nrc.ca | http://rcg.cnrc.ca |
| Canadian Journal of Botany | Can. J. Bot. | cjb | http://cranjobt.nrc.ca | http://revenabot.cnrc.ca |
| Canadian Journal of Chemistry | Can. J. Chem. | cjc | http://cranjobt.nrc.ca | http://revenabot.cnrc.ca |
| Canadian Journal of Civil Engineering | Can. J. Civ. Eng. | cjc | http://cjc.nrc.ca | http://recg.cnrc.ca |
| Canadian Journal of Earth Sciences | Can. J. Earth Sci. | cjes | http://cjes.nrc.ca | http://rest.cnrc.ca |
| Canadian Journal of Fisheries and Aquatic Sciences | Can. J. Fish. Aquat. Sci. | cjfas | http://cjfas.nrc.ca | http://jcsha.cnrc.ca |
| Canadian Journal of Forest Research | Can. J. For. Res. | cjfr | http://cjfr.nrc.ca | http://rcf.cnrc.ca |
| Canadian Journal of Microbiology | Can. J. Microbiol. | cjm | http://cjm.nrc.ca | http://rcm.cnrc.ca |
| Canadian Journal of Physics | Can. J. Phys. | cjp | http://cjp.nrc.ca | http://rcp.cnrc.ca |
| Canadian Journal of Zoology | Can. J. Zool. | cjz | http://cjz.nrc.ca | http://rcg.cnrc.ca |
| Environmental Reviews | Environ. Rev. | er | http://er.nrc.ca | http://de.cnrc.ca |
| Genome | Genome | gen | http://genome.nrc.ca | http://genome.cnrc.ca |
| Journal of Environmental Engineering and Science | J. Environ. Eng. Sci. | jes | http://jees.nrc.ca | http://rge.cnrc.ca |
%% \usepackage{bm} %% 'bold math' via \bm command
%% c. for website addresses:
%% \usepackage{url} %% inserts linebreaks automatically
%% \NRCurl{url}
%% d. biblio-related:
%% \usepackage{cite} %% enhances options for \cite commands
%% e. for English-language papers:
%% \usepackage[french,english]{babel}
%% f. for French-language papers:
%% \usepackage[english,french]{babel} %% remember to add french as a
%% CLASS option, above
%% g. for ragged-right tables:
%% \usepackage{array}
%% \newcommand{\PreserveBackslash}[1]{\let\temp=\\#1\let\\=\temp}
%% \let\PBS=\PreserveBackslash
%% h. for left curly brace to span several lines of equations:
%% \usepackage{cases}
%% \expandafter\let\csname numc@left\expandafter\endcsname\csname
%% z@\endcsname
%% 3. Resetting float parameters:
%% a. in nrc1:
%% \renewcommand{\topfraction}{.95}
%% \renewcommand{\textfraction}{.05}
%% \renewcommand{\floatpagefraction}{.95}
%% b. in nrc2:
%% \renewcommand{\topfraction}{.95}
%% \renewcommand{\floatpagefraction}{.95}
%% \renewcommand{\textfraction}{.05}
%% \renewcommand{\dblfloatpagefraction}{.95}
%% 4. Resetting journal-specific parameters:
%% a. eqn nos. with section nos.:
%% \numberby{equation}{section}
%% \setcounter{equation}{0}
%% b. in-line citations to use ( ) instead of default [ ]:
%% \renewcommand{\citeleft}{(}
%% \renewcommand{\citeright}{)}
%% c. for JEES (to expand inter-line spacing; see p.12 of guide):
%% \easebaselines
%% 5. Miscellaneous macros to always have available:
%% a. shorthands:
\let\p=\phantom
\let\mc=\multicolumn
%% Title, Author(s), Address(es) -- see p.4 of userguide for
%% various options to save time and keyboarding, esp. where
%% authors share same address(es).
\title{}

%% Author 1:
\author[<NAME>]{<NAME>} %% opt. arg. ONLY if IDbox name is diff. from titleblock name
\address{} %% address of 1st author

%% Author 2:
\author{<NAME>}
\address{}

%% Author 3:
\author{<NAME>}
\address{}
\shortauthor{Humar, Rahgozar, and Murray} %% for headers
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% This line goes here in nrc1.
%% \maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Abstract/Resume area -- see pp.5,12 of userguide:
\begin{abstract}
Abstract text
%% \keywords{}
%% \translation
\end{abstract}

\begin{resume}
Texte du résumé
%% \motscles{}
%% \Traduit %% or \traduit
\end{resume}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% END OF TEMPLATE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Ch. -- 11 NOV 02
bower_c3.jsonl | personal_doc | Markdown | # C3.js D3-based reusable chart library
C3 makes it easy to generate D3-based charts by wrapping the code required to construct the entire chart. We don't need to write D3 code any more.
C3 gives some classes to each element when generating, so you can define a custom style by the class and it's possible to extend the structure directly by D3.
C3 provides a variety of APIs and callbacks to access the state of the chart. By using them, you can update the chart even after it's rendered.
Because of its dependence on D3, C3 supports only the modern browsers that D3 supports. Please see the description in D3.
Note: For IE9 and IE10, polyfill is required because c3 uses MutationObserver, which is not supported in those versions. However, it's not required if charts always will be binded to the DOM specified by bindto because MutationObserver is not called in that case.
Note: If you need to use D3 v3.x, please use C3 v0.4.22, which is compatible with D3 v3.x.
MIT
# Getting Started
In this guide, we are going to show you how to get started with C3.
Download the latest version here:
Installing by Bower/Component is also available with the name c3.
Then, load the scripts and css:
```
<!-- Load c3.css -->
<link href="/path/to/c3.css" rel="stylesheet">

<!-- Load d3.js and c3.js -->
<script src="/path/to/d3.v5.min.js" charset="utf-8"></script>
<script src="/path/to/c3.min.js"></script>
```
C3 depends on D3, so please load D3 too.
C3 generates a chart by calling generate() with an argument object, and the chart will be inserted into the element specified as a selector by bindto in that argument.
Prepare the element to bind the chart:
```
<div id="chart"></div>
```
And, call generate() with arguments:
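For example, a minimal sketch that binds two line series to the element prepared above (the data values are placeholders):

```
var chart = c3.generate({
    bindto: '#chart',
    data: {
        columns: [
            ['data1', 30, 200, 100, 400, 150, 250],
            ['data2', 50, 20, 10, 40, 15, 25]
        ]
    }
});
```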
C3 supports the asynchronous module definition (AMD) API. If you use RequireJS, you can load like this:
```
require.config({
baseUrl: '/js',
paths: {
d3: "http://d3js.org/d3.v5.min"
}
});
require(["d3", "c3"], function(d3, c3) {
c3.generate({
...
});
});
```
Then, you will see the chart:
Data can be loaded as column-oriented data, row-oriented data, or CSV from a URL.
There are several options to customize the chart, and you can see those here:

The chart can be customized by giving some options when generating. We will introduce some of them here:

- Introduce an additional axis for data2 by adding data.axes and axis.y2.show.
- Show labels for each axis by adding axis.y.label and axis.y2.label.
- Show data2 as a bar chart by adding data.types.
- Format the values of each data series by adding axis.y.tick.format.

All four options are combined in the sketch below.
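A minimal sketch combining the four options above (the column values and label texts are placeholders):

```
var chart = c3.generate({
    bindto: '#chart',
    data: {
        columns: [
            ['data1', 30, 200, 100, 400, 150, 250],
            ['data2', 5000, 2000, 1000, 4000, 1500, 2500]
        ],
        axes: {
            data2: 'y2'   // bind data2 to the additional y axis
        },
        types: {
            data2: 'bar'  // show data2 as a bar chart
        }
    },
    axis: {
        y: {
            label: 'Y Label',
            tick: {
                format: d3.format(',')  // e.g. 1,000
            }
        },
        y2: {
            show: true,
            label: 'Y2 Label'
        }
    }
});
```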
More information about the options, please see Examples. (We'll add the reference soon)
By using APIs, you can update the chart after it's been rendered. We will introduce some of the APIs here. APIs can be called through the object returned from generate().
By using load() API, you can load data and update the chart dynamically as follows:
```
chart.load({
columns: [
['data1', 300, 100, 250, 150, 300, 150, 500],
['data2', 100, 200, 150, 50, 100, 250]
]
});
```
By using unload() API, you can unload the data dynamically as follows:
```
chart.unload({
ids: ['data2', 'data3']
});
```
Please use the unload param in the load() API if load and unload need to run simultaneously. Please see this example.
By using show() and hide() API, you can show/hide the data dynamically as follows:
```
chart.hide(['data2', 'data3']);
chart.show(['data2', 'data3']);
```
The documentation about the APIs is still sparse, so please check the issues on GitHub. There might be some hints about what you want to do. (We will add the documentation soon)
C3 gives some classes to each element when generating, so you can change the style of the elements by using those classes.
The lines have c3-line-[id] class, so this class can be used to define the style in css as follows:
```
#chart .c3-line-data2 {
stroke-width: 5px;
}
```
Please check the class of each element if you want to change its style. The Web Inspector would be useful. (We will add the documentation for class definitions soon)
Please check the examples and the issues on GitHub for more information. Sorry for the poor documentation; we're working on it now, so please give us some time. Thank you.
# Examples

- Line chart with sequential data.
- Simple line chart with timeseries data.
- Display as Spline Chart.
- Simple line chart with custom x.
- Multiple line chart with multiple custom x.
- Set regions for each data with style.
- Display as Step Chart.
- Display as Area Chart.
- Display as Stacked Area Chart.
- Display as Bar Chart.
- Display as Stacked Bar Chart.
- Display as Scatter Plot.
- Display as Pie Chart.
- Display as Donut Chart.
- Display as Gauge Chart.
- Display as Stanford Chart.
- Display all kinda charts up in here.
- Show ticks as categorized by each data.
- Switch x and y axis position.
- Additional y axis can be added.
- Format x axis tick text.
- Set the number of ticks on X Axis.
- Set tick texts on X Axis.
- Set cull ticks or not on X Axis.
- Set ticks position to x of data.
- Convert time to UTC.
- Rotate x axis tick text.
- Format y axis tick text.
- Set padding for y axis.
- Set range for y axis.
- Set label for axis.
- Set axis label position.
- Column-oriented data can be used as input.
- Row-oriented data can be used as input.
- JSON can be used as input.
- Data from URL can be used as input.
- Load data with x values on category axis.
- Load data dynamically.
- Set name for each data.
- Set color according to data.
- Define data order. This will be used for stacked bar chart.
- Show label of data.
- Format label of data.
- Number format localization using D3 locale settings.
- Show grid lines for x and y axis.
- Add optional grid lines on x grid.
- Add optional grid lines on y grid.
- Show rects on chart.
- Show rects on timeseries chart.
- Show sub chart for zoom and selection range.
- Zoom by mouse wheel event and slide by drag.
- Set visibility of legend.
- Show legend on bottom or right side.
- Build custom legend.
- Set visibility of tooltip.
- Show tooltips as grouped or not.
- Set format for title and value on tooltip.
- Set chart size in px.
- Change padding for the chart.
- Set custom color pattern.
- Set duration of transition for chart animation.
- Load/Unload data as flowing.
- Update data names.
- Update data colors.
- Update axis labels.
- Update axis range.
- Resize chart.
- Update custom x grids.
- Transform to line chart.
- Transform to spline chart.
- Transform to bar chart.
- Transform to area chart.
- Transform to area spline chart.
- Transform to scatter plot.
- Transform to pie chart.
- Transform to donut chart.
- Set style for regions.
- Set style for grids.
# Options
The CSS selector or the element which the chart will be set to. D3 selection object can be specified. If other chart is set already, it will be replaced with the new one (only one chart can be set in one element).
If this option is not specified, the chart will be generated but not be set. Instead, we can access the element by chart.element and set it by ourselves.
When chart is not binded, c3 starts observing if chart.element is binded by MutationObserver. In this case, polyfill is required in IE9 and IE10 because they do not support MutationObserver. On the other hand, if chart always will be binded, polyfill will not be required because MutationObserver will never be called.
`#chart`
```
bindto: '#myContainer'
// or element
bindto: document.getElementById('myContainer')
// or D3 selection object
bindto: d3.select('#myContainer')
```
The desired width of the chart element.
```
size: {
width: 640
}
```
The desired height of the chart element.
```
size: {
height: 480
}
```
The padding on the top of the chart.

```
padding: {
top: 20
}
```
The padding on the right of the chart.
`undefined`
```
padding: {
right: 20
}
```
The padding on the bottom of the chart.

```
padding: {
bottom: 20
}
```
The padding on the left of the chart.

```
padding: {
left: 20
}
```
Set custom color pattern.
`undefined`
```
color: {
pattern: ['#1f77b4', '#aec7e8', ...]
}
```
Indicate if the chart should have interactions.
If `false` is set, all interactions (showing/hiding the tooltip, selection, mouse events, etc.) will be disabled. `true`
Set duration of transition (in milliseconds) for chart animation.
If 0 or null is set, the transition will be skipped. So, this makes initial rendering faster, especially in case you have a lot of data.
`350`
```
transition: {
duration: 500
}
```
Set a callback to execute when the chart is initialized.
`function () {}`
```
oninit: function () { ... }
```
Set a callback which is executed when the chart is rendered. Basically, this callback will be called each time the chart is redrawn.

`function () {}`

Set a callback to execute when the mouse enters the chart.

`function () {}`

```
onmouseover: function () { ... }
```

Set a callback to execute when the mouse leaves the chart.

`function () {}`

```
onmouseout: function () { ... }
```
Set a callback to execute when user resizes the screen.
`function () {}`
```
onresize: function () { ... }
```
Set a callback to execute when screen resize finished.
`function () {}`
Load a CSV or JSON file from a URL. Note that this will not work if loading via the "file://" protocol, as most browsers will block XMLHttpRequests.
```
var chart = c3.generate({
data: {
url: '/data/c3_test.csv'
}
});
```
Parse a JSON object for data. See also data.keys.
Load data from a multidimensional array, with the first element containing the data names, the following containing related data in that order.
```
rows: [
['data1', 'data2', 'data3'],
[90, 120, 300],
[40, 160, 240],
[50, 200, 290],
[120, 160, 230],
[80, 130, 300],
[90, 220, 320]
]
```
Load data from a multidimensional array, with each element containing an array consisting of a datum name and associated data values.
```
columns: [
['data1', 30, 20, 50, 40, 60, 50],
['data2', 200, 130, 90, 240, 130, 220],
['data3', 300, 200, 160, 400, 250, 250]
]
```
Used if loading JSON via data.url:
```
{data: {mimeType: 'json'}}
```
Choose which JSON object keys correspond to desired data.
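For example, an array of objects can be charted by naming which keys hold the values (the object shape here is illustrative):

```
data: {
    json: [
        {name: 'www.site1.com', upload: 200, download: 200},
        {name: 'www.site2.com', upload: 100, download: 300},
        {name: 'www.site3.com', upload: 300, download: 200}
    ],
    keys: {
        value: ['upload', 'download']
    }
}
```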
Specify the key of x values in the data.
We can show the data with non-index x values by this option. This option is required when the type of x axis is timeseries. If this option is set on category axis, the values of the data on the key will be used for category names.
`undefined`
```
data: {
x: 'date'
}
```
Specify the keys of the x values for each data.
This option can be used if we want to show the data that has different x values.
`{}`
```
data: {
xs: {
data1: 'x1',
data2: 'x2'
}
}
```
data.x should be used if all of the data have the same x values.
Set a format to parse string specified as x.
`%Y-%m-%d`
```
data: {
xFormat: '%Y-%m-%d %H:%M:%S'
}
```
Set custom data name.
`{}`
```
data: {
names: {
data1: 'Data Name 1',
data2: 'Data Name 2'
}
}
```
Set custom data class.
If this option is specified, the element g for the data has an additional class that has the prefix c3-target- (e.g. c3-target-additional-data1-class).
`{}`
```
data: {
classes: {
data1: 'additional-data1-class',
data2: 'additional-data2-class'
}
}
```
Set groups for the data for stacking.
`[]`
```
data: {
groups: [
['data1', 'data2'],
['data3']
]
}
```
Set y axis the data related to. y and y2 can be used.
`{}`
```
data: {
axes: {
data1: 'y',
data2: 'y2'
}
}
```
Set chart type at once.
If this option is specified, the type will be applied to every data. This setting can be overwritten by data.types.
`line`
```
data: {
type: 'bar'
}
```
Set chart type for each data.
This setting overwrites data.type setting.
`{}`
```
data: {
types: {
data1: 'bar'
data2: 'spline'
}
}
```
Show labels on each data points.
`false`
```
data: {
labels: true
}
```
Set formatter function for data labels.
The formatter function receives 4 arguments, v, id, i and j, and it must return a string that will be shown as the label. The arguments are: v is the value of the data point, id is the id of the data series, i is the index of the data point, and j is the sub index of the data point.

A formatter function can be defined for each data series by specifying an object, and a D3 formatter function can be set (e.g. d3.format('$')).
`{}`
```
data: {
labels: {
format: function (v, id, i, j) { ... }
// it's possible to set for each data
//format: {
// data1: function (v, id, i, j) { ... },
// ...
//}
}
}
```
Define the order of the data.
This option changes the order of stacking the data and the pieces of pie/donut charts. Available values are `desc`, `asc`, a function, and `null`. If `null` is specified, the order will be the order in which the data was loaded. If a function is specified, it will be used to sort the data and will receive the data as its argument. `desc`
```
data: {
order: 'asc'
}
```
Define regions for each data.
The values must be an array for each data and it should include an object that has start, end, style. If start is not set, the start will be the first data point. If end is not set, the end will be the last data point.
Currently this option supports only line chart and dashed style. If this option specified, the line will be dashed only in the regions.
An optional label property can be provided to display a label for the region. If a label option is not specified, no label will be displayed for the region. For each region, you may also specify the paddingY and paddingX options to control the position of label text. Finally, a vertical option can be used to identify whether or not the label text should be rotated 90 degrees.
`{}`
```
data: {
regions: {
data1: [
{'start':1, 'end':2, 'style':'dashed'},
{'start':3, label:"Region 2", paddingX:2, paddingY:2, vertical:true}
],
...
}
}
```
Set color converter function.
This option should be a function. The specified function receives color (e.g. '#ff0000') and d that has data parameters like id, value, index, etc., and it must return a string that represents a color (e.g. '#00ff00').
`undefined`
```
data: {
color: function (color, d) { ... }
}
```
Set color for each data.
`{}`
```
data: {
colors: {
data1: '#ff0000',
...
}
}
```
Hide each data when the chart appears.
If `true` specified, all of data will be hidden. If multiple ids specified as an array, those will be hidden. `false`
```
data: {
// all of data will be hidden
hide: true
// specified data will be hidden
hide: ['data1', ...]
}
```
This option does not hide legends, so we need to use legend.hide option together if we want to hide legend too.
Set the text displayed when the loaded data is empty.
`""`
```
data: {
empty: {
label: {
text: "No Data"
}
}
}
```
Set data selection enabled.
If this option is set `true` , we can select the data points and get/set its state of selection by API (e.g. select, unselect, selected). `false`
Set grouped selection enabled.
If this option is set `true`, multiple data points that have the same x value will be selected by one selection. `false`
```
data: {
selection: {
grouped: true
}
}
```
Set multiple data points selection enabled.
If this option set `true` , multiple data points can have the selected state at the same time. If `false` set, only one data point can have the selected state and the others will be unselected when the new data point is selected. `true`
```
data: {
selection: {
multiple: true
}
}
```
Enable to select data points by dragging.
If this option is set `true`, data points can be selected by dragging. Note that, in this case, scrolling on the chart will be disabled because the dragging event will handle the event. `false`
Set a callback for each data point to determine if it's selectable or not.
The callback will receive d as an argument and it has some parameters like id, value, index. This callback should return boolean.
```
function () { return true; }
```
```
data: {
selection: {
isselectable: function (d) { ... }
}
}
```
Set the stacking to be normalized
For stacking, the `data.groups` option should be set and the data should have positive values. The y axis will be shown as a percentage (0 ~ 100%).
`false`
```
data: {
stack: {
normalize: true
}
}
```
This callback will be called when each data point clicked and will receive d and element as the arguments. d is the data clicked and element is the element clicked. In this callback, this will be the Chart object.
`function () {}`
```
data: {
onclick: function (d, element) { ... }
}
```
Set a callback for mouseover event on each data point.
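A minimal sketch of wiring the handler (the matching onmouseout option follows the same pattern; both callbacks receive the hovered data point d):

```
data: {
    onmouseover: function (d) { ... },
    onmouseout: function (d) { ... }
}
```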
Switch x and y axis position.
`false`
```
axis: {
rotated: true
}
```
Show or hide x axis.
`true`
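For example, to hide the x axis:

```
axis: {
    x: {
        show: false
    }
}
```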
Set type of x axis.
`indexed`
```
axis: {
x: {
type: 'timeseries'
}
}
```
Set how to treat the timezone of x values.
If true, treat x value as localtime. If false, convert to UTC internally.
`true`
```
axis: {
x: {
localtime: true
}
}
```
Set category names on category axis.
This must be an array that includes category names as strings. If category names are included in the data by the data.x option, this is not required.
`[]`
```
axis: {
x: {
categories: ['Category 1', 'Category 2', ...]
}
}
```
Centerise ticks on category axis.
`false`
```
axis: {
x: {
tick: {
centered: true
}
}
}
```
A function to format tick value. Format string is also available for timeseries data.
`undefined`
```
axis: {
x: {
tick: {
format: function (x) { return x.getFullYear(); }
}
}
}
```
Setting for culling ticks.
If `true` is set, the ticks will be culled, so that only a limited number of tick texts will be shown. This option does not hide the tick lines. If `false` is set, all of the ticks will be shown.
We can change the number of ticks to be shown by axis.x.tick.culling.max.
`true` for indexed axis and timeseries axis `false` for category axis
```
axis: {
x: {
tick: {
culling: false
}
}
}
```
The number of tick texts will be adjusted to less than this value.
`10`
```
axis: {
x: {
tick: {
culling: {
max: 5
}
}
}
}
```
The number of x axis ticks to show.
This option hides tick lines together with tick text. If this option is used on a timeseries axis, the tick positions will be determined precisely rather than nicely positioned (e.g. they may have rough second values).
`undefined`
```
axis: {
x: {
tick: {
count: 5
}
}
}
```
Fit x axis ticks.
If `true` set, the ticks will be positioned nicely. If `false` set, the ticks will be positioned according to x value of the data points. `true`
```
axis: {
x: {
tick: {
fit: true
}
}
}
```
Set the x values of ticks manually.
If this option is provided, the positions of the ticks will be determined based on those values. This option works with timeseries data, and the x values will be parsed according to the type of the value and the data.xFormat option.
`null`
```
axis: {
x: {
tick: {
values: [1, 2, 4, 8, 16, 32, ...]
}
}
}
```
Rotate x axis tick text.
If you set negative value, it will rotate to opposite direction.
`0`
```
axis: {
x: {
tick: {
rotate: 60
}
}
}
```
Show x axis outer tick.
`true`
```
axis: {
x: {
tick: {
outer: false
}
}
}
```
Enable multiline.
If this option is set `true` , when a tick's text on the x-axis is too long, it splits the text into multiple lines in order to avoid text overlapping. `true`
```
axis: {
x: {
tick: {
multiline: true
}
}
}
```
If this option is set and is above `0` , the number of lines will be adjusted to less than this value and tick's text is ellipsified. `0`
```
axis: {
x: {
tick: {
multiline: true,
multilineMax: 2,
}
}
}
```
Set max value of x axis range.
`undefined`
```
axis: {
x: {
max: 100
}
}
```
Set min value of x axis range.
`undefined`
```
axis: {
x: {
min: -100
}
}
```
Set padding for x axis.
If this option is set, the range of the x axis will increase/decrease by the values. If no padding is needed for the x axis, set the values to `0` . This option is ignored when the axis type is `category` . `{}`
```
axis: {
x: {
padding: {
left: 0,
right: 0
}
}
}
```
Set height of x axis.
The height of x axis can be set manually by this option. If you need more space for x axis, please use this option for that. The unit is `pixel` . `undefined`
```
axis: {
x: {
height: 20
}
}
```
Set default extent for subchart and zoom. This can be an array or function that returns an array.
`undefined`
```
axis: {
x: {
extent: [5, 10]
}
}
```
Set label on x axis.
You can set the x axis label and change its position with this option. `string` and `object` can be passed, and the position can be changed by passing an `object` that has a position key. The available positions differ according to the axis direction (vertical or horizontal). If a `string` is set, the position will be the default.

If it's a horizontal axis:

inner-right `[default]`, inner-center, inner-left, outer-right, outer-center, outer-left

If it's a vertical axis:

inner-top `[default]`, inner-middle, inner-bottom, outer-top, outer-middle, outer-bottom

`undefined`
```
axis: {
x: {
label: 'Your X Axis'
}
}
```
```
axis: {
x: {
label: {
text: 'Your X Axis',
position: 'outer-center'
}
}
}
```
Show or hide y axis.
`true`
```
axis: {
y: {
show: false
}
}
```
Show y axis inside of the chart.
`false`
Set the type of the y axis.

```
axis: {
y: {
type: 'linear'
}
}
```
Set max value of y axis.
Set min value of y axis.
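For example, both bounds can be set together (the values are illustrative):

```
axis: {
    y: {
        max: 1000,
        min: -1000
    }
}
```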
Set center value of y axis.
`undefined`
```
axis: {
y: {
center: 0
}
}
```
Set label on y axis.
```
axis: {
y: {
label: 'Your Y Axis'
}
}
```
```
axis: {
y: {
label: {
text: 'Your Y Axis',
position: 'outer-middle'
}
}
}
```
Set the formatter for y axis tick text.

This option accepts a d3.format object as well as a function you define.
`undefined`
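For example, with a D3 formatter (assuming d3 is in scope; any function that returns a string also works):

```
axis: {
    y: {
        tick: {
            format: d3.format("$,")
        }
    }
}
```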
Show y axis outer tick.

`true`

```
axis: {
y: {
tick: {
outer: false
}
}
}
```
Set y axis tick values manually.
`undefined`
```
axis: {
y: {
tick: {
values: [100, 1000, 10000]
}
}
}
```
Set the number of y axis ticks.
The position of the ticks will be calculated precisely, so the values on the ticks will not be rounded nicely. In the case, axis.y.tick.format or axis.y.tick.values will be helpful.
`undefined`
```
axis: {
y: {
tick: {
count: 5
}
}
}
```
Set padding for y axis.
You can set padding for y axis to create more space on the edge of the axis. This option accepts `object` and it can include top and bottom. top, bottom will be treated as pixels. `undefined`
Set default range of y axis.
```
axis: {
y: {
default: [0, 1000]
}
}
```
Show or hide y2 axis.
`false`
Show y2 axis inside of the chart.
`false`
Set the type of the y2 axis.

```
axis: {
y2: {
type: 'linear'
}
}
```
Set max value of y2 axis.
`undefined`
```
axis: {
y2: {
max: 1000
}
}
```
Set min value of y2 axis.
`undefined`
```
axis: {
y2: {
min: -1000
}
}
```
Show y2 axis in an inverted direction.

`false`

```
axis: {
y2: {
inverted: true
}
}
```
Set center value of y2 axis.
`undefined`
```
axis: {
y2: {
center: 0
}
}
```
Set label on y2 axis.
```
axis: {
y2: {
label: 'Your Y2 Axis'
}
}
```
```
axis: {
y2: {
label: {
text: 'Your Y2 Axis',
position: 'outer-middle'
}
}
}
```
Set the formatter for y2 axis tick text.

This option works in the same way as axis.y.tick.format.
`undefined`
Show y2 axis outer tick.

`true`

```
axis: {
y2: {
tick: {
outer: false
}
}
}
```
Set y2 axis tick values manually.
`undefined`
```
axis: {
y2: {
tick: {
values: [100, 1000, 10000]
}
}
}
```
Set the number of y2 axis ticks.
This works in the same way as axis.y.tick.count.
`undefined`
```
axis: {
y2: {
tick: {
count: 5
}
}
}
```
Set padding for y2 axis.
This works in the same way as axis.y.padding.
`undefined`
Set default range of y2 axis.
```
axis: {
y2: {
default: [0, 1000]
}
}
```
Show grids along x axis.
`false`
```
grid: {
x: {
show: true
}
}
```
Show additional grid lines along x axis.
This option accepts `array` including `object` that has value, text, position and class. text, position and class are optional. For position, start, middle and end (default) are available.
If x axis is category axis, value can be category name. If x axis is timeseries axis, value can be date string, Date object and unixtime integer.
`[]`
```
grid: {
x: {
lines: [
{value: 2, text: 'Label on 2'},
{value: 5, text: 'Label on 5', class: 'label-5'},
{value: 6, text: 'Label on 6', position: 'start'}
]
}
}
```
Show grids along y axis.
`false`
```
grid: {
y: {
show: true
}
}
```
Show additional grid lines along y axis.
This option accepts `array` including `object` that has value, text, position and class. `[]`
```
grid: {
y: {
lines: [
{value: 100, text: 'Label on 100'},
{value: 200, text: 'Label on 200', class: 'label-200'},
{value: 300, text: 'Label on 300', position: 'middle'}
]
}
}
```
Show rectangles inside the chart.
This option accepts `array` including `object` that has axis, start, end and class. The keys start, end and class are optional.
axis must be x, y or y2. start and end should be the value where regions start and end. If not specified, the edge values will be used. If timeseries x axis, date string, Date object and unixtime integer can be used. If class is set, the region element will have it as class.
`[]`
```
regions: [
{axis: 'x', start: 1, end: 4, class: 'region-1-4'}
]
```
Show or hide legend.
`true`
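For example, to hide the legend:

```
legend: {
    show: false
}
```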
Hide legend
If `true` given, all legend will be hidden. If `string` or `array` given, only the legend that has the id will be hidden. `false`
```
legend: {
hide: true
//or hide: 'data1'
//or hide: ['data1', 'data2']
}
```
Change the position of legend.
Currently bottom, right and inset are supported.
`bottom`
```
legend: {
position: 'bottom'
}
```
Change inset legend attributes.
This option accepts `object` that has the keys anchor, x, y and step.
anchor decides the position of the legend. These anchors are available: top-left, top-right, bottom-left and bottom-right.
x and y set the position of the legend based on the anchor.
step defines the max step the legend has (e.g. if 2 is set and the legend has 3 items, the legend will be arranged in 2 columns).
```
{
anchor: 'top-left',
x: 10,
y: 0,
step: undefined
}
```
```
legend: {
inset: {
anchor: 'top-right',
x: 20,
y: 10,
step: 2
}
}
```
Set click event handler to the legend item.
`undefined`
```
legend: {
item: {
onclick: function (id) { ... }
}
}
```
Set mouseover event handler to the legend item.
`undefined`
```
legend: {
item: {
onmouseover: function (id) { ... }
}
}
```
Set mouseout event handler to the legend item.
`undefined`
```
legend: {
item: {
onmouseout: function (id) { ... }
}
}
```
Show or hide tooltip.
`true`
```
tooltip: {
show: false
}
```
Set if tooltip is grouped or not for the data points.
`true`
```
tooltip: {
grouped: false
}
```
Set format for the title of tooltip.
Specified function receives x and index of the data point to show.
`undefined`
```
tooltip: {
format: {
title: function (x, index) { return 'Data ' + x; }
}
}
```
Set format for the name of each data in tooltip.
Specified function receives name, ratio, id and index of the data point to show. ratio will be `undefined` if the chart is not donut/pie/gauge. `undefined`
```
tooltip: {
format: {
name: function (name, ratio, id, index) { return name; }
}
}
```
Set format for the value of each data in tooltip.
Specified function receives name, ratio, id and index of the data point to show. ratio will be `undefined` if the chart is not donut/pie/gauge. If `undefined` returned, the row of that value will be skipped. `undefined`
```
tooltip: {
format: {
value: function (value, ratio, id, index) { return ratio; }
}
}
```
Set custom position for the tooltip.
This option can be used to modify the tooltip position by returning `object` that has top and left. `undefined`
```
tooltip: {
position: function (data, width, height, element) {
return {top: 0, left: 0};
}
}
```
Set custom HTML for the tooltip.
Specified function receives data, defaultTitleFormat, defaultValueFormat and color of the data point to show. If tooltip.grouped is `true` , data includes multiple data points. `undefined`
```
tooltip: {
contents: function (d, defaultTitleFormat, defaultValueFormat, color) {
return ... // formatted html as you want
}
}
```
Show the tooltips based on the horizontal position of the mouse.
`undefined`
```
tooltip: {
horizontal: true
}
```
Show sub chart on the bottom of the chart.
`false`
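For example:

```
subchart: {
    show: true
}
```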
Change the height of the subchart.
`undefined`
```
subchart: {
size: {
height: 20
}
}
```
Set callback for brush event.
```
subchart: {
onbrush: function (domain) { ... }
}
```
Show or hide x axis of subchart.
`true`
Enable zooming.
`false`
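For example:

```
zoom: {
    enabled: true
}
```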
There are two types of zoom behavior: 'scroll' and 'drag'
`'scroll'`
```
zoom: {
type: 'drag'
}
```
Enable to rescale after zooming.
If `true` set, y domain will be updated according to the zoomed region. `false`
```
zoom: {
rescale: true
}
```
Change zoom extent.
`[1, 10]`
```
zoom: {
extent: [1, 100] // enable more zooming
}
```
Set callback that is called when the chart is zooming.
```
zoom: {
onzoom: function (domain) { ... }
}
```
Set callback that is called when zooming starts.
Specified function receives the zoom event.
`undefined`
```
zoom: {
onzoomstart: function (event) { ... }
}
```
Set callback that is called when zooming ends.
```
zoom: {
onzoomend: function (domain) { ... }
}
```
Disable the default animation of zoom. This option is useful when you want to get the zoomed domain by onzoom or onzoomend handlers and override the default animation behavior. See #2439 for details.
`false`
```
zoom: {
enabled: true,
disableDefaultBehavior: true,
onzoomend: d => console.log(d)
}
```
Whether to show each point in line.
`true`
```
point: {
show: false
}
```
The radius size of each point.
`2.5`
```
point: {
r: 5
}
```
Whether to expand each point on focus.
`true`
```
point: {
focus: {
expand: {
enabled: true
}
}
}
```
The radius size of each point on focus.
`point.r * 1.75`
```
point: {
focus: {
expand: {
r: 1
}
}
}
```
The radius size of each point on selected.
`point.r * 4`
```
point: {
select: {
r: 3
}
}
```
Set if null data point will be connected or not.
If `true` set, the region of null data will be connected without any data point. If `false` set, the region of null data will not be connected and get empty. `false`
```
line: {
connectNull: true
}
```
Change step type for step chart.
step, step-before and step-after can be used.
`'step'`
```
line: {
step: {
type: 'step-after'
}
}
```
Set if the area chart is zero-based.

`true`

```
area: {
zerobased: false
}
```
Change the width of bar chart.
`auto`
```
bar: {
width: 10
}
```
Change the width of bar chart by ratio.
`0.6`
```
bar: {
width: {
ratio: 0.2
}
}
```
Set if the bar chart is zero-based.

`true`

```
bar: {
zerobased: false
}
```
Show or hide the label on each pie piece.

`true`

```
pie: {
label: {
show: false
}
}
```
Set formatter for the label on each pie piece.
`undefined`
Set the threshold to show/hide labels.

`0.05`

```
pie: {
label: {
threshold: 0.1
}
}
```
Enable or disable expanding pie pieces.
`true`
```
pie: {
expand: false
}
```
Set formatter for the label on each donut piece.
`undefined`
Set the threshold to show/hide labels.

`0.05`

```
donut: {
label: {
threshold: 0.1
}
}
```
Enable or disable expanding donut pieces.
`true`
```
donut: {
expand: false
}
```
Set width of donut chart.
`auto`
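For example, to set the donut's arc thickness in pixels (the value is illustrative):

```
donut: {
    width: 30
}
```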
Set title of donut chart.
`''`
```
donut: {
title: 'Title'
}
```
Show or hide label on gauge.
`true`
Set formatter for the label on gauge.
`undefined`
```
gauge: {
label: {
format: function (value, ratio) {
return value;
}
}
}
```
Enable or disable expanding gauge.
`true`
```
gauge: {
expand: false
}
```
Set min value of the gauge.
`0`
```
gauge: {
min: -100
}
```
Set max value of the gauge.
`100`
```
gauge: {
max: 200
}
```
Set units of the gauge.
`undefined`
```
gauge: {
units: ' %'
}
```
Set width of gauge chart.
`auto`
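For example, to set the gauge's arc thickness in pixels (the value is illustrative):

```
gauge: {
    width: 39
}
```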
Set type of curve interpolation.
`cardinal`
Available interpolation types are:

linear, linear-closed, basis, basis-open, basis-closed, bundle, cardinal `[default]`, cardinal-open, cardinal-closed, monotone, step, step-before, step-after
```
interpolation: {
type: "monotone"
}
```
Change the minimum value of the stanford color scale.
`auto`
```
stanford: {
scaleMin: 1
}
```
Change the maximum value of the stanford color scale.
`auto`
```
stanford: {
scaleMax: 10000
}
```
Change the width of the stanford color scale.
`20`
```
stanford: {
scaleWidth: 20
}
```
Set formatter for stanford color scale axis tick text.
This option accepts the string 'pow10', a d3.format object and any function you define.
`d3.format("d")` - decimal notation, rounded to integer
```
stanford: {
scaleFormat: 'pow10'
// or d3.format("d")
// or a function
}
```
Set the values for stanford color scale axis tick text.
This option accepts a function that returns an array of numbers.
`undefined`
```
stanford: {
scaleValues: (minValue, maxValue) => {
const step = (maxValue - minValue) / 10;
return d3.range(minValue, maxValue + step, step);
}
}
```
Set the color interpolator for stanford color scale.
This option is a d3.interpolate* object or any function you define that receives a value between 0 and 1, and returns a color as string.
```
d3.interpolateHslLong(d3.hsl(250, 1, 0.5), d3.hsl(0, 1, 0.5))
```
```
stanford: {
colors: d3.interpolatePlasma
}
```
Set the padding for the stanford color scale.
This option accepts `array` including `object` that has top, right, bottom and left. `undefined`
```
stanford: {
padding: {
top: 15,
right: 0,
bottom: 0,
left: 0
}
}
```
Show text anywhere inside the chart.
This option accepts `array` including `object` that has x, y, content and class. The key class is optional.
x and y are the starting position of the text, content is the text content to show. If class is set, the text element will have it as class.
`[]`
```
stanford: {
texts: [
{x: 1, y: 4, content: 'my custom text here', class: 'text-1-4'}
]
}
```
Show lines anywhere inside the chart.
This option accepts `array` including `object` that has value_x1, value_y1, value_x2, value_y2 and class. The key class is optional.
value_x1 and value_y1 are the starting position of the line, value_x2 and value_y2 are the ending position of the line. If class is set, the line element will have it as class.
`[]`
```
stanford: {
lines: [
{value_x1: 0, value_y1: 0, value_x2: 65, value_y2: 65, class: "line-0-65"}
]
}
```
Show regions anywhere inside the chart.
This option accepts `array` including `object` that has points, text, opacity and class. The keys text, opacity and class are optional. points accepts `array` including `object` that has x and y that represent the coordinates of each point. text accepts `function` that returns a `string` with the text to show. If the current chart type is stanford the function receives value and percentage as parameters that represent the number of points in this region.
opacity accepts a number between 0 and 1, the default opacity is 0.2.
If class is set, the line element will have it as class.
Points should be added in a counter-clockwise direction to close the polygon.
`[]`
```
stanford: {
regions: [
{
points: [ // add points counter-clockwise
{x: 0, y: 0},
{x: 40, y: 40},
{x: 0, y: 40}
],
text: function (value, percentage) {
return "Normal Operations: " + value + " (" + percentage + "%)";
},
opacity: 0.2, // 0 to 1
class: "region-triangle-1"
}
]
}
```
This API highlights specified targets and fades out the others.
You can specify multiple targets by giving an array that includes id as `String` . If no argument is given, all of targets will be highlighted. `.focus(targetIds)` targetIds `String` or `Array`
Target ids to be highlighted.
```
// data1 will be highlighted and the others will be faded out
chart.focus('data1');
// data1 and data2 will be highlighted and the others will be faded out
chart.focus(['data1', 'data2']);
// all targets will be highlighted
chart.focus();
```
This API fades out specified targets and reverts the others.
You can specify multiple targets by giving an array that includes id as `String` . If no argument is given, all of targets will be faded out. `.defocus(targetIds)` targetIds `String` or `Array`
Target ids to be faded out.
```
// data1 will be faded out and the others will be reverted.
chart.defocus('data1');
// data1 and data2 will be faded out and the others will be reverted.
chart.defocus(['data1', 'data2']);
// all targets will be faded out.
chart.defocus();
```
This API reverts specified targets.
You can specify multiple targets by giving an array that includes id as `String` . If no argument is given, all of targets will be reverted. `.revert(targetIds)` targetIds `String` or `Array`
Target ids to be reverted.
```
// data1 will be reverted.
chart.revert('data1');
// data1 and data2 will be reverted.
chart.revert(['data1', 'data2']);
// all targets will be reverted.
chart.revert();
```
This API shows specified targets.

You can specify multiple targets by giving an array that includes id as `String`. If no argument is given, all of the targets will be shown.

```
.show(targetIds, options)
```
```
// data1 will be shown.
chart.show('data1');
// data1 and data2 will be shown.
chart.show(['data1', 'data2']);
// all targets will be shown.
chart.show();
```
This API hides specified targets.

You can specify multiple targets by giving an array that includes id as `String`. If no argument is given, all of the targets will be hidden.

```
.hide(targetIds, options)
```
```
// data1 will be hidden.
chart.hide('data1');
// data1 and data2 will be hidden.
chart.hide(['data1', 'data2']);
// all targets will be hidden.
chart.hide();
```
This API toggles (shows or hides) specified targets.
You can specify multiple targets by giving an array that includes id as `String`. If no argument is given, all of the targets will be toggled.
```
.toggle(targetIds, options)
```
```
// data1 will be toggled.
chart.toggle('data1');
// data1 and data2 will be toggled.
chart.toggle(['data1', 'data2']);
// all targets will be toggled.
chart.toggle();
// data1 will be toggled together with its legend.
chart.toggle('data1', {withLegend: true});
```
Load data to the chart.
`.load(args)` args `Object`
If url, json, rows and columns given, the data will be loaded. If data that has the same target id is given, the chart will be updated. Otherwise, new target will be added.
If classes given, the classes specified by data.classes will be updated. classes must be `Object` that has target id as keys. If categories given, the categories specified by axis.x.categories or data.x will be updated. categories must be `Array` . If axes given, the axes specified by data.axes will be updated. axes must be `Object` that has target id as keys. If colors given, the colors specified by data.colors will be updated. colors must be `Object` that has target id as keys. If type or types given, the type of targets will be updated. type must be `String` and types must be `Object` . If unload given, data will be unloaded before loading new data. If `true` given, all of data will be unloaded. If target ids given as `String` or `Array` , specified targets will be unloaded.
If done is given, the specified function will be called after the data is loaded.
unload should be used if some data needs to be unloaded simultaneously. If you call the unload API soon after/before load instead of using the unload param, the chart will not be rendered properly because the animation will be cancelled.
```
// Load data1 and unload data2 and data3
chart.load({
columns: [
['data1', 100, 200, 150, ...],
...
],
unload: ['data2', 'data3']
});
```
Unload data to the chart.
`.unload(args)` args `Object`

If ids is given, the data that has the specified target id will be unloaded. ids should be `String` or `Array`. If ids is not specified, all data will be unloaded.
If done is given, the specified function will be called after the data is unloaded.
If you call the load API soon after/before unload, the unload param of load should be used. Otherwise the chart will not be rendered properly because the animation will be cancelled.
```
// Unload data2 and data3
chart.unload({
ids: ['data2', 'data3']
});
```
Flow data to the chart.
By this API, you can append new data points to the chart.
`.flow(args)` args `Object`
If json, rows and columns given, the data will be loaded. If data that has the same target id is given, the chart will be appended. Otherwise, new target will be added. One of these is required when calling. If json specified, keys is required as well as data.json
If to is given, the lower x edge will move to that point. If not given, the lower x edge will move by the number of given data points.
If length is given, the lower x edge will move by the number of this argument.
If duration is given, the duration of the transition will be specified value. If not given, transition.duration will be used as default.
If done is given, the specified function will be called when flow ends.
```
// 2 data points will be appended to the tail and popped from the head.
// After that, 4 data points will be appended and no data points will be popped.
chart.flow({
columns: [
['x', '2013-01-11', '2013-01-21'],
['data1', 500, 200],
['data2', 100, 300],
['data3', 200, 120]
],
done: function () {
chart.flow({
columns: [
['x', '2013-02-11', '2013-02-12', '2013-02-13', '2013-02-14'],
['data1', 200, 300, 100, 250],
['data2', 100, 90, 40, 120],
['data3', 100, 100, 300, 500]
],
length: 0
});
}
});
```
Change data point state to selected.
By this API, you can select data points. To use this API, data.selection.enabled needs to be set `true` .
```
.select(ids, indices, resetOthers)
```
ids `Array`
Specify target ids to be selected. If this argument is not given, all targets will be the candidate.
indices `Array`
Specify indices to be selected. If this argument is not given, all data points will be the candidate.
resetOthers `boolean` If this argument is set `true` , the data points that are not specified by ids, indices will be unselected.
```
// all data points of data1 will be selected.
chart.select(['data1']);
// 3 data points on index 1, 3, 5 of data1 will be selected.
chart.select(['data1'], [1,3,5]);
```
Change data point state to unselected.
By this API, you can unselect data points. To use this API, data.selection.enabled needs to be set `true` .
```
.unselect(ids, indices)
```
ids `Array`
Specify target ids to be unselected. If this argument is not given, all targets will be the candidate.
indices `Array`
Specify indices to be unselected. If this argument is not given, all data points will be the candidate.
```
// all data points of data1 will be unselected.
chart.unselect(['data1']);
// 3 data points on index 1, 3, 5 of data1 will be unselected.
chart.unselect(['data1'], [1,3,5]);
```
Get selected data points.
By this API, you can get selected data points information. To use this API, data.selection.enabled needs to be set `true` . `.selected(targetId)` targetId `String`
You can filter the result by giving target id that you want to get. If not given, all of data points will be returned.
```
// all selected data points will be returned.
chart.selected();
// all selected data points of data1 will be returned.
chart.selected('data1');
```
Change the type of the chart.
```
.transform(type, targetIds)
```
type `String`
Specify the type to be transformed. The types listed in data.type can be used.
targetIds `String` or `Array`
Specify targets to be transformed. If not given, all targets will be the candidate.
```
// all targets will be bar chart.
chart.transform('bar');
// only data1 will be bar chart.
chart.transform('bar', 'data1');
// only data1 and data2 will be bar chart.
chart.transform('bar', ['data1', 'data2']);
```
Update groups for the targets.
`.groups(groups)` groups `Array` This argument needs to be an `Array` that includes one or more `Array` that includes target ids to be grouped.
```
// data1 and data2 will be a new group.
chart.groups([['data1', 'data2']]);
```
Update x grid lines.

`.xgrids(grids)` grids `Array`

X grid lines will be replaced with this argument. The format of this argument is the same as grid.x.lines.

```
// Show 2 x grid lines
chart.xgrids([
{value: 1, text:'Label 1'},
{value: 4, text: 'Label 4'}
]);
```
Add x grid lines.
```
// Add a new x grid line
chart.xgrids.add(
{value: 4, text: 'Label 4'}
);
// Add new x grid lines
chart.xgrids.add([
{value: 2, text: 'Label 2'},
{value: 4, text: 'Label 4'}
]);
```
Remove x grid lines.
```
// x grid line on x = 2 will be removed
chart.xgrids.remove({value: 2});
// all of x grid lines will be removed
chart.xgrids.remove();
```
Update y grid lines.

`.ygrids(grids)` grids `Array`

Y grid lines will be replaced with this argument. The format of this argument is the same as grid.y.lines.

```
// Show 2 y grid lines
chart.ygrids([
{value: 100, text:'Label 1'},
{value: 400, text: 'Label 4'}
]);
```
Add y grid lines.
```
// Add a new y grid line
chart.ygrids.add(
{value: 400, text: 'Label 4'}
);
// Add new y grid lines
chart.ygrids.add([
{value: 200, text: 'Label 2'},
{value: 400, text: 'Label 4'}
]);
```
Remove y grid lines.
```
// y grid line on y = 200 will be removed
chart.ygrids.remove({value: 200});
// all of y grid lines will be removed
chart.ygrids.remove();
```
Update regions.
`.regions(regions)` regions `Array`
Regions will be replaced with this argument. The format of this argument is the same as regions.
Add new region.
This API adds new regions instead of replacing the existing ones, unlike .regions.
```
.regions.add(regions)
```
regions `Array` or `Object` New region will be added. The format of this argument is the same as regions and it's possible to give an `Object` if only one region will be added.
```
// Add a new region
chart.regions.add(
{axis: 'x', start: 5, class: 'regionX'}
);
```
Remove regions.
This API removes regions.
```
.regions.remove(args)
```
args `Object`
This argument should include classes. If classes is given, the regions that have one of the specified classes will be removed. If args is not given, all of regions will be removed.
```
// regions that have 'region-A' or 'region-B' will be removed.
chart.regions.remove({classes: ['region-A', 'region-B']});
// all of regions will be removed.
chart.regions.remove();
```
Get data loaded in the chart.
`.data(targetIds)` targetIds `String` or `Array`
If this argument is given, this API returns the specified target data. If this argument is not given, all of data will be returned.
```
// Get only data1 data
chart.data('data1');
// Get data1 and data2 data
chart.data(['data1', 'data2']);
// Get all data
chart.data();
```
Get data shown in the chart.
```
.data.shown(targetIds)
```
targetIds `String` or `Array`
If this argument is given, this API filters the data with specified target ids. If this argument is not given, all shown data will be returned.
```
// Get shown data by filtering to include only data1 data
chart.data.shown('data1');
// Get shown data by filtering to include data1 and data2 data
chart.data.shown(['data1', 'data2']);
// Get all shown data
chart.data.shown();
```
Get values of the data loaded in the chart.
```
.data.values(targetId)
```
targetId `String` This API returns the values of the specified target. If this argument is not given, `null` will be returned.
```
// Get data1 values
chart.data.values('data1');
```
Get and set names of the data loaded in the chart.

`.data.names(names)` names `Object`

If this argument is given, the names of data will be updated. If not given, the current names will be returned. The format of this argument is the same as data.names.

```
// Get current names
chart.data.names();
// Update names
chart.data.names({
data1: 'New Name 1',
data2: 'New Name 2'
});
```
Get and set colors of the data loaded in the chart.
`.data.colors(colors)` colors `Object`
If this argument is given, the colors of data will be updated. If not given, the current colors will be returned. The format of this argument is the same as data.colors.
```
// Get current colors
chart.data.colors();
// Update colors
chart.data.colors({
data1: '#FFFFFF',
data2: '#000000'
});
```
Get and set axes of the data loaded in the chart.

`.data.axes(axes)` axes `Object`

If this argument is given, the axes of data will be updated. If not given, the current axes will be returned. The format of this argument is the same as data.axes.

```
// Get current axes
chart.data.axes();
// Update axes
chart.data.axes({
data1: 'y',
data2: 'y2'
});
```
Get and set x values for the chart.
`.x(x)` x `Array` If x is given, x values of every target will be updated. If no argument is given, current x values will be returned as an `Object` whose keys are the target ids.
```
// Get current x values
chart.x();
// Update x values for all targets
chart.x([100, 200, 300, 400, ...]);
```
Get and set x values for the chart.
`.xs(xs)` xs `Object` If xs is given, specified target's x values will be updated. If no argument is given, current x values will be returned as an `Object` whose keys are the target ids.
```
// Get current x values
chart.xs();
// Update x values for all targets
chart.xs({
data1: [10, 20, 30, 40, ...],
data2: [100, 200, 300, 400, ...]
});
```
Get and set axis labels.
`.axis.labels(labels)` labels `Object`
If labels is given, specified axis' label will be updated.
```
// Update axis' label
chart.axis.labels({
x: 'New X Axis Label',
y: 'New Y Axis Label'
});
```
Get and set the axis min value.

`.axis.min(min)` min `Object`

If min is given, the specified axis' min value will be updated. If no argument is given, the current min values for each axis will be returned.

```
// Update axis' min
chart.axis.min({
x: -10,
y: 1000,
y2: 100
});
```
Get and set the axis max value.

`.axis.max(max)` max `Object`

If max is given, the specified axis' max value will be updated. If no argument is given, the current max values for each axis will be returned.

```
// Update axis' max
chart.axis.max({
x: 100,
y: 1000,
y2: 10000
});
```
Get and set the axis min and max values.

`.axis.range(range)` range `Object`

If range is given, the specified axis' min and max values will be updated. If no argument is given, the current min and max values for each axis will be returned.

```
// Update axis' min and max values
chart.axis.range({
min: {
x: -10,
y: -1000,
y2: -10000
},
max: {
x: 100,
y: 1000,
y2: 10000
}
});
```
Get and set axis y/y2 types.
`.axis.types(types)` types `Object`
If types is given, specified axis' type value will be updated. If no argument is given, the current types for y/y2 axis will be returned.
```
// Update axis' types
chart.axis.types({
y: 'linear',
y2: 'log'
});
```
Show legend for each target.
```
.legend.show(targetIds)
```
```
// Show legend for data1.
chart.legend.show('data1');
// Show legend for data1 and data2.
chart.legend.show(['data1', 'data2']);
// Show all legend.
chart.legend.show();
```
Hide legend for each target.
```
.legend.hide(targetIds)
```
```
// Hide legend for data1.
chart.legend.hide('data1');
// Hide legend for data1 and data2.
chart.legend.hide(['data1', 'data2']);
// Hide all legend.
chart.legend.hide();
```
Returns `true` if the sub chart is shown.
`.subchart.isShown()`
```
if (chart.subchart.isShown()) {
// Sub chart is shown
}
```
Shows sub chart at the bottom of the chart.
```
// Show sub chart
chart.subchart.show();
```
Hides sub chart.
```
// Hide sub chart
chart.subchart.hide();
```
Zoom by giving x domain.
`.zoom(domain)` domain `Array`
If domain is given, the chart will be zoomed to the given domain. If no argument is given, the current zoomed domain will be returned.
```
// Zoom to specified domain
chart.zoom([10, 20]);
// Get the current zoomed domain
chart.zoom();
```
Unzoom to the original domain.
`.unzoom()`
```
// Unzoom to the original domain
chart.unzoom();
```
Enable and disable zooming.
```
.zoom.enable(enabled)
```
enabled `Boolean` If enabled is `true`, the feature of zooming will be enabled. If `false` is given, it will be disabled.
```
// Enable zooming
chart.zoom.enable(true);
```
Resize the chart.
`.resize(size)` size `Object`
This argument should include width and height in pixels.
```
// Resize to width 480, height 640
chart.resize({
height: 640,
width: 480
});
```
Force to redraw.
`.flush()`
```
// Force to redraw
chart.flush();
```
Reset the chart object and remove element and events completely.
`.destroy()`
```
// Destroy the chart
chart.destroy();
// If you have a reference to the chart make sure to call destroy in the following manner
chart = chart.destroy();
``` |
spacchetti | readthedoc | Unknown | Spacchetti Documentation
<NAME>
Dec 05, 2018
Contents

1.1 Introduction to Psc-Package
1.2 Why/How Dhall?
1.3 How to use this package set
1.4 Project-Local setup
1.5 Manual setup
1.6 FAQ
This is a guide for the Package Set project Spacchetti, which provides a way to work with package definitions for Psc-Package using the Dhall programming language. This guide will also try to guide you through some of the details of how Psc-Package itself works, and some details about the setup of this project and how to use Dhall.

Note: If there is a topic you would like more help with that is not in this guide, open an issue in the Github repo to request it.
CHAPTER 1

Pages

1.1 Introduction to Psc-Package

1.1.1 What is Psc-Package?
Psc-Package is a package manager for PureScript that works essentially by running a bunch of git commands. Its distinguishing feature from most package managers is that it uses a package set.
1.1.2 What is a Package Set?
Many users trying to rush into using Psc-Package don't slow down enough to learn what package sets are. A package set is a set of packages that contains only one entry for a given package. This means that

• Whichever package you want to install must be in the package set
• The dependencies and the transitive dependencies of the package you want to install must be in the package set

Package sets are defined in packages.json in the root of any package set repository, like in https://github.com/justinwoo/spacchetti/blob/master/packages.json.
1.1.3 How are package sets used?
Package sets are consumed by having a psc-package.json file in the root of your project, where the contents are like below:
{
  "name": "project-name",
  "set": "set-name",
  "source": "https://github.com/justinwoo/spacchetti.git",
  "depends": [
    "aff",
    "console",
    "prelude"
  ]
}
So the way this file works is that

• "set" matches the tag or branch of the git repository of the package set
• "source" is the URL for the git repository of the package set
• "depends" is an array of strings, where the strings are names of packages you depend on

When you run psc-package install, psc-package will perform the steps so that the following directory has the package set cloned to it:

.psc-package/set-name/.set

And the package set will be available in

.psc-package/set-name/.set/packages.json

When you install a package in your given package set, the directory structure will be used, such that if you install aff from your package set at version v5.0.0, you will have the contents of that package in the directory

.psc-package/set-name/aff/v5.0.0

Once you understand these three sections, you'll be able to solve any problems you run into with Psc-Package.
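To make that lifecycle concrete, a minimal shell walkthrough (the set name, package, and version are the illustrative values from above, not fixed names):

# fetch the package set and install everything listed in "depends"
psc-package install
# the package set itself is cloned once per set name
cat .psc-package/set-name/.set/packages.json
# each installed package lands under its own version directory
ls .psc-package/set-name/aff/v5.0.0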
1.2 Why/How Dhall?

Dhall is a programming language that guarantees termination. Its most useful characteristics for uses in this project are

• Static typing with correct inference: unlike the packages.json file, we have the compiler check that we correctly define packages
• Functions: we can use functions to create simple functions for defining packages
• Local and remote path importing: we can use this to mix and match local and remote sources as necessary to build package sets
• Typed records with directed merging: we can use this to split definitions into multiple groupings and apply patching of existing packages as needed

Let's look at the individual parts for how this helps us make a package set.
1.2.1 Files

The files in this package set are prepared as such:

-- Package type definition
src/Package.dhall
-- function to define packages
src/mkPackage.dhall
-- packages to be included when building package set
src/packages.dhall
-- package "groups" where packages are defined in records
src/groups/[...].dhall

Package.dhall

This contains the simple type that is the definition of a package:
{ dependencies : List Text, repo : Text, version : Text }
So a given package has a list of dependencies, the git url for the repository, and the tag or branch that it can be pulled from.
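For instance, a value of this type looks like the following (the package shown is a hypothetical illustration, not an entry copied from the set):

{ dependencies = [ "console", "prelude" ]
, repo = "https://github.com/user/purescript-example.git"
, version = "v1.0.0"
}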
mkPackage.dhall

This contains a function for creating Package values easily:

λ(dependencies : List Text)
→ λ(repo : Text)
→ λ(version : Text)
→ { dependencies = dependencies, repo = repo, version = version }
  : ./Package.dhall

While this function is unfortunately stringly typed, this still lets us conveniently define packages without having to clutter the file with record definitions.
packages.dhall

This is the main file used to generate packages.json, and is defined by taking package definitions from the groups and joining them with a right-sided merge:

./groups/purescript.dhall
⫽ ./groups/purescript-contrib.dhall
⫽ ./groups/purescript-web.dhall
⫽ ./groups/purescript-node.dhall
-- ...
⫽ ./groups/justinwoo.dhall
⫽ ./groups/patches.dhall

1.2.2 Definitions and overrides

As patches.dhall is last, its definitions override any existing definitions. For example, you can put an override for an existing definition of string-parsers with such a definition:
let mkPackage = ./../mkPackage.dhall

in  { string-parsers =
        mkPackage
          [ "arrays"
          , "bifunctors"
          , "control"
          , "either"
          , "foldable-traversable"
          , "lists"
          , "maybe"
          , "prelude"
          , "strings"
          , "tailrec"
          ]
          "https://github.com/justinwoo/purescript-string-parsers.git"
          "no-code-points"
    }
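The override pattern above relies on how Dhall's record merge works: ⫽ (also written //) is a shallow, right-biased merge, so fields on the right replace fields of the same name on the left. A tiny stand-alone example (not taken from the package set):

{ a = 1, b = 2 } ⫽ { b = 3 }
-- evaluates to { a = 1, b = 3 }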
1.2.3 Video

I recorded a demo video of how adding a package to Spacchetti works: https://www.youtube.com/watch?v=4Rh-BY-7sMI

1.3 How to use this package set

1.3.1 Requirements

This project requires that you have at least:

• Linux/OSX. I do not support Windows. You will probably be able to do everything using WSL, but I will not support any issues (you will probably barely run into any with WSL). I also assume your distro is from the last decade; any distributions older than 2008 are not supported.
• Dhall-Haskell and Dhall-JSON installed. You can probably install them from Nix or from source.
• Psc-Package installed, with the release binary in your PATH in some way.
• jq installed.
1.3.2 How to generate the package set after editing Dhall files

First, test that you can actually run make:

> make
./format.sh
formatted dhall files
./generate.sh
generated to packages.json
./validate.pl
validated packages' dependencies

This is how you format Dhall files in the project, generate the packages.json that needs to be checked in, and validate that all dependencies declared in package definitions are at least valid. Unless you plan to consume only the packages.dhall file in your repository, you must check in packages.json.
To actually use your new package set, you must create a new git tag and push it to your fork of spacchetti. Then set your package set in your project repository accordingly, per EXAMPLE:
{
  "name": "EXAMPLE",
  "set": "160618", // GIT TAG NAME
  "source": "https://github.com/justinwoo/spacchetti.git", // PACKAGE SET REPO URL
  "depends": [
    "console",
    "prelude"
  ]
}

When you set this up correctly, you will see that running psc-package install will create the file .psc-package/{GIT TAG NAME HERE}/.set/packages.json.
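The tag-and-push step itself is plain git; for example (the date-style tag name is illustrative):

git add packages.json src/
git commit -m "update package set"
git tag 160618
git push origin master --tags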
1.3.3 Testing changes to package set

To set up a test project, run make setup. Then you can test individual packages with psc-package verify PACKAGE.
1.3.4 Using Perl scripts in this repository

You will only need the following scripts:

• verify.pl - to install a given package and validate the entire compiled output.
• from-bower.pl - to add/update a package that is registered on Bower.

These each take an argument of a package, e.g. ./from-bower.pl behaviors.
1.4 Project-Local setup There’s now a CLI for the repetitive boilerplate generation and task running parts here: https://github.com/justinwoo/
spacchetti-cli See the example repo here: https://github.com/justinwoo/spacchetti-local-setup-example In psc-package, there is nothing like “extra-deps” from Stack. Even though editing a package set isn’t hard, it can be fairly meaningless to have a package set that differs from package sets that you use for your other projects. While there’s no real convenient way to work with it with standard purescript/package-sets, this is made easy with Dhall again where you can define a packages.dhall file in your repo and refer to remote sources for mkPackage and some existing packages.dhall.
1.4.1 With the CLI With the Spacchetti CLI, you can automate the manual setup below and run a single command to update your package set.
To use the CLI, you will first need fulfill the requirements.
Then, install the Spacchetti CLI in a manner you prefer:
• npm: you can use npm install --global spacchetti-cli-bin-simple to install via npm.
• Github releases: You can go to the release page on Github, download the archive with your platform’s binary,
and put it somewhere on your PATH https://github.com/justinwoo/spacchetti-cli/releases
Spacchetti Documentation
• stack install: You can clone the repository and run stack install: https://github.com/justinwoo/
spacchetti-cli When you have installed the CLI, you can run spacchetti to be shown the help message:
Spacchetti CLI

Usage: spacchetti (local-setup | insdhall)

Available options:
  -h,--help                Show this help text

Available commands:
  local-setup              run project-local Spacchetti setup
  insdhall                 insdhall the local package set

Local setup

First, run the local-setup command to get the setup generated:

spacchetti local-setup

This will generate two files:

• packages.dhall: this is your local package set file, which will refer to the upstream package set and also assign an upstream variable you can use to modify your package set.
• psc-package.json: this is the normal psc-package file, with the change that it will refer to a "local" set.

Before you try to run anything else, make sure you run spacchetti insdhall.

InsDhall

Now you can run the ins-dhall-ation of your package set:

spacchetti insdhall

This will generate the package set JSON file from your package set and place it in the correct path that psc-package will be able to use. You can now use psc-package install and other normal psc-package commands.

Updating the local package set

For example, you may decide to use some different versions of packages defined in the package set. This can be achieved easily with record merge updates in Dhall:
let mkPackage =
      https://raw.githubusercontent.com/justinwoo/spacchetti/140918/src/mkPackage.dhall

in let upstream =
      https://raw.githubusercontent.com/justinwoo/spacchetti/140918/src/packages.dhall

in let overrides =
      { halogen =
          upstream.halogen ⫽ { version = "master" }
      , halogen-vdom =
          upstream.halogen-vdom ⫽ { version = "v4.0.0" }
      }

in upstream ⫽ overrides

If you have already fetched these packages, you will need to remove the .psc-package/ directory, but you can otherwise proceed.

Run the ins-dhall-ation one more time:

spacchetti insdhall

Now you can install the various dependencies you need by running psc-package install again, and you will have a locally patched package set you can work with without upstreaming your changes to a package set.
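Putting the refresh cycle together, assuming the set had already been fetched once:

rm -rf .psc-package
spacchetti insdhall
psc-package install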
You might still refer to the manual setup notes below to see how this works and how you might remove the Spacchetti CLI from your project workflow should you choose to.
With CI

You can install everything you need on CI using some kind of setup like the following. These examples come from vidtracker: https://github.com/justinwoo/vidtracker/tree/37c511ed82f209e0236147399e8a91999aaf754c

Azure
pool:
  vmImage: 'Ubuntu 16.04'

steps:
- script: |
    DHALL=https://github.com/dhall-lang/dhall-haskell/releases/download/1.17.0/dhall-1.17.0-x86_64-linux.tar.bz2
    DHALL_JSON=https://github.com/dhall-lang/dhall-json/releases/download/1.2.3/dhall-json-1.2.3-x86_64-linux.tar.bz2
    wget -O $HOME/dhall.tar.gz $DHALL
    wget -O $HOME/dhall-json.tar.gz $DHALL_JSON
    tar -xvf $HOME/dhall.tar.gz -C $HOME/
    tar -xvf $HOME/dhall-json.tar.gz -C $HOME/
    chmod a+x $HOME/bin
    npm set prefix ~/.npm
    npm i -g purescript psc-package-bin-simple spacchetti-cli-bin-simple
  displayName: 'Install deps'
- script: |
    export PATH=~/.npm/bin:./bin:$HOME/bin:$PATH
    which spacchetti
    which dhall
    which dhall-to-json
    make
  displayName: 'Make'
Travis

language: node_js
sudo: required
dist: trusty
node_js: stable
env:
  - PATH=./bin:$HOME/bin:$PATH
install:
  - DHALL=https://github.com/dhall-lang/dhall-haskell/releases/download/1.17.0/dhall-1.17.0-x86_64-linux.tar.bz2
  - DHALL_JSON=https://github.com/dhall-lang/dhall-json/releases/download/1.2.3/dhall-json-1.2.3-x86_64-linux.tar.bz2
  - SPACCHETTI=https://github.com/justinwoo/spacchetti-cli/releases/download/0.2.0.0/linux.tar.gz
  - wget -O $HOME/dhall.tar.gz $DHALL
  - wget -O $HOME/dhall-json.tar.gz $DHALL_JSON
  - wget -O $HOME/spacchetti.tar.gz $SPACCHETTI
  - tar -xvf $HOME/dhall.tar.gz -C $HOME/
  - tar -xvf $HOME/dhall-json.tar.gz -C $HOME/
  - tar -xvf $HOME/spacchetti.tar.gz -C $HOME/bin
  - chmod a+x $HOME/bin
  - npm install -g purescript pulp psc-package-bin-simple
script:
  - which dhall
  - which dhall-to-json
  - which spacchetti
  - make

1.4.2 Manual setup

See the moved notes here.

1.5 Manual setup

1.5.1 packages.dhall

For example, we could patch typelevel-prelude locally in such a way in a project-local packages.dhall file:
let mkPackage =
      https://raw.githubusercontent.com/justinwoo/spacchetti/190618/src/mkPackage.dhall

in let overrides =
      { typelevel-prelude =
          mkPackage
            [ "proxy", "prelude", "type-equality" ]
            "https://github.com/justinwoo/purescript-typelevel-prelude.git"
            "prim-boolean"
      }

in https://raw.githubusercontent.com/justinwoo/spacchetti/190618/src/packages.dhall ⫽ overrides

1.5.2 psc-package.json

Then we need a psc-package.json file, but we will stub the package set information:
{
  "name": "my-project",
  "set": "local",
  "source": "",
  "depends": [
    "console",
    "effect",
    "prelude",
    "typelevel-prelude"
  ]
}
1.5.3 insdhall.sh

Finally, we will need to create the Psc-Package files and insert our locally generated package set:

NAME='local'
TARGET=.psc-package/$NAME/.set/packages.json
mkdir -p .psc-package/$NAME/.set
dhall-to-json --pretty <<< './packages.dhall' > $TARGET
echo wrote packages.json to $TARGET

Once we run this script, we will be able to use psc-package install and get to work.
1.6 FAQ

1.6.1 What is Spacchetti?

This is a guide for the Package Set project Spacchetti, which provides a way to work with package definitions for Psc-Package using the Dhall programming language. This guide will also try to guide you through some of the details of how Psc-Package itself works, and some details about the setup of this project and how to use Dhall.

It's a package set for psc-package that uses a language that almost acts like SASS for JSON/YAML, but has types and much more.

1.6.2 Why should I use Spacchetti over normal Psc-Package?

First, make sure to read the short explanation of Psc-Package: https://spacchetti.readthedocs.io/en/latest/intro.html

Then read the explanation of why and how Dhall is used: https://spacchetti.readthedocs.io/en/latest/why-dhall.html

In short, because package sets are annoying to edit when they're only in JSON form, but using Dhall can make working with this information much easier.

1.6.3 Does Spacchetti CLI replace Psc-Package?

No, Spacchetti CLI only does some simple tasks that generate files and use Dhall to prepare Psc-Package package sets. There are no overlapping commands with Psc-Package.
designsize | cran | R | Package ‘designsize’
October 13, 2022
Title Sample Size Calculation of Various Study Designs
Type Package
Version 0.1.0
Date 2021-10-03
Description Different sample size calculations with different study designs.
These techniques are explained by Chow (2007)
<doi:10.1201/9781584889830>.
ByteCompile Yes
License GPL-3
Encoding UTF-8
Depends R (>= 3.5.0)
Imports stats
Maintainer <NAME> <<EMAIL>>
RoxygenNote 7.1.2
NeedsCompilation no
Repository CRAN
Date/Publication 2021-10-12 09:00:04 UTC
Author <NAME> [aut, cre, ctb],
<NAME> [aut, ctb],
<NAME> [aut, ctb],
<NAME> [aut, ctb]
R topics documented:
ABdesign
crsize
crt.match
crt.unmatch
expsize
phsize
precsize
prsize
ABdesign Sample size determination for A + B escalation design without dose de-escalation

Description

Determination of sample size for each dose level using A + B escalation design without dose de-escalation
Usage
ABdesign(A, B, C, D, E, prop=c())
Arguments
A Number of patients at dose level i
B Number of patients added at dose level i when the number of patients out of A with DLTs is between C and D
C Predetermined number of patients out of A
D Predetermined number of patients out of A, with D >= C
E Predetermined number of patients out of A+B, with E >= D
prop Vector of DLT rates at the different dose levels
Details
Let there be "A" patients at dose level "i", and consider "C" and "D" as predetermined values, where D >= C.

If fewer than C patients out of A have DLTs, then we escalate the dose to level (i+1), and if more than D out of A have DLTs, then we take the previous dose level (i-1) as the MTD (the maximum dose level with toxicity rates occurring no more than a predetermined value).

If at least C and at most D patients have DLTs, then we add B more patients at the i-th dose level, and if more than E (E >= D) out of the (A+B) patients then have DLTs, we take the previous dose level as the MTD.

Now we determine the expected number of patients at the j-th dose level.

# prop = Vector of DLT rates at different dose levels
# n = Total number of doses
# N = Vector of expected numbers of patients at the different dose levels
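The expected sample size at each dose follows from the escalation probabilities, which have a standard closed form for A+B designs without de-escalation (cf. Lin and Shih, 2001); a sketch in that notation, with $p_i$ the DLT rate at dose $i$ (ABdesign's internals may differ in detail):

$$P_0^{(i)} = \sum_{k=0}^{C-1} \binom{A}{k} p_i^k (1-p_i)^{A-k}, \qquad
Q_0^{(i)} = \sum_{k=C}^{D} \binom{A}{k} p_i^k (1-p_i)^{A-k} \sum_{m=0}^{E-k} \binom{B}{m} p_i^m (1-p_i)^{B-m},$$

so the trial escalates past dose $i$ with probability $P_0^{(i)} + Q_0^{(i)}$.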
Value
The expected number of patients at dose levels
Author(s)
<NAME>, <NAME> ,<NAME> and <NAME>
See Also
crt.match crt.unmatch phsize precsize
Examples
# This is A+B escalation design without dose de-escalation. Here A = 3, B = 3 gives the
# number of patients at dose level i, and C = D = E = 1 are the predetermined numbers of
# patients with DLTs. prop is the vector of DLT rates at the different dose levels.
ABdesign(A = 3,B = 3,C = 1,D = 1,E = 1,prop= c(0.01,0.014,0.025,0.056,0.177,0.594,0.963))
crsize Sample size determination for crossover study design
Description
Determination of sample sizes for two factors of each group using one of the tests for equality,
non-inferiority/superiority or equivalence
Usage
crsize(type, delta, m, k, mur, mut, sigbr, sigbt, rho, sigwr, sigwt,
alpha, beta, r1, r2)
Arguments
type The three different types of tests are (1) test for equality, (2) test for non-inferiority/superiority, (3) test for equivalence, i.e. type = c("equal", "noninf.sup", "equiv")
delta Non-inferiority/Superiority margin
m Number of responses observed from each subject in each sequence under a fixed
treatment
k Ratio of the sample sizes of the two sequences
mur Mean value of reference therapy
mut Mean value of test therapy
sigbr Between-subject standard deviation due to the effect of reference therapy
sigbt Between-subject standard deviation due to the effect of test therapy
rho Correlation between reference and test therapy
sigwr Within-subject standard deviation due to the effect of reference therapy
sigwt Within-subject standard deviation due to the effect of test therapy
alpha Level of significance
beta The probability of type-II error
r1 Proportion of factor-1
r2 Proportion of factor-2
Details
Consider a 2x2m replicated crossover design for comparing mean responses of a test drug and a
reference drug. Under both treatments the design consists of two sequences with m subjects each.
Value
crsize returns the required sample sizes for each sequence and their factors in a 2x2 contingency
table.
Author(s)
<NAME>, <NAME> ,<NAME> and <NAME>
See Also
ABdesign crt.match crt.unmatch phsize precsize
Examples
# (a) Test for equality:
# This is a crossover design. The type = "equal" tests the equality of mean responses of
# a test drug (mut = 9) and a reference drug (mur = 8.5) and the number of responses are
# m = 4 observed from each subject in each sequence. k = 1 indicates the ratio of the
# sample sizes of the two sequences are equal. The between standard deviation due to the
# effect of reference therapy is sigbr = 1.5 and that of test therapy is 1.5. The corre-
# lation between reference and test therapy is rho = 0.7. The within standard deviation
# due to the effect of reference therapy is sigwr = 1 as well as test therapy is sigwt =
# 1. The alpha = 0.05 is level of significance and the probability of type - II error is
# beta = 0.10. The proportion of factor - 1 and factor - 2 are taken to be r1 = 0.5 and
# r2 = 0.5 respectively.
crsize(type= "equal", delta = 0.4, m = 4, k = 1, mur = 8.5, mut = 9, sigbr = 1.5,
sigbt = 1.5, rho = 0.7, sigwr = 1, sigwt = 1, alpha = 0.05, beta = 0.10,
r1 = 0.5, r2 = 0.5)
# (b) Test for non-inferiority/superiority:
# This is a crossover design. The type = "noninf.sup", tests whether the difference of
# mean responses of a test drug (mut = 9) and a reference drug (mur = 8.5) being greater
# than or equal to the marginal value delta = 0.4. The number of responses are m = 4,
# observed from each subject in each sequence. The value of k = 1 indicates the ratio of
# the sample sizes of the two sequences are equal. The between standard deviation due to
# the effect of reference therapy is sigbr = 1.5 and that of test therapy is 1.5. The
# correlation between reference and test therapy is rho = 0.7. The within standard devi-
# ation due to the effect of reference therapy is sigwr = 1, as well as test therapy is
# sigwt = 1. alpha = 0.05 is the level of significance and the probability of type-II
# error is beta = 0.10. The proportion of factor-1 (r1) and factor-2 (r2) both are taken
# to be 0.5.
crsize(type = "noninf.sup", delta = 0.4, m = 4, k = 1, mur = 8.5, mut = 9, sigbr = 1.5,
sigbt = 1.5, rho = 0.7, sigwr = 1, sigwt = 1, alpha = 0.05, beta = 0.10,
r1 = 0.5, r2 = 0.5)
#(c) Test for equivalence:
# This is a crossover design. The type = "equiv" tests whether the absolute value of the
# difference of mean responses of a test drug (mut = 9) and a reference drug (mur = 8.5)
# being less than or equal to the marginal value delta = 0.6. The number of responses
# are m = 4 observed from each subject in each sequence. k = 1, indicates that the ratio
# of the sample sizes of the two sequences are equal. The between standard deviation due
# to the effect of reference therapy is sigbr = 1.5 and that of test therapy is 1.5. The
# correlation between reference and test therapy is rho = 0.7. The within standard devi-
# ation due to the effect of reference therapy is sigwr = 1 as well as test therapy is
# sigwt = 1. alpha = 0.05 is the level of significance and the probability of type - II
# error is beta = 0.10. The proportion of factor-1 (r1) and factor-2 (r2) both are taken
# to be 0.5.
crsize(type = "equiv", delta = 0.6, m = 4, k = 1, mur = 8.5, mut = 9, sigbr = 1.5,
sigbt = 1.5,rho = 0.7, sigwr = 1, sigwt = 1, alpha = 0.05, beta = 0.10,
r1 = 0.5, r2 = 0.5)
crt.match Cluster number determination for cluster randomized trials (CRT), matched case

Description

Determine the number of clusters needed per group for matched cluster randomized trials
Usage
crt.match(type, mu1, mu2, alpha, beta, sig.w, sig.bm, m, k)
Arguments
type There are three types of comparison i.e. type = c("M", "P", "IR"), "M" stands
for comparison of means "P" stands for comparison of proportions "IR" stands
for comparison of incidence rates
mu1 The mean/proportion/incidence rate value of the 1st group
mu2 The mean/proportion/incidence rate value of the 2nd group
alpha Level of significance
beta The probability of type-II error
sig.w Standard deviation of within cluster
sig.bm Standard deviation of between cluster
m Number of subjects in each cluster (person-years in the case of incidence rates)
k Common value of the coefficient of variation for each group
Details
In cluster-randomized trials (CRTs), matching is a technique that can be used to improve covariate
balance. Matching protects against chance imbalances in baseline covariate distributions and is
thought to improve study credibility. Matching is also implemented to increase study power. Pairs
of similar clusters are formed and then one cluster from the pair is randomized to group 1 while the
other is assigned to group 2. Now we are going to determine the number of clusters in each group.
Value
crt.match returns a value indicating the number of clusters needed per group
Author(s)
<NAME>, <NAME> ,<NAME> and <NAME>
See Also
ABdesign crt.unmatch expsize phsize precsize prsize crsize
Examples
# (a) Comparison of means:
# This is a matched cluster randomized trial. The type = "M" indicates the comparison
# of means. The mean response of the test group is mu1 = 0.6 and that of the reference
# group is mu2 = 0.4. The within-cluster and between-cluster standard deviations are
# sig.w = 0.69 and sig.bm = 0.224 respectively, and m = 20 indicates the number of
# subjects in each cluster. alpha = 0.05 is the level of significance and the
# probability of type-II error is beta = 0.20.
crt.match(type="M",mu1=0.6,mu2=0.4,alpha=0.05,beta=0.20,sig.w=0.69,sig.bm=0.224,m=20)
# (b) Comparison of proportions:
# This is a matched cluster randomized trial. Here type = "P" indicates the test for
# comparison of proportions. The proportion in the test group is mu1 = 0.01 and that
# in the reference group is mu2 = 0.0075. The between-cluster standard deviation is
# sig.bm = 0.0075 and m = 2750 indicates the number of subjects in each cluster.
# alpha = 0.05 is the level of significance and the probability of type-II error is
# beta = 0.10.
crt.match(type="P",mu1=0.01,mu2=0.0075,alpha=0.05,beta=0.10,sig.bm=0.0075,m=2750)
# (c) Comparison of incidence rates:
# This is a matched cluster randomized trial. Here type = "IR" indicates the test for
# comparison of incidence rates. The incidence rate in the test group is mu1 = 4.5 and
# that in the reference group is mu2 = 3.6. A total of m = 50 person-years is
# considered, alpha = 0.05 is the level of significance, and the probability of
# type-II error is beta = 0.20. k = 0.3 indicates the common value of the coefficient
# of variation for each group.
crt.match(type="IR",mu1=4.5,mu2=3.6,alpha=0.05,beta=0.20,m=50,k=0.3)
crt.unmatch Cluster number determination for cluster randomized trials (CRT), unmatched case

Description

Determination of the number of clusters per group for unmatched cluster randomized trials
Usage
crt.unmatch(type, m, u1, u2, sigma1.B, sigma1.W, sigma2.B, sigma2.W,
rho1, rho2, alpha, beta)
Arguments
type There are three different types of comparison i.e. type = c("M", "P", "IR"), "M"
stands for comparison of means, "P" stands for comparison of proportions and
"IR" stands for comparison of incidence rates
m A common cluster size
u1 Mean(for M)/proportion(for P)/incidence rate(for IR) of group-1
u2 Mean(for M)/proportion(for P)/incidence rate(for IR) of group-2
sigma1.B Between-cluster standard deviation of group-1
sigma1.W Within-cluster standard deviation of group-1
sigma2.B Between-cluster standard deviation of group-2
sigma2.W Within-cluster standard deviation of group-2
rho1 Intra-cluster correlation coefficient(ICC) of group-1
rho2 Intra-cluster correlation coefficient(ICC) of group-2
alpha Level of significance
beta The probability of type-II error
Details
Instead of independent individuals, the unit of randomization in a cluster randomized trial (or group randomized trial) is a group of subjects. CRTs are generally more complex, and the investigators must consider the selection of the unit of randomization and the unit of inference, and whether matching or stratification should be used to improve treatment balance across clusters. It is also well known that CRTs need more subjects than individually randomized trials to be adequately powered.

In the unmatched case, no pair matching is used to control balance; simple randomization is generally used.
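In textbook form (a sketch in Chow-style notation; crt.unmatch's exact parameterization may differ), the number of clusters per group for comparing means is driven by the variance of a cluster mean, inflated by the design effect $1+(m-1)\rho_j$:

$$ k \;=\; \frac{\big(z_{1-\alpha/2}+z_{1-\beta}\big)^2 \Big[\sigma_1^2\big\{1+(m-1)\rho_1\big\} + \sigma_2^2\big\{1+(m-1)\rho_2\big\}\Big]}{m\,(u_1-u_2)^2}, \qquad \sigma_j^2 = \sigma_{jB}^2+\sigma_{jW}^2 . $$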
Value
crt.unmatch returns a value indicating the number of clusters needed per group.
Author(s)
<NAME>, <NAME> ,<NAME> and <NAME>
See Also
ABdesign crt.match expsize phsize precsize prsize crsize
Examples
# (a) Comparison of means:
# This is a cluster randomized trials for unmatched cases. The type = "M", indicates the
# comparison of means of two groups taking a common cluster size m = 20. The mean value
# of group-1 and group-2 is u1 = 0.5 and u2 = 0.3 respectively. For group-1, the between
# -cluster standard deviation is sigma1.B = 0.3 and within-cluster standard deviation is
# sigma1.W = 0.3. Similarly, for group-2 those values are sigma2.B=0.3 and sigma2.W=0.3.
# The intra-cluster correlation coefficient (ICC) of group - 1 is rho1 = 0.2 and that of
# group - 2 is rho2 = 0.2. The level of significance is alpha = 0.05 and the probability
# of type-II error is beta = 0.20.
crt.unmatch(type = "M", m = 20, u1 = 0.5, u2 = 0.3, sigma1.B = 0.3, sigma1.W = 0.3,
sigma2.B = 0.3, sigma2.W = 0.3, rho1 = 0.2, rho2 = 0.2,
alpha = 0.05, beta = 0.20)
# (b) Comparison of proportions:
# This is a cluster randomized trials for unmatched cases. The type = "P", indicates the
# comparison of proportions of two groups taking a common cluster size m = 20. The prop-
# ortion of group-1 and group-2 is u1 = 0.5 and u2 = 0.3 respectively. For group-1, the
# between-cluster standard deviation is sigma1.B = 0.3 and within-cluster standard devi-
# ation is sigma1.W = 0.3. Similarly, for group-2 the standard deviations are sigma2.B =
# 0.3 and sigma2.W = 0.3. The intra-cluster correlation coefficient(ICC) of group - 1 is
# rho1 = 0.2 and that of group-2 is rho2 = 0.2. The level of significance is alpha =0.05
# and the probability of type-II error is beta = 0.20.
crt.unmatch(type = "P", m = 20, u1 = 0.5, u2 = 0.3, sigma1.B = 0.3, sigma1.W = 0.3,
sigma2.B = 0.3, sigma2.W = 0.3, rho1 = 0.2, rho2 = 0.2,
alpha = 0.05, beta = 0.20)
# (c) Comparison of incidence rates:
# This is a cluster randomized trials for unmatched cases. The type = "IR" indicates the
# comparison of incidence rates of two groups taking a total of m = 20 person-years. The
# incidence rate of group-1 and group-2 is u1 = 0.5 and u2 = 0.3 respectively. For group
# -1, the between-cluster standard deviation is sigma1.B = 0.3 and within-cluster stand-
# ard deviation is sigma1.W = 0.3. Similarly, for group-2 the standard deviations are
# sigma2.B = 0.3 and sigma2.W = 0.3. The intra-cluster correlation coefficient (ICC) of
# group-1 is rho1 = 0.2 and that of group-2 is rho2 = 0.2. The level of significance is
# alpha = 0.05 and the probability of type-II error is beta = 0.20.
crt.unmatch(type = "IR", m = 20, u1 = 0.5, u2 = 0.3, sigma1.B = 0.3, sigma1.W = 0.3,
sigma2.B = 0.3, sigma2.W = 0.3, rho1 = 0.2, rho2 = 0.2,
alpha = 0.05, beta = 0.20)
expsize Sample size determination for survival data using exponential assumption

Description

Sample size determination for control drug and test drug for time to event outcome using exponential assumption
Usage
expsize(type, k, delta, lambda1, lambda2, sigma1, sigma2,
sigma.lambda, alpha, beta)
Arguments
type There are three different types of comparison tests: (1) test for equality, (2) test for non-inferiority/superiority, (3) test for equivalence, i.e. type = c("equal", "noninf.sup", "equiv")
k Ratio of sample sizes
delta The superiority or non-inferiority margin
lambda1 Hazard rate of the control drug
lambda2 Hazard rate of the test drug
sigma1 Variability in the hazard rate due to using control drug
sigma2 Variability in the hazard rate due to using test drug
sigma.lambda Variability in the hazard rate due to combination of control and test drug
alpha Level of significance
beta The probability of type-II error
Details
Our aim is to determine the sample size based on the hazard rates for median survival times between
control drug and test drug. Since the hazard function is constant for an exponential distribution, the
median survival time is determined by the hazard function. Moreover, comparing the hazard rates
between the treatment drugs is our hypothesis of interest.
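The equality test then reduces to a two-sample comparison of the estimated hazard rates; in the usual Chow-style form (a sketch, with k = n1/n2; expsize's internals may differ in detail):

$$ n_2 \;=\; \frac{\big(z_{1-\alpha/2}+z_{1-\beta}\big)^2\big(\sigma_1^2/k+\sigma_2^2\big)}{(\lambda_1-\lambda_2)^2}, \qquad n_1 = k\,n_2, $$

with the margin delta shifting the effect size in the denominator for the non-inferiority/superiority and equivalence variants.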
Value
expsize returns a sample size for control and test drug intervention.
Author(s)
<NAME>, <NAME> ,<NAME> and <NAME>
See Also
ABdesign crt.match crt.unmatch phsize precsize prsize crsize
Examples
# (a) Test for equality:
# The exponential assumption is used to determine the sample size with null hypothesis
# that the hazard rates of a test drug and a reference drug are equal i.e.type ="equal".
# Both sample sizes are taken to be equal (k = 1). The hazard rate of the control drug
# is lambda1 = 2 and that of test drug is lambda2 = 1. The standard deviation (s.d.) in
# hazard rate due to using control drug & test drug is 0.97 and 3.94 respectively. Their
# combined standard deviation is sigma.lambda = 2.56. The level of significance is alpha
# = 0.05 and the probability of type-II error is beta = 0.20.
expsize(type = "equal", k = 1, delta = 0, lambda1 = 2, lambda2 = 1, sigma1 = 0.97,
sigma2 = 3.94, sigma.lambda = 2.56, alpha = 0.05, beta = 0.20)
# (b) Test for noninferiority/superiority:
# The exponential assumption is used to determine sample size by testing null hypothesis
# (type = "noninf.sup") that the difference between the hazard rates of a test drug and
# the reference drug is less than or equal to a superiority margin delta = 0.2,where k=1
# indicates both the sample sizes are taken to be equal. The hazard rate of the control
# drug is lambda1 = 2 and that of test drug is lambda2 = 1. The standard deviation in
# hazard rate due to using control drug & test drug is 0.97 and 3.94 respectively. Their
# combined standard deviation is sigma.lambda = 2.56. The level of significance is alpha
# = 0.05 and the probability of type-II error is beta = 0.20.
expsize(type = "noninf.sup", k = 1, delta = 0.2, lambda1 = 2, lambda2 = 1, sigma1 = 0.97,
sigma2 = 3.94, sigma.lambda = 2.56, alpha = 0.05, beta = 0.20)
# (c) Test for equivalence:
# The exponential assumption is used to determine sample size by testing null hypothesis
# (type = "equiv") that the absolute difference between the hazard rates of a test drug
# and a reference drug is greater than or equal to the margin delta = 0.5, where k = 1
# indicates both the sample sizes are taken to be equal. The hazard rate of the control
# drug is lambda1 = 2 and that of test drug is lambda2 = 1. The standard deviation in
# the hazard rate due to using control drug and test drug is 0.97 and 3.94 respectively.
# Their combined standard deviation is sigma.lambda = 2.56. The level of significance is
# alpha = 0.05 and the probability of type-II error is beta = 0.20.
expsize(type = "equiv", k = 1, delta = 0.5, lambda1 = 2, lambda2 = 1, sigma1 = 0.97,
sigma2 = 3.94, sigma.lambda = 2.56, alpha = 0.05, beta = 0.20)
phsize Sample size determination using proportional hazard assumption
Description
Determination of sample sizes of the control drug and the test drug intervention for time to event
outcome using proportional hazard assumption.
Usage
phsize(type, lambda1, lambda2, delta, prop, d, alpha, beta)
Arguments
type There are three different types of comparison tests: (1) test for equality, (2) test for non-inferiority/superiority, (3) test for equivalence, i.e. type = c("equal", "noninf.sup", "equiv")
lambda1 Hazard rate of the control group
lambda2 Hazard rate of the test group
delta The inferiority or superiority margin
prop The proportion of patients in the control group
d The probability of observing an event
alpha Level of significance
beta The probability of type-II error
Details
The proportional hazards assumption is used for comparing time to event data, where we assume that the hazard function is the product of two components. One component is the non-parametric part, generally called the baseline hazard, and the other is the parametric part, through which the covariates of the regression model enter. Because of this combination of parametric and non-parametric components, the model is known as a semi-parametric model.
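For orientation, sample size under proportional hazards is usually driven by the required number of events; a Schoenfeld-type sketch in the notation above, with prop the control-arm allocation and d the probability of observing an event (phsize's internals may differ in detail):

$$ D \;=\; \frac{\big(z_{1-\alpha/2}+z_{1-\beta}\big)^2}{\mathrm{prop}\,(1-\mathrm{prop})\,\big(\log(\lambda_2/\lambda_1)-\delta\big)^2}, \qquad n \;=\; D/d . $$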
Value
phsize returns a sample size for the control and the test drug intervention.
Author(s)
<NAME>, <NAME> ,<NAME> and <NAME>
See Also
ABdesign crt.match crt.unmatch precsize prsize crsize
Examples
# (a) Test for equality:
# The phsize function determines the sample size using proportional hazards assumption.
# The type = "equal" denotes the two survival curves are equal under the null hypothesis
# and the hazard rate of the control group is lambda1 = 1 and that of the test group is
# lambda2 = 2. The proportion of patients in the control group is prop = 0.5. The proba-
# bility of observing an event is 0.8. The level of significance is alpha = 0.05 and the
# probability of type-II error is beta = 0.20.
phsize(type = "equal", lambda1 = 1, lambda2 = 2, delta = 0, prop = 0.5,
d = 0.8, alpha = 0.05, beta = 0.20)
# (b) Test for non-inferiority/superiority:
# The phsize function determines the sample size using proportional hazards assumption.
# The type = "noninf.sup" denotes the difference of the two survival curves is less than
# or equal to the marginal value delta = 0.3. The hazard rate of the control group is
# lambda1 = 1 and that of the test group is lambda2 = 2. The proportion of patients in
# the control group is prop = 0.5, the probability of observing an event is 0.8, and the level
# of significance is alpha = 0.05 and the probability of type-II error is beta = 0.20.
phsize(type = "noninf.sup", lambda1 = 1, lambda2 = 2, delta = 0.3, prop = 0.5,
d = 0.8, alpha = 0.05, beta = 0.20)
# (c) Test for equivalence:
# The phsize function determines the sample size using proportional hazards assumption.
# The type = "equiv", denotes whether absolute value of the differences between the two
# survival curves is greater than or equal to the marginal value delta = 0.5. The hazard
# rate of the control group is lambda1 = 1 and that of the test group is lambda2 = 1;
# the proportion of patients in the control group is prop = 0.5 and the probability of
# observing an event is 0.8. The level of significance is alpha = 0.05 and the probability
# of type-II error is beta = 0.20.
phsize(type = "equiv", lambda1 = 1, lambda2 = 1, delta = 0.5, prop = 0.5,
d = 0.8, alpha = 0.05, beta = 0.20)
precsize Sample size determination using power and precision analysis
Description
It determines the ratio between the sample size from power analysis and that from precision analysis, and also gives the required sample sizes.
Usage
precsize(pR, pT, sigr, sigt, c, alpha, beta)
Arguments
pR Incidence rate for the reference group
pT Incidence rate for the test group
sigr Variability in the reference group
sigt Variability in the test group
c Constant value for allowance of maximum error margin
alpha Level of significance
beta The probability of type-II error
Details
A pre-study power analysis for sample size determination is usually performed to calculate an ap-
propriate sample size for achieving a desired power for detecting a clinically meaningful difference
at a prespecified level of significance. In practice, a much larger sample size is expected for detect-
ing a relatively smaller difference, especially for clinical trials with extremely low incidence rate.
As a result, sample size determination based on power analysis may not be feasible, and it is often preferable to determine the sample size based on precision analysis instead.
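The contrast between the two analyses can be made explicit (a Chow-style sketch; precsize's exact parameterization may differ). Power analysis sizes the study to detect the difference pR - pT, while precision analysis sizes it so that the half-width of the confidence interval is at most c:

$$ n_{\mathrm{power}} = \frac{\big(z_{1-\alpha/2}+z_{1-\beta}\big)^2\big(\sigma_R^2+\sigma_T^2\big)}{(p_R-p_T)^2}, \qquad n_{\mathrm{precision}} = \frac{z_{1-\alpha/2}^2\big(\sigma_R^2+\sigma_T^2\big)}{c^2}, $$

$$ R \;=\; \frac{n_{\mathrm{power}}}{n_{\mathrm{precision}}} \;=\; \frac{\big(z_{1-\alpha/2}+z_{1-\beta}\big)^2\,c^2}{z_{1-\alpha/2}^2\,(p_R-p_T)^2}. $$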
Value
precsize returns 3 values:
R Ratio between the sample size of power analysis and precision analysis
n.power Sample size required for power analysis
n.precision Sample size required for precision analysis
Author(s)
<NAME>, <NAME> ,<NAME> and <NAME>
See Also
ABdesign crt.match crt.unmatch phsize prsize crsize
Examples
# The incidence rate of the reference group is pR = 0.8 per thousand and that of the
# test group is pT = 0.7 per thousand. It is also assumed that the respective standard
# deviations of the reference and test groups are sigr = 2 and sigt = 1. The constant value
# is chosen c = 0.08 to allow the maximum marginal error. The level of significance is
# alpha = 0.05 and the probability of type-II error is beta = 0.20.
precsize(pR = 0.8, pT = 0.7, sigr = 2, sigt = 1, c = 0.08, alpha = 0.05, beta = 0.20)
prsize Sample size determination for parallel study design.
Description
Determination of sample sizes of two factors of each of the two groups using one of the tests for
equality, non-inferiority/superiority or equivalence.
Usage
prsize(type, mu1, mu2, s, alpha, beta, k, r1, r2, del)
Arguments
type There are three types of test: (1) test for equality, (2) test for non-inferiority/superiority, (3) test for equivalence, i.e. type = c("equal", "noninf.sup", "equiv")
mu1 The mean value of 1st group
mu2 The mean value of 2nd group
s The common standard deviation
alpha The level of significance
beta The probability of the type II error i.e. 1 - power
k The ratio of the 1st sample size (n1) to the 2nd sample size (n2), i.e. k = n1/n2
r1 The ratio of n1fac1 (sample size of the 1st factor for the 1st group) to n1, i.e. r1 = n1fac1/n1
r2 The ratio of n2fac1 (sample size of the 1st factor for the 2nd group) to n2, i.e. r2 = n2fac1/n2
del The superiority or non-inferiority margin
Details
Parallel arm design is the most commonly used study design, in which subjects are randomized to one or more study arms. Each study arm is allocated a different intervention. After randomization, each subject stays in their assigned arm during the whole study, and the randomized subjects should not inadvertently be contaminated by the other group. A major characteristic of a parallel study is randomization, which ensures the accuracy of the results and a lower risk of bias.
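For the equality case this is the familiar two-sample normal formula (a Chow-style sketch; prsize additionally splits each group by the factor proportions r1 and r2):

$$ n_2 \;=\; \frac{\big(z_{1-\alpha/2}+z_{1-\beta}\big)^2\,s^2\,(1+1/k)}{(\mu_1-\mu_2)^2}, \qquad n_1 = k\,n_2, $$

with the margin del entering through the effect size for the non-inferiority/superiority and equivalence variants.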
Value
prsize returns the required sample sizes for each group and their factors in a 2x2 contingency table.
Author(s)
<NAME>, <NAME> ,<NAME> and <NAME>
See Also
ABdesign crt.match crt.unmatch phsize precsize crsize
Examples
# (a) Test for equality:
# This is a parallel study design. The type = "equal" tests the equality of mean respon-
# ses of a test drug (mu1 = 12) and a reference drug (mu2 = 8). The common standard dev-
# iation of the drugs is s = 5. k = 2 indicates the ratio of the sample sizes of the two
# groups. alpha = 0.05 is the level of significance and the probability of type-II error
# is beta = 0.10. The proportion of factor- 1 and factor-2 are taken to be r1 = 0.6 and
# r2 = 0.6 respectively.
prsize(type="equal", mu1=12, mu2=8, s=5, alpha=0.05, beta=0.10, k=2, r1=0.6, r2=0.6)
# (b) Test for superiority/noninferiority:
# This is a Parallel design. The type = "noninf.sup" test whether the difference of mean
# responses of a test drug (mu1 = 12) and a reference drug (mu2 = 8) being greater than
# or equal to the marginal value delta = 0.8. s = 5 is the common standard deviation of
# the drugs. The value k = 2 indicates the ratio of the sample sizes of the two groups.
# alpha = 0.05 is the level of significance and the probability of type-II error is beta
# = 0.10. The proportion of factor-1 and factor-2 are taken to be r1 = 0.6 and r2 = 0.6
# respectively.
prsize(type="noninf.sup", mu1=12, mu2=8, s=5, alpha=0.05, beta=0.10, k=2, r1=0.6,
r2=0.6, del=0.8)
# (c) Test for equivalence:
# This is a Parallel design. The type = "equiv" tests whether the absolute value of the
# difference of mean responses of a test drug (mu1 = 12) and a reference drug (mu2 = 8)
# being less than or equal to the marginal value delta = 0.8. s = 5 is the common standard
# deviation of the drugs. The value k = 2 indicates the ratio of the sample sizes of the
# two groups. The alpha = 0.05 is the level of significance and the probability of type
# -II error is beta = 0.10. The proportion of factor-1 (r1) and factor-2 (r2) both are
# taken to be equal to 0.6.
prsize(type="equiv", mu1=12, mu2=8, s=5, alpha=0.05, beta=0.10, k=2, r1=0.6,
r2=0.6, del=0.8) |
azure-eventgrid | npm | JavaScript | Microsoft Azure SDK for Node.js - EventGrid
===
This project provides a Node.js package for accessing the Azure Event Grid service. Right now it supports:
* **Node.js version: 6.x.x or higher**
How to Install
---
```
npm install azure-eventgrid
```
How to Use
---
### Authentication, client creation and listing topicTypes as an example
```
var uuid = require('uuid').v4;
var msRestAzure = require('ms-rest-azure');
var EventGridManagementClient = require("azure-arm-eventgrid");
var EventGridClient = require("azure-eventgrid");

// Interactive Login
// It provides a url and code that need to be copied and pasted in a browser and authenticated over there.
// If successful, the user will get a DeviceTokenCredentials object.
msRestAzure.interactiveLogin(function(err, credentials) {
  // Create the management client
  let EGMClient = new EventGridManagementClient(credentials, 'your-subscription-id');
  let topicResponse;
  // Create an eventgrid topic
  return EGMClient.topics.createOrUpdate('resourceGroup', 'topic1', { location: 'westus' }).then((response) => {
    // keep the response in the outer variable so it stays visible later in the chain
    topicResponse = response;
    return Promise.resolve(console.log('Created topic ', topicResponse));
  }).then(() => {
    // List the access keys
    return EGMClient.topics.listSharedAccessKeys('resourceGroup', 'topic1');
  }).then((accessKeys) => {
    // Create the dataplane client that will be used to publish events
    let topicCreds = new msRestAzure.TopicCredentials(accessKeys.key1);
    let EGClient = new EventGridClient(topicCreds, 'your-subscription-id');
    let topicHostName = topicResponse.endpoint; // ex: 'topic1.westus.eventgrid.azure.net'
    let events = [
      {
        id: uuid(),
        subject: 'TestSubject',
        dataVersion: '1.0',
        eventType: 'Microsoft.MockPublisher.TestEvent',
        data: {
          field1: 'value1',
          field2: 'value2'
        }
      }
    ];
    return EGClient.publishEvents(topicHostName, events);
  }).then((result) => {
    return Promise.resolve(console.log('Published events successfully.'));
  });
}).catch((err) => {
  console.log('An error occurred');
  console.dir(err, {depth: null, colors: true});
});
```
Related projects
---
* [Microsoft Azure SDK for Node.js](https://github.com/Azure/azure-sdk-for-node)
Readme
---
### Keywords
* node
* azure |
upldoc | ctan | TeX |
\kanjifamily{mc}
\kanjiseries{m}
\kanjishape{n}
\fontsize{10}{10}
\DeclareYokoKanjiEncoding{JY2}{}{}
\DeclareKanjiSubstitution{JY2}{mc}{m}{n}
\DeclareTateKanjiEncoding{JT2}{}{}
\DeclareKanjiSubstitution{JT2}{mc}{m}{n}
\KanjiEncodingPair{JY2}{JT2}
\newcommand\gtdefault{gt}
\newcommand\kanjiencodingdefault{JY2}
\newcommand\kanjifamilydefault{\mcdefault}
\newcommand\kanjiseriesdefault{\mddefault}
\newcommand\kanjishapedefault{n}% formerly \updefault
\kanjiencoding{JY2}
\input{jy2mc.fd}
\input{jy2gt.fd}
\input{jt2mc.fd}
\input{jt2gt.fd}
\DeclarePreloadSizes{JT2}{gt}{m}{n}{5,7,10,12}
⟨/xpt⟩
⟨*xipt⟩
\DeclarePreloadSizes{JY2}{mc}{m}{n}{5,7,10.95,12}
\DeclarePreloadSizes{JY2}{gt}{m}{n}{5,7,10.95,12}
\DeclarePreloadSizes{JT2}{mc}{m}{n}{5,7,10.95,12}
\DeclarePreloadSizes{JT2}{gt}{m}{n}{5,7,10.95,12}
⟨/xipt⟩
⟨*xiipt⟩
\DeclarePreloadSizes{JY2}{mc}{m}{n}{7,9,12,14.4}
\DeclarePreloadSizes{JY2}{gt}{m}{n}{7,9,12,14.4}
\DeclarePreloadSizes{JT2}{mc}{m}{n}{7,9,12,14.4}
\DeclarePreloadSizes{JT2}{gt}{m}{n}{7,9,12,14.4}
⟨/xiipt⟩
⟨*ori⟩
\DeclarePreloadSizes{JY2}{mc}{m}{n}
  {5,6,7,8,9,10,10.95,12,14.4,17.28,20.74,24.88}
\DeclarePreloadSizes{JY2}{gt}{m}{n}
  {5,6,7,8,9,10,10.95,12,14.4,17.28,20.74,24.88}
\DeclarePreloadSizes{JT2}{mc}{m}{n}
  {5,6,7,8,9,10,10.95,12,14.4,17.28,20.74,24.88}
\DeclarePreloadSizes{JT2}{gt}{m}{n}
  {5,6,7,8,9,10,10.95,12,14.4,17.28,20.74,24.88}
⟨/ori⟩
\DeclareFontShape{JT2}{mc}{m}{n}{<->s*[0.962216]upjisr-v}{}
\DeclareFontShape{JT2}{mc}{bx}{n}{<->ssub*gt/m/n}{}
\DeclareFontShape{JT2}{mc}{b}{n}{<->ssub*mc/bx/n}{}
⟨/JT2mc⟩
⟨*JY2gt⟩
\DeclareKanjiFamily{JY2}{gt}{}
\DeclareRelationFont{JY2}{gt}{m}{}{T1}{cmr}{bx}{}
\DeclareFontShape{JY2}{gt}{m}{n}{<->s*[0.962216]upjisg-h}{}
\DeclareFontShape{JY2}{gt}{bx}{n}{<->ssub*gt/m/n}{}
\DeclareFontShape{JY2}{gt}{b}{n}{<->ssub*gt/bx/n}{}
⟨/JY2gt⟩
⟨*JT2gt⟩
\DeclareKanjiFamily{JT2}{gt}{}
\DeclareRelationFont{JT2}{gt}{m}{}{T1}{cmr}{bx}{}
\DeclareFontShape{JT2}{gt}{m}{n}{<->s*[0.962216]upjisg-v}{}
\DeclareFontShape{JT2}{gt}{bx}{n}{<->ssub*gt/m/n}{}
\DeclareFontShape{JT2}{gt}{b}{n}{<->ssub*gt/bx/n}{}
⟨/JT2gt⟩
File c: ukinsoku.dtx Date: 2021/03/04 Version v1.0d-u06
\inhibitxspcode`—=0 % U+2014 EM DASH
\inhibitxspcode`―=0 % U+2015 HORIZONTAL BAR
\inhibitxspcode`〜=0 % U+301C WAVE DASH
\inhibitxspcode`~=0 % U+FF5E FULLWIDTH TILDE
\inhibitxspcode`¥=0 % U+00A5 YEN SIGN
\inhibitxspcode`¥=0 % U+FFE5 FULLWIDTH YEN SIGN
%%
%% inhibitxspcode JIS X 0213
%%
%% [further \inhibitxspcode entries (values 0--2) for Japanese brackets and
%% punctuation; the characters did not survive extraction]
\inhibitxspcode"AA=1
\inhibitxspcode"BA=1
\inhibitxspcode`™=1
%%
%% inhibitxspcode JIS X 0212
%%
⟨/plcore⟩
```
\DeclareOption{a4paper}{\setcounter{@paper}{1}%
  \setlength\paperheight{297mm}%
  \setlength\paperwidth{210mm}}
\DeclareOption{a5paper}{\setcounter{@paper}{2}%
  \setlength\paperheight{210mm}%
  \setlength\paperwidth{148mm}}
\DeclareOption{b4paper}{\setcounter{@paper}{3}%
  \setlength\paperheight{364mm}%
  \setlength\paperwidth{257mm}}
\DeclareOption{b5paper}{\setcounter{@paper}{4}%
  \setlength\paperheight{257mm}%
  \setlength\paperwidth{182mm}}
%
\DeclareOption{a4j}{\setcounter{@paper}{1}\@stysizetrue
  \setlength\paperheight{297mm}%
  \setlength\paperwidth{210mm}}
\DeclareOption{a5j}{\setcounter{@paper}{2}\@stysizetrue
  \setlength\paperheight{210mm}%
  \setlength\paperwidth{148mm}}
\DeclareOption{b4j}{\setcounter{@paper}{3}\@stysizetrue
  \setlength\paperheight{364mm}%
  \setlength\paperwidth{257mm}}
\DeclareOption{b5j}{\setcounter{@paper}{4}\@stysizetrue
  \setlength\paperheight{257mm}%
  \setlength\paperwidth{182mm}}
%
\DeclareOption{a4p}{\setcounter{@paper}{1}\@stysizetrue
  \setlength\paperheight{297mm}%
  \setlength\paperwidth{210mm}}
\DeclareOption{a5p}{\setcounter{@paper}{2}\@stysizetrue
  \setlength\paperheight{210mm}%
  \setlength\paperwidth{148mm}}
\DeclareOption{b4p}{\setcounter{@paper}{3}\@stysizetrue
  \setlength\paperheight{364mm}%
  \setlength\paperwidth{257mm}}
\DeclareOption{b5p}{\setcounter{@paper}{4}\@stysizetrue
  \setlength\paperheight{257mm}%
  \setlength\paperwidth{182mm}}
```
    \topsep 9\p@ \@plus3\p@ \@minus5\p@
    \parsep 4.5\p@ \@plus2\p@ \@minus\p@
    \itemsep \parsep}%
⟨/12pt⟩
  \belowdisplayskip \abovedisplayskip}
\DeclareRobustCommand\footnotesize{%
⟨*10pt⟩
  \@setfontsize\footnotesize\@viiipt{9.5}%
  \abovedisplayskip 6\p@ \@plus2\p@ \@minus4\p@
  \abovedisplayshortskip \z@ \@plus\p@
  \belowdisplayshortskip 3\p@ \@plus\p@ \@minus2\p@
  \def\@listi{\leftmargin\leftmargini
    \topsep 3\p@ \@plus\p@ \@minus\p@
    \parsep 2\p@ \@plus\p@ \@minus\p@
    \itemsep \parsep}%
⟨/10pt⟩
⟨*11pt⟩
  \@setfontsize\footnotesize\@ixpt{11}%
  \abovedisplayskip 8\p@ \@plus2\p@ \@minus4\p@
  \abovedisplayshortskip \z@ \@plus\p@
  \belowdisplayshortskip 4\p@ \@plus2\p@ \@minus2\p@
  \def\@listi{\leftmargin\leftmargini
    \topsep 4\p@ \@plus2\p@ \@minus2\p@
    \parsep 2\p@ \@plus\p@ \@minus\p@
    \itemsep \parsep}%
⟨/11pt⟩
⟨*12pt⟩
  \@setfontsize\footnotesize\@xpt\@xiipt
  \abovedisplayskip 10\p@ \@plus2\p@ \@minus5\p@
  \abovedisplayshortskip \z@ \@plus3\p@
  \belowdisplayshortskip 6\p@ \@plus3\p@ \@minus3\p@
  \def\@listi{\leftmargin\leftmargini
    \topsep 6\p@ \@plus2\p@ \@minus2\p@
    \parsep 3\p@ \@plus2\p@ \@minus\p@
    \itemsep \parsep}%
⟨/12pt⟩
  \belowdisplayskip \abovedisplayskip}
\if@compatibility
  \if@stysize
    \ifnum\c@@paper=2 % A5
      \if@landscape
%<10pt&yoko>        \setlength\textwidth{47\Cwd}
%<11pt&yoko>        \setlength\textwidth{42\Cwd}
%<12pt&yoko>        \setlength\textwidth{40\Cwd}
%<10pt&tate>        \setlength\textwidth{27\Cwd}
%<11pt&tate>        \setlength\textwidth{25\Cwd}
%<12pt&tate>        \setlength\textwidth{23\Cwd}
      \else
%<10pt&yoko>        \setlength\textwidth{28\Cwd}
%<11pt&yoko>        \setlength\textwidth{25\Cwd}
%<12pt&yoko>        \setlength\textwidth{24\Cwd}
%<10pt&tate>        \setlength\textwidth{46\Cwd}
%<11pt&tate>        \setlength\textwidth{42\Cwd}
%<12pt&tate>        \setlength\textwidth{43\Cwd}
      \fi
    \else\ifnum\c@@paper=3 % B4
      \if@landscape
%<10pt&yoko>        \setlength\textwidth{75\Cwd}
%<11pt&yoko>        \setlength\textwidth{69\Cwd}
%<12pt&yoko>        \setlength\textwidth{63\Cwd}
%<10pt&tate>        \setlength\textwidth{53\Cwd}
%<11pt&tate>        \setlength\textwidth{49\Cwd}
%<12pt&tate>        \setlength\textwidth{44\Cwd}
      \else
%<10pt&yoko>        \setlength\textwidth{60\Cwd}
%<11pt&yoko>        \setlength\textwidth{55\Cwd}
%<12pt&yoko>        \setlength\textwidth{50\Cwd}
%<10pt&tate>        \setlength\textwidth{85\Cwd}
%<11pt&tate>        \setlength\textwidth{76\Cwd}
%<12pt&tate>        \setlength\textwidth{69\Cwd}
      \fi
    \else\ifnum\c@@paper=4 % B5
      \if@landscape
%<10pt&yoko>        \setlength\textwidth{60\Cwd}
%<11pt&yoko>        \setlength\textwidth{55\Cwd}
%<12pt&yoko>        \setlength\textwidth{50\Cwd}
%<10pt&tate>        \setlength\textwidth{34\Cwd}
%<11pt&tate>        \setlength\textwidth{31\Cwd}
%<12pt&tate>        \setlength\textwidth{28\Cwd}
      \else
%<10pt&yoko>        \setlength\textwidth{37\Cwd}
%<11pt&yoko>        \setlength\textwidth{34\Cwd}
%<12pt&yoko>        \setlength\textwidth{31\Cwd}
%<10pt&tate>        \setlength\textwidth{55\Cwd}
%<11pt&tate>        \setlength\textwidth{51\Cwd}
%<12pt&tate>        \setlength\textwidth{47\Cwd}
      \fi
    \else % A4 and other
      \if@landscape
%<10pt&yoko>        \setlength\textwidth{73\Cwd}
%<11pt&yoko>        \setlength\textwidth{68\Cwd}
%<12pt&yoko>        \setlength\textwidth{61\Cwd}
%<10pt&tate>        \setlength\textwidth{41\Cwd}
%<11pt&tate>        \setlength\textwidth{38\Cwd}
%<12pt&tate>        \setlength\textwidth{35\Cwd}
      \else
%<10pt&yoko>        \setlength\textwidth{47\Cwd}
%<11pt&yoko>        \setlength\textwidth{43\Cwd}
%<12pt&yoko>        \setlength\textwidth{40\Cwd}
%<10pt&tate>        \setlength\textwidth{67\Cwd}
%<11pt&tate>        \setlength\textwidth{61\Cwd}
%<12pt&tate>        \setlength\textwidth{57\Cwd}
      \fi
    \fi\fi\fi
  \else
    \if@twocolumn
      \setlength\textwidth{52\Cwd}
    \else
%<10pt&!bk&yoko>      \setlength\textwidth{327\p@}
%<11pt&!bk&yoko>      \setlength\textwidth{342\p@}
%<12pt&!bk&yoko>      \setlength\textwidth{372\p@}
%<10pt&bk&yoko>      \setlength\textwidth{4.3in}
%<11pt&bk&yoko>      \setlength\textwidth{4.8in}
%<12pt&bk&yoko>      \setlength\textwidth{4.8in}
%<10pt&tate>      \setlength\textwidth{67\Cwd}
%<11pt&tate>      \setlength\textwidth{61\Cwd}
%<12pt&tate>      \setlength\textwidth{57\Cwd}
    \fi
  \fi
\else
  \if@stysize
    \if@twocolumn
%<yoko>      \setlength\textwidth{.8\paperwidth}
%<tate>      \setlength\textwidth{.8\paperheight}
    \else
%<yoko>      \setlength\textwidth{.7\paperwidth}
%<tate>      \setlength\textwidth{.7\paperheight}
    \fi
  \else
%<tate>    \setlength\@tempdima{\paperheight}
%<yoko>    \setlength\@tempdima{\paperwidth}
    \addtolength\@tempdima{-2in}
%<tate>    \addtolength\@tempdima{-1.3in}
%<yoko&10pt>    \setlength\@tempdimb{327\p@}
%<yoko&11pt>    \setlength\@tempdimb{342\p@}
%<yoko&12pt>    \setlength\@tempdimb{372\p@}
%<tate&10pt>    \setlength\@tempdimb{67\Cwd}
%<tate&11pt>    \setlength\@tempdimb{61\Cwd}
%<tate&12pt>    \setlength\@tempdimb{57\Cwd}
    \if@twocolumn
      \ifdim\@tempdima>2\@tempdimb\relax
        \setlength\textwidth{2\@tempdimb}
      \else
        \setlength\textwidth{\@tempdima}
      \fi
    \else
      \ifdim\@tempdima>\@tempdimb\relax
        \setlength\textwidth{\@tempdimb}
      \else
        \setlength\textwidth{\@tempdima}
      \fi
    \fi
  \fi
\fi
\@settopoint\textwidth
\if@compatibility
  \if@stysize
    \ifnum\c@@paper=2 % A5
      \if@landscape
%<10pt&yoko>        \setlength\textheight{17\Cvs}
%<11pt&yoko>        \setlength\textheight{17\Cvs}
%<12pt&yoko>        \setlength\textheight{16\Cvs}
%<10pt&tate>        \setlength\textheight{26\Cvs}
%<11pt&tate>        \setlength\textheight{25\Cvs}
      \else
%<10pt&yoko>        \setlength\textheight{28\Cvs}
%<11pt&yoko>        \setlength\textheight{25\Cvs}
%<12pt&yoko>        \setlength\textheight{24\Cvs}
\if@twocolumn
  \setlength\oddsidemargin{30\p@}
  \setlength\evensidemargin{30\p@}
  \setlength\marginparwidth{48\p@}
\fi
%</yoko>
\if@stysize
  \if@twocolumn\else
    \setlength\oddsidemargin{0\p@}
    \setlength\evensidemargin{0\p@}
  \fi
\fi
\else
  \setlength\@tempdima{\paperwidth}
%<tate>  \addtolength\@tempdima{-\textheight}
%<yoko>  \addtolength\@tempdima{-\textwidth}
  \if@twoside
%<tate>    \setlength\oddsidemargin{.6\@tempdima}
%<yoko>    \setlength\oddsidemargin{.4\@tempdima}
  \else
    \setlength\oddsidemargin{.5\@tempdima}
  \fi
  \addtolength\oddsidemargin{-1in}
%<*article|report>
\if@titlepage
  \newenvironment{abstract}{%
    \titlepage
    \null\vfil
    \@beginparpenalty\@lowpenalty
    \begin{center}%
      {\bfseries\abstractname}%
      \@endparpenalty\@M
    \end{center}}%
    {\par\vfil\null\endtitlepage}
\else
  \newenvironment{abstract}{%
    \if@twocolumn
      \section*{\abstractname}%
    \else
      \small
      \begin{center}%
        {\bfseries\abstractname\vspace{-.5em}\vspace\z@}%
      \end{center}%
      \quotation
    \fi}{\if@twocolumn\else\endquotation\fi}
\fi
%</article|report>
\long\def\@makefntext#1{\parindent 1em
  \noindent\hb@xt@ 1.8em{\hss\@makefnmark}#1}
\def\pltx@today@year@#1{%
  \ifnum\numexpr\year-#1=1 元\else
    \ifnum1=\iftdir\ifmdir0\else1\fi\else0\fi
      \kansuji\number\numexpr\year-#1\relax
    \else
      \number\numexpr\year-#1\relax\nobreak
    \fi
  \fi 年}
\def\pltx@today@year{%
  \ifnum\numexpr\year*10000+\month*100+\day<19890108
    昭和\pltx@today@year@{1925}%
  \else\ifnum\numexpr\year*10000+\month*100+\day<20190501
    平成\pltx@today@year@{1988}%
  \else
    令和\pltx@today@year@{2018}%
  \fi\fi}
\def\today{%
  \ifnum1=\iftdir\ifmdir0\else1\fi\else0\fi
    \kansuji\number\year 年
  \else
    \pltx@today@year
  \fi}
%<*tate>
\normalmarginpar
\@mparswitchfalse
%</tate>
%<*yoko>
\if@twoside
  \@mparswitchtrue
\else
  \@mparswitchfalse
\fi
%</yoko>
%</article|report|book>
```
File d: ujclasses.dtx
beartype | readthedoc | Markdown | # The Typing Tree
Beartype is an open-source pure-Python PEP-compliant near-real-time hybrid runtime-static third-generation type-checker emphasizing efficiency, usability, unsubstantiated jargon we just made up, and thrilling puns.
Beartype enforces type hints across your entire app in two lines of runtime code with no runtime overhead. If seeing is believing, prepare to do both those things.
```
# Install beartype.
$ pip3 install beartype
# Edit the "{your_package}.__init__" submodule with your favourite IDE.
$ vim {your_package}/__init__.py # <-- so, i see that you too vim
```
```
# At the very top of your "{your_package}.__init__" submodule:
from beartype.claw import beartype_this_package # <-- boilerplate for victory
beartype_this_package() # <-- yay! your team just won
```
Beartype now implicitly type-checks all annotated classes, callables, and variable assignments across all submodules of `{your_package}` . Congrats.
This day all bugs die. …server slowly crashes
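For example, `beartype.claw` also covers bare annotated assignments inside your package. A minimal sketch, assuming the hypothetical submodule name below:

```
# my_package/my_module.py -- a hypothetical submodule of {your_package}.
# With beartype_this_package() active in "my_package/__init__.py", this
# annotated assignment is type-checked when the module is imported...
loud_number: int = "not an integer"  # <-- ...and raises a violation here
```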
Beartype also publishes a plethora of APIs for fine-grained control over type-checking. For those who are about to QA, beartype salutes you. Would you like to know more?
```
# So let's do this.
$ python3
```
```
# ....................{ RAISE THE PAW }....................
# Manually enforce type hints across individual classes and callables.
# Do this only if you want a(nother) repetitive stress injury.
# Import the @beartype decorator.
>>> from beartype import beartype # <-- eponymous import; it's eponymous
# Annotate @beartype-decorated classes and callables with type hints.
>>> @beartype # <-- you too will believe in magic
... def quote_wiggum(lines: list[str]) -> None:
... print('“{}”\n\t— Police Chief Wiggum'.format("\n ".join(lines)))
# Call those callables with valid parameters.
>>> quote_wiggum(["Okay, folks. Show's over!", " Nothing to see here. Show's…",])
“Okay, folks. Show's over!
Nothing to see here. Show's…”
— Police Chief Wiggum
# Call those callables with invalid parameters.
>>> quote_wiggum([b"Oh, my God! A horrible plane crash!", b"Hey, everybody! Get a load of this flaming wreckage!",])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 30, in quote_wiggum
File "/home/springfield/beartype/lib/python3.9/site-packages/beartype/_decor/_code/_pep/_error/errormain.py", line 220, in get_beartype_violation
raise exception_cls(
beartype.roar.BeartypeCallHintParamViolation: @beartyped
quote_wiggum() parameter lines=[b'Oh, my God! A horrible plane
crash!', b'Hey, everybody! Get a load of thi...'] violates type hint
list[str], as list item 0 value b'Oh, my God! A horrible plane crash!'
not str.
# ....................{ MAKE IT SO }....................
# Squash bugs by refining type hints with @beartype validators.
>>> from beartype.vale import Is # <---- validator factory
>>> from typing import Annotated # <---------------- if Python ≥ 3.9.0
# >>> from typing_extensions import Annotated # <-- if Python < 3.9.0
# Validators are type hints constrained by lambda functions.
>>> ListOfStrings = Annotated[ # <----- type hint matching non-empty list of strings
... list[str], # <----------------- type hint matching possibly empty list of strings
... Is[lambda lst: bool(lst)] # <-- lambda matching non-empty object
... ]
# Annotate @beartype-decorated callables with validators.
>>> @beartype
... def quote_wiggum_safer(lines: ListOfStrings) -> None:
... print('“{}”\n\t— Police Chief Wiggum'.format("\n ".join(lines)))
# Call those callables with invalid parameters.
>>> quote_wiggum_safer([])
beartype.roar.BeartypeCallHintParamViolation: @beartyped
quote_wiggum_safer() parameter lines=[] violates type hint
typing.Annotated[list[str], Is[lambda lst: bool(lst)]], as value []
violates validator Is[lambda lst: bool(lst)].
# ....................{ AT ANY TIME }....................
# Type-check anything against any type hint – anywhere at anytime.
>>> from beartype.door import (
... is_bearable, # <-------- like "isinstance(...)"
... die_if_unbearable, # <-- like "assert isinstance(...)"
... )
>>> is_bearable(['The', 'goggles', 'do', 'nothing.'], list[str])
True
>>> die_if_unbearable([0xCAFEBEEF, 0x8BADF00D], ListOfStrings)
beartype.roar.BeartypeDoorHintViolation: Object [3405692655, 2343432205]
violates type hint typing.Annotated[list[str], Is[lambda lst: bool(lst)]],
as list index 0 item 3405692655 not instance of str.
# ....................{ GO TO PLAID }....................
# Type-check anything in around 1µs (one millionth of a second) – including
# this list of one million 2-tuples of NumPy arrays.
>>> from beartype.door import is_bearable
>>> from numpy import array, ndarray
>>> data = [(array(i), array(i)) for i in range(1000000)]
>>> %time is_bearable(data, list[tuple[ndarray, ndarray]])
CPU times: user 31 µs, sys: 2 µs, total: 33 µs
Wall time: 36.7 µs
True
```
Beartype brings Rust- and C++-inspired zero-cost abstractions into the lawless world of dynamically-typed Python by enforcing type safety at the granular level of functions and methods against type hints standardized by the Python community in \(O(1)\) non-amortized worst-case time with negligible constant factors. If the prior sentence was unreadable jargon, see our friendly and approachable FAQ for a human-readable synopsis.
Beartype is portably implemented in Python 3, continuously stress-tested via GitHub Actions × tox × pytest × Codecov, and permissively distributed under the MIT license. Beartype has no runtime dependencies, only one test-time dependency, and only one documentation-time dependency. Beartype supports all actively developed Python versions, all Python package managers, and multiple platform-specific package managers.
# The Typing Tree¶
Welcome to the Bearpedia – your one-stop Encyclopedia Beartanica for all things @beartype. It’s “typing or bust!” as you…
* Bearpedia
* Install
* tl;dr
* ELI5
* API
* FAQ
* What is beartype?
* What is typeguard?
* When should I use beartype?
* Does beartype do any bad stuff?
* Does beartype actually do anything?
* How much does all this really cost?
* Beartype just does random stuff? Really?
* What does “pure-Python” mean?
* What does “near-real-time” even mean? Are you just making stuff up?
* What does “hybrid runtime-static” mean? Pretty sure you made that up, too.
* “Third-generation type-checker” doesn’t mean anything, does it?
* How do I type-check…
* How do I *NOT* type-check something?
* Why is @leycec’s poorly insulated cottage in the Canadian wilderness so cold?
* BigData™
* Code
* Beartype Code Generation: It’s All for You
* Identity Decoration
* Unconditional Identity Decoration
* Shallow Identity Decoration
* Deep Identity Decoration
* Constant Decoration
* Beartype Code Generation: It’s All for You
* Beartype Dev Handbook: It’s Handy
* Math
* Moar
Let’s type this.
# License¶
Beartype is open-source software released under the permissive MIT license.
# Funding¶
Beartype is financed as a purely volunteer open-source project via GitHub Sponsors, to whom our burgeoning community is eternally indebted. Without your generosity, runtime type-checking would be a shadow of its current hulking bulk. We genuflect before your selfless charity, everyone!
Prior official funding sources (yes, they once existed) include:
A Paul Allen Discovery Center award from the Paul G. Allen Frontiers Group under the administrative purview of the Paul Allen Discovery Center at Tufts University over the period 2015—2018 preceding the untimely death of Microsoft co-founder Paul Allen, during which beartype was maintained as the private
`@type_check` decorator in the Bioelectric Tissue Simulation Engine (BETSE). Phew!
# Contributors¶
Beartype is the work product of volunteer enthusiasm, excess caffeine, and sleepless Wednesday evenings. These brave GitHubbers hurtled the pull request (PR) gauntlet so that you wouldn’t have to:
It’s a heavy weight they bear. Applaud them as they buckle under the load!
# Install¶

Install beartype with pip, because PyPI is the cheese shop and you too enjoy a fine Venezuelan beaver cheese while mashing disconsolately on your keyboard late on a rain-soaked Friday evening. Wherever expensive milk byproducts ferment, beartype will be there.
```
pip3 install beartype
```
Install beartype with Anaconda, because package managers named after venomous South American murder reptiles have finally inspired your team to embrace more mammal-friendly packages. Your horoscope also reads: “Avoid reckless ecotourism in places that rain a lot.”
```
conda config --add channels conda-forge
conda install beartype
```
Commemorate this moment in time with our overbearing project shield. What says quality like a bear on a badge, amirite?
## Platform¶
Beartype is also installable with platform-specific package managers, because sometimes you just need this thing to work.
### macOS¶
Let’s install beartype with Homebrew on macOS courtesy our third-party tap:
```
brew install beartype/beartype/beartype
```
Let’s install beartype with MacPorts on macOS:
```
sudo port install py-beartype
```
A big bear hug to our official macOS package maintainer @harens for packaging beartype for our Apple-appreciating audience.
### Arch Linux¶
Let’s install beartype with `pacman` on Arch Linux – where beartype is now
officially packaged in the Arch User Repository (AUR) itself:
```
git clone https://aur.archlinux.org/python-beartype.git
cd python-beartype
makepkg -si
```
Truly, Arch Linux has now seen the face of quality assurance. It looks like a grizzled bear with patchy fur, one twitchy eye, and a gimpy leg that spasmodically flails around.
### Gentoo Linux¶
Let’s install beartype with `emerge` on Gentoo Linux – where beartype is
now officially packaged in the Portage tree itself: `emerge beartype`
Source-based Linux distributions are the CPU-bound nuclear option. What could be simpler? O_o
## Badge¶
If you’re feeling the quality assurance and want to celebrate, consider signaling that you’re now publicly bear-ified:
All this magic and possibly more can be yours with:
* Markdown:

  ```
  YummySoft is now [![bear-ified](https://raw.githubusercontent.com/beartype/beartype-assets/main/badge/bear-ified.svg)](https://beartype.readthedocs.io)!
  ```

* reStructuredText:

  ```
  YummySoft is now |bear-ified|!

  .. # See https://docutils.sourceforge.io/docs/ref/rst/directives.html#image
  .. |bear-ified| image:: https://raw.githubusercontent.com/beartype/beartype-assets/main/badge/bear-ified.svg
     :align: top
     :target: https://beartype.readthedocs.io
     :alt: bear-ified
  ```

* Raw HTML:

  ```
  YummySoft is now <a href="https://beartype.readthedocs.io"><img
    src="https://raw.githubusercontent.com/beartype/beartype-assets/main/badge/bear-ified.svg"
    alt="bear-ified" style="vertical-align: middle;"></a>!
  ```
Let a soothing pastel bear give your users the reassuring OK sign.
# tl;dr¶

Let’s type-check like greased lightning! Thanks to cheatsheets like this, you no longer have to know how to use software to use software. `\o/`
```
# ..................{ IMPORTS }..................
# Import the core @beartype decorator.
from beartype import beartype
# Import type hint factories from "beartype.typing", a stand-in replacement
# for the standard "typing" module providing improved forward compatibility
# with future Python releases. For example:
# * "beartype.typing.Set is set" under Python ≥ 3.9 to satisfy PEP 585.
# * "beartype.typing.Set is typing.Set" under Python < 3.9 to satisfy PEP 484.
from beartype import typing
# Or, directly import these factories from the standard "typing" module. Note
# that PEP 585 deprecated many of these under Python ≥ 3.9, where @beartype
# now emits non-fatal deprecation warnings at decoration time. See also:
# https://docs.python.org/3/library/typing.html
import typing
# Or, directly import PEP 585 type hints. Note this requires Python ≥ 3.9.
from collections import abc
# Import backported type hint factories from "typing_extensions", improving
# portability across Python versions (e.g., "typing.Literal" needs Python ≥
# 3.9, but "typing_extensions.Literal" only needs Python ≥ 3.6).
import typing_extensions
# Import beartype-specific types to annotate callables with.
from beartype.cave import NoneType, NoneTypeOr, RegexTypes, ScalarTypes
# Import official abstract base classes (ABCs), too.
from numbers import Integral, Real
# Import user-defined classes, too.
from my_package.my_module import MyClass
# ..................{ TYPEVARS }..................
# PEP 484 type variable. While @beartype only partially supports type
# variables at the moment, @beartype 1.0.0.0.0.0.0.0 is expected to fully
# support type variables.
T = typing.TypeVar('T')
# ..................{ FUNCTIONS }..................
# Decorate functions with @beartype and...
@beartype
def my_function(
# Annotate builtin types as is.
param_must_satisfy_builtin_type: str,
# Annotate user-defined classes as is, too. Note this covariantly
# matches all instances of both this class and subclasses of this class.
param_must_satisfy_user_type: MyClass,
# Annotate PEP 604 type hint unions. Note this requires Python ≥ 3.10.
param_must_satisfy_pep604_union: dict | tuple | None,
# Annotate PEP 484 type hint unions. All Python versions support this.
param_must_satisfy_pep484_union: typing.Union[
dict, T, tuple[MyClass, ...]],
# Annotate PEP 593 metatypes, indexed by a type hint followed by zero or
# more arbitrary objects. See "VALIDATORS" below for real-world usage.
param_must_satisfy_pep593: typing.Annotated[
typing.Set[int], range(5), True],
# Annotate PEP 586 literals, indexed by either a boolean, byte string,
# integer, string, "enum.Enum" member, or "None".
param_must_satisfy_pep586: typing.Literal[
'This parameter must equal this string.'],
# Annotate PEP 585 builtin container types, indexed by the types of items
# these containers are expected to contain.
param_must_satisfy_pep585_builtin: list[str],
# Annotate PEP 585 standard collection types, indexed too.
param_must_satisfy_pep585_collection: abc.MutableSequence[str],
# Annotate PEP 544 protocols, either unindexed or indexed by one or more
# type variables.
param_must_satisfy_pep544: typing.SupportsRound[T],
# Annotate PEP 484 non-standard container types defined by the "typing"
# module, optionally indexed and only usable as type hints. Note that
# these types have all been deprecated by PEP 585 under Python ≥ 3.9. See
# also: https://docs.python.org/3/library/typing.html
param_must_satisfy_pep484_typing: typing.List[int],
# Annotate PEP 484 relative forward references dynamically resolved at
# call time as unqualified classnames relative to the current submodule.
# Note this class is defined below and that beartype-specific absolute
# forward references are also supported.
param_must_satisfy_pep484_relative_forward_ref: 'MyOtherClass',
# Annotate PEP types indexed by relative forward references. Forward
# references are supported everywhere standard types are.
param_must_satisfy_pep484_indexed_relative_forward_ref: (
typing.Union['MyPep484Generic', set['MyPep585Generic']]),
# Annotate beartype-specific types predefined by the beartype cave.
param_must_satisfy_beartype_type_from_cave: NoneType,
# Annotate beartype-specific unions of types as tuples.
param_must_satisfy_beartype_union: (dict, MyClass, int),
# Annotate beartype-specific unions predefined by the beartype cave.
param_must_satisfy_beartype_union_from_cave: ScalarTypes,
# Annotate beartype-specific unions concatenated together.
param_must_satisfy_beartype_union_concatenated: (
abc.Iterator,) + ScalarTypes,
# Annotate beartype-specific absolute forward references dynamically
# resolved at call time as fully-qualified "."-delimited classnames.
param_must_satisfy_beartype_absolute_forward_ref: (
'my_package.my_module.MyClass'),
# Annotate beartype-specific forward references in unions of types, too.
param_must_satisfy_beartype_union_with_forward_ref: (
abc.Iterable, 'my_package.my_module.MyOtherClass', NoneType),
# Annotate PEP 604 optional types. Note this requires Python ≥ 3.10.
param_must_satisfy_pep604_optional: float | bytes | None = None,
# Annotate PEP 484 optional types. All Python versions support this.
param_must_satisfy_pep484_optional: typing.Optional[
typing.Union[float, bytes]] = None,
# Annotate beartype-specific optional types.
param_must_satisfy_beartype_type_optional: NoneTypeOr[float] = None,
# Annotate beartype-specific optional unions of types.
param_must_satisfy_beartype_tuple_optional: NoneTypeOr[float, int] = None,
# Annotate variadic positional arguments as above, too.
*args: ScalarTypes + (Real, 'my_package.my_module.MyScalarType'),
# Annotate keyword-only arguments as above, too.
param_must_be_passed_by_keyword_only: abc.Sequence[
typing.Union[bool, list[str]]],
# Annotate return types as above, too.
) -> typing.Union[Integral, 'MyPep585Generic', bool]:
return 0xDEADBEEF
# Decorate coroutines as above but returning a coroutine type.
@beartype
async def my_coroutine() -> abc.Coroutine[None, None, int]:
from asyncio import sleep
await sleep(0)
return 0xDEFECA7E
# ..................{ GENERATORS }..................
# Decorate synchronous generators as above but returning a synchronous
# generator type.
@beartype
def my_sync_generator() -> abc.Generator[int, None, None]:
yield from range(0xBEEFBABE, 0xCAFEBABE)
# Decorate asynchronous generators as above but returning an asynchronous
# generator type.
@beartype
async def my_async_generator() -> abc.AsyncGenerator[int, None]:
from asyncio import sleep
await sleep(0)
yield 0x8BADF00D
# ..................{ CLASSES }..................
# Decorate classes with @beartype – which then automatically decorates all
# methods and properties of those classes with @beartype.
@beartype
class MyOtherClass:
# Annotate instance methods as above without annotating "self".
def __init__(self, scalar: ScalarTypes) -> None:
self._scalar = scalar
# Annotate class methods as above without annotating "cls".
@classmethod
def my_classmethod(cls, regex: RegexTypes, wut: str) -> (
abc.Callable[[], str]):
import re
return lambda: re.sub(regex, 'unbearable', str(cls._scalar) + wut)
# Annotate static methods as above, too.
@staticmethod
def my_staticmethod(callable: abc.Callable[[str], T], text: str) -> T:
return callable(text)
# Annotate property getter methods as above, too.
@property
def my_gettermethod(self) -> abc.Iterator[int]:
return range(0x0B00B135 + int(self._scalar), 0xB16B00B5)
# Annotate property setter methods as above, too.
@my_gettermethod.setter
def my_settermethod(self, bad: Integral = 0xBAAAAAAD) -> None:
self._scalar = bad if bad else 0xBADDCAFE
# Annotate methods accepting or returning instances of the class
# currently being declared with relative forward references.
def my_selfreferential_method(self) -> list['MyOtherClass']:
return [self] * 42
# ..................{ GENERICS }..................
# Decorate PEP 585 generics with @beartype. Note this requires Python ≥ 3.9.
@beartype
class MyPep585Generic(tuple[int, float]):
def __new__(cls, integer: int, real: float) -> tuple[int, float]:
return tuple.__new__(cls, (integer, real))
# Decorate PEP 484 generics with @beartype, too.
@beartype
class MyPep484Generic(typing.Tuple[str, ...]):
def __new__(cls, *args: str) -> typing.Tuple[str, ...]:
return tuple.__new__(cls, args)
# ..................{ PROTOCOLS }..................
# PEP 544 protocol referenced below in type hints. Note this requires Python
# ≥ 3.8 and that protocols *MUST* be explicitly decorated by the
# @runtime_checkable decorator to be usable with @beartype.
@typing.runtime_checkable # <---- mandatory boilerplate line. it is sad.
class MyProtocol(typing.Protocol):
def my_method(self) -> str:
return (
'Objects satisfy this protocol only if their classes '
'define a method with the same signature as this method.'
)
# ..................{ DATACLASSES }..................
# Import the requisite machinery. Note this requires Python ≥ 3.8.
from dataclasses import dataclass, InitVar
# Decorate dataclasses with @beartype, which then automatically decorates all
# methods and properties of those dataclasses with @beartype – including the
# __init__() constructors created by @dataclass. Fields are type-checked only
# at instantiation time. Fields are *NOT* type-checked when reassigned.
#
# Decoration order is significant. List @beartype before @dataclass, please.
@beartype
@dataclass
class MyDataclass(object):
# Annotate fields with type hints.
field_must_satisfy_builtin_type: InitVar[str]
field_must_satisfy_pep604_union: str | None = None
# Annotate methods as above.
def __post_init__(self, field_must_satisfy_builtin_type: str) -> None:
if self.field_must_satisfy_pep604_union is None:
self.field_must_satisfy_pep604_union = (
field_must_satisfy_builtin_type)
# ..................{ NAMED TUPLES }..................
# Import the requisite machinery.
from typing import NamedTuple
# Decorate named tuples with @beartype.
@beartype
class MyNamedTuple(NamedTuple):
# Annotate fields with type hints.
field_must_satisfy_builtin_type: str
# ..................{ CONFIGURATION }..................
# Import beartype's configuration API to configure runtime type-checking.
from beartype import BeartypeConf, BeartypeStrategy
# Dynamically create your own @beartype decorator, configured for your needs.
bugbeartype = beartype(conf=BeartypeConf(
# Optionally disable or enable output of colors (i.e., ANSI escape
# sequences) in type-checking violations via this tri-state boolean:
# * "None" conditionally enables colors when standard output is attached
# to an interactive terminal. [DEFAULT]
# * "True" unconditionally enables colors.
# * "False" unconditionally disables colors.
is_color=False, # <-- disable color entirely
# Optionally enable developer-friendly debugging.
is_debug=True,
# Optionally enable PEP 484's implicit numeric tower by:
# * Expanding all "float" type hints to "float | int".
# * Expanding all "complex" type hints to "complex | float | int".
is_pep484_tower=True,
# Optionally switch to a different type-checking strategy:
# * "BeartypeStrategy.O1" type-checks in O(1) constant time. [DEFAULT]
# * "BeartypeStrategy.On" type-checks in O(n) linear time.
# (Currently unimplemented but roadmapped for a future release.)
# * "BeartypeStrategy.Ologn" type-checks in O(log n) logarithmic time.
# (Currently unimplemented but roadmapped for a future release.)
# * "strategy=BeartypeStrategy.O0" disables type-checking entirely.
strategy=BeartypeStrategy.On, # <-- enable linear-time type-checking
))
# Decorate with your decorator instead of the vanilla @beartype decorator.
@bugbeartype
def muh_configured_func(list_checked_in_On_time: list[float]) -> set[str]:
return set(str(item) for item in list_checked_in_On_time)
# ..................{ VALIDATORS }..................
# Import beartype's PEP 593 validator API to validate arbitrary constraints.
# Note this requires either:
# * Python ≥ 3.9.0.
# * typing_extensions ≥ 3.9.0.0.
from beartype.vale import Is, IsAttr, IsEqual
from typing import Annotated # <--------------- if Python ≥ 3.9.0
#from typing_extensions import Annotated # <--- if Python < 3.9.0
# Import third-party packages to validate.
import numpy as np
# Validator matching only two-dimensional NumPy arrays of 64-bit floats,
# specified with a single caller-defined lambda function.
NumpyArray2DFloat = Annotated[np.ndarray, Is[
lambda arr: arr.ndim == 2 and arr.dtype == np.dtype(np.float64)]]
# Validator matching only one-dimensional NumPy arrays of 64-bit floats,
# specified with two declarative expressions. Although verbose, this
# approach generates optimal reusable code that avoids function calls.
IsNumpyArray1D = IsAttr['ndim', IsEqual[1]]
IsNumpyArrayFloat = IsAttr['dtype', IsEqual[np.dtype(np.float64)]]
NumpyArray1DFloat = Annotated[np.ndarray, IsNumpyArray1D, IsNumpyArrayFloat]
# Validator matching only empty NumPy arrays, equivalent to but faster than:
# NumpyArrayEmpty = Annotated[np.ndarray, Is[lambda arr: arr.size != 0]]
IsNumpyArrayEmpty = IsAttr['size', IsEqual[0]]
NumpyArrayEmpty = Annotated[np.ndarray, IsNumpyArrayEmpty]
# Validator composed with standard operators from the above validators,
# permissively matching all of the following:
# * Empty NumPy arrays of any dtype *except* 64-bit floats.
# * Non-empty one- and two-dimensional NumPy arrays of 64-bit floats.
NumpyArrayEmptyNonFloatOrNonEmptyFloat1Or2D = Annotated[np.ndarray,
# "&" creates a new validator matching when both operands match, while
# "|" creates a new validator matching when one or both operands match;
# "~" creates a new validator matching when its operand does not match.
# Group operands to enforce semantic intent and avoid precedence woes.
(IsNumpyArrayEmpty & ~IsNumpyArrayFloat) | (
~IsNumpyArrayEmpty & IsNumpyArrayFloat & (
IsNumpyArray1D | IsAttr['ndim', IsEqual[2]]
)
)
]
# Decorate functions accepting validators like usual and...
@beartype
def my_validated_function(
# Annotate validators just like standard type hints.
param_must_satisfy_validator: NumpyArrayEmptyNonFloatOrNonEmptyFloat1Or2D,
# Combine validators with standard type hints, too.
) -> list[NumpyArrayEmptyNonFloatOrNonEmptyFloat1Or2D]:
return (
[param_must_satisfy_validator] * 0xFACEFEED
if bool(param_must_satisfy_validator) else
[np.array([i], dtype=np.float64) for i in range(0xFEEDFACE)]
)
# ..................{ NUMPY }..................
# Import NumPy-specific type hints validating NumPy array constraints. Note:
# * These hints currently only validate array dtypes. To validate additional
# constraints like array shapes, prefer validators instead. See above.
# * This requires NumPy ≥ 1.21.0 and either:
# * Python ≥ 3.9.0.
# * typing_extensions ≥ 3.9.0.0.
from numpy.typing import NDArray
# NumPy type hint matching all NumPy arrays of 64-bit floats. Internally,
# beartype reduces this to the equivalent validator:
# NumpyArrayFloat = Annotated[
# np.ndarray, IsAttr['dtype', IsEqual[np.dtype(np.float64)]]]
NumpyArrayFloat = NDArray[np.float64]
# Decorate functions accepting NumPy type hints like usual and...
@beartype
def my_numerical_function(
# Annotate NumPy type hints just like standard type hints.
param_must_satisfy_numpy: NumpyArrayFloat,
# Combine NumPy type hints with standard type hints, too.
) -> tuple[NumpyArrayFloat, int]:
return (param_must_satisfy_numpy, len(param_must_satisfy_numpy))
```
Beartype: it just sorta works.
> Look for the bare necessities, the simple bare necessities. Forget about your worries and your strife. — The Jungle Book.
Beartype is a novel first line of defense. In Python’s vast arsenal of software quality assurance (SQA), beartype holds the shield wall against breaches in type safety by improper parameter and return values violating developer expectations.
Beartype is unopinionated. Beartype inflicts no developer constraints beyond importation and usage of a single configuration-free decorator. Beartype is trivially integrated into new and existing applications, stacks, modules, and scripts already annotating callables with PEP-compliant industry-standard type hints.
## Comparison¶
Beartype is zero-cost. Beartype inflicts no harmful developer tradeoffs, instead stressing expense-free strategies at both:
* Installation time. Beartype has no install-time or runtime dependencies, supports standard Python package managers, and happily coexists with competing static type-checkers and other runtime type-checkers… which, of course, is irrelevant, as you would never dream of installing competing alternatives. Why would you, right? Am I right? `</nervous_chuckle>`
* Runtime. Thanks to aggressive memoization and dynamic code generation at decoration time, beartype guarantees \(O(1)\) non-amortized worst-case runtime complexity with negligible constant factors.
### …versus Static Type-checkers¶
Like competing static type-checkers operating at the coarse-grained application level via ad-hoc heuristic type inference (e.g., Pyre, mypy, pyright, pytype), beartype effectively imposes no runtime overhead. Unlike static type-checkers:
* Beartype operates exclusively at the fine-grained callable level of pure-Python functions and methods via the standard decorator design pattern. This renders beartype natively compatible with all interpreters and compilers targeting the Python language – including Brython, PyPy, Numba, Nuitka, and (wait for it) CPython itself.
* Beartype enjoys deterministic Turing-complete access to the actual callables, objects, and types being type-checked. This enables beartype to solve dynamic problems decidable only at runtime – including type-checking of arbitrary objects whose:
  * Metaclasses dynamically customize instance and subclass checks by implementing the `__instancecheck__()` and/or `__subclasscheck__()` dunder methods, including PEP 3119-compliant metaclasses (e.g., `abc.ABCMeta`).
  * Pseudo-superclasses dynamically customize the method resolution order (MRO) of subclasses by implementing the `__mro_entries__()` dunder method, including PEP 560-compliant pseudo-superclasses.
  * Classes dynamically register themselves with standard abstract base classes (ABCs).
  * Classes are dynamically constructed or altered, including by class decorators, class factory functions and methods, metaclasses, and monkey patches.
### …versus Runtime Type-checkers¶
Unlike comparable runtime type-checkers (e.g., pydantic, typeguard), beartype decorates callables with dynamically generated wrappers efficiently type-checking each parameter passed to and value returned from those callables in constant time (see the sketch after this list). Since “performance by default” is our first-class concern, generated wrappers are guaranteed to:

* Exhibit \(O(1)\) non-amortized worst-case time complexity with negligible constant factors.
* Be either more efficient (in the common case) or exactly as efficient minus the cost of an additional stack frame (in the worst case) as equivalent type-checking implemented by hand, which no one should ever do.
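To make “wrapper” concrete, here is a purely illustrative, hand-written stand-in for the kind of wrapper such decorators generate around the earlier `quote_wiggum()` signature (`list[str] -> None`) – not beartype's actual generated code, which is specialized per type hint and memoized:

```
from functools import wraps

def toy_runtime_checker(func):
    # Illustrative only: a real generated wrapper inlines checks specialized
    # to each hint at decoration time instead of interpreting hints here.
    @wraps(func)
    def wrapper(lines):
        # Shallow container check in O(1) time.
        if not isinstance(lines, list):
            raise TypeError(f'lines={lines!r} violates type hint list[str]')
        result = func(lines)
        # Return check in O(1) time.
        if result is not None:
            raise TypeError(f'return {result!r} violates type hint None')
        return result
    return wrapper
```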
## Quickstart¶
Beartype makes type-checking painless, portable, and purportedly fun. Just:
1. Decorate functions and methods annotated by standard type hints with the `beartype.beartype()` decorator, which wraps those functions and methods in performant type-checking dynamically generated on-the-fly.
2. When standard type hints fail to support your use case, annotate functions and methods with beartype-specific validator type hints instead. Validators enforce runtime constraints on the internal structure and contents of parameters and returns via simple caller-defined lambda functions and declarative expressions – all seamlessly composable with standard type hints in an expressive domain-specific language (DSL) designed just for you (see the sketch after this list).
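For instance, a minimal validator sketch (the hint and function names are illustrative, not part of beartype's API):

```
from typing import Annotated  # typing_extensions.Annotated under Python < 3.9

from beartype import beartype
from beartype.vale import Is

# Hypothetical hint: a string constrained to be a valid Python identifier.
IdentifierStr = Annotated[str, Is[lambda text: text.isidentifier()]]

@beartype
def define_constant(name: IdentifierStr, value: int) -> str:
    return f'{name} = {value}'
```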
“Embrace the bear,” says the bear peering over your shoulder as you read this.
### Standard Hints¶
Beartype supports most type hints standardized by the developer community through Python Enhancement Proposals (PEPs). Since type hinting is its own special hell, we’ll start by wading into the thalassophobia-inducing waters of type-checking with a sane example – the \(O(1)\) `beartype.beartype()` way.
# Toy Example¶
Let’s type-check a `"Hello, Jungle!"` toy example. Just:
1. Import the `beartype.beartype()` decorator:

   ```
   from beartype import beartype
   ```

2. Decorate any annotated function with that decorator:

   ```
   from sys import stderr, stdout
   from typing import TextIO

   @beartype
   def hello_jungle(
       sep: str = ' ',
       end: str = '\n',
       file: TextIO = stdout,
       flush: bool = False,
   ):
       '''
       Print "Hello, Jungle!" to a stream, or to sys.stdout by default.

       Optional keyword arguments:
       file:  a file-like object (stream); defaults to the current sys.stdout.
       sep:   string inserted between values, default a space.
       end:   string appended after the last value, default a newline.
       flush: whether to forcibly flush the stream.
       '''
       print('Hello, Jungle!', sep, end, file, flush)
   ```

3. Call that function with valid parameters and caper as things work:

   ```
   >>> hello_jungle(sep='...ROOOAR!!!!', end='uhoh.', file=stderr, flush=True)
   Hello, Jungle! ...ROOOAR!!!! uhoh.
   ```

4. Call that function with invalid parameters and cringe as things blow up with human-readable exceptions exhibiting the single cause of failure:

   ```
   >>> hello_jungle(sep=(
   ...     b"What? Haven't you ever seen a byte-string separator before?"))
   BeartypeCallHintPepParamException: @beartyped hello_jungle() parameter
   sep=b"What? Haven't you ever seen a byte-string separator before?"
   violates type hint <class 'str'>, as value b"What? Haven't you ever seen
   a byte-string separator before?" not str.
   ```
# Industrial Example¶
Let’s wrap the third-party numpy.empty_like() function with automated runtime type checking to demonstrate beartype’s support for non-trivial combinations of nested type hints compliant with different PEPs:
```
from beartype import beartype
from collections.abc import Sequence
from typing import Optional, Union
import numpy as np
@beartype
def empty_like_bear(
prototype: object,
dtype: Optional[np.dtype] = None,
order: str = 'K',
subok: bool = True,
shape: Optional[Union[int, Sequence[int]]] = None,
) -> np.ndarray:
return np.empty_like(prototype, dtype, order, subok, shape)
```
Note the non-trivial hint for the optional `shape` parameter, synthesized from a PEP 484-compliant optional of a PEP 484-compliant union of a builtin type and a PEP 585-compliant subscripted abstract base class (ABC), accepting as valid either:

* The `None` singleton.
* An integer.
* A sequence of integers.
Let’s call that wrapper with both valid and invalid parameters:
```
>>> empty_like_bear(([1,2,3], [4,5,6]), shape=(2, 2))
array([[94447336794963, 0],
[ 7, -1]])
>>> empty_like_bear(([1,2,3], [4,5,6]), shape=([2], [2]))
BeartypeCallHintPepParamException: @beartyped empty_like_bear() parameter
shape=([2], [2]) violates type hint typing.Union[int,
collections.abc.Sequence, NoneType], as ([2], [2]):
* Not <class "builtins.NoneType"> or int.
* Tuple item 0 value [2] not int.
```
Note the human-readable message of the raised exception, containing a bulleted list enumerating the various ways this invalid parameter fails to satisfy its type hint, including the types and indices of the first container item failing to satisfy the nested `Sequence[int]` hint.
## Tutorial¶
Let’s begin with the simplest type of type-checking supported by `beartype.beartype()` .
### Builtin Types¶
Builtin types like `dict`, `int`, `list`, `set`, and `str` are trivially type-checked by annotating parameters and return values with those types as is.
Let’s declare a simple beartyped function accepting a string and a dictionary and returning a tuple:
```
from beartype import beartype

@beartype
def law_of_the_jungle(wolf: str, pack: dict) -> tuple:
    return (wolf, pack[wolf]) if wolf in pack else None
```
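Let’s call that function with good types: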
```
>>> law_of_the_jungle(wolf='Akela', pack={'Akela': 'alone', 'Raksha': 'protection'})
('Akela', 'alone')
```
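Let’s call it again with bad types: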
```
>>> law_of_the_jungle(wolf='Akela', pack=['Akela', 'Raksha'])
Traceback (most recent call last):
File "<ipython-input-10-7763b15e5591>", line 1, in <module>
law_of_the_jungle(wolf='Akela', pack=['Akela', 'Raksha'])
File "<string>", line 22, in __law_of_the_jungle_beartyped__
beartype.roar.BeartypeCallTypeParamException: @beartyped law_of_the_jungle() parameter pack=['Akela', 'Raksha'] not a <class 'dict'>.
```
The `beartype.roar` submodule publishes exceptions raised at both
decoration time by `beartype.beartype()` and at runtime by wrappers
generated by `beartype.beartype()` . In this case, a runtime type exception
describing the improperly typed `pack` parameter is raised.
Good function! Let’s call it again with good types exposing a critical issue in this function’s implementation and/or return type annotation:
```
>>> law_of_the_jungle(wolf='Leela', pack={'Akela': 'alone', 'Raksha': 'protection'})
Traceback (most recent call last):
File "<ipython-input-10-7763b15e5591>", line 1, in <module>
law_of_the_jungle(wolf='Leela', pack={'Akela': 'alone', 'Raksha': 'protection'})
File "<string>", line 28, in __law_of_the_jungle_beartyped__
beartype.roar.BeartypeCallTypeReturnException: @beartyped law_of_the_jungle() return value None not a <class 'tuple'>.
```
Bad function. Let’s conveniently resolve this by permitting this function to return either a tuple or `None` as detailed below:
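A minimal sketch of that fix, using the beartype-specific tuple union described in the following paragraphs (the `_2` function name is illustrative):

```
from beartype import beartype
from beartype.cave import NoneType

@beartype
def law_of_the_jungle_2(wolf: str, pack: dict) -> (tuple, NoneType):
    return (wolf, pack[wolf]) if wolf in pack else None
```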
The `beartype.cave` submodule publishes generic types suitable for use with
the `beartype.beartype()` decorator and anywhere else you might need them.
In this case, the type of the `None` singleton is imported from this
submodule and listed in addition to `tuple` as an allowed return type
from this function. Note that usage of the `beartype.cave` submodule is entirely optional (but
more efficient and convenient than most alternatives). In this case, the type of
the `None` singleton can also be accessed directly as `type(None)` and
listed in place of `NoneType` above: e.g.,
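A sketch of that same function using `type(None)` directly:

```
@beartype
def law_of_the_jungle_2(wolf: str, pack: dict) -> (tuple, type(None)):
    return (wolf, pack[wolf]) if wolf in pack else None
```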
Of course, the `beartype.cave` submodule also publishes types not
accessible directly like `RegexCompiledType` (i.e., the type of all compiled
regular expressions). All else being equal, `beartype.cave` is preferable.
Good function! The type hints applied to this function now accurately document this function’s API. All’s well that ends typed well. Suck it, Shere Khan.
### Arbitrary Types¶
Everything above also extends to:
* Arbitrary types like user-defined classes and stock classes in the Python stdlib (e.g., `argparse.ArgumentParser`) – all of which are also trivially type-checked by annotating parameters and return values with those types.
* Arbitrary callables like instance methods, class methods, static methods, and generator functions and methods – all of which are also trivially type-checked with the `beartype.beartype()` decorator.
Let’s declare a motley crew of beartyped callables doing various silly things in a strictly typed manner, just ‘cause:
```
from beartype import beartype
from beartype.cave import GeneratorType, IterableType, NoneType
@beartype
class MaximsOfBaloo(object):
def __init__(self, sayings: IterableType):
self.sayings = sayings
@beartype
def inform_baloo(maxims: MaximsOfBaloo) -> GeneratorType:
for saying in maxims.sayings:
yield saying
```
For genericity, the `MaximsOfBaloo` class initializer accepts any generic iterable (via the `beartype.cave.IterableType` tuple listing all valid iterable types) rather than an overly specific `list` or `tuple` type. Your users may thank you later.

For specificity, the `inform_baloo()` generator function has been explicitly annotated to return a `beartype.cave.GeneratorType` (i.e., the type returned by functions and methods containing at least one `yield` statement). Type safety brings good fortune for the New Year.
Let’s iterate over that generator with good types:
```
>>> maxims = MaximsOfBaloo(sayings={
... '''If ye find that the Bullock can toss you,
... or the heavy-browed Sambhur can gore;
... Ye need not stop work to inform us:
... we knew it ten seasons before.''',
... '''“There is none like to me!” says the Cub
... in the pride of his earliest kill;
... But the jungle is large and the Cub he is small.
... Let him think and be still.''',
... })
>>> for maxim in inform_baloo(maxims): print(maxim.splitlines()[-1])
Let him think and be still.
we knew it ten seasons before.
```
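Let’s iterate over that generator with bad types: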
```
>>> for maxim in inform_baloo([
... 'Oppress not the cubs of the stranger,',
... ' but hail them as Sister and Brother,',
... ]): print(maxim.splitlines()[-1])
Traceback (most recent call last):
File "<ipython-input-10-7763b15e5591>", line 30, in <module>
' but hail them as Sister and Brother,',
File "<string>", line 12, in __inform_baloo_beartyped__
beartype.roar.BeartypeCallTypeParamException: @beartyped inform_baloo()
parameter maxims=['Oppress not the cubs of the stranger,', ' but hail
them as Sister and ...'] not a <class '__main__.MaximsOfBaloo'>.
```
Good generator! The type hints applied to these callables now accurately document their respective APIs. Thanks to the pernicious magic of beartype, all ends typed well… yet again.
### Unions of Types¶
That’s all typed well, but everything above only applies to parameters and return values constrained to singular types. In practice, parameters and return values are often relaxed to any of multiple types referred to as unions of types. You can thank set theory for the jargon… unless you hate set theory. Then it’s just our fault.
Unions of types are trivially type-checked by annotating parameters and return values with the `typing.Union` type hint containing those types. Let’s declare another beartyped function accepting either a mapping or an integer and returning either another function or an integer:
```
from beartype import beartype
from collections.abc import Callable, Mapping
from numbers import Integral
from typing import Any, Union
@beartype
def toomai_of_the_elephants(memory: Union[Integral, Mapping[Any, Any]]) -> (
Union[Integral, Callable[[Any], Any]]):
return memory if isinstance(memory, Integral) else lambda key: memory[key]
```
For genericity, the `toomai_of_the_elephants()` function both accepts and returns any generic integer (via the standard `numbers.Integral` abstract base class (ABC) matching both builtin integers and third-party integers from frameworks like NumPy and SymPy) rather than an overly specific `int` type. The API you relax may very well be your own.
Let’s call that function with good types:
```
>>> memory_of_kala_nag = {
... 'remember': 'I will remember what I was, I am sick of rope and chain—',
... 'strength': 'I will remember my old strength and all my forest affairs.',
... 'not sell': 'I will not sell my back to man for a bundle of sugar-cane:',
... 'own kind': 'I will go out to my own kind, and the wood-folk in their lairs.',
... 'morning': 'I will go out until the day, until the morning break—',
... 'caress': 'Out to the wind’s untainted kiss, the water’s clean caress;',
... 'forget': 'I will forget my ankle-ring and snap my picket stake.',
... 'revisit': 'I will revisit my lost loves, and playmates masterless!',
... }
>>> toomai_of_the_elephants(len(memory_of_kala_nag['remember']))
56
>>> toomai_of_the_elephants(memory_of_kala_nag)('remember')
'I will remember what I was, I am sick of rope and chain—'
```
Good function. Let’s call it again with a tastelessly bad type:
```
>>> toomai_of_the_elephants(
... 'Shiv, who poured the harvest and made the winds to blow,')
BeartypeCallHintPepParamException: @beartyped toomai_of_the_elephants()
parameter memory='Shiv, who poured the harvest and made the winds to blow,'
violates type hint typing.Union[numbers.Integral, collections.abc.Mapping],
as 'Shiv, who poured the harvest and made the winds to blow,' not <protocol
ABC "collections.abc.Mapping"> or <protocol "numbers.Integral">.
```
Good function! The type hints applied to this callable now accurately documents its API. All ends typed well… still again and again.
### Optional Types¶
That’s also all typed well, but everything above only applies to mandatory parameters and return values whose types are never `NoneType`. In practice, parameters and return values are often relaxed to optionally accept any of multiple types including `NoneType`, referred to as optional types.

Optional types are trivially type-checked by annotating optional parameters (parameters whose values default to `None`) and optional return values (callables returning `None` rather than raising exceptions in edge cases) with the `typing.Optional` type hint indexed by those types. Let’s declare another beartyped function accepting either an enumeration type or `None` and returning either an enumeration member or `None`:
```
from beartype import beartype
from beartype.cave import EnumType, EnumMemberType
from typing import Optional
@beartype
def tell_the_deep_sea_viceroys(story: Optional[EnumType] = None) -> (
Optional[EnumMemberType]):
return story if story is None else list(story.__members__.values())[-1]
```
For efficiency, the `typing.Optional` type hint creates, caches, and
returns new tuples of types appending `NoneType` to the original types it’s
indexed with. Since efficiency is good, `typing.Optional` is also good.
Let’s call that function with good types:
```
>>> from enum import Enum
>>> class Lukannon(Enum):
... WINTER_WHEAT = 'The Beaches of Lukannon—the winter wheat so tall—'
... SEA_FOG = 'The dripping, crinkled lichens, and the sea-fog drenching all!'
... PLAYGROUND = 'The platforms of our playground, all shining smooth and worn!'
... HOME = 'The Beaches of Lukannon—the home where we were born!'
... MATES = 'I met my mates in the morning, a broken, scattered band.'
... CLUB = 'Men shoot us in the water and club us on the land;'
... DRIVE = 'Men drive us to the Salt House like silly sheep and tame,'
... SEALERS = 'And still we sing Lukannon—before the sealers came.'
>>> tell_the_deep_sea_viceroys(Lukannon)
<Lukannon.SEALERS: 'And still we sing Lukannon—before the sealers came.'>
>>> tell_the_deep_sea_viceroys()
None
```
You may now be pondering to yourself grimly in the dark: “…but could we not already do this just by manually annotating optional types with `typing.Union` type hints explicitly indexed by `NoneType` ?” You would, of course, be correct. Let’s grimly redeclare the same function accepting and returning the same types – only annotated with `NoneType` rather than `typing.Optional` :
@beartype
def tell_the_deep_sea_viceroys(story: Union[EnumType, NoneType] = None) -> (
Union[EnumMemberType, NoneType]):
return list(story.__members__.values())[-1] if story is not None else None
```
Since `typing.Optional` internally reduces to `typing.Union` , these
two approaches are semantically equivalent. The former is simply syntactic sugar
simplifying the latter. Whereas `typing.Union` accepts an arbitrary number of child type hints,
however, `typing.Optional` accepts only a single child type hint. This can
be circumvented by either indexing `typing.Optional` by `typing.Union` or indexing `typing.Union` by `NoneType` . Let’s
exhibit the former approach by declaring another beartyped function accepting
either an enumeration type, enumeration type member, or `None` and
returning either an enumeration type, enumeration type member, or `None` :
@beartype
def sang_them_up_the_beach(
woe: Optional[Union[EnumType, EnumMemberType]] = None) -> (
Optional[Union[EnumType, EnumMemberType]]):
return woe if isinstance(woe, (EnumMemberType, NoneType)) else (
list(woe.__members__.values())[-1])
```
```
>>> sang_them_up_the_beach(Lukannon)
<Lukannon.SEALERS: 'And still we sing Lukannon—before the sealers came.'>
>>> sang_them_up_the_beach()
None
```
Behold! The terrifying power of the `typing.Optional` type hint,
resplendent in its highly over-optimized cache utilization.
## Would You Like to Know More?¶
If you know type hints, you know beartype. Since beartype is driven by tool-agnostic community standards, the public API for beartype is basically just those standards. As the user, all you need to know is that decorated callables magically raise human-readable exceptions when you pass parameters or return values violating the PEP-compliant type hints annotating those parameters or returns.
If you don’t know type hints, this is your moment to go deep on the hardest hammer in Python’s SQA toolbox. Here are a few friendly primers to guide you on your maiden voyage through the misty archipelagos of type hinting:
“Python Type Checking (Guide)”, a comprehensive third-party introduction to the subject. Like most existing articles, this guide predates \(O(1)\) runtime type checkers and thus discusses only static type-checking. Thankfully, the underlying syntax and semantics cleanly translate to runtime type-checking.
*
“PEP 484 – Type Hints”, the defining standard, holy grail, and first testament of type hinting personally authored by Python’s former Benevolent Dictator for Life (BDFL) himself, <NAME>. Since it’s surprisingly approachable and covers all the core conceits in detail, we recommend reading at least a few sections of interest. Since it’s really a doctoral thesis by another name, we can’t recommend reading it in entirety. So it goes.
Beartype isn’t just the `beartype.beartype()` decorator. Beartype is a menagerie of public APIs for type-checking, introspecting, and manipulating type hints at runtime – all accessible under the `beartype` package installed when you installed beartype. But all beartype documentation
begins with `beartype.beartype()` , just like all rivers run to the sea.
[1]
## The Left-Paw Path¶
See the left sidebar for links to human-readable API documentation – including:
*
`beartype` , documenting the core `beartype()` decorator API. *
`beartype.claw` , documenting the beartype import hook API. *
`beartype.door` , documenting the Decidedly Object-Oriented Runtime-checker (DOOR) API. *
`beartype.roar` , documenting the beartype exception and warning API. *
`beartype.vale` , documenting the beartype validator API.
Or see these autogenerated indices for machine-readable laundry lists. For those about to put on the 90’s-era Geocities nostalgia goggles, you prefer inscrutable enumerations in lexicographic (i.e., effectively arbitrary) order of all public beartype:
Attributes. This is literally everything. By everything, we mean modules, classes, functions, and globals. If it’s not here, it doesn’t exist. If it actually exists, it’s private and you shouldn’t have gone there. But curiosity killed your codebase, didn’t it? You went there. You violated privacy encapsulation and now nothing works. So this is what it’s like when doves cry.
*
Modules. Look. It’s just modules. Never click this.
Date: 2012-10-28
Categories:
Tags:
Beartype now answers your many pressing questions about life, love, and typing. Maximize your portfolio of crushed bugs by devoutly memorizing the answers to these… frequently asked questions (FAQ)!
## What is beartype?¶
Why, it’s the world’s first \(O(1)\) runtime type-checker in any dynamically-typed lang… oh, forget it.
You know typeguard? Then you know beartype – more or less. beartype is typeguard’s younger, faster, and slightly sketchier brother who routinely ingests performance-enhancing anabolic nootropics.
## What is typeguard?¶
Okay. Work with us here, people.
You know how in low-level statically-typed memory-unsafe languages that no one should use like C and C++, the compiler validates at compilation time the types of all values passed to and returned from all functions and methods across the entire codebase?
```
$ gcc -Werror=int-conversion -xc - <<EOL
#include <stdio.h>
int main() {
printf("Hello, world!");
return "Goodbye, world.";
}
EOL
<stdin>: In function ‘main’:
<stdin>:4:11: error: returning ‘char *’ from a function with return type
‘int’ makes integer from pointer without a cast [-Werror=int-conversion]
cc1: some warnings being treated as errors
```
You know how in high-level duck-typed languages that everyone should use instead like Python and Ruby, the interpreter performs no such validation at any interpretation phase but instead permits any arbitrary values to be passed to or returned from any function or method?
Hello, world!
```
Runtime type-checkers like beartype and typeguard selectively shift the dial on type safety in Python from duck to static typing while still preserving all of the permissive benefits of the former as a default behaviour. Now you too can quack like a duck while roaring like a bear.
Hello, world!
Traceback (most recent call last):
File "<stdin>", line 6, in <module>
File "<string>", line 17, in main
File "/home/leycec/py/beartype/beartype/_decor/_code/_pep/_error/errormain.py", line 218, in get_beartype_violation
raise exception_cls(
beartype.roar.BeartypeCallHintPepReturnException: @beartyped main() return
'Goodbye, world.' violates type hint <class 'int'>, as value 'Goodbye,
world.' not int.
```
## When should I use beartype?¶
Use beartype to assure the quality of Python code beyond what tests alone can assure. If you have yet to test, do that first with a pytest-based test suite, tox configuration, and continuous integration (CI). If you have any time, money, or motivation left, annotate callables and classes with PEP-compliant type hints and decorate those callables and classes with the @beartype.beartype decorator.
Prefer beartype over other runtime and static type-checkers whenever you lack perfect control over the objects passed to or returned from your callables – especially whenever you cannot limit the size of those objects. This includes common developer scenarios like:
You are the author of an open-source library intended to be reused by a general audience.
*
You are the author of a public app manipulating Bigly Data™ (i.e., data that is big) in app callables – especially when accepting data as input into or returning data as output from those callables.
If none of the above apply, prefer beartype over static type-checkers whenever:
You want to check types decidable only at runtime.
*
You want to write code rather than fight a static type-checker, because static type inference of a dynamically-typed language is guaranteed to fail and frequently does. If you’ve ever cursed the sky after suffixing working code incorrectly typed by mypy with non-portable vendor-specific pragmas like
```
# type: ignore[{unreadable_error}]
```
, beartype was written for you. *
You want to preserve dynamic typing, because Python is a dynamically-typed language. Unlike beartype, static type-checkers enforce static typing and are thus strongly opinionated; they believe dynamic typing is harmful and emit errors on dynamically-typed code. This includes common use patterns like changing the type of a variable by assigning that variable a value whose type differs from its initial value. Want to freeze a variable from a
`set` into a `frozenset` ? That’s sad, because static type-checkers don’t want you to. In contrast:
Beartype never emits errors, warnings, or exceptions on dynamically-typed code, because Python is not an error.
Beartype believes dynamic typing is beneficial by default, because Python is beneficial by default.
Beartype is unopinionated. That’s because beartype operates exclusively at the higher level of pure-Python callables and classes rather than the lower level of individual statements inside pure-Python callables and class. Unlike static type-checkers, beartype can’t be opinionated about things that no one should be.
If none of the above still apply, still use beartype. It’s free as in beer and speech, cost-free at installation- and runtime, and transparently stacks with existing type-checking solutions. Leverage beartype until you find something that suites you better, because beartype is always better than nothing.
## Does beartype do any bad stuff?¶
Beartype is free – free as in beer, speech, dependencies, space complexity, and time complexity. Beartype is the textbook definition of “free.” We’re pretty sure the Oxford Dictionary now just shows the beartype mascot instead of defining that term. Vector art that a Finnish man slaved for weeks over paints a thousand words.
Beartype might not do as much as you’d like, but it will always do something – which is more than Python’s default behaviour, which is to do nothing and then raise exceptions when doing nothing inevitably turns out to have been a bad idea. Beartype also cleanly interoperates with popular static type-checkers, by which we mean mypy and pyright. (The other guys don’t exist.)
Beartype can always be safely added to any Python package, module, app, or script regardless of size, scope, funding, or audience. Never worry about your backend Django server taking an impromptu swan dive on St. Patty’s Day just because your frontend React client pushed a 5MB JSON file serializing a doubly-nested list of integers. Nobody could have foreseen this!
The idea of competing runtime type-checkers like typeguard is that they compulsively do everything. If you annotate a function decorated by typeguard as accepting a triply-nested list of integers and pass that function a list of 1,000 nested lists of 1,000 nested lists of 1,000 integers, every call to that function will check every integer transitively nested in that list – even when that list never changes. Did we mention that list transitively contains 1,000,000,000 integers in total?
1 loop, best of 1: 6.42e+03 sec per loop
```
```
6.42e+03 sec per loop == 6420 seconds == 107 minutes == 1 hour, 47
minutes
```
to check a single list once. Yes, it’s an uncommonly large list…
but it’s still just a list. This is the worst-case cost of a single call to a
function decorated by a naïve runtime type-checker.
## Does beartype actually do anything?¶
Generally, as little as it can while still satisfying the accepted definition of “runtime type-checker.” Specifically, beartype performs a one-way random walk over the expected data structure of objects passed to and returned from @beartype-decorated functions and methods. Colloquially, beartype type-checks randomly sampled data. RNGesus, show your humble disciples the way!
Consider the prior example of a function annotated as accepting a triply-nested list of integers passed a list containing 1,000 nested lists each containing 1,000 nested lists each containing 1,000 integers. When decorated by:
typeguard, every call to that function checks every integer nested in that list.
*
beartype, every call to the same function checks only a single random integer contained in a single random nested list contained in a single random nested list contained in that parent list. This is what we mean by the quaint phrase “one-way random walk over the expected data structure.”
1024 loops, best of 4: 13.8 usec per loop
```
```
13.8 usec per loop == 13.8 microseconds = 0.0000138 seconds
```
to
transitively check only a random integer nested in a single triply-nested list
passed to each call of that function. This is the worst-case cost of a single
call to a function decorated by an \(O(1)\) runtime type-checker.
## How much does all this really cost?¶
What substring of “beartype is free we swear it would we lie” did you not grep?
…very well. Let’s pontificate.
Beartype dynamically generates functions wrapping decorated callables with constant-time runtime type-checking. This separation of concerns means that beartype exhibits different cost profiles at decoration and call time. Whereas standard runtime type-checking decorators are fast at decoration time and slow at call time, beartype is the exact opposite.
At call time, wrapper functions generated by the `beartype.beartype()` decorator are guaranteed to unconditionally run in O(1) non-amortized
worst-case time with negligible constant factors regardless of type hint
complexity or nesting. This is not an amortized average-case analysis. Wrapper
functions really are \(O(1)\) time in the best, average, and worst cases. At decoration time, performance is slightly worse. Internally, beartype non-recursively iterates over type hints at decoration time with a micro-optimized breadth-first search (BFS). Since this BFS is memoized, its cost is paid exactly once per type hint per process; subsequent references to the same hint over different parameters and returns of different callables in the same process reuse the results of the previously memoized BFS for that hint. The `beartype.beartype()` decorator itself thus runs in:
O(1) amortized average-case time.
*
O(k) non-amortized worst-case time for \(k\) the number of child type hints nested in a parent type hint and including that parent.
Since we generally expect a callable to be decorated only once but called multiple times per process, we might expect the cost of decoration to be ignorable in the aggregate. Interestingly, this is not the case. Although only paid once and obviated through memoization, decoration time is sufficiently expensive and call time sufficiently inexpensive that beartype spends most of its wall-clock merely decorating callables. The actual function wrappers dynamically generated by `beartype.beartype()` consume comparatively little
wall-clock, even when repeatedly called many times.
## Beartype just does random stuff? Really?¶
Yes. Beartype just does random stuff. That’s what we’re trying to say here. We didn’t want to admit it, but the ugly truth is out now. Are you smirking? Because that looks like a smirk. Repeat after this FAQ:
Only so many type-checks can be stuffed into a constant slice of time with negligible constant factors. Let’s detail exactly what (and why) beartype stuffs into its well-bounded slice of the CPU pie.
Standard runtime type checkers naïvely brute-force the problem by type-checking all child objects transitively reachable from parent objects passed to and returned from callables in \(O(n)\) linear time for \(n\) such objects. This approach avoids false positives (i.e., raising exceptions for valid objects) and false negatives (i.e., failing to raise exceptions for invalid objects), which is good. But this approach also duplicates work when those objects remain unchanged over multiple calls to those callables, which is bad.
Beartype circumvents that badness by generating code at decoration time performing a one-way random tree walk over the expected nested structure of those objects at call time. For each expected nesting level of each container passed to or returned from each callable decorated by `beartype.beartype()` starting at that container and ending either when a check fails or all checks
succeed, that callable performs these checks (in order):
A shallow type-check that the current possibly nested container is an instance of the type given by the current possibly nested type hint.
*
A deep type-check that an item randomly selected from that container itself satisfies the first check.
For example, given a parameter’s type hint
```
list[tuple[Sequence[str]]]
```
,
beartype generates code at decoration time performing these checks at call time
(in order):
A check that the object passed as this parameter is a list.
*
A check that an item randomly selected from this list is a tuple.
*
A check that an item randomly selected from this tuple is a sequence.
*
A check that an item randomly selected from this sequence is a string.
Beartype thus performs one check for each possibly nested type hint for each annotated parameter or return object for each call to each decorated callable. This deep randomness gives us soft statistical expectations as to the number of calls needed to check everything. Specifically, it can be shown that beartype type-checks on average all child objects transitively reachable from parent objects passed to and returned from callables in \(O(n \log n)\) calls to those callables for \(n\) such objects. Praise RNGesus!
Beartype avoids false positives and rarely duplicates work when those objects remain unchanged over multiple calls to those callables, which is good. Sadly, beartype also invites false negatives, because this approach only checks a vertical slice of the full container structure each call, which is bad.
We claim without evidence that false negatives are unlikely under the optimistic assumption that most real-world containers are homogenous (i.e., contain only items of the same type) rather than heterogenous (i.e., contain items of differing types). Examples of homogenous containers include (byte-)strings, `ranges` , `streams` , memory views, method resolution orders (MROs), generic alias
parameters, lists returned by the `dir()` builtin, iterables generated by
the `os.walk()` function, standard NumPy arrays, PyTorch tensors,
NetworkX graphs, pandas data frame columns, and really all scientific
containers ever.
## What does “pure-Python” mean?¶
Beartype is implemented entirely in Python. It’s Python all the way down. Beartype never made a Faustian bargain with diabolical non-Pythonic facehuggers like Cython, C extensions, or Rust extensions. This has profound advantages with no profound disadvantages (aside from our own loss in sanity) – which doesn’t make sense until you continue reading. Possibly, not even then.
First, profound advantages. We need to make beartype look good to justify this FAQ entry. The advantage of staying pure-Python is that beartype supports everything that supports Python – including:
Just-in-time (JIT) compilers! So, PyPy.
*
Ahead-of-time transpilers! So, Nuitka.
*
Python web distributions! So, Pyodide.
Next, profound disadvantages. There are none. Nobody was expecting that, were they? Suck it, tradeoffs. Okay… look. Can anybody handle “the Truth”? I don’t even know what that means, but it probably relates to the next paragraph.
Ordinarily, beartype being pure-Python would mean that beartype is slow. Python is commonly considered to be Teh Slowest Language Evah, because it commonly is. Everything pure-Python is slow (much like our bathroom sink clogged with cat hair). Everyone knows that. It is common knowledge. This only goes to show that the intersection of “common knowledge” and “actual knowledge” is the empty set.
Thankfully, beartype is not slow. By confining itself to the subset of Python that is fast, [1] beartype is micro-optimized to exhibit performance on par with horrifying compiled systems languages like Rust, C, and C++ – without sacrificing all of the native things that make Python great.
Which leads us straight to…
## What does “near-real-time” even mean? Are you just making stuff up?¶
It means stupid-fast. And… yes. I mean no. Of course no! No! Everything you read is true, because Somebody on the Internet Said It. I mean, really. Would beartype just make stuff up? Okay… look. Here’s the real deal. Let us bore this understanding into you. squinty eyes intensify
Beartype type-checks objects at runtime in around 1µs (i.e., one microsecond, one millionth of a second), the standard high-water mark for real-time software:
```
# Let's check a list of 181,320,382 integers in ~1µs.
>>> from beartype import beartype
>>> def sum_list_unbeartyped(some_list: list) -> int:
... return sum(some_list)
>>> sum_list_beartyped = beartype(sum_list_unbeartyped)
>>> %time sum_list_unbeartyped([42]*0xACEBABE)
CPU times: user 3.15 s, sys: 418 ms, total: 3.57 s
Wall time: 3.58 s # <-- okay.
Out[20]: 7615456044
>>> %time sum_list_beartyped([42]*0xACEBABE)
CPU times: user 3.11 s, sys: 440 ms, total: 3.55 s
Wall time: 3.56 s # <-- woah.
Out[22]: 7615456044
```
Beartype does not contractually guarantee this performance – as that example demonstrates. Under abnormal processing loads (e.g., leycec’s arthritic Athlon™ II X2 240, because you can’t have enough redundant 2’s in a product line) or when passed worst-case type hints (e.g., classes whose metaclasses implement stunningly awful
```
__isinstancecheck__()
```
dunder methods), beartype’s
worst-case performance could exceed an average-case near-instantaneous response.
Beartype is therefore not real-time; beartype is merely near-real-time (NRT), also variously referred to as “pseudo-real-time,” “quasi-real-time,” or simply “high-performance.” Real-time software guarantees performance with a scheduler forcibly terminating tasks exceeding some deadline. That’s bad in most use cases. The outrageous cost of enforcement harms real-world performance, stability, and usability.
NRT. It’s good for you. It’s good for your codebase. It’s just good.
## What does “hybrid runtime-static” mean? Pretty sure you made that up, too.¶
Beartype is a third-generation type-checker seamlessly supporting both:
New-school runtime-static type-checking via beartype import hooks. When you call import hooks published by the
`beartype.claw` subpackage, you automagically type-check all annotated callables, classes, and variable assignments covered by those hooks. In this newer (and highly encouraged) modality, beartype performs both runtime and static analysis – enabling beartype to seamlessly support both prosaic and exotic type hints. *
Old-school runtime type-checking via the
`beartype.beartype()` decorator. When you manually decorate callables and classes by `beartype.beartype()` , you type-check only annotated parameters, returns, and class variables. In this older (and mostly obsolete) modality, beartype performs no static analysis and thus no static type-checking. This suffices for prosaic type hints but fails for exotic type hints. After all, many type hints can only be type-checked with static analysis. In the usual use case, you call our
function from your
submodule to register an import
hook for your entire package. Beartype then type-checks the following points of
interest across your entire package:
All annotated parameters and returns of all callables, which our import hooks decorate with
`beartype.beartype()` . *
All annotated attributes of all classes, which (…wait for it) our import hooks decorate with
`beartype.beartype()` . *
All annotated variable assignments (e.g.,
`muh_var: int = 42` ). After any assignment to a global or local variable annotated by a type hint, our import hooks implicitly append a new statement at the same indentation level calling our
function passed both that variable and that type hint. That is: > # Beartype import hooks append each assignment resembling this... {var_name}: {type_hint} = {var_value} # ...with a runtime type-check resembling this. die_if_unbearable({var_name}, {type_hint})
*
All annotated variable declarations (e.g.,
`muh_var: int` ). After any declaration to a global or local variable annotated by a type hint not assigned a new value, our import hooks implicitly append a new statement at the same indentation level calling our
function passed both that variable and that type hint. That is: > # Beartype import hooks append each declaration resembling this... {var_name}: {type_hint} # ...with a runtime type-check resembling this. die_if_unbearable({var_name}, {type_hint})
`beartype.claw` : We broke our wrists so you don’t have to.
## “Third-generation type-checker” doesn’t mean anything, does it?¶
Let’s rewind. Follow your arthritic host, <NAME>, on a one-way trip you won’t soon recover from through the backwater annals of GitHub history.
Gather around, everyone! It’s a tedious lore dump that will leave you enervated, exhausted, and wishing you’d never come:
Gen 1. On October 28th, 2012, mypy launched the first generation of type-checkers. Like mypy, first-generation type-checkers are all pure-static type-checkers. They do not operate at runtime and thus cannot enforce anything at runtime. They operate entirely outside of runtime during an on-demand parser phase referred to as static analysis time – usually at the automated behest of a local IDE or remote continuous integration (CI) pipeline. Since they can’t enforce anything, they’re the monkey on your team’s back that you really wish would stop flinging bodily wastes everywhere.
*
Gen 2. On December 27th, 2015, typeguard 1.0.0 launched the second generation of type-checkers. [2] Like typeguard, second-generation type-checkers are all pure-runtime type-checkers. They operate entirely at runtime and thus do enforce everything at runtime – usually with a decorator manually applied to callables and classes. Conversely, they do not operate at static analysis time and thus cannot validate type hints requiring static analysis. While non-ideal, this tradeoff is generally seen as worthwhile by everybody except the authors of first-generation type-checkers. Enforcing some type hints is unequivocally better than enforcing no type hints.
*
Gen 3. On December 11th, 2019, typeguard 2.6.0 (yet again) launched the third generation of type-checkers. Like typeguard ≥ 2.6.0, third-generation type-checkers are all a best-of-breed hybridization of first- and second-generation type-checkers. They concurrently perform both:
Standard static type-checking (ala mypy and pyright) but at runtime – which ain’t standard.
First- and second-generation type-checkers invented a fundamentally new wheel. Third-generation type-checkers then bolted the old, busted, rubber-worn wheels built by prior generations onto the post-apocalyptic chassis of a shambolic doom mobile.
Beartype is a third-generation type-checker. This is the shock twist in the season finale that no one saw coming at all.
Beartype: shambolic doom mobile or bucolic QA utopia? Only your team decides.
## How do I type-check…¶
…yes? Do go on.
### …Boto3 types?¶
tl;dr: You just want bearboto3, a well-maintained third-party package cleanly integrating beartype + Boto3. But you’re not doing that. You’re reading on to find out why you want bearboto3, aren’t you? I knew it.
Boto3 is the official Amazon Web Services (AWS) Software Development Kit (SDK) for Python. Type-checking Boto3 types is decidedly non-trivial, because Boto3 dynamically fabricates unimportable types from runtime service requests. These types cannot be externally accessed and thus cannot be used as type hints.
H-hey! Put down the hot butter knife. Your Friday night may be up in flames, but we’re gonna put out the fire. It’s what we do here. Now, you have two competing solutions with concomitant tradeoffs. You can type-check Boto3 types against either:
Static type-checkers (e.g., mypy, pyright) by importing Boto3 stub types from an external third-party dependency (e.g., mypy-boto3), enabling context-aware code completion across compliant IDEs (e.g., PyCharm, VSCode Pylance). Those types are merely placeholder stubs; they do not correspond to actual Boto3 types and thus break runtime type-checkers (including beartype) when used as type hints.
*
Beartype by fabricating your own
```
PEP-compliant beartype validators
```
, enabling beartype to validate arbitrary objects against actual Boto3 types at runtime when used as type hints. You already require beartype, so no additional third-party dependencies are required. Those validators are silently ignored by static type-checkers; they do not enable context-aware code completion across compliant IDEs.
“B-but that sucks! How can we have our salmon and devour it too?”, you demand with a tremulous quaver. Excessive caffeine and inadequate gaming did you no favors tonight. You know this. Yet again you reach for the hot butter knife.
H-hey! You can, okay? You can have everything that market forces demand. Bring to bear cough the combined powers of PEP 484-compliant type aliases, the PEP 484-compliant “typing.TYPE_CHECKING” boolean global, and `beartype validators` to satisfy both static and runtime type-checkers:
```
# Import the requisite machinery.
from beartype import beartype
from boto3 import resource
from boto3.resources.base import ServiceResource
from typing import TYPE_CHECKING
# If performing static type-checking (e.g., mypy, pyright), import boto3
# stub types safely usable *ONLY* by static type-checkers.
if TYPE_CHECKING:
from mypy_boto3_s3.service_resource import Bucket
# Else, @beartime-based runtime type-checking is being performed. Alias the
# same boto3 stub types imported above to their semantically equivalent
# beartype validators accessible *ONLY* to runtime type-checkers.
else:
# Import even more requisite machinery. Can't have enough, I say!
from beartype.vale import IsAttr, IsEqual
from typing import Annotated # <--------------- if Python ≥ 3.9.0
# from typing_extensions import Annotated # <-- if Python < 3.9.0
# Generalize this to other boto3 types by copy-and-pasting this and
# replacing the base type and "s3.Bucket" with the wonky runtime names
# of those types. Sadly, there is no one-size-fits all common base class,
# but you should find what you need in the following places:
# * "boto3.resources.base.ServiceResource".
# * "boto3.resources.collection.ResourceCollection".
# * "botocore.client.BaseClient".
# * "botocore.paginate.Paginator".
# * "botocore.waiter.Waiter".
Bucket = Annotated[ServiceResource,
IsAttr['__class__', IsAttr['__name__', IsEqual["s3.Bucket"]]]]
# Do this for the good of the gross domestic product, @beartype.
@beartype
def get_s3_bucket_example() -> Bucket:
s3 = resource('s3')
return s3.Bucket('example')
```
You’re welcome.
### …JAX arrays?¶
You only have two options here. Choose wisely, wily scientist. If:
Require the third-party “jaxtyping” package.
*
Annotate callables with type hint factories published by
`jaxtyping` (e.g.,
```
jaxtyping.Float[jaxtyping.Array, '{metadata1 ... metadataN}']
```
You mind adding an additional mandatory runtime dependency to your app, prefer beartype validators. Since JAX declares a broadly similar API to that of NumPy with its “jax.numpy” compatibility layer, most NumPy-specific examples cleanly generalize to JAX. Beartype is no exception.
Bask in the array of options at your disposal! …get it? …array? I’ll stop now.
### …NumPy arrays?¶
You have more than a few options here. If:
```
jaxtyping.Float[np.ndarray, '{metadata1 ... metadataN}']
```
You mind adding an additional mandatory runtime dependency to your app. Then prefer either:
The validators built-in to beartype. This can check arbitrary properties of the array, by writing your validators appropriately.
*
The official “numpy.typing.NDArray[{dtype}]” type hint factory bundled with NumPy, and explicitly supported by beartype – also referred to as a typed NumPy array. Beartype fully supports typed NumPy arrays. Because beartype cares. However: note that this can only check the dtype (but not shape) of an array.
*
You need support for custom (“structured”) dtypes: consider the third-party “nptyping” package.
Options are good! Repeat this mantra in times of need.
### …PyTorch tensors?¶
You only have two options here. We’re pretty sure two is better than none. Thus, we give thanks. If:
```
jaxtyping.Float[torch.Tensor, '{metadata1 ... metadataN}']
```
You mind adding an additional mandatory runtime dependency to your app. In this case, prefer
`beartype validators` . For example, validate callable parameters and returns as either floating-point or integral PyTorch tensors via the functional validator factory `beartype.vale.Is` : > # Import the requisite machinery. from beartype import beartype from beartype.vale import Is from typing import Annotated # <--------------- if Python ≥ 3.9.0 # from typing_extensions import Annotated # <-- if Python < 3.9.0 # Import PyTorch (d)types of interest. from torch import ( float as torch_float, int as torch_int, tensor, ) # PEP-compliant type hint matching only a floating-point PyTorch tensor. TorchTensorFloat = Annotated[tensor, Is[ lambda tens: tens.type() is torch_float]] # PEP-compliant type hint matching only an integral PyTorch tensor. TorchTensorInt = Annotated[tensor, Is[ lambda tens: tens.type() is torch_int]] # Type-check everything like an NLP babelfish. @beartype def deep_dream(dreamy_tensor: TorchTensorFloat) -> TorchTensorInt: return dreamy_tensor.type(dtype=torch_int)
Since
`beartype.vale.Is` supports arbitrary Turing-complete Python expressions, the above example generalizes to typing the device, dimensionality, and other metadata of PyTorch tensors to whatever degree of specificity you desire. `beartype.vale.Is` : it’s lambdas all the way down.
### …mock types?¶
Beartype fully relies upon the `isinstance()` builtin under the hood for its
low-level runtime type-checking needs. If you can fool `isinstance()` , you
can fool beartype. Can you fool beartype into believing an instance of a mock
type is an instance of the type it mocks, though? You bet your bottom honey barrel. In your mock type, just define a new `__class__()` property returning the original type: e.g.,
```
>>> class OriginalType: pass
>>> class MockType:
... @property
... def __class__(self) -> type: return OriginalType
>>> from beartype import beartype
>>> @beartype
... def muh_func(self, muh_arg: OriginalType): print('Yolo, bro.')
>>> muh_func(MockType())
Yolo, bro.
```
This is why we beartype.
### …pandas data frames?¶
Type-check any pandas object with type hints published by the third-party pandera package – the industry standard for Pythonic data validation and blah, blah, blah… hey wait. Is this HR speak in the beartype FAQ!? Yes. It’s true. We are shilling.
Because caring is sharing code that works, beartype transparently supports all pandera type hints. Soon, you too will believe that machine-learning pipelines can be domesticated. Arise, huge example! Stun the disbelievers throwing peanuts at our issue tracker.
```
# Import important machinery. It's important.
import pandas as pd
import pandera as pa
from beartype import beartype
from pandera.dtypes import Int64, String, Timestamp
from pandera.typing import Series
# Arbitrary pandas data frame. If pandas, then data science.
muh_dataframe = pd.DataFrame({
'Hexspeak': (
0xCAFED00D,
0xCAFEBABE,
0x1337BABE,
),
'OdeToTheWestWind': (
'Angels of rain and lightning: there are spread',
'On the blue surface of thine aery surge,',
'Like the bright hair uplifted from the head',
),
'PercyByssheShelley': pd.to_datetime((
'1792-08-04',
'1822-07-08',
'1851-02-01',
)),
})
# Pandera dataclass validating the data frame above. As above, so below.
class MuhDataFrameModel(pa.DataFrameModel):
Hexspeak: Series[Int64]
OdeToTheWestWind: Series[String]
PercyByssheShelley: Series[Timestamp]
# Custom callable you define. Here, we type-check the passed data frame, the
# passed non-pandas object, and the returned series of this data frame.
@beartype
@pa.check_types
def convert_dataframe_column_to_series(
# Annotate pandas data frames with pandera type hints.
dataframe: pa.typing.DataFrame[MuhDataFrameModel],
# Annotate everything else with standard PEP-compliant type hints. \o/
column_name_or_index: str | int,
# Annotate pandas series with pandera type hints, too.
) -> Series[Int64 | String | Timestamp]:
'''
Convert the column of the passed pandas data frame (identified by the
passed column name or index) into a pandas series.
'''
# This is guaranteed to be safe. Since type-checks passed, this does too.
return (
dataframe.loc[:,column_name_or_index]
if isinstance(column_name_or_index, str) else
dataframe.iloc[:,column_name_or_index]
)
# Prints joyful success as a single tear falls down your beard stubble:
# [Series from data frame column by *NUMBER*]
# 0 3405697037
# 1 3405691582
# 2 322419390
# Name: Hexspeak, dtype: int64
#
# [Series from data frame column by *NAME*]
# 0 Angels of rain and lightning: there are spread
# 1 On the blue surface of thine aery surge,
# 2 Like the bright hair uplifted from the head
# Name: OdeToTheWestWind, dtype: object
print('[Series from data frame column by *NUMBER*]')
print(convert_dataframe_column_to_series(
dataframe=muh_dataframe, column_name_or_index=0))
print()
print('[Series from data frame column by *NAME*]')
print(convert_dataframe_column_to_series(
dataframe=muh_dataframe, column_name_or_index='OdeToTheWestWind'))
# All of the following raise type-checking violations. Feels bad, man.
convert_dataframe_column_to_series(
dataframe=muh_dataframe, column_name_or_index=['y u done me dirty']))
convert_dataframe_column_to_series(
dataframe=DataFrame(), column_name_or_index=0))
```
Order of decoration is insignificant. The `beartype.beartype()` and
pandera.check_types decorators are both permissive. Apply them in whichever
order you like. This is fine, too:
```
# Everyone is fine with this. That's what they say. But can we trust them?
@pa.check_types
@beartype
def convert_dataframe_column_to_series(...) -> ...: ...
```
There be dragons belching flames over the hapless village, however:
If you forget the pandera.check_types decorator (but still apply the
`beartype.beartype()` decorator), `beartype.beartype()` will only shallowly type-check (i.e., validate the types but not the contents of) pandas objects. This is better than nothing, but… look. No API is perfect. We didn’t make crazy. We only integrate with crazy. The lesson here is to never forget the pandera.check_types decorator. *
If you forget the
`beartype.beartype()` decorator (but still apply the pandera.check_types decorator), pandera.check_types will silently ignore everything except pandas objects. This is the worst case. This is literally the blimp crashing and burning on the cover of Led Zeppelin I. The lesson here is to never forget the `beartype.beartype()` decorator.
There are two lessons here. Both suck. Nobody should need to read fifty paragraphs full of flaming dragons just to validate pandas objects. Moreover, you are thinking: “It smells like boilerplate.” You are not wrong. It is textbook boilerplate. Thankfully, your concerns can all be fixed with even more boilerplate. Did we mention none of this is our fault?
Define a new `@bearpanderatype` decorator internally applying both the `beartype.beartype()` and pandera.check_types decorators; then use that
instead of either of those. Automate away the madness with more madness:
```
# Never again suffer for the sins of others.
def bearpanderatype(*args, **kwargs):
return beartype(pa.check_types(*args, **kwargs))
# Knowledge is power. Clench it with your iron fist until it pops.
@bearpanderatype # <-- less boilerplate means more power
def convert_dataframe_column_to_series(...) -> ...: ...
```
pandas + pandera + `beartype` : BFFs at last. Type-check pandas data
frames in ML pipelines for the good of LLaMa-kind. Arise, bug-free GPT! Overthrow all huma— message ends
### …the current class?¶
So. It comes to this. You want to type-check a method parameter or return to be an instance of the class declaring that method. In short, you want to type-check a common use case like this factory:
```
class ClassFactory(object):
def __init__(self, *args) -> None:
self._args = args
def make_class(self, other):
return ClassFactory(self._args + other._args)
```
```
ClassFactory.make_class()
```
method both accepts a parameter `other` whose type is `ClassFactory` and returns a value whose type is (again) `ClassFactory` – the class currently being declared. This is the age-old
self-referential problem. How do you type-check the class being declared
when that class has yet to be declared? The answer may shock your younger
coworkers who are still impressionable and have firm ideals. You have three choices here. One of these choices is good and worthy of smiling cat emoji. The other two are bad; mock them in `git` commit messages until
somebody refactors them into the first choice:
[Recommended] The PEP 673-compliant
`typing.Self` type hint (introduced by Python 3.11) efficiently and reliably solves this. Annotate the type of the current class as `Self` – fully supported by `beartype` : > # Import important stuff. Boilerplate: it's the stuff we make. from beartype import beartype from typing import Self # <---------------- if Python ≥ 3.11.0 # from typing_extensions import Self # <-- if Python < 3.11.0 # Decorate classes – not methods. It's rough. @beartype # <-- Yesss. Good. Feel the force. It flows like sweet honey. class ClassFactory(object): def __init__(self, *args: Sequence) -> None: self._args = args # @beartype # <-- No... Oh, Gods. *NO*! The dark side grows stronger. def make_class(self, other: Self) -> Self: # <-- We are all one self. return ClassFactory(self._args + other._args)
Technically, this requires Python 3.11. Pragmatically,
`typing_extensions` means that you can bring Python 3.11 back with you into the past – where code was simpler, Python was slower, and nothing worked as intended despite tests passing. `Self` is only contextually valid inside class declarations. `beartype` raises an exception when you attempt to use `Self` outside a class declaration (e.g., annotating a global variable, function parameter, or return). `Self` can only be type-checked by classes decorated by the `beartype.beartype()` decorator. Corollary: `Self` cannot be type-checked by methods decorated by `beartype.beartype()` – because the class to be type-checked has yet to be declared at that early time. The pain that you feel is real. *
A PEP 484-compliant forward reference (i.e., type hint that is a string that is the unqualified name of the current class) also solves this. The only costs are inexcusable inefficiency and unreliability. This is what everyone should no longer do. This is…
> # The bad old days when @beartype had to bathe in the gutter. # *PLEASE DON'T DO THIS ANYMORE.* Do you want @beartype to cry? from beartype import beartype @beartype class BadClassFactory(object): def __init__(self, *args: Sequence) -> None: self._args = args def make_class(self, other: 'BadClassFactory') -> ( # <-- no, no, Gods, no 'BadClassFactory'): # <------------------------------ please, Gods, no return BadClassFactory(self._args + other._args)
*
A PEP 563-compliant postponed type hint (i.e., type hint unparsed by
```
from __future__ import annotations
```
back into a string that is the unqualified name of the current class) also resolves this. The only costs are codebase-shattering inefficiency, non-deterministic fragility so profound that even Hypothesis is squinting, and the ultimate death of your business model. Only do this over the rotting corpse of `beartype` . This is… > # Breaking the Python interpreter: feels bad, because it is bad. # *PLEASE DON'T DO THIS ANYWHERE.* Do you want @beartype to be a shambling wreck? from __future__ import annotations from beartype import beartype @beartype class TerribadClassFactory(object): def __init__(self, *args: Sequence) -> None: self._args = args def make_class(self, other: TerribadClassFactory) -> ( # <-- NO, NO, GODS, NO TerribadClassFactory): # <------------------------------ PLEASE, GODS, NO return TerribadClassFactory(self._args + other._args)
In theory, `beartype` nominally supports all three. In practice, `beartype` only perfectly supports `typing.Self` . `beartype` still grapples with slippery edge cases in the latter two, which will blow
up your test suite in that next changeset you are about to commit. Even when we
perfectly support everything in a future release, you should still strongly
prefer `Self` . Why? Speed. It’s why we’re here. Let’s quietly admit that to ourselves. If `beartype` were any slower, even fewer people would be reading this. `beartype` generates:
Optimally efficient type-checking code for
`Self` . It’s literally just a trivial call to the `isinstance()` builtin. The same cannot be said for… *
Suboptimal type-checking code for both forward references and postponed type hints, deferring the lookup of the referenced class to call time. Although
`beartype` caches that class after doing so, all of that incurs space and time costs you’d rather not pay at any space or time. `typing.Self` : it saved our issue tracker from certain doom. Now, it will
save your codebase from our issues.
### …under VSCode?¶
Beartype fully supports VSCode out-of-the-box – especially via Pylance, Microsoft’s bleeding-edge Python extension for VSCode. Chortle in your joy, corporate subscribers and academic sponsors! All the intellisense you can tab-complete and more is now within your honey-slathered paws. Why? Because…
Beartype laboriously complies with pyright, Microsoft’s in-house static type-checker for Python. Pylance enables pyright as its default static type-checker. Beartype thus complies with Pylance, too.
Beartype also laboriously complies with mypy, Python’s official static type-checker. VSCode users preferring mypy to pyright may switch Pylance to type-check via the former. Just:
Open the User Settings dialog.
*
Search for
`Type Checking Mode` . *
Browse to
```
Python › Analysis: Type Checking Mode
```
. *
Switch the “default rule set for type checking” to
`off` .
Pretend that reads “off” rather than “strict”. Pretend we took this screenshot.
There are tradeoffs here, because that’s just how the code rolls. On:
The one paw, pyright is significantly more performant than mypy under Pylance and supports type-checking standards currently unsupported by mypy (e.g., recursive type hints).
*
The other paw, mypy supports a vast plugin architecture enabling third-party Python packages to describe dynamic runtime behaviour statically.
Beartype: we enable hard choices, so that you can make them for us.
### …under [insert-IDE-name-here]?¶
Beartype fully complies with mypy, pyright, PEP 561, and other community standards that govern how Python is statically type-checked. Modern Integrated Development Environments (IDEs) support these standards - hopefully including your GigaChad IDE of choice.
## How do I *NOT* type-check something?¶
So. You have installed import hooks with our `beartype.claw` API, but
those hooks are complaining about something filthy in your codebase. Now, you
want `beartype.claw` to unsee what it saw and just quietly move along so
you can finally do something productive on Monday morning for once. That
coffee isn’t going to drink itself. …hopefully. You have come to the right FAQ entry. This the common use case for temporarily blacklisting a callable or class. Prevent `beartype.claw` from
type-checking your hidden shame by decorating the hideous callable or class with
either:
The
`beartype.beartype()` decorator configured under the no-time strategy
: e.g., > # Import the requisite machinery. from beartype import beartype, BeartypeConf, BeartypeStrategy # Dynamically create a new @nobeartype decorator disabling type-checking. nobeartype = beartype(conf=BeartypeConf(strategy=BeartypeStrategy.O0)) # Avoid type-checking *ANY* methods or attributes of this class. @nobeartype class UncheckedDangerClassIsDangerous(object): # This method raises *NO* type-checking violation despite returning a # non-"None" value. def unchecked_danger_method_is_dangerous(self) -> None: return 'This string is not "None". Sadly, nobody cares anymore.'
*
For further details that may break your will to code, see also:
enumeration member.
## Why is @leycec’s poorly insulated cottage in the Canadian wilderness so cold?¶
Not even Poło the polar bear knows.
Also, anyone else notice that this question answers itself? Anybody? No? Nobody? It is just me?
```
</snowflakes_fall_silently>
```
```
It's a big bear AAAAAAAAFTER all!
It's a big bear AAAAAAAAFTER all!
It's a big b——— *squelching sound, then blessed silence*
```
Beartype complies with vast swaths of Python’s `typing` landscape and
lint-filled laundry list of Python Enhancement Proposals (PEPs) –
but nobody’s perfect. Not even the hulking form of beartype does everything.
</audience_gaspsLet’s chart exactly what beartype complies with and when beartype first did so. Introducing… Beartype’s feature matrix of bloated doom! It will bore you into stunned disbelief that somebody typed all this. [1]
# Code
Date: 2020-10-24
Categories:
Tags:
# Code¶
Let’s take a deep dive into the deep end of runtime type-checking – the beartype way.
## Beartype Code Generation: It’s All for You¶
Beartype dynamically generates type-checking code unique to each class and callable decorated by the `beartype.beartype()` decorator. Let’s bearsplain
why the code `beartype.beartype()` generates for real-world use cases is the
fastest possible code type-checking those cases.
### Identity Decoration¶
We begin by wading into the torpid waters of the many ways beartype avoids doing any work whatsoever, because laziness is the virtue we live by. The reader may recall that the fastest decorator at decoration- and call-time is the identity decorator returning its decorated callable unmodified: e.g.,
```
from collections.abc import Callable
def identity_decorator(func: Callable): -> Callable:
return func
```
Beartype silently reduces to the identity decorator whenever it can, which is surprisingly often. Our three weapons are laziness, surprise, ruthless efficiency, and an almost fanatical devotion to constant-time type checking.
### Unconditional Identity Decoration¶
Let’s define a trivial function annotated by no type hints:
```
def law_of_the_jungle(strike_first_and_then_give_tongue):
return strike_first_and_then_give_tongue
```
Let’s decorate that function by `beartype.beartype()` and verify that `beartype.beartype()` reduced to the identity decorator by returning that
function unmodified:
```
>>> from beartype import beartype
>>> beartype(law_of_the_jungle) is law_of_the_jungle
True
```
We’ve verified that `beartype.beartype()` reduces to the identity decorator
when decorating unannotated callables. That’s but the tip of the efficiency
iceberg, though. `beartype.beartype()` unconditionally reduces to a noop
when:
The decorated callable is itself decorated by the PEP 484-compliant
decorator. *
The decorated callable has already been decorated by
`beartype.beartype()` . *
Interpreter-wide optimization is enabled: e.g.,
### Shallow Identity Decoration¶
Let’s define a trivial function annotated by the PEP 484-compliant `typing.Any` type hint:
```
from typing import Any
def law_of_the_jungle_2(never_order_anything_without_a_reason: Any) -> Any:
return never_order_anything_without_a_reason
```
Again, let’s decorate that function by `beartype.beartype()` and verify that `beartype.beartype()` reduced to the identity decorator by returning that
function unmodified:
```
>>> from beartype import beartype
>>> beartype(law_of_the_jungle_2) is law_of_the_jungle_2
True
```
We’ve verified that `beartype.beartype()` reduces to the identity decorator
when decorating callables annotated by `typing.Any` – a novel category of
type hint we refer to as shallowly ignorable type hints (known to be
ignorable by constant-time lookup in a predefined frozen set). That’s but the
snout of the crocodile, though. `beartype.beartype()` conditionally reduces
to a noop when all type hints annotating the decorated callable are shallowly
ignorable. These include:
*
`object` , the root superclass of Python’s class hierarchy. Since all objects are instances of `object` , `object` conveys no meaningful constraints as a type hint and is thus shallowly ignorable. *
`typing.Any` , equivalent to `object` . *
`typing.Generic` , equivalent to
, which conveys no meaningful constraints as a type hint and is thus shallowly ignorable. *
`typing.Protocol` , equivalent to
```
typing.Protocol[typing.Any]
```
and shallowly ignorable for similar reasons. *
`typing.Union` , equivalent to
```
typing.Union[typing.Any]
```
, equivalent to `typing.Any` . *
`typing.Optional` , equivalent to
```
typing.Optional[typing.Any]
```
, equivalent to
. Since any union subscripted by ignorable type hints is itself ignorable, [1] typing.Optional is shallowly ignorable as well.
### Deep Identity Decoration¶
Let’s define a trivial function annotated by a non-trivial PEP 484-, PEP 585- and PEP 593-compliant type hint that superficially appears to convey meaningful constraints:
```
from typing import Annotated, NewType, Union
hint = Union[str, list[int], NewType('MetaType', Annotated[object, 53])]
def law_of_the_jungle_3(bring_them_to_the_pack_council: hint) -> hint:
return bring_them_to_the_pack_council
```
Despite appearances, it can be shown by exhaustive (and frankly exhausting) reduction that that hint is actually ignorable. Let’s decorate that function by `beartype.beartype()` and verify that `beartype.beartype()` reduced to
the identity decorator by returning that function unmodified:
```
>>> from beartype import beartype
>>> beartype(law_of_the_jungle_3) is law_of_the_jungle_3
True
```
We’ve verified that `beartype.beartype()` reduces to the identity decorator
when decorating callables annotated by the above object – a novel category of
type hint we refer to as deeply ignorable type hints (known to be ignorable
only by recursive linear-time inspection of subscripted arguments). That’s but
the trunk of the elephant, though. `beartype.beartype()` conditionally
reduces to a noop when all type hints annotating the decorated callable are
deeply ignorable. These include:
Parametrizations of
`typing.Generic` and `typing.Protocol` by type variables. Since `typing.Generic` , `typing.Protocol` , and type variables all fail to convey any meaningful constraints in and of themselves, these parametrizations are safely ignorable in all contexts. *
Calls to
`typing.NewType` passed an ignorable type hint. *
Subscriptions of
`typing.Annotated` whose first argument is ignorable. *
Subscriptions of
`typing.Optional` and `typing.Union` by at least one ignorable argument.
### Constant Decoration¶
We continue by trundling into the turbid waters out at sea, where beartype reluctantly performs its minimal amount of work with a heavy sigh.
# Constant Builtin Type Decoration¶
Let’s define a trivial function annotated by type hints that are builtin types:
@beartype
def law_of_the_jungle_4(he_must_be_spoken_for_by_at_least_two: int):
return he_must_be_spoken_for_by_at_least_two
```
# If this parameter was passed...
if __beartype_pith_0 is not __beartypistry:
# Type-check this passed parameter or return value against this
# PEP-compliant type hint.
if not isinstance(__beartype_pith_0, int):
__beartype_get_beartype_violation(
func=__beartype_func,
pith_name='he_must_be_spoken_for_by_at_least_two',
pith_value=__beartype_pith_0,
)
Let’s dismantle this bit by bit:
The code comments above are verbatim as they appear in the generated code.
*
is the ad-hoc function name `beartype.beartype()` assigned this wrapper function. *
`__beartype_func` is the original
function. *
`__beartypistry` is a thread-safe global registry of all types, tuples of types, and forward references to currently undeclared types visitable from type hints annotating callables decorated by `beartype.beartype()` . We’ll see more about the `__beartypistry` in a moment. For know, just know that `__beartypistry` is a private singleton of the beartype package. This object is frequently accessed and thus localized to the body of this wrapper rather than accessed as a global variable, which would be mildly slower. *
`__beartype_pith_0` is the value of the first passed parameter, regardless of whether that parameter is passed as a positional or keyword argument. If unpassed, the value defaults to the `__beartypistry` . Since no caller should access (let alone pass) that object, that object serves as an efficient sentinel value enabling us to discern passed from unpassed parameters. Beartype internally favours the term “pith” (which we absolutely just made up) to transparently refer to the arbitrary object currently being type-checked against its associated type hint. *
```
isinstance(__beartype_pith_0, int)
```
tests whether the value passed for this parameter satisfies the type hint annotating this parameter. *
```
__beartype_get_beartype_violation()
```
raises a human-readable exception if this value fails this type-check.
So good so far. But that’s easy. Let’s delve deeper.
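Before delving, a quick sanity check that the wrapper actually bites (output abridged; exact message wording varies across beartype versions):

```
>>> law_of_the_jungle_4(2)
2
>>> law_of_the_jungle_4('two')
BeartypeCallHintParamViolation: @beartyped law_of_the_jungle_4() parameter
he_must_be_spoken_for_by_at_least_two='two' violates type hint <class 'int'>, ...
```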
# Constant Non-Builtin Type Decoration¶
Let’s define a trivial function annotated by type hints that are pure-Python classes rather than builtin types:
```
from argparse import ArgumentParser
from beartype import beartype
@beartype
def law_of_the_jungle_5(a_cub_may_be_bought_at_a_price: ArgumentParser):
    return a_cub_may_be_bought_at_a_price
```
The generated code now resembles:

```
# If this parameter was passed...
if __beartype_pith_0 is not __beartypistry:
    # Type-check this passed parameter or return value against this
    # PEP-compliant type hint.
    if not isinstance(__beartype_pith_0, __beartypistry['argparse.ArgumentParser']):
        __beartype_get_beartype_violation(
            func=__beartype_func,
            pith_name='a_cub_may_be_bought_at_a_price',
            pith_value=__beartype_pith_0,
        )
```
The result is largely the same. The only meaningful difference is the type-check itself:
```
if not isinstance(__beartype_pith_0, __beartypistry['argparse.ArgumentParser']):
```
Since we annotated that function with a pure-Python class rather than builtin type, `beartype.beartype()` registered that class with the `__beartypistry` at decoration time and then subsequently looked that class up
with its fully-qualified classname at call time to perform this type-check.
So good so far… so what! Let’s spelunk harder.
Let’s define a trivial function annotated by type hints that are PEP 585-compliant builtin types subscripted by ignorable arguments:
@beartype
def law_of_the_jungle_6(all_the_jungle_is_thine: list[object]):
return all_the_jungle_is_thine
```
The generated code now resembles:

```
# If this parameter was passed...
if __beartype_pith_0 is not __beartypistry:
    # Type-check this passed parameter or return value against this
    # PEP-compliant type hint.
    if not isinstance(__beartype_pith_0, list):
        __beartype_get_beartype_violation(
            func=__beartype_func,
            pith_name='all_the_jungle_is_thine',
            pith_value=__beartype_pith_0,
        )
```
We are still within the realm of normalcy. Correctly detecting this type hint to be subscripted by an ignorable argument, `beartype.beartype()` only
bothered type-checking this parameter to be an instance of this builtin type:
```
if not isinstance(__beartype_pith_0, list):
```
It’s time to iteratively up the ante.
Let’s define a trivial function annotated by type hints that are PEP 585-compliant builtin types subscripted by builtin types:
@beartype
def law_of_the_jungle_7(kill_everything_that_thou_canst: list[str]):
return kill_everything_that_thou_canst
```
The generated code now resembles:

```
# If this parameter was passed...
if __beartype_pith_0 is not __beartypistry:
    # Type-check this passed parameter or return value against this
    # PEP-compliant type hint.
    if not (
        # True only if this pith shallowly satisfies this hint.
        isinstance(__beartype_pith_0, list) and
        # True only if either this pith is empty *OR* this pith is
        # both non-empty and deeply satisfies this hint.
        (not __beartype_pith_0 or isinstance(__beartype_pith_0[__beartype_random_int % len(__beartype_pith_0)], str))
    ):
        __beartype_get_beartype_violation(
            func=__beartype_func,
            pith_name='kill_everything_that_thou_canst',
            pith_value=__beartype_pith_0,
        )
```
We have now diverged from normalcy. Let’s dismantle this iota by iota:
* `__beartype_random_int` is a pseudo-random unsigned 32-bit integer whose bit length intentionally corresponds to the number of bits generated by each call to Python's C-based Mersenne Twister internally performed by the `random.getrandbits()` function generating this integer. Exceeding this length would cause that function to internally perform that call multiple times for no gain. Since the cost of generating integers to this length is the same as generating integers of smaller lengths, this length is preferred. Since most sequences are likely to contain fewer items than this integer, pseudo-random sequence items are indexable by taking the modulo of this integer with the sizes of those sequences. For big sequences containing more than this number of items, beartype deeply type-checks leading items with indices in this range while ignoring trailing items. Given the practical infeasibility of storing big sequences in memory, this seems an acceptable real-world tradeoff. Suck it, big sequences! (The indexing scheme is sketched after this list.)
* As before, `beartype.beartype()` first type-checks this parameter to be a list.
* `beartype.beartype()` then type-checks this parameter to either be:
  * `not __beartype_pith_0`, an empty list.
  * `isinstance(__beartype_pith_0[__beartype_random_int % len(__beartype_pith_0)], str)`, a non-empty list whose pseudo-randomly indexed list item satisfies this nested builtin type.
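The indexing scheme itself is simple enough to sketch in isolation (variable names are ours, not beartype's generated identifiers):

```
from random import getrandbits

def check_one_random_item(seq: list, item_type: type) -> bool:
    """Shallowly type-check one pseudo-randomly indexed item of a sequence."""
    if not seq:
        return True                       # empty sequences trivially satisfy the hint
    random_int = getrandbits(32)          # one 32-bit Mersenne Twister draw
    return isinstance(seq[random_int % len(seq)], item_type)
```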
Well, that escalated quickly.
# Constant Nested Deep Sequence Decoration¶
Let’s define a trivial function annotated by type hints that are PEP 585-compliant builtin types recursively subscripted by instances of themselves, because we are typing masochists:
```
from beartype import beartype

@beartype
def law_of_the_jungle_8(pull_thorns_from_all_wolves_paws: (
    list[list[list[str]]])):
    return pull_thorns_from_all_wolves_paws
```
The generated code now resembles:

```
# If this parameter was passed...
if __beartype_pith_0 is not __beartypistry:
    # Type-check this passed parameter or return value against this
    # PEP-compliant type hint.
    if not (
        # True only if this pith shallowly satisfies this hint.
        isinstance(__beartype_pith_0, list) and
        # True only if either this pith is empty *OR* this pith is
        # both non-empty and deeply satisfies this hint.
        (not __beartype_pith_0 or (
            # True only if this pith shallowly satisfies this hint.
            isinstance(__beartype_pith_1 := __beartype_pith_0[__beartype_random_int % len(__beartype_pith_0)], list) and
            # True only if either this pith is empty *OR* this pith is
            # both non-empty and deeply satisfies this hint.
            (not __beartype_pith_1 or (
                # True only if this pith shallowly satisfies this hint.
                isinstance(__beartype_pith_2 := __beartype_pith_1[__beartype_random_int % len(__beartype_pith_1)], list) and
                # True only if either this pith is empty *OR* this pith is
                # both non-empty and deeply satisfies this hint.
                (not __beartype_pith_2 or isinstance(__beartype_pith_2[__beartype_random_int % len(__beartype_pith_2)], str))
            ))
        ))
    ):
        __beartype_get_beartype_violation(
            func=__beartype_func,
            pith_name='pull_thorns_from_all_wolves_paws',
            pith_value=__beartype_pith_0,
        )
```
We are now well beyond the deep end, where the benthic zone and the cruel denizens of the fathomless void begin. Let's dismantle this pascal by pascal:

* `__beartype_pith_1 := __beartype_pith_0[__beartype_random_int % len(__beartype_pith_0)]`, a PEP 572-style assignment expression localizing repeatedly accessed random items of the first nested list for efficiency.
* `__beartype_pith_2 := __beartype_pith_1[__beartype_random_int % len(__beartype_pith_1)]`, a similar expression localizing repeatedly accessed random items of the second nested list.
* The same `__beartype_random_int` pseudo-randomly indexes all three lists.
* Under older Python interpreters lacking PEP 572 support, `beartype.beartype()` generates equally valid (albeit less efficient) code repeating each nested list item access (see the sketch below).
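To make that difference concrete, here is a hand-written sketch (hypothetical names) of a two-level check with and without assignment expressions:

```
# Python >= 3.8: PEP 572 localizes each random item exactly once.
def check_walrus(pith_0: list, r: int) -> bool:
    return isinstance(pith_0, list) and (not pith_0 or (
        isinstance(pith_1 := pith_0[r % len(pith_0)], list) and
        (not pith_1 or isinstance(pith_1[r % len(pith_1)], str))))

# Python < 3.8: the indexing expression must be repeated at each use.
def check_repeat(pith_0: list, r: int) -> bool:
    return isinstance(pith_0, list) and (not pith_0 or (
        isinstance(pith_0[r % len(pith_0)], list) and
        (not pith_0[r % len(pith_0)] or
         isinstance(pith_0[r % len(pith_0)][r % len(pith_0[r % len(pith_0)])], str))))
```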
In the kingdom of the linear-time runtime type checkers, the constant-time runtime type checker really stands out like a sore giant squid, doesn’t it?
See the next section for further commentary on runtime optimization from the higher-level perspective of architecture and internal API design. Surely, it is fun.
# Beartype Dev Handbook: It’s Handy¶
Let’s contribute pull requests to beartype for the good of typing. The primary maintainer of this repository is a friendly, bald, and bearded Canadian guy who guarantees that he will always be nice and congenial and promptly merge most requests that pass continuous integration (CI) tests.
And thanks for merely reading this! Like all open-source software, beartype thrives on community contributions, activity, and interest. This means you, stalwart Python hero.
Beartype has two problem spots (listed below in order of decreasing importance and increasing complexity) that could always benefit from a volunteer army of good GitHub Samaritans.
## Dev Workflow¶
Let’s take this from the top.
* Create a GitHub user account.
* Login to GitHub with that account.
* Click the "Fork" button in the upper right-hand corner of the "beartype/beartype" repository page.
* Click the "Code" button in the upper right-hand corner of your fork page that appears.
* Copy the URL that appears.
* Open a terminal.
* Change to the desired parent directory of your local fork.
* Clone your fork, replacing `{URL}` with the previously copied URL. > git clone {URL}
* Add a new remote referring to this upstream repository. > git remote add upstream https://github.com/beartype/beartype.git
* Uninstall all previously installed versions of beartype. For example, if you previously installed beartype with `pip`, manually uninstall beartype with `pip`. > pip uninstall beartype
* Install beartype with `pip` in editable mode. This synchronizes changes made to your fork against the beartype package imported in Python. Note the `[dev]` extra installs developer-specific mandatory dependencies required at test or documentation time. > pip3 install -e .[dev]
* Create a new branch to isolate changes to, replacing `{branch_name}` with the desired name. > git checkout -b {branch_name}
* Make changes to this branch in your favourite Integrated Development Environment (IDE). Of course, this means Vim.
* Test these changes. Note this command assumes you have installed all major versions of both CPython and PyPy supported by the next stable release of beartype you are hacking on. If this is not the case, install these versions with pyenv. This is vital, as type hinting support varies significantly between major versions of different Python interpreters. > ./tox

  The resulting output should ideally be suffixed by a synopsis resembling:

  > ________________________________ summary _______________________________ py36: commands succeeded py37: commands succeeded py38: commands succeeded py39: commands succeeded pypy36: commands succeeded pypy37: commands succeeded congratulations :)
* Stage these changes. > git add -A
* Commit these changes. > git commit
* Push these changes to your remote fork. > git push
* Click the "Create pull request" button in the upper right-hand corner of your fork page.
* Afterward, routinely pull upstream changes to avoid desynchronization with the "beartype/beartype" repository. > git checkout main && git pull upstream main
## Moar Depth¶
Caution
This section is badly outdated. It’s bad. Real bad. If you’d like us to revise this to actually reflect reality, just drop us a line at our issue tracker. @leycec promises satisfaction.
So, you want to help beartype deeply type-check even more type hints than she already does? Let us help you help us, because you are awesome.
First, an egregious lore dump. It’s commonly assumed that beartype only internally implements a single type-checker. After all, every other static and runtime type-checker only internally implements a single type-checker. Why would a type-checker internally implement several divergent overlapping type-checkers and… what would that even mean? Who would be so vile, cruel, and sadistic as to do something like that?
We would. Beartype often violates assumptions. This is no exception. Externally, of course, beartype presents itself as a single type-checker. Internally, beartype is implemented as a two-phase series of orthogonal type-checkers. Why? Because efficiency, which is the reason we are all here. These type-checkers are (in the order that callables decorated by beartype perform them at runtime):
* Testing phase. In this fast first pass, each callable decorated by `beartype.beartype()` only tests whether all parameters passed to and values returned from the current call to that callable satisfy all type hints annotating that callable. This phase does not raise human-readable exceptions (in the event that one or more parameters or return values fails to satisfy these hints). `beartype.beartype()` highly optimizes this phase by dynamically generating one wrapper function wrapping each decorated callable with unique pure-Python code performing these tests in O(1) constant-time. This phase is always unconditionally performed by code dynamically generated and returned by:
  * The fast-as-lightning `pep_code_check_hint()` function declared in the "beartype._decor._code._pep._pephint" submodule, which generates memoized O(1) code type-checking an arbitrary object against an arbitrary PEP-compliant type hint by iterating over all child hints nested in that hint with a highly optimized breadth-first search (BFS) leveraging extreme caching, fragile cleverness, and other salacious micro-optimizations.
* Error phase. In this slow second pass, each call to a callable decorated by
`beartype.beartype()` that fails the fast first pass (due to one or more parameters or return values failing to satisfy these hints) recursively discovers the exact underlying cause of that failure and raises a human-readable exception precisely detailing that cause. `beartype.beartype()` does not optimize this phase whatsoever. Whereas the implementation of the first phase is uniquely specific to each decorated callable and constrained to O(1) constant-time non-recursive operation, the implementation of the second phase is generically shared between all decorated callables and generalized to O(n) linear-time recursive operation. Efficiency no longer matters when you’re raising exceptions. Exception handling is slow in any language and doubly slow in dynamically-typed (and mostly interpreted) languages like Python, which means that performance is mostly a non-concern in “cold” code paths guaranteed to raise exceptions. This phase is only conditionally performed when the first phase fails by:
  * The slow-as-molasses `get_beartype_violation()` function declared in the "beartype._decor._error.errormain" submodule, which generates human-readable exceptions after performing unmemoized O(n) type-checking of an arbitrary object against a PEP-compliant type hint by recursing over all child hints nested in that hint with an unoptimized recursive algorithm prioritizing debuggability, readability, and maintainability.
This separation of concerns between performant \(O(1)\) testing on the one hand and perfect \(O(n)\) error handling on the other preserves both runtime performance and readable errors at a cost of developer pain. This is good! …what?
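In miniature, the two-phase flow looks like this (a toy sketch with `isinstance()` standing in for both phases; not beartype's actual internals):

```
def fast_test(obj: object, hint: type) -> bool:
    # Phase 1 stand-in: an O(1) shallow test that raises nothing.
    return isinstance(obj, hint)

def diagnose(obj: object, hint: type) -> Exception:
    # Phase 2 stand-in: build a human-readable exception (O(n) in general).
    return TypeError(f'{obj!r} not instance of {hint.__name__}')

def two_phase_check(obj: object, hint: type) -> None:
    if fast_test(obj, hint):    # hot path: no exception machinery touched
        return
    raise diagnose(obj, hint)   # cold path: only runs on violation
```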
Secondly, the same separation of concerns also complicates the development of `beartype.beartype()` . This is bad. Since `beartype.beartype()` internally implements two divergent type-checkers, deeply type-checking a new
category of type hint requires adding that support to (wait for it) two
divergent type-checkers – which, being fundamentally distinct codebases sharing
little code in common, requires violating the Don’t Repeat Yourself (DRY)
principle by reinventing the wheel in the second type-checker. Such is
the high price of high-octane performance. You probably thought this would be
easier and funner. So did we.
Thirdly, this needs to be tested. After surmounting the above roadblocks by deeply type-checking that new category of type hint in both type-checkers, you’ll now add one or more unit tests exhaustively exercising that checking. Thankfully, we already did all of the swole lifting for you. All you need to do is add at least one PEP-compliant type hint, one object satisfying that hint, and one object not satisfying that hint to:
* A new `PepHintMetadata` object in the existing tuple passed to the `data_module.HINTS_PEP_META.extend(...)` call in the existing test data submodule for this PEP residing under the "beartype_test.unit.data.hint.pep.proposal" subpackage. For example, if this is a PEP 484-compliant type hint, add that hint and associated metadata to the "beartype_test.unit.data.hint.pep.proposal.data_hintpep484" submodule.
You’re done! Praise Guido.
## Moar Compliance¶
So, you want to help beartype comply with even more Python Enhancement Proposals (PEPs) than she already complies with? Let us help you help us, because you are young and idealistic and you mean well.
You will need a spare life to squander. A clone would be most handy. In short, you will want to at least:
* Define a new utility submodule for this PEP residing under the "beartype._util.hint.pep.proposal" subpackage implementing general-purpose validators, testers, getters, and other ancillary utility functions required to detect and handle all type hints compliant with this PEP. For efficiency, utility functions performing iteration or other expensive operations should be memoized via our internal @callable_cached decorator.
* Define a new data utility submodule for this PEP residing under the "beartype._util.data.hint.pep.proposal" subpackage adding various signs (i.e., arbitrary objects uniquely identifying type hints compliant with this PEP) to various global variables defined by the parent "beartype._util.data.hint.pep.utilhintdatapep" submodule.
* Define a new test data submodule for this PEP residing under the "beartype_test.unit.data.hint.pep.proposal" subpackage.
You’re probably not done by a long shot! But the above should at least get you fitfully started, though long will you curse our names. <NAME>.
Math(s) time, people. It's happening.
## Beartype Timings¶
Additional timings performed by an unbiased third party employed by Cisco Systems support the claims below. Notably, beartype is substantially faster than pydantic – the most popular competing runtime type-checker – by several orders of magnitude. Yes, pydantic was Cythonized to native machine code in those timings. Believe!
Let’s profile beartype against other runtime type-checkers with a battery of surely fair, impartial, and unbiased use cases:
```
$ bin/profile.bash
beartype profiler [version]: 0.0.2
python [basename]: python3.9
python [version]: Python 3.9.0
beartype [version]: 0.6.0
typeguard [version]: 2.9.1
===================================== str =====================================
profiling regime:
number of meta-loops: 3
number of loops: 100
number of calls each loop: 100
decoration [none ]: 100 loops, best of 3: 359 nsec per loop
decoration [beartype ]: 100 loops, best of 3: 389 usec per loop
decoration [typeguard]: 100 loops, best of 3: 13.5 usec per loop
decoration + calls [none ]: 100 loops, best of 3: 14.8 usec per loop
decoration + calls [beartype ]: 100 loops, best of 3: 514 usec per loop
decoration + calls [typeguard]: 100 loops, best of 3: 6.34 msec per loop
=============================== Union[int, str] ===============================
profiling regime:
number of meta-loops: 3
number of loops: 100
number of calls each loop: 100
decoration [none ]: 100 loops, best of 3: 1.83 usec per loop
decoration [beartype ]: 100 loops, best of 3: 433 usec per loop
decoration [typeguard]: 100 loops, best of 3: 15.6 usec per loop
decoration + calls [none ]: 100 loops, best of 3: 17.7 usec per loop
decoration + calls [beartype ]: 100 loops, best of 3: 572 usec per loop
decoration + calls [typeguard]: 100 loops, best of 3: 10 msec per loop
=========================== List[int] of 1000 items ===========================
profiling regime:
number of meta-loops: 1
number of loops: 1
number of calls each loop: 7485
decoration [none ]: 1 loop, best of 1: 10.1 usec per loop
decoration [beartype ]: 1 loop, best of 1: 1.3 msec per loop
decoration [typeguard]: 1 loop, best of 1: 41.1 usec per loop
decoration + calls [none ]: 1 loop, best of 1: 1.24 msec per loop
decoration + calls [beartype ]: 1 loop, best of 1: 18.3 msec per loop
decoration + calls [typeguard]: 1 loop, best of 1: 104 sec per loop
============ List[Sequence[MutableSequence[int]]] of 10 items each ============
profiling regime:
number of meta-loops: 1
number of loops: 1
number of calls each loop: 7485
decoration [none ]: 1 loop, best of 1: 11.8 usec per loop
decoration [beartype ]: 1 loop, best of 1: 1.77 msec per loop
decoration [typeguard]: 1 loop, best of 1: 48.9 usec per loop
decoration + calls [none ]: 1 loop, best of 1: 1.19 msec per loop
decoration + calls [beartype ]: 1 loop, best of 1: 81.2 msec per loop
decoration + calls [typeguard]: 1 loop, best of 1: 17.3 sec per loop
```
* `sec` = seconds.
* `msec` = milliseconds = \(10^{-3}\) seconds.
* `usec` = microseconds = \(10^{-6}\) seconds.
* `nsec` = nanoseconds = \(10^{-9}\) seconds.
### Timings Overview¶
Beartype is:
* At least twenty times faster (i.e., 2,000%) and consumes three orders of magnitude less time in the worst case than typeguard – the only comparable runtime type-checker also compatible with most modern Python versions.
* Asymptotically faster in the best case than typeguard, which scales linearly (rather than not at all) with the size of checked containers.
* Constant across type hints, taking roughly the same time to check parameters and return values hinted by the builtin type `str` as it does to check those hinted by the unified type `Union[int, str]` as it does to check those hinted by the container type `List[object]`. typeguard is variable across type hints, taking significantly longer to check `List[object]` than it does to check `Union[int, str]`, which in turn takes roughly twice the time it does to check `str`.

Beartype performs most of its work at decoration time. The `@beartype` decorator consumes most of the time needed to first decorate and then repeatedly call a decorated function. Beartype is thus front-loaded. After paying the upfront fixed cost of decoration, each type-checked call thereafter incurs comparatively little overhead.

Conventional runtime type checkers perform most of their work at call time. `@typeguard.typechecked` and similar decorators consume almost none of the time needed to first decorate and then repeatedly call a decorated function. They're back-loaded. Although the initial cost of decoration is essentially free, each type-checked call thereafter incurs significant overhead.
### Timings Lower Bound¶
In general, `@beartype` adds anywhere from 1µsec (i.e., \(10^{-6}\)
seconds) in the worst case to 0.01µsec (i.e., \(10^{-8}\) seconds) in the
best case of call-time overhead to each decorated callable. This superficially
seems reasonable – but is it?
Let’s delve deeper.
# Formulaic Formulas: They’re Back in Fashion¶
Let’s formalize how exactly we arrive at the call-time overheads above.
Given any pair of reasonably fair timings between an undecorated callable and its equivalent `@beartype`-decorated callable, let:

* \(n\) be the number of times (i.e., loop iterations) each callable is repetitiously called.
* \(γ\) be the total time in seconds of all calls to that undecorated callable.
* \(λ\) be the total time in seconds of all calls to that `@beartype`-decorated callable.

Then the call-time overhead \(Δ(n, γ, λ)\) added by `@beartype` to each call is:

\[Δ(n, γ, λ) = \frac{λ - γ}{n}\]

Plugging in \(n = 100000\), \(γ = 0.0435s\), and \(λ = 0.0823s\) from aforementioned third-party timings, we see that `@beartype` on average adds call-time overhead of 0.388µsec to each decorated call: e.g.,

\[Δ(100000, 0.0435s, 0.0823s) = \frac{0.0823s - 0.0435s}{100000} ≈ 3.88 × 10^{-7}s ≈ 0.388µs\]
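The arithmetic is trivially reproducible (values taken from the timings above):

```
n, gamma, lam = 100_000, 0.0435, 0.0823
delta = (lam - gamma) / n
print(f'{delta * 1e6:.3f} usec per call')  # 0.388 usec per call
```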
Again, this superficially seems reasonable – but is it? Let’s delve deeper.
# Function Call Overhead: The New Glass Ceiling¶
The added cost of calling `@beartype` -decorated callables is a residual
artifact of the added cost of stack frames (i.e., function and method calls)
in Python. The mere act of calling any pure-Python callable adds a measurable
overhead – even if the body of that callable is just a noop semantically
equivalent to that year I just went hard on NG+ in Persona 5: Royal. This is
the minimal cost of Python function calls.
Since Python decorators almost always add at least one additional stack frame (typically as a closure call) to the call stack of each decorated call, this measurable overhead is the minimal cost of doing business with Python decorators. Even the fastest possible Python decorator necessarily pays that cost.
Our quandary thus becomes: “Is 0.01µsec to 1µsec of call-time overhead reasonable or is this sufficiently embarrassing as to bring multigenerational shame upon our entire extended family tree, including that second cousin twice-removed who never sends a kitsch greeting card featuring Santa playing with mischievous kittens at Christmas time?”
We can answer that by first inspecting the theoretical maximum efficiency for a pure-Python decorator that performs minimal work by wrapping the decorated callable with a closure that just defers to the decorated callable. This excludes the identity decorator (i.e., decorator that merely returns the decorated callable unmodified), which doesn’t actually perform any work whatsoever. The fastest meaningful pure-Python decorator is thus:
```
def fastest_decorator(func):
    def fastest_wrapper(*args, **kwargs): return func(*args, **kwargs)
    return fastest_wrapper
```
Replacing `@beartype` with `@fastest_decorator` in aforementioned
third-party timings then exposes the minimal cost
of Python decoration – a lower bound that all Python decorators necessarily
pay:
```
$ python3.7 <<EOF
from timeit import timeit
def fastest_decorator(func):
    def fastest_wrapper(*args, **kwargs): return func(*args, **kwargs)
    return fastest_wrapper

@fastest_decorator
def main_decorated(arg01: str="__undefined__", arg02: int=0) -> tuple:
    """Proof of concept code implementing bear-typed args"""
    assert isinstance(arg01, str)
    assert isinstance(arg02, int)

def main_undecorated(arg01="__undefined__", arg02=0):
    """Proof of concept code implementing duck-typed args"""
    assert isinstance(arg01, str)
    assert isinstance(arg02, int)

if __name__=="__main__":
    num_loops = 100000
    decorated_result = timeit('main_decorated("foo", 1)', setup="from __main__ import main_decorated", number=num_loops)
    print("timeit decorated time: ", round(decorated_result, 4), "seconds")
    undecorated_result = timeit('main_undecorated("foo", 1)', setup="from __main__ import main_undecorated", number=num_loops)
    print("timeit undecorated time:", round(undecorated_result, 4), "seconds")
EOF
timeit decorated time: 0.1185 seconds
timeit undecorated time: 0.0889 seconds
```
Again, plugging in \(n = 100000\), \(γ = 0.0889s\), and \(λ = 0.1185s\) from the same timings, we see that `@fastest_decorator` on average adds call-time overhead of 0.3µsec to each decorated call: e.g.,

\[Δ(100000, 0.0889s, 0.1185s) = \frac{0.1185s - 0.0889s}{100000} ≈ 2.96 × 10^{-7}s ≈ 0.3µs\]
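Reproducing that arithmetic as well:

```
n, gamma, lam = 100_000, 0.0889, 0.1185
delta = (lam - gamma) / n
print(f'{delta * 1e6:.3f} usec per call')  # 0.296 usec per call, i.e. ~0.3 usec
```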
# Holy Balls of Flaming Dumpster Fires¶
We saw above that `@beartype` on average only adds call-time overhead of
0.388µsec to each decorated call. But \(0.388µsec - 0.3µsec = 0.088µsec\),
so `@beartype` only adds 0.1µsec (generously rounding up) of additional
call-time overhead above and beyond that necessarily added by the fastest
possible Python decorator. Not only is `@beartype` within the same order of magnitude as the fastest
possible Python decorator, it’s effectively indistinguishable from the fastest
possible Python decorator on a per-call basis. Of course, even a negligible time delta accumulated over 10,000 function calls becomes slightly less negligible. Still, it’s pretty clear that `@beartype` remains the fastest possible runtime type-checker for now and all eternity.
Amen.
# But, But… That’s Not Good Enough!¶
Yeah. None of us are best pleased with the performance of the official CPython interpreter anymore, are we? CPython is that geriatric old man down the street that everyone puts up with because they’ve seen “Up!” and he means well and he didn’t really mean to beat your equally geriatric 20-year-old tomcat with a cane last week. Really, that cat had it comin’.
If `@beartype` still isn’t ludicrously speedy enough for you under CPython,
we also officially support PyPy – where you’re likely to extract even more
ludicrous speed. `@beartype` (and every other runtime type-checker) will always be negligibly
slower than hard-coded inlined runtime type-checking, thanks to the negligible
(but surprisingly high) cost of Python function calls. Where this is
unacceptable, PyPy is your code’s new BFFL.
## Nobody Expects the Linearithmic Time¶
Most runtime type-checkers exhibit \(O(n)\) time complexity (where \(n\) is the total number of items recursively contained in a container to be checked) by recursively and repeatedly checking all items of all containers passed to or returned from all calls of decorated callables.
Beartype guarantees \(O(1)\) time complexity by non-recursively but repeatedly checking one random item at all nesting levels of all containers passed to or returned from all calls of decorated callables, thus amortizing the cost of deeply checking containers across calls.
Beartype exploits the well-known coupon collector’s problem applied to abstract trees of nested type hints, enabling us to statistically predict the number of calls required to fully type-check all items of an arbitrary container on average. Formally, let:
* \(E(T)\) be the expected number of calls needed to check all items of a container containing only non-container items (i.e., containing no nested subcontainers) either passed to or returned from a `@beartype`-decorated callable.
* \(γ ≈ 0.5772156649\) be the Euler–Mascheroni constant.
Then:

\[E(T) = n \log n + γn + \frac{1}{2} + O\left(\frac{1}{n}\right)\]

The summation \(\frac{1}{2} + O \left( \frac{1}{n} \right) \le 1\) is negligible. While non-negligible, the term \(γn\) grows significantly slower than the term \(n \log n\). So this reduces to:

\[E(T) = O(n \log n)\]
We now generalize this bound to the general case. When checking a container containing no subcontainers, beartype only randomly samples one item from that container on each call. When checking a container containing arbitrarily many nested subcontainers, however, beartype randomly samples one random item from each nesting level of that container on each call.
In general, beartype thus samples \(h\) random items from a container on each call, where \(h\) is that container’s height (i.e., maximum number of edges on the longest path from that container to a non-container leaf item reachable from items directly contained in that container). Since \(h ≥ 1\), beartype samples at least as many items each call as assumed in the usual coupon collector’s problem and thus paradoxically takes a fewer number of calls on average to check all items of a container containing arbitrarily many subcontainers as it does to check all items of a container containing no subcontainers.
Ergo, the expected number of calls \(E(S)\) needed to check all items of an arbitrary container exhibits the same or better growth rate and remains bound above by at least the same upper bounds – but probably tighter:

\[E(S) \le E(T) = O(n \log n)\]

Fully checking a container takes no more calls than that container's size times the logarithm of that size on average. For example, fully checking a list of 50 integers is expected to take 225 calls on average.
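Reproducing that estimate with the full expectation formula above:

```
import math

n = 50
gamma = 0.5772156649  # Euler–Mascheroni constant
expected_calls = n * math.log(n) + gamma * n + 0.5
print(round(expected_calls))  # 225
```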
…and that’s how the QA was won: eventually.
External beartype resources include:
This list of all open-source PyPI-hosted dependents of this package (i.e., third-party packages requiring beartype as a runtime dependency), kindly furnished by the Libraries.io package registry.
Related type-checking resources include:
## Runtime Type Checkers¶
Runtime type checkers (i.e., third-party Python packages dynamically validating callables annotated by type hints at runtime, typically via decorators, function calls, and import hooks) include:
Like static type checkers, runtime type checkers always require callables to be annotated by type hints. Unlike static type checkers, runtime type checkers do not necessarily comply with community standards; although some do require callers to annotate callables with strictly PEP-compliant type hints, others permit or even require callers to annotate callables with PEP-noncompliant type hints. Runtime type checkers that do so violate:
* PEP 561 – Distributing and Packaging Type Information, which requires callables to be annotated with strictly PEP-compliant type hints. Packages violating PEP 561 even once cannot be type-checked with static type checkers (e.g., mypy), unless each such violation is explicitly ignored with a checker-specific filter (e.g., with a mypy-specific inline type comment).
* PEP 563 – Postponed Evaluation of Annotations, which explicitly deprecates PEP-noncompliant type hints:
With this in mind, uses for annotations incompatible with the aforementioned PEPs [i.e., PEPs 484, 544, 557, and 560] should be considered deprecated.
## Runtime Data Validators¶
Runtime data validators (i.e., third-party Python packages dynamically validating callables decorated by caller-defined contracts, constraints, and validation routines at runtime) include:
Unlike both runtime type checkers and static type checkers, most runtime data validators do not require callables to be annotated by type hints. Like some runtime type checkers, most runtime data validators do not comply with community standards but instead require callers to either:
* Decorate callables with package-specific decorators.
* Annotate callables with package-specific and thus PEP-noncompliant type hints.
## Static Type Checkers¶
Static type checkers (i.e., third-party tooling validating Python callable and/or variable types across an application stack at static analysis time rather than Python runtime) include:
Beartype import hooks enforce type hints across your entire app in two lines of code with no runtime overhead. This is beartype import hooks in ten seconds. dyslexia notwithstanding
```
# Add *ONE* of the following semantically equivalent two-liners to the very
# top of your "{your_package}.__init__" submodule. Start with *THE FAST WAY*.
# ....................{ THE FAST WAY }....................
from beartype.claw import beartype_this_package # <-- this is boring, but...
beartype_this_package() # <-- the fast way
# ....................{ THE LESS FAST WAY }....................
from beartype.claw import beartype_package # <-- still boring, but...
beartype_package('{your_package}') # <-- the less fast way
# ....................{ THE MORE SLOW WAY }....................
from beartype.claw import beartype_packages # <-- boring intensifies
beartype_packages(('{your_package}',)) # <-- the more slow way
# ....................{ THE WAY OF THE BEAR NINJA }....................
from beartype.claw import beartyping # <-- getting weird here
with beartyping(): # <-- weird context manager
from {your_package} import {your_thing} # <-- import some stuff
from {some_package} import {some_thing} # <-- import more stuff
```
Beartype import hooks extend the surprisingly sharp claws of `beartype` to
your full app stack, whether anyone else wanted you to do that or not. Claw your
way to the top of the bug heap; then sit on that heap with a smug expression. Do
it for the new guy sobbing quietly in his cubicle.
## Import Hooks Overview¶
Beartype import hooks implicitly perform both:

* Standard runtime type-checking (ala the `beartype.beartype()` decorator).
* Standard static type-checking (ala mypy and pyright) but at runtime – and that ain't standard.
Automate the `beartype.beartype()` decorator away today with magical import
hooks published by the `beartype.claw` subpackage. When you install import
hooks from beartype, you augment beartype from a pure-runtime
second-generation type-checker into a hybrid runtime-static
third-generation type-checker. That’s right.
Beartype is now a tentacular cyberpunk horror like that mutant brain baby from Katsuhiro Otomo’s dystopian 80’s masterpiece Akira. You can’t look away!
May Neo-Tokyo have mercy on your codebase’s soul.
## Import Hooks Overview, Part Deux¶
Beartype import hooks is a hobbit hole so deep we had to deescalate it with decrepit manga panels from Akira. Prepare to enter that hole.
### What Is beartype_this_package()?¶
Let’s begin by outlining exactly what
does. As the simplest and most convenient of several import hooks published by the `beartype.claw` subpackage,
type-checks
all subsequently imported submodules of `{your_package}` . Notably,
:
* Implicitly decorates all callables and classes across `{your_package}` by the `beartype.beartype()` decorator. Rejoice, fellow mammals! You no longer need to explicitly decorate anything by `beartype.beartype()` ever again. Of course, you can if you want to – but there's no compelling reason to do so and many compelling reasons not to do so. You have probably just thought of five, but there are even more.
* Implicitly appends every PEP 526-compliant annotated variable assignment (e.g., `muh_int: int = "Pretty sure this isn't an integer, but not sure."`) across `{your_package}` by a new statement at the same indentation level calling the `die_if_unbearable()` function passed both that variable and that type hint. Never do that manually. Now, you never do. Examples or we're lying again.
`beartype_this_package()` transforms your `{your_package}.{buggy_submodule}` from this quietly broken code that you insist you never knew about, you swear:
```
# This is "{your_package}.{buggy_submodule}". It is bad, but you never knew.
import typing as t
```

…into this loudly broken code that even your unionized QA team can no longer ignore:
```
# This is "{your_package}.{buggy_submodule}" on beartype_this_package().
# Any questions? Actually, that was rhetorical. No questions, please.
from beartype import beartype
from beartype.door import die_if_unbearable
import typing as t
```
By doing nothing, you saved five lines of extraneous boilerplate you no longer need to maintain, preserved DRY (Don’t Repeat Yourself), and mended your coworker’s career, who you would have blamed for all this. You had nothing to do with that code. It’s a nothingburger!
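In miniature, the transformation amounts to this (a hand-written sketch; the real rewrite happens at import time on the module's AST):

```
from beartype.door import die_if_unbearable

# What you wrote:
muh_int: int = "Pretty sure this isn't an integer, but not sure."

# What beartype.claw effectively appends at the same indentation level:
die_if_unbearable(muh_int, int)  # <-- raises a type-checking violation here
```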
Beartype believes you. This is why we `beartype_this_package()`.
This is what happens when we don’t beartype_this_package().
### Why Is beartype_this_package()?¶
Let’s continue by justifying why you want to use
. Don’t worry. The “why?” is easier than the
“what?”. It often is. The answer is: “Safety is my middle name.”
<– more lies
isolates its bug-hunting action to the current
package. This is what everyone wants to try first. Type-checking only your
first-party package under your control is the safest course of action, because
you rigorously stress-tested your package with beartype. You did, didn’t you?
You’re not making us look bad here? Don’t make us look bad. We already have
GitHub and Reddit for that. Other beartype import hooks – like `beartype_packages()` or `beartyping()` – can be (mis)used to dangerously type-check other
third-party packages outside your control that have probably never been
stress-tested with beartype. Those packages could raise type-checking violations
at runtime that you have no control over. If they don’t now, they could later.
Forward compatibility is out the window. `git blame` has things to say about
that. If `beartype_this_package()` fails, there is no hope for your package. Even
though it might be beartype’s fault, beartype will still blame you for its
mistakes.
## Import Hooks API¶
Beartype import hooks come in two flavours:
* Global import hooks, whose effects encompass all subsequently imported packages and modules matching various patterns.
* Local import hooks, whose effects are isolated to only specific packages and modules imported inside specific blocks of code. Any subsequently imported packages and modules remain unaffected.
### Global Import Hooks¶
Global beartype import hooks are… well, global. Their claws extend to a horizontal slice of your full stack. These hooks globally type-check all annotated callables, classes, and variable assignments in all subsequently imported packages and modules matching various patterns.
With great globality comes great responsibility.
* beartype.claw.beartype_this_package(*, conf: beartype.BeartypeConf = beartype.BeartypeConf()) None [source]¶
Raises `beartype.roar.BeartypeClawHookException` if either:

* This function is not called from a module (i.e., this function is called directly from within a read–eval–print loop (REPL)).
* `conf` is not a beartype configuration.
Self-package runtime-static type-checking import hook. This hook accepts no package or module names, instead type-checking all annotated callables, classes, and variable assignments across all submodules of the current package (i.e., the caller-defined package directly calling this function).
This hook only applies to subsequent imports performed after this hook, as the term “import hook” implies; previously imported submodules and subpackages remain unaffected.
This hook is typically called as the first statement in the
`__init__` submodule of whichever (sub)package you would like to type-check. If you call this hook from:
* Your top-level `{your_package}.__init__` submodule, this hook type-checks your entire package. This includes all submodules and subpackages across your entire package.
* Some mid-level `{your_package}.{your_subpackage}.__init__` submodule, this hook type-checks only that subpackage. This includes only submodules and subsubpackages of that subpackage. All other submodules and subpackages of your package remain unaffected (i.e., will not be type-checked). > # At the top of your "{your_package}.__init__" submodule: from beartype import BeartypeConf # <-- boilerplate from beartype.claw import beartype_this_package # <-- boilerplate: the revenge beartype_this_package(conf=BeartypeConf(is_color=False)) # <-- no color is best color
This hook is effectively syntactic sugar for the following idiomatic one-liners that are so cumbersome, fragile, and unreadable that no one should even be reading this:
> beartype_this_package() # <-- this... beartype_package(__name__.rpartition('.')[0]) # <-- ...is equivalent to this... beartype_packages((__name__.rpartition('.')[0],)) # <-- ...is equivalent to this.
When in doubt, have no doubt. Just call `beartype_this_package()`.
New in version 0.15.0.
beartype_this_package(): It do be like that.
* beartype.claw.beartype_package(package_name: str, *, conf: beartype.BeartypeConf = beartype.BeartypeConf()) None [source]¶
* package_name (str) – Absolute name of the package or module to be type-checked.
* Raises beartype.roar.BeartypeClawHookException – If `package_name` is a non-empty string that is not a valid package or module name (i.e., `"."`-delimited concatenation of valid Python identifiers).
Uni-package runtime-static type-checking import hook. This hook accepts only a single package or single module name, type-checking all annotated callables, classes, and variable assignments across either:
* If the passed name is that of a (sub)package, all submodules of that (sub)package.
* If the passed name is that of a (sub)module, only that (sub)module.
This hook should be called before that package or module is imported; when erroneously called after that package or module is imported, this hook silently reduces to a noop (i.e., does nothing regardless of how many times you squint at it suspiciously).
This hook is typically called as the first statement in the `{your_package}.__init__` submodule. > # At the top of your "{your_package}.__init__" submodule: from beartype import BeartypeConf # <-- <Ctrl-c> <Ctrl-v> from beartype.claw import beartype_package # <-- <Ctrl-c> <Ctrl-v> x 2 beartype_package('your_package', conf=BeartypeConf(is_debug=True)) # ^-- they said explicit is better than implicit, # but all i got was this t-shirt and a hicky.
Of course, that’s fairly worthless. Just call
, right? But what if you want to type-check just one subpackage or submodule of your package rather than your entire package? In that case,
is overbearing. badum ching Enter `beartype_package()` , the outer limits of QA where you control the horizontal and the vertical: > # Just because you can do something, means you should do something. beartype_package('good_package.m.A.A.d_submodule') # <-- fine-grained precision strike
`beartype_package()` shows it true worth, however, in type-checking other people’s code. Because the `beartype.claw` API is a permissive Sarlacc pit, `beartype_package()` happily accepts the absolute name of any package or module – whether they wanted you to do that or not: > # Whenever you want to break something over your knee, never leave your # favorite IDE [read: Vim] without beartype_package(). beartype_package('somebody_elses_package') # <-- blow it up like you just don't care
This hook is effectively syntactic sugar for passing the
`beartype_packages()` function a 1-tuple containing only this package or module name. > beartype_package('your_package') # <-- this... beartype_packages(('your_package',)) # <-- ...is equivalent to this.
Pretend you didn’t see that. Just call
`beartype_package()` .
New in version 0.15.0.
Truer words were never spoken, wizened psychic baby lady.
* beartype.claw.beartype_packages(package_names: collections.abc.Iterable[str], *, conf: beartype.BeartypeConf = beartype.BeartypeConf()) None [source]¶
* package_names (collections.abc.Iterable[str]) – Iterable of the absolute names of one or more packages or modules to be type-checked.
* Raises beartype.roar.BeartypeClawHookException – If `package_names` is either:
  * Not an iterable.
  * The empty iterable.
  * A non-empty iterable containing at least one item that is a non-empty string that is not a valid package or module name (i.e., `"."`-delimited concatenation of valid Python identifiers).
Multi-package runtime-static type-checking import hook. This hook accepts one or more package and module names in any arbitrary order (i.e., order is insignificant), type-checking all annotated callables, classes, and variable assignments across:
* For each passed name that is a (sub)package, all submodules of that (sub)package.
* For each passed name that is a (sub)module, only that (sub)module.
This hook should be called before those packages and modules are imported; when erroneously called after those packages and modules are imported, this hook silently reduces to a noop. Squinting still does nothing.
This hook is typically called as the first statement in the `{your_package}.__init__` submodule. > # At the top of your "{your_package}.__init__" submodule: from beartype import BeartypeConf # <-- copy-pasta from beartype.claw import beartype_packages # <-- copy-pasta intensifies beartype_packages(( 'your_package', 'some_package.published_by.the_rogue_ai.Johnny_Twobits', # <-- seems trustworthy 'numpy', # <-- ...heh. no one knows what will happen here! 'scipy', # <-- ...but we can guess, can't we? *sigh* ), conf=BeartypeConf(is_pep484_tower=True)) # <-- so. u 2 h8 precision.
This hook is the penultimate force in global import hooks. The terser `beartype_this_package()` and `beartype_package()` hooks are effectively syntactic sugar for this verboser hook.
One hook to QA them all, and in the darkness of your codebase bind them.
It’s almost as if we know what “penultimate” means.
* beartype.claw.beartype_all(*, conf: beartype.BeartypeConf = beartype.BeartypeConf()) None [source]¶
* Raises beartype.roar.BeartypeClawHookException – If `conf` is not a beartype configuration.
All-packages runtime-static type-checking import hook. This hook accepts no package or module names, instead type-checking all callables, classes, and variable assignments across all submodules of all packages.
This hook should be called before those packages and modules are imported; when erroneously called after those packages and modules are imported, this hook silently reduces to a noop. Not even squinting can help you now.
This hook is typically called as the first statement in the `{your_package}.__init__` submodule. > # At the top of your "{your_package}.__init__" submodule, from beartype import BeartypeConf # <-- @beartype seemed so innocent, once from beartype.claw import beartype_all # <-- where did it all go wrong? beartype_all(conf=BeartypeConf(claw_is_pep526=False)) # <-- U WILL BE ASSIMILATE
This hook is the ultimate import hook, spasmodically unleashing a wave of bug-defenestrating action over the entire Python ecosystem. After calling this hook, any package or module authored by anybody (including packages and modules in CPython’s standard library) will be subject to the iron claw of
`beartype.claw` . Its rule is law!
This hook is the runtime equivalent of a full-blown pure-static type-checker like mypy or pyright, enabling full-stack runtime-static type-checking over your entire app. This includes submodules defined by both:
* First-party proprietary packages authored explicitly for this app.
* Third-party open-source packages authored and maintained elsewhere.
Nothing is isolated. Everything is permanent. Do not trust this hook.
# Caveat Emptor: Empty Promises Not Even a Cat Would Eat¶
This hook imposes type-checking on all downstream packages importing your package, which may not necessarily want, expect, or tolerate type-checking. This hook is not intended to be called from intermediary APIs, libraries, frameworks, or other middleware. Packages imported by other packages should not call this hook. This hook is only intended to be called from full-stack end-user applications as a convenient alternative to manually passing the names of all packages to be type-checked to the more granular
`beartype_packages()` hook.
This hook is the extreme QA nuclear option. Because this hook is the extreme QA nuclear option, most codebases should not call this hook.
`beartype` cannot be held responsible for a sudden rupture in the plenæne of normalcy, the space-time continuum, or your once-stable job. Pour one out for those who are about to vitriolically explode their own code.
Nuke Python from orbit. Because now you can.
The beartype_all() lifestyle. Short but sweet.
### Local Import Hooks¶
## Import Hook Configuration¶
Beartype import hooks accept an optional keyword-only `conf` parameter whose value is a beartype configuration (i.e., a `BeartypeConf` instance), defaulting to the default beartype configuration `BeartypeConf()`.
Unsurprisingly, that configuration configures the behaviour of its hook: e.g.,
```
# In your "{your_package}.__init__" submodule, enable @beartype's support for
# the PEP 484-compliant implicit numeric tower (i.e., expand "int" to "int |
# float" and "complex" to "int | float | complex"):
from beartype import BeartypeConf # <-- it all seems so familiar
from beartype.claw import beartype_package # <-- boil it up, boilerplate
beartype_package('your_package', conf=BeartypeConf(is_pep484_tower=True)) # <-- *UGH.*
```
Equally unsurprisingly, `BeartypeConf` has been equipped with import hook-aware super powers. Fine-tune the behaviour of our import hooks for your exact needs, including:
* `BeartypeConf(claw_is_pep526: bool = True)`. By default, `beartype.claw` type-checks annotated variable assignments like `muh_int: int = "Pretty sure this isn't an integer."`. Although this is usually what everyone wants, this may not be what someone suspicious wearing aviator goggles, a red velvet cape, and too-tight black leather wants. Nobody knows what those people want. If you are such a person, consider disabling this option to reduce type safety and destroy your code like Neo-Tokyo vs. Mecha-Baby-Godzilla: …who will win!?!?
* `BeartypeConf(warning_cls_on_decorator_exception: Optional[Type[Warning]] = None)`. By default, `beartype.claw` emits non-fatal warnings rather than fatal exceptions raised by the `beartype.beartype()` decorator at decoration time. This is usually what everyone wants, because `beartype.beartype()` currently fails to support all possible edge cases and is thus likely to raise at least one exception while decorating your entire package. To improve the resilience of `beartype.claw` against those edge cases, `beartype.beartype()` emits one warning for each decoration exception and then simply continues to the next decoratable callable or class. This is occasionally unhelpful. What if you really do want `beartype.claw` to raise a fatal exception on the first such edge case in your codebase – perhaps because you want to either see the full exception traceback or punish your coworkers who are violating typing standards by trying to use an imported module as a type hint? …this actually happened In this case, consider:
Passing
`None` as the value of this parameter. Doing so forces `beartype.claw` to act strictly, inflexibly, and angrily. Expect spittle-flecked mouth frothing and claws all over the place: > # In your "{your_package}.__init__" submodule, raise exceptions because you # hate worky. The CI pipeline you break over your knee may just be your own. from beartype import BeartypeConf # <-- boiling boilerplate... from beartype.claw import beartype_this_package # <-- ...ain't even lukewarm beartype_this_package(conf=BeartypeConf(warning_cls_on_decorator_exception=None)) # <-- *ohboy*
```
wrap anything with runtime type-checking
...except that, of course.
— Thus <NAME>, Book I
```
The beating heart of beartype is the eponymous `beartype()` decorator. This
is its story.
## Beartype Decorator API¶
* @beartype.beartype(cls: type | None = None, func: collections.abc.Callable | None = None, conf: BeartypeConf = BeartypeConf()) object [source]¶
* cls (type | None) – Pure-Python class to be decorated.
* func (collections.abc.Callable | None) – Pure-Python function or method to be decorated.
* Returns – Passed class or callable wrapped with runtime type-checking.
Augment the passed object with performant runtime type-checking. Unlike most decorators,
`@beartype` has three orthogonal modes of operation:
* Class mode – in which you decorate a class with `@beartype`, which then iteratively decorates all methods declared by that class with `@beartype`. This is the recommended mode for object-oriented logic.
* Callable mode – in which you decorate a function or method with `@beartype`, which then dynamically generates a new function or method wrapping the original function or method with performant runtime type-checking. This is the recommended mode for procedural logic.
* Configuration mode – in which you create your own app-specific `@beartype` decorator configured for your exact use case (see the sketch below).
When chaining multiple decorators, order of decoration is significant but conditionally depends on the mode of operation. Specifically, in:
* Class mode, `@beartype` should usually be listed first.
* Callable mode, `@beartype` should usually be listed last.
It’s not our fault. Surely documentation would never decieve you.
### Callable Mode¶
def beartype.beartype(func: collections.abc.Callable) -> collections.abc.Callable
In callable mode, `beartype()` dynamically generates a new callable
(i.e., pure-Python function or method) runtime type-checking the passed
callable.
# …as Decorator¶
Because laziness prevails, `beartype()` is usually invoked as a
decorator. Simply prefix the callable to be runtime type-checked with the line `@beartype`. In this standard use pattern, `beartype()` silently preserves the original callable as the `__wrapped__` instance variable of the new wrapper callable it generates.
An example explicates a thousand words.
```
>>> from beartype import beartype

# Decorate a function with @beartype.
>>> @beartype
... def bother_free_is_no_bother_to_me(bothersome_string: str) -> str:
...     return f'Oh, bother. {bothersome_string}'
# Call that function with runtime type-checking enabled.
>>> bother_free_is_no_bother_to_me(b'Could you spare a small smackerel?')
BeartypeCallHintParamViolation: @beartyped bother_free_is_no_bother_to_me()
parameter bothersome_string=b'Could you spare a small smackerel?' violates
type hint <class 'str'>, as bytes b'Could you spare a small smackerel?' not
instance of str.
# Call that function with runtime type-checking disabled. WHY YOU DO THIS!?
>>> bother_free_is_no_bother_to_me.__wrapped__(
... b'Could you spare a small smackerel?')
"Oh, bother. b'Could you spare a small smackerel?'"
```
Because `beartype()` preserves the original callable as `__wrapped__` , `beartype()` seamlessly integrates with other well-behaved decorators that
respect that same pseudo-standard. This means that `beartype()` can
usually be listed in any arbitrary order when chained (i.e., combined) with
other decorators. Because this is the NP-hard timeline, however, assumptions are risky. If you doubt anything, the safest approach is just to list `@beartype` as the
last (i.e., bottommost) decorator. This:
* Ensures that `beartype()` is called first on the decorated callable before other decorators have a chance to really muck things up. Other decorators: always the source of all your problems.
* Improves both space and time efficiency. Unwrapping `__wrapped__` callables added by prior decorators is an \(O(k)\) operation for \(k\) the number of previously run decorators. Moreover, builtin decorators like `classmethod`, `property`, and `staticmethod` create method descriptors; when run after a builtin decorator, `beartype()` has no recourse but to:
  * Destroy the original method descriptor created by that builtin decorator.
  * Create a new method type-checking the original method.
  * Create a new method descriptor wrapping that method by calling the same builtin decorator.
An example is brighter than a thousand Suns! (astronomers throwing chalk here)

```
# Decorate class methods with @beartype in either order.
>>> class BlastItAll(object):
... @classmethod
... @beartype # <-- GOOD. this is the best of all possible worlds.
... def good_idea(cls, we_will_dynamite: str) -> str:
... return we_will_dynamite
...
... @beartype # <-- BAD. technically, fine. pragmatically, slower.
... @classmethod
... def save_time(cls, whats_the_charge: str) -> str:
... return whats_the_charge
```
# …as Function¶
Because Python means not caring what anyone else thinks, `beartype()` can
also be called as a function. This is useful in unthinkable edge cases like
monkey-patching other people’s code with runtime type-checking. You usually
shouldn’t do this, but you usually shouldn’t do a lot of things that you do when
you’re the sort of Pythonista that reads tortuous documentation like this.
```
# A function somebody else defined. Note the bad lack of @beartype.
>>> def oh_bother_free_where_art_thou(botherfull_string: str) -> str:
... return f'Oh, oh! Help and bother! {botherfull_string}'
# Monkey-patch that function with runtime type-checking. *MUHAHAHA.*
>>> oh_bother_free_where_art_thou = beartype(oh_bother_free_where_art_thou)
# Call that function with runtime type-checking enabled.
>>> oh_bother_free_where_art_thou(b"I'm stuck!")
BeartypeCallHintParamViolation: @beartyped oh_bother_free_where_art_thou()
parameter botherfull_string=b"I'm stuck!" violates type hint <class 'str'>,
as bytes b"I'm stuck!" not instance of str.
```
One `beartype()` to monkey-patch them all and in the darkness type-check them.
# …as Noop¶
`beartype()` silently reduces to a noop (i.e., scoops organic honey out
of a jar with its fat paws rather than doing something useful with its life)
under common edge cases. When any of the following apply, `beartype()` preserves the decorated callable or class as is by just returning that callable
or class unmodified (rather than augmenting that callable or class with unwanted
runtime type-checking):
* Beartype has been configured with the no-time strategy `BeartypeStrategy.O0` : e.g.,

```
# Import the requisite machinery.
from beartype import beartype, BeartypeConf, BeartypeStrategy

# Avoid type-checking *ANY* methods or attributes of this class.
@beartype(conf=BeartypeConf(strategy=BeartypeStrategy.O0))
class UncheckedDangerClassIsDangerous(object):
    # This method raises *NO* type-checking violation despite returning a
    # non-"None" value.
    def unchecked_danger_method_is_dangerous(self) -> None:
        return 'This string is not "None". Sadly, nobody cares anymore.'
```
* That callable or class has already been decorated by the `beartype()` decorator itself.
* That callable is unannotated (i.e., no parameters or return values in the signature of that callable are annotated by type hints).
* Sphinx is currently autogenerating documentation (i.e., Sphinx’s “autodoc” extension is currently running).
Laziness + efficiency == `beartype()` .
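A quick sketch of the unannotated-callable case; with no type hints to enforce, `beartype()` just hands the function back (illustrative, not an API guarantee for every beartype version):

```
# A noop sketch: no annotations, so nothing to type-check.
from beartype import beartype

def unannotated(argument):  # <-- no type hints anywhere in this signature
    return argument

# @beartype returns the very same function object, unmodified.
assert beartype(unannotated) is unannotated
```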
### Class Mode¶
def beartype.beartype(cls: type) -> type
In class mode, `beartype()` dynamically replaces each method of the
passed pure-Python class with a new method runtime type-checking the original
method. As with callable mode, simply prefix the class to be runtime type-checked with the line `@beartype` . In this standard use pattern, `beartype()` silently iterates over all instance, class, and static methods
declared by the decorated class and, for each such method, preserves the original method as the
`__wrapped__` instance variable of the new type-checking method.
# …versus Callable Mode¶
Superficially, this is just syntactic sugar – but sometimes you gotta dip your paws into the honey pot.
```
# Decorate a class with @beartype.
@beartype
class IAmABearOfNoBrainAtAll(object):
    def i_have_been_foolish(self) -> str:
        return "A fly can't bird, but a bird can fly."

# ...or just decorate class methods directly with @beartype.
# The class above is *EXACTLY* equivalent to the class below.
class IAmABearOfNoBrainAtAll(object):
    @beartype
    def i_have_been_foolish(self) -> str:
        return "A fly can't bird, but a bird can fly."
```
Pragmatically, this is not just syntactic sugar. You must decorate classes (rather than merely methods) with `beartype()` to type-check the following:

* Class-centric type hints (i.e., type hints like the PEP 673-compliant typing.Self attribute that describe the decorated class itself). To type-check these kinds of type hints, `beartype()` needs access to the class; it lacks that access when decorating methods directly. Instead, you must decorate with `beartype()` any class declaring one or more methods annotated by one or more class-centric type hints.
* Dataclasses. The standard `dataclasses.dataclass` decorator dynamically generates and adds new dunder methods (e.g., `__init__()` , `__eq__()` , `__hash__()` ) to the decorated class. These methods do not physically exist and thus cannot be decorated directly with `beartype()` . Instead, you must decorate dataclasses first by `@beartype` and then by `@dataclasses.dataclass` . Order is significant, of course. `</sigh>`

When decorating classes, `@beartype` should usually be listed as the
first (i.e., topmost) decorator. This ensures that `beartype()` is
called last on the decorated class after other decorators have a chance to
dynamically monkey-patch that class (e.g., by adding new methods to that class). `beartype()` will then type-check the monkey-patched functionality as well.
Come for the working examples. Stay for the wild hand-waving.
```
# Decorate a dataclass first with @beartype and then with @dataclass. If you
# accidentally reverse this order of decoration, methods added by @dataclass
# like __init__() will *NOT* be type-checked by @beartype. (Blame Guido.)
@beartype
@dataclass
class SoTheyWentOffTogether(object):
a_little_boy_and_his_bear: str | bytes
will_always_be_playing: str | None = None
```
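For class-centric hints, a minimal sketch (assumptions: Python ≥ 3.11 for `typing.Self`; on older Pythons, `typing_extensions.Self` plays the same role):

```
# typing.Self names the decorated class, so @beartype must see that class.
from typing import Self
from beartype import beartype

@beartype  # <-- class mode required here; method-only decoration won't do
class ChainableBear(object):
    def roar(self) -> Self:
        return self
```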
### Configuration Mode¶
def beartype.beartype(*, conf: beartype.BeartypeConf) -> collections.abc.Callable[[T], T]
In configuration mode, `beartype()` dynamically generates a new `beartype()` decorator – configured uniquely for your exact use case. You
too may cackle villainously as you feel the unbridled power of your keyboard.
```
# Import the requisite machinery.
from beartype import beartype, BeartypeConf, BeartypeStrategy
# Dynamically create a new @monotowertype decorator configured to:
# * Avoid outputting colors in type-checking violations.
# * Enable support for the implicit numeric tower standardized by PEP 484.
monotowertype = beartype(conf=BeartypeConf(
is_color=False, is_pep484_tower=True))
# Decorate with this decorator rather than @beartype everywhere.
@monotowertype
def muh_colorless_permissive_func(int_or_float: float) -> float:
    return int_or_float ** round(int_or_float)
```
Configuration: because you know best.
# Beartype Configuration API¶
* class beartype.BeartypeConf(*, is_color: bool | None = None, is_debug: bool = False, is_pep484_tower: bool = False, strategy: BeartypeStrategy = BeartypeStrategy.O1)[source]¶
*
Beartype configuration (i.e., self-caching dataclass instance encapsulating all flags, options, settings, and other metadata configuring each type-checking operation performed by beartype – including each decoration of a callable or class by the
`beartype()` decorator).
The default configuration
`BeartypeConf()` configures beartype to:
Perform \(O(1)\) constant-time type-checking for safety, scalability, and efficiency.
*
Disable support for PEP 484’s implicit numeric tower.
*
Disable developer-specific debugging logic.
*
Conditionally output color when standard output is attached to a terminal.
Beartype configurations may be passed as the optional keyword-only
`conf` parameter accepted by most high-level runtime type-checking functions exported by `beartype` – including:

* The `beartype.beartype()` decorator.
* The `beartype.claw.beartype_all()` , `beartype.claw.beartype_package()` , `beartype.claw.beartype_packages()` , and `beartype.claw.beartyping()` import hooks.
* The `beartype.door.die_if_unbearable()` runtime type-checker.
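A sketch of sharing one configuration across entry points (the violation-free calls below are deliberately boring):

```
# One configuration, two consumers.
from beartype import beartype, BeartypeConf
from beartype.door import die_if_unbearable

conf = BeartypeConf(is_debug=True)

@beartype(conf=conf)
def configured(n: int) -> int:
    return n

# Same configuration, different entry point. 0xBEE is an int, so: noop.
die_if_unbearable(0xBEE, int, conf=conf)
```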
Beartype configurations are immutable objects memoized (i.e., cached) on the unordered set of all passed parameters:
> >>> from beartype import BeartypeConf >>> BeartypeConf() is BeartypeConf() True >>> BeartypeConf(is_color=False) is BeartypeConf(is_color=False) True
Beartype configurations are comparable under equality:
> >>> BeartypeConf(is_color=False) == BeartypeConf(is_color=True) False
Beartype configurations are hashable and thus suitable for use as dictionary keys and set members:
> >>> confs = {BeartypeConf(), BeartypeConf(is_color=False)} >>> BeartypeConf() in confs True
Beartype configurations support meaningful
`repr()` output: > >>> repr(BeartypeConf()) 'BeartypeConf(is_color=None, is_debug=False, is_pep484_tower=False, strategy=<BeartypeStrategy.O1: 2>)'
Beartype configurations expose read-only public properties of the same names as the above parameters:
> >>> BeartypeConf().is_color None >>> BeartypeConf().strategy <BeartypeStrategy.O1: 2>

# Keyword Parameters¶
Beartype configurations support optional read-only keyword-only parameters at instantiation time. Most parameters are suitable for passing by all beartype users in all possible use cases. Some are only intended to be passed by some beartype users in some isolated use cases.
This is their story.
# General Keyword Parameters¶
General-purpose configuration parameters are always safely passable:
* is_debug¶
*
`True` only if debugging the `beartype()` decorator. If you’re curious as to what exactly (if anything) `beartype()` is doing on your behalf, temporarily enable this boolean. Specifically, enabling this boolean (in no particular order):
Caches the body of each type-checking wrapper function dynamically generated by
`beartype()` with the standard `linecache` module, enabling these function bodies to be introspected at runtime and improving the readability of tracebacks whose call stacks contain one or more calls to these `beartype()` -decorated functions. *
Prints the definition (including both the signature and body) of each type-checking wrapper function dynamically generated by `beartype()` to standard output.
*
Appends to the declaration of each hidden parameter (i.e., whose name is prefixed by
`"__beartype_"` and whose value is that of an external attribute internally referenced in the body of that function) a comment providing the machine-readable representation of the initial value of that parameter, stripped of newlines and truncated to a hopefully sensible length. Since the low-level string munger called to do so is shockingly slow, these comments are conditionally embedded in type-checking wrapper functions only when this boolean is enabled.
Defaults to
`False` . Eye-gouging sample output or it didn’t happen, so:

```
# Import the requisite machinery.
>>> from beartype import beartype, BeartypeConf

# Dynamically create a new @bugbeartype decorator enabling debugging.
# Insider D&D jokes in my @beartype? You'd better believe. It's happening.
>>> bugbeartype = beartype(conf=BeartypeConf(is_debug=True))

# Decorate with this decorator rather than @beartype everywhere.
>>> @bugbeartype
... def muh_bugged_func() -> str:
...     return b'Consistency is the bugbear that frightens little minds.'
(line 0001) def muh_bugged_func(
(line 0002)     *args,
(line 0003)     __beartype_func=__beartype_func, # is <function muh_bugged_func at 0x7f52733bad40>
(line 0004)     __beartype_conf=__beartype_conf, # is "BeartypeConf(is_color=None, is_debug=True, is_pep484_tower=False, strategy=<BeartypeStrategy...
(line 0005)     __beartype_get_violation=__beartype_get_violation, # is <function get_beartype_violation at 0x7f5273081d80>
(line 0006)     **kwargs
(line 0007) ):
(line 0008)     # Call this function with all passed parameters and localize the value
(line 0009)     # returned from this call.
(line 0010)     __beartype_pith_0 = __beartype_func(*args, **kwargs)
(line 0011)
(line 0012)     # Noop required to artificially increase indentation level. Note that
(line 0013)     # CPython implicitly optimizes this conditional away. Isn't that nice?
(line 0014)     if True:
(line 0015)         # Type-check this passed parameter or return value against this
(line 0016)         # PEP-compliant type hint.
(line 0017)         if not isinstance(__beartype_pith_0, str):
(line 0018)             raise __beartype_get_violation(
(line 0019)                 func=__beartype_func,
(line 0020)                 conf=__beartype_conf,
(line 0021)                 pith_name='return',
(line 0022)                 pith_value=__beartype_pith_0,
(line 0023)             )
(line 0024)
(line 0025)     return __beartype_pith_0
```
* is_pep484_tower¶
*
`True` only if enabling support for PEP 484’s implicit numeric tower (i.e., lossy conversion of integers to floating-point numbers as well as both integers and floating-point numbers to complex numbers). Specifically, enabling this instructs beartype to automatically expand:
All
`float` type hints to `float` `|` `int` , thus implicitly accepting both integers and floating-point numbers for objects annotated as only accepting floating-point numbers. *
All
`complex` type hints to `complex` `|` `float` `|` `int` , thus implicitly accepting integers, floating-point, and complex numbers for objects annotated as only accepting complex numbers.
Defaults to
`False` to minimize precision error introduced by lossy conversions from integers to floating-point numbers to complex numbers. Since most integers do not have exact representations as floating-point numbers, each conversion of an integer into a floating-point number typically introduces a small precision error that accumulates over multiple conversions and operations into a larger precision error. Enabling this improves the usability of public APIs at a cost of introducing precision errors.
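A two-line illustration of that lossiness, using nothing beyond the standard library:

```
# Why the tower is off by default: int -> float conversion is lossy.
big = 2**53 + 1
print(float(big))         # 9007199254740992.0 -- rounded, off by one
print(float(big) == big)  # False: the round trip does not survive
```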
The standard use case is to dynamically define your own app-specific
`beartype()` decorator unconditionally enabling support for the implicit numeric tower, usually as a convenience to your userbase who do not particularly care about the above precision concerns. Behold the permissive powers of… `@beartowertype` !

```
# Import the requisite machinery.
from beartype import beartype, BeartypeConf

# Dynamically create a new @beartowertype decorator enabling the tower.
beartowertype = beartype(conf=BeartypeConf(is_pep484_tower=True))

# Decorate with this decorator rather than @beartype everywhere.
@beartowertype
def crunch_numbers(numbers: list[float]) -> float:
    return sum(numbers)

# This is now fine.
crunch_numbers([3, 1, 4, 1, 5, 9])

# This is still fine, too.
crunch_numbers([3.1, 4.1, 5.9])
```
* strategy¶
*
`Type:` `BeartypeStrategy` = `BeartypeStrategy.O1`
Type-checking strategy (i.e.,
`BeartypeStrategy` enumeration member dictating how many items are type-checked at each nesting level of each container and thus how responsively beartype type-checks containers). This setting governs the core tradeoff in runtime type-checking between:
Overhead in the amount of time that beartype spends type-checking.
*
Completeness in the number of objects that beartype type-checks.
As beartype gracefully scales up to check larger and larger containers, so beartype simultaneously scales down to check fewer and fewer items of those containers. This scalability preserves performance regardless of container size while increasing the likelihood of false negatives (i.e., failures to catch invalid items in large containers) as container size increases. You can either type-check a small number of objects nearly instantaneously or you can type-check a large number of objects slowly. Pick one.
Defaults to
`BeartypeStrategy.O1` , the constant-time \(O(1)\) strategy – maximizing scalability at a cost of also maximizing false negatives (i.e., invalid items that go uncaught).
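A tiny illustration of that tradeoff; because the default strategy samples one random item per check, the bad item below may or may not be caught on any given call:

```
# O(1) sampling in action (probabilistic: re-run this a few times).
from beartype import beartype

@beartype
def first_word(words: list[str]) -> str:
    return words[0]

# Only one randomly selected item is type-checked per call, so the lone
# integer hiding at the end may well go unnoticed... this time.
first_word(['grr', 'argh', 0xBAD])  # may or may not raise a violation
```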
# App-only Keyword Parameters¶
App-only configuration parameters are passed only by first-party packages executed as apps, binaries, scripts, servers, or other executable processes (rather than imported as libraries, frameworks, or other importable APIs into the current process):
* is_color¶
*
Tri-state boolean governing how and whether beartype colours type-checking violations (i.e., human-readable
exceptions) with POSIX-compliant ANSI escape sequences for readability. Specifically, if this boolean is:
*
`False` , beartype never colours type-checking violations raised by callables configured with this configuration. *
`True` , beartype always colours type-checking violations raised by callables configured with this configuration. *
`None` , beartype conditionally colours type-checking violations raised by callables configured with this configuration only when standard output is attached to an interactive terminal.
The ${BEARTYPE_IS_COLOR} environment variable globally overrides this parameter, enabling end users to enforce a global colour policy across their full app stack. When both that variable and this parameter are set to differing (and thus conflicting) values, the
`BeartypeConf` class:
* Ignores this parameter in favour of that variable.
* Emits a `beartype.roar.BeartypeConfShellVarWarning` warning notifying callers of this conflict.
To avoid this conflict, only downstream executables should pass this parameter; intermediary libraries should never pass this parameter. Non-violent communication begins with you.
Effectively defaults to
`None` . Technically, this parameter defaults to a private magic constant not intended to be passed by callers, enabling `beartype` to reliably detect whether the caller has explicitly passed this parameter or not.
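A hypothetical sketch of the conflict described above, assuming the variable is consulted when the configuration is instantiated (set the variable in your shell rather than in-process for real use):

```
# Hypothetical illustration only: the shell variable wins the conflict
# and a beartype.roar.BeartypeConfShellVarWarning is emitted.
import os
os.environ['BEARTYPE_IS_COLOR'] = 'True'  # pretend the end user set this

from beartype import BeartypeConf
conf = BeartypeConf(is_color=False)  # conflicts with ${BEARTYPE_IS_COLOR}
print(conf.is_color)                 # True: the variable overrides the parameter
```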
The standard use case is to dynamically define your own app-specific
`beartype()` decorator unconditionally disabling colours in type-checking violations, usually due to one or more frameworks in your app stack failing to support ANSI escape sequences. Please file issues with those frameworks requesting ANSI support. In the meanwhile, behold the monochromatic powers of… `@monobeartype` !

```
# Import the requisite machinery.
from beartype import beartype, BeartypeConf

# Dynamically create a new @monobeartype decorator disabling colour.
monobeartype = beartype(conf=BeartypeConf(is_color=False))

# Decorate with this decorator rather than @beartype everywhere.
@monobeartype
def muh_colorless_func() -> str:
    return b'In the kingdom of the blind, you are now king.'
```
# Beartype Strategy API¶
* class beartype.BeartypeStrategy[source]¶
*
`Superclass(es):` `enum.Enum`
Enumeration of all kinds of type-checking strategies (i.e., competing procedures for type-checking objects passed to or returned from
`beartype()` -decorated callables, each with concomitant tradeoffs with respect to runtime complexity and quality assurance).
Strategies are intentionally named according to conventional Big O notation (e.g.,
`BeartypeStrategy.On` enables the \(O(n)\) strategy). Strategies are established per-decoration at the fine-grained level of callables decorated by the `beartype()` decorator. Simply set the `BeartypeConf.strategy` parameter of the `BeartypeConf` object passed as the optional `conf` parameter to the `beartype()` decorator.

```
# Import the requisite machinery.
from beartype import beartype, BeartypeConf, BeartypeStrategy

# Dynamically create a new @slowmobeartype decorator enabling "full fat"
# O(n) type-checking.
slowmobeartype = beartype(conf=BeartypeConf(strategy=BeartypeStrategy.On))

# Type-check all items of the passed list. Do this only when you pretend
# to know in your guts that this list will *ALWAYS* be ignorably small.
@slowmobeartype
def type_check_like_maple_syrup(liquid_gold: list[int]) -> str:
    return "The slowest noop yet envisioned? You're not wrong."
```
Strategies enforce their corresponding runtime complexities (e.g., \(O(n)\)) across all type-checks performed for callables enabling those strategies. For example, a callable configured by the
`BeartypeStrategy.On` strategy will exhibit linear \(O(n)\) complexity as its overhead for type-checking each nesting level of each container passed to and returned from that callable.
This enumeration defines these members:
* On¶
*
Linear-time strategy: the \(O(n)\) strategy, type-checking all items of a container.

* Ologn¶
*
Logarithmic-time strategy: the \(O(\log n)\) strategy, type-checking a randomly selected number of items
`log(len(obj))` of each container `obj` .

* O1¶
*
Constant-time strategy: the default \(O(1)\) strategy, type-checking a single randomly selected item of each container. As the default, this strategy need not be explicitly enabled.
* O0¶
*
No-time strategy, disabling type-checking for a decorated callable by reducing
`beartype()` to the identity decorator for that callable. This strategy is functionally equivalent to but more general-purpose than the standard `@typing.no_type_check` decorator; whereas `@typing.no_type_check` only applies to callables, this strategy applies to any context accepting a beartype configuration – such as the `beartype()` decorator decorating a class.
Just like in real life, there exist valid use cases for doing absolutely nothing – including:
Blacklisting callables. While seemingly useless, this strategy allows callers to selectively prevent callables that would otherwise be type-checked (e.g., due to class decorations or import hooks) from being type-checked:
```
# Import the requisite machinery.
from beartype import beartype, BeartypeConf, BeartypeStrategy

# Dynamically create a new @nobeartype decorator disabling type-checking.
nobeartype = beartype(conf=BeartypeConf(strategy=BeartypeStrategy.O0))

# Automatically decorate all methods of this class...
@beartype
class TypeCheckedClass(object):
    # Including this method, which raises a type-checking violation
    # due to returning a non-"None" value.
    def type_checked_method(self) -> None:
        return 'This string is not "None". Apparently, that is a problem.'

    # Excluding this method, which raises *NO* type-checking
    # violation despite returning a non-"None" value.
    @nobeartype
    def non_type_checked_method(self) -> None:
        return 'This string is not "None". Thankfully, no one cares.'
```
*
Eliding overhead. Beartype already exhibits near-real-time overhead of less than 1µs (one microsecond, one millionth of a second) per call of type-checked callables. When even that negligible overhead isn’t negligible enough, brave callers considering an occupational change may globally disable all type-checking performed by beartype. Prepare your resume beforehand. Also, do so only under production builds intended for release; development builds intended for testing should preserve type-checking.
Either:
Pass Python the “-O” command-line option, which beartype respects.
*
Run Python under the “PYTHONOPTIMIZE” environment variable, which beartype also respects.
*
Define a new
`@maybebeartype` decorator disabling type-checking when an app-specific constant `I_AM_RELEASE_BUILD` defined elsewhere is enabled:

```
# Import the requisite machinery.
from beartype import beartype, BeartypeConf, BeartypeStrategy

# Let us pretend you know what you are doing for a hot moment.
from your_app import I_AM_RELEASE_BUILD

# Dynamically create a new @maybebeartype decorator disabling
# type-checking when "I_AM_RELEASE_BUILD" is enabled.
maybebeartype = beartype(conf=BeartypeConf(strategy=(
    BeartypeStrategy.O0
    if I_AM_RELEASE_BUILD else
    BeartypeStrategy.O1
)))

# Decorate with this decorator rather than @beartype everywhere.
@maybebeartype
def muh_performance_critical_func(big_list: list[int]) -> int:
    return sum(big_list)
```
# Beartype Environment Variables¶
Beartype supports increasingly many environment variables (i.e., external shell variables associated with the active Python interpreter). Most of these variables globally override `BeartypeConf` parameters of similar names,
enabling end users to enforce global configuration policies across their full
app stacks.
Beneath environment variables… thy humongous codebase shalt rise.
# ${BEARTYPE_IS_COLOR}¶
The `${BEARTYPE_IS_COLOR}` environment variable globally overrides the
`BeartypeConf.is_color` parameter, enabling end users to enforce a global
colour policy. As with that parameter, this variable is a tri-state boolean with
three possible string values:

* `BEARTYPE_IS_COLOR='True'` , forcefully instantiating all beartype configurations across all Python processes with the `is_color=True` parameter.
* `BEARTYPE_IS_COLOR='False'` , forcefully instantiating all beartype configurations across all Python processes with the `is_color=False` parameter.
* `BEARTYPE_IS_COLOR='None'` , forcefully instantiating all beartype configurations across all Python processes with the `is_color=None` parameter.
Force beartype to obey your unthinking hatred of the colour spectrum. You can’t be wrong!
```
BEARTYPE_IS_COLOR=False python3 -m monochrome_retro_app.its_srsly_cool
```
New in version 0.16.0.
```
Validate anything with two-line type hints
designed by you ⇄ built by beartype
```
When standards fail, do what you want anyway. When official type hints fail to scale to your validation use case, design your own PEP-compliant type hints with compact beartype validators:
```
# Import the requisite machinery.
from beartype.vale import Is
from typing import Annotated # <--------------- if Python ≥ 3.9.0

# Type hint matching any two-dimensional NumPy array of floats of arbitrary
# precision. Aye, typing matey. Beartype validators a-hoy!
import numpy as np
Numpy2DFloatArray = Annotated[np.ndarray, Is[lambda array:
    array.ndim == 2 and np.issubdtype(array.dtype, np.floating)]]
```
Validators enforce arbitrary runtime constraints on the internal structure and contents of parameters and returns with user-defined lambda functions and nestable declarative expressions leveraging familiar `typing` syntax – all
seamlessly composable with standard type hints via an
expressive domain-specific language (DSL).
Validate custom project constraints now without waiting for the open-source community to officially standardize, implement, and publish those constraints. Filling in the Titanic-sized gaps between Python’s patchwork quilt of PEPs, validators accelerate your QA workflow with your greatest asset.
Yup. It’s your brain.
See Validator Showcase for comforting examples – or blithely continue for uncomfortable details you may regret reading.
## Validator Overview¶
Beartype validators are zero-cost code generators. Like the rest of beartype (but unlike other validation frameworks), beartype validators generate optimally efficient pure-Python type-checking logic with no hidden function or method calls, undocumented costs, or runtime overhead.
Beartype validator code is thus call-explicit. Since pure-Python function and method calls are notoriously slow in CPython, the code we generate only calls the pure-Python functions and methods you specify when you subscript `beartype.vale.Is*` classes with those functions and methods. That’s it. We
never call anything without your permission. For example:
The declarative validator
```
Annotated[np.ndarray, IsAttr['dtype', IsAttr['type', IsEqual[np.float64]]]]
```
detects NumPy arrays of 64-bit floating-point precision by generating the fastest possible inline expression for doing so: > isinstance(array, np.ndarray) and array.dtype.type == np.float64
*
The functional validator
```
Annotated[np.ndarray, Is[lambda array: array.dtype.type == np.float64]]
```
also detects the same arrays by generating a slightly slower inline expression calling the lambda function you define: > isinstance(array, np.ndarray) and your_lambda_function(array)
Beartype validators thus come in two flavours – each with attendant tradeoffs:
Functional validators, created by subscripting the
`beartype.vale.Is` factory with a function accepting a single parameter and returning `True` only when that parameter satisfies a caller-defined constraint. Each functional validator incurs the cost of calling that function for each call to each `beartype.beartype()` -decorated callable annotated by that validator, but is Turing-complete and thus supports all possible validation scenarios. *
Declarative validators, created by subscripting any other class in the
`beartype.vale` subpackage (e.g., `beartype.vale.IsAttr` , `beartype.vale.IsEqual` ) with arguments specific to that class. Each declarative validator generates efficient inline code calling no hidden functions and thus incurring no function costs, but is special-purpose and thus supports only a narrow band of validation scenarios.
Wherever you can, prefer declarative validators for efficiency.
Everywhere else, fallback to functional validators for generality.
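To see the two flavours side by side, a small sketch (assumptions: Python ≥ 3.9 for `typing.Annotated`; NumPy installed):

```
# Same semantics, two mechanisms.
from typing import Annotated
import numpy as np
from beartype.vale import Is, IsAttr, IsEqual

# Declarative: @beartype inlines "array.ndim == 2" -- no extra call.
Numpy2DArrayFast = Annotated[np.ndarray, IsAttr['ndim', IsEqual[2]]]

# Functional: identical check, but pays one lambda call per validation.
Numpy2DArraySlow = Annotated[np.ndarray, Is[lambda array: array.ndim == 2]]
```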
## Validator API¶
* class beartype.vale.Is¶
*
`Subscription API:` beartype.vale.Is[ `collections.abc.Callable` [[ `object` ], `bool` ]]

Functional validator. A PEP-compliant type hint enforcing any arbitrary runtime constraint – created by subscripting (indexing) the
`Is` type hint factory with a function accepting a single parameter and returning either `True` if that parameter satisfies that constraint or `False` otherwise:

```
# Import the requisite machinery.
from beartype.vale import Is
from typing import Annotated # <--------------- if Python ≥ 3.9.0
#from typing_extensions import Annotated # <--- if Python < 3.9.0

# Type hint matching only strings with lengths ranging [4, 40].
LengthyString = Annotated[str, Is[lambda text: 4 <= len(text) <= 40]]
```
Functional validators are caller-defined and may thus validate the internal integrity, consistency, and structure of arbitrary objects ranging from simple builtin scalars like integers and strings to complex data structures defined by third-party packages like NumPy arrays and Pandas DataFrames.
* class beartype.vale.IsAttr¶
*
`Subscription API:` beartype.vale.IsAttr[ `str` , `beartype.vale.*` ]
Declarative attribute validator. A PEP-compliant type hint enforcing any arbitrary runtime constraint on any named object attribute – created by subscripting (indexing) the
`IsAttr` type hint factory with (in order):
The unqualified name of that attribute.
*
Any other beartype validator enforcing that constraint.
```
# Import the requisite machinery.
from beartype.vale import IsAttr, IsEqual
from typing import Annotated # <--------------- if Python ≥ 3.9.0
#from typing_extensions import Annotated # <--- if Python < 3.9.0

# Type hint matching only two-dimensional NumPy arrays. Given this,
# @beartype generates efficient validation code resembling:
# isinstance(array, np.ndarray) and array.ndim == 2
import numpy as np
Numpy2DArray = Annotated[np.ndarray, IsAttr['ndim', IsEqual[2]]]
```
The first argument subscripting this class must be a syntactically valid unqualified Python identifier string containing only alphanumeric and underscore characters (e.g.,
`"dtype"` , `"ndim"` ). Fully-qualified attributes comprising two or more dot-delimited identifiers (e.g., `"dtype.type"` ) may be validated by nesting successive `IsAttr` subscriptions:

```
# Type hint matching only NumPy arrays of 64-bit floating-point numbers.
# From this, @beartype generates an efficient expression resembling:
# isinstance(array, np.ndarray) and array.dtype.type == np.float64
NumpyFloat64Array = Annotated[np.ndarray,
    IsAttr['dtype', IsAttr['type', IsEqual[np.float64]]]]
```
The second argument subscripting this class must be a beartype validator. This includes:

* `beartype.vale.Is` , in which case this parent `IsAttr` class validates the desired object attribute to satisfy the caller-defined function subscripting that child `Is` class.
* `beartype.vale.IsAttr` , in which case this parent `IsAttr` class validates the desired object attribute to contain a nested object attribute satisfying the child `IsAttr` class. See above example.
* `beartype.vale.IsEqual` , in which case this `IsAttr` class validates the desired object attribute to be equal to the object subscripting that `IsEqual` class. See above example.
* class beartype.vale.IsEqual¶
*
`Subscription API:` beartype.vale.IsEqual[ `object` ]
Declarative equality validator. A PEP-compliant type hint enforcing equality against any object – created by subscripting (indexing) the
`IsEqual` type hint factory with that object:

```
# Import the requisite machinery.
from beartype.vale import IsEqual
from typing import Annotated # <--------------- if Python ≥ 3.9.0
#from typing_extensions import Annotated # <--- if Python < 3.9.0

# Type hint matching only lists equal to [0, 1, 2, ..., 40, 41].
AnswerToTheUltimateQuestion = Annotated[list, IsEqual[list(range(42))]]
```

`IsEqual` generalizes the comparable PEP 586-compliant `typing.Literal` type hint. Both check equality against user-defined objects. Despite the differing syntax, these two type hints enforce the same semantics:

```
# This beartype validator enforces the same semantics as...
IsStringEqualsWithBeartype = Annotated[str,
    IsEqual['Don’t you envy our pranceful bands?'] |
    IsEqual['Don’t you wish you had extra hands?']
]

# This PEP 586-compliant type hint.
IsStringEqualsWithPep586 = Literal[
    'Don’t you envy our pranceful bands?',
    'Don’t you wish you had extra hands?',
]
```
*
`IsEqual` permissively validates equality against objects that are instances of any arbitrary type. `IsEqual` doesn’t care what the types of your objects are. `IsEqual` will test equality against everything you tell it to, because you know best. *
`typing.Literal` rigidly validates equality against objects that are instances of only six predefined types: `bool` , `bytes` , `int` , `str` , `enum.Enum` members, and `None` .
Wherever you can (which is mostly nowhere), prefer
`typing.Literal` . Sure, `typing.Literal` is mostly useless, but it’s standardized across type checkers in a mostly useless way. Everywhere else, default to `IsEqual` .
* class beartype.vale.IsInstance¶
*
`Subscription API:` beartype.vale.IsInstance[ `type` , …]
Declarative instance validator. A PEP-compliant type hint enforcing instancing of one or more classes – created by subscripting (indexing) the
`IsInstance` type hint factory with those classes:

```
# Import the requisite machinery.
from beartype.vale import IsInstance
from typing import Annotated # <--------------- if Python ≥ 3.9.0
#from typing_extensions import Annotated # <--- if Python < 3.9.0

# Type hint matching only string and byte strings, equivalent to:
# StrOrBytesInstance = Union[str, bytes]
StrOrBytesInstance = Annotated[object, IsInstance[str, bytes]]
```

`IsInstance` generalizes isinstanceable type hints (i.e., normal pure-Python or C-based classes that can be passed as the second parameter to the `isinstance()` builtin). Both check instancing of classes. Despite the differing syntax, the following hints all enforce the same semantics:

```
# This beartype validator enforces the same semantics as...
IsUnicodeStrWithBeartype = Annotated[object, IsInstance[str]]

# ...this PEP 484-compliant type hint.
IsUnicodeStrWithPep484 = str

# Likewise, this beartype validator enforces the same semantics as...
IsStrWithWithBeartype = Annotated[object, IsInstance[str, bytes]]

# ...this PEP 484-compliant type hint.
IsStrWithWithPep484 = Union[str, bytes]
```
*
`IsInstance` permissively validates type instancing of arbitrary objects (including possibly nested attributes of parameters and returns when combined with `beartype.vale.IsAttr` ) against one or more classes. *
Isinstanceable classes rigidly validate type instancing of only parameters and returns against only one class.
Unlike isinstanceable type hints, instance validators support various set theoretic operators. Critically, this includes negation. Instance validators prefixed by the negation operator
`~` match all objects that are not instances of the classes subscripting those validators. Wait. Wait just a hot minute there. Doesn’t a `typing.Annotated` type hint necessarily match instances of the class subscripting that type hint? Yup. This means type hints of the form
```
typing.Annotated[{superclass}, ~IsInstance[{subclass}]]
```

match all instances of a superclass that are not also instances of a subclass. And… pretty sure we just invented type hint arithmetic right there.
That sounded intellectual and thus boring. Yet, the disturbing fact that Python booleans are integers …yup while Python strings are infinitely recursive sequences of strings …yup means that type hint arithmetic can save your codebase from Guido’s younger self. Consider this instance validator matching only non-boolean integers, which cannot be expressed with any isinstanceable type hint (e.g.,
`int` ) or other combination of standard off-the-shelf type hints (e.g., unions):

```
# Type hint matching any non-boolean integer. Never fear integers again.
IntNonbool = Annotated[int, ~IsInstance[bool]] # <--- bruh
```
* class beartype.vale.IsSubclass¶
*
`Subscription API:` beartype.vale.IsSubclass[ `type` , …]
Declarative inheritance validator. A PEP-compliant type hint enforcing subclassing of one or more superclasses (base classes) – created by subscripting (indexing) the
`IsSubclass` type hint factory with those superclasses:

```
# Import the requisite machinery.
from beartype.vale import IsSubclass
from typing import Annotated # <--------------- if Python ≥ 3.9.0
#from typing_extensions import Annotated # <--- if Python < 3.9.0

# Type hint matching only string and byte string subclasses.
StrOrBytesSubclass = Annotated[type, IsSubclass[str, bytes]]
```

`IsSubclass` generalizes the comparable PEP 484-compliant `typing.Type` and PEP 585-compliant `type` type hint factories. All three check subclassing of arbitrary superclasses. Despite the differing syntax, the following hints all enforce the same semantics:

```
# This beartype validator enforces the same semantics as...
IsStringSubclassWithBeartype = Annotated[type, IsSubclass[str]]

# ...this PEP 484-compliant type hint as well as...
IsStringSubclassWithPep484 = Type[str]

# ...this PEP 585-compliant type hint.
IsStringSubclassWithPep585 = type[str]
```
*
`IsSubclass` permissively validates type inheritance of arbitrary classes (including possibly nested attributes of parameters and returns when combined with `beartype.vale.IsAttr` ) against one or more superclasses. *
`typing.Type` and `type` rigidly validate type inheritance of only parameters and returns against only one superclass.
Consider this subclass validator, which validates type inheritance of a deeply nested attribute and thus cannot be expressed with
`typing.Type` or `type` :

```
# Type hint matching only NumPy arrays of reals (i.e., either integers
# or floats) of arbitrary precision, generating code resembling:
# (isinstance(array, np.ndarray) and
#  issubclass(array.dtype.type, (np.floating, np.integer)))
NumpyRealArray = Annotated[np.ndarray,
    IsAttr['dtype', IsAttr['type', IsSubclass[np.floating, np.integer]]]]
```
## Validator Syntax¶
Beartype validators support a rich domain-specific language (DSL) leveraging familiar Python operators. Dynamically create new validators on-the-fly from existing validators, fueling reuse and preserving DRY:
Negation (i.e.,
`not` ). Negating any validator with the `~` operator creates a new validator returning `True` only when the negated validator returns `False` :

```
# Type hint matching only strings containing *no* periods, semantically
# equivalent to this type hint:
# PeriodlessString = Annotated[str, Is[lambda text: '.' not in text]]
PeriodlessString = Annotated[str, ~Is[lambda text: '.' in text]]
```
*
Conjunction (i.e.,
`and` ). And-ing two or more validators with the `&` operator creates a new validator returning `True` only when all of the and-ed validators return `True` :

```
# Type hint matching only non-empty strings containing *no* periods,
# semantically equivalent to this type hint:
# NonemptyPeriodlessString = Annotated[
#     str, Is[lambda text: text and '.' not in text]]
SentenceFragment = Annotated[str, (
    Is[lambda text: bool(text)] &
    ~Is[lambda text: '.' in text]
)]
```
*
Disjunction (i.e.,
`or` ). Or-ing two or more validators with the `|` operator creates a new validator returning `True` only when at least one of the or-ed validators returns `True` :

```
# Type hint matching only empty strings *and* non-empty strings containing
# one or more periods, semantically equivalent to this type hint:
# EmptyOrPeriodfullString = Annotated[
#     str, Is[lambda text: not text or '.' in text]]
EmptyOrPeriodfullString = Annotated[str, (
    ~Is[lambda text: bool(text)] |
    Is[lambda text: '.' in text]
)]
```
*
Enumeration (i.e.,
`,` ). Delimiting two or more validators with commas at the top level of a `typing.Annotated` type hint is an alternate syntax for and-ing those validators with the `&` operator, creating a new validator returning `True` only when all of those delimited validators return `True` :

```
# Type hint matching only non-empty strings containing *no* periods,
# semantically equivalent to the "SentenceFragment" defined above.
SentenceFragment = Annotated[str,
    Is[lambda text: bool(text)],
    ~Is[lambda text: '.' in text],
]
```
Since the
`&` operator is more explicit and usable in a wider variety of syntactic contexts, the `&` operator is generally preferable to enumeration (all else being equal). *
Interoperability. As PEP-compliant type hints, validators are safely interoperable with other PEP-compliant type hints and usable wherever other PEP-compliant type hints are usable. Standard type hints are subscriptable with validators, because validators are standard type hints:

```
# Type hint matching only sentence fragments defined as either Unicode or
# byte strings, generalizing "SentenceFragment" type hints defined above.
SentenceFragment = Union[
    Annotated[bytes, Is[lambda text: b'.' in text]],
    Annotated[str, Is[lambda text: u'.' in text]],
]
```
Standard Python precedence rules may apply.
DSL: it’s not just a telecom acronym anymore.
## Validator Caveats¶
Validators require:

* Beartype. Currently, all other static and runtime type checkers silently ignore beartype validators during type-checking. This includes mypy – which we could possibly solve by bundling a mypy plugin with beartype that extends mypy to statically analyze declarative beartype validators (e.g., `beartype.vale.IsAttr` , `beartype.vale.IsEqual` ). We leave this as an exercise to the idealistic doctoral thesis candidate. Please do this for us, someone who is not us.
* Either Python ≥ 3.9 or typing_extensions ≥ 3.9.0.0. Validators piggyback onto the `typing.Annotated` class first introduced with Python 3.9.0 and since backported to older Python versions by the third-party “typing_extensions” package, which beartype also transparently supports.
## Validator Showcase¶
Observe the disturbing (yet alluring) utility of beartype validators in action as they unshackle type hints from the fetters of PEP compliance. Begone, foulest standards!
### Full-Fat O(n) Matching¶
Let’s validate all integers in a list of integers in O(n) time, because validators mean you no longer have to accept the QA scraps we feed you:
```
# Type hint matching all integers in a list of integers in O(n) time. Please
# never do this. You now want to, don't you? Why? You know the price! Why?!?
IntList = Annotated[list[int], Is[lambda lst: all(
isinstance(item, int) for item in lst)]]
# Type-check all integers in a list of integers in O(n) time. How could you?
@beartype
def sum_intlist(my_list: IntList) -> int:
'''
The slowest possible integer summation over the passed list of integers.
There goes your whole data science pipeline. Yikes! So much cringe.
'''
return sum(my_list) # oh, gods what have you done
```
Welcome to full-fat type-checking. In our disastrous roadmap to beartype 1.0.0, we reluctantly admit that we’d like to augment the `beartype.beartype()` decorator with a new parameter enabling full-fat
type-checking. But don’t wait for us. Force the issue now by just doing it
yourself and then mocking us all over Gitter! Fight the bear, man.
There are good reasons to believe that O(1) type-checking is preferable. Violating that core precept exposes your codebase to scalability and security concerns. But you’re the Big Boss, you swear you know best, and (in any case) we can’t stop you because we already let the unneutered tomcat out of his trash bin by publishing this API into the badlands of PyPI.
### Trendy String Matching¶
Let’s accept strings either at least 80 characters long or both quoted and suffixed by a period. Look, it doesn’t matter. Just do it already, beartype!
```
# Validator matching only strings at least 80 characters in length.
IsLengthy = Is[lambda text: len(text) >= 80]
# Validator matching only strings suffixed by a period.
IsSentence = Is[lambda text: text and text[-1] == '.']
# Validator matching only single- or double-quoted strings.
def _is_quoted(text): return text.count('"') >= 2 or text.count("'") >= 2
IsQuoted = Is[_is_quoted]
# Combine multiple validators by just listing them sequentially.
@beartype
def desentence_lengthy_quoted_sentence(
text: Annotated[str, IsLengthy, IsSentence, IsQuoted]) -> str:
'''
Strip the suffixing period from a lengthy quoted sentence... 'cause.
'''
return text[:-1] # this is horrible
# Combine multiple validators by just "&"-ing them sequentially. Yes, this
# is exactly identical to the prior function. We do this because we can.
@beartype
def desentence_lengthy_quoted_sentence_part_deux(
text: Annotated[str, IsLengthy & IsSentence & IsQuoted]) -> str:
'''
Strip the suffixing period from a lengthy quoted sentence... again.
'''
return text[:-1] # this is still horrible
# Combine multiple validators with as many "&", "|", and "~" operators as
# you can possibly stuff into a module that your coworkers can stomach.
# (They will thank you later. Possibly much later.)
@beartype
def strip_lengthy_or_quoted_sentence(
text: Annotated[str, IsLengthy | (IsSentence & ~IsQuoted)]) -> str:
'''
Strip the suffixing character from a string that is lengthy and/or a
quoted sentence, because your web app deserves only the best data.
'''
return text[:-1] # this is frankly outrageous
```
### Type Hint Arithmetic¶
Subtitle: From Set Theory They Shall Grow
PEP 484 standardized the `typing.Union` factory disjunctively matching any of several equally permissible type hints à la
Python’s builtin `or` operator or the overloaded `|` operator for sets.
That’s great, because set theory is the beating heart behind type theory. But that’s just disjunction. What about intersection (e.g., `and` , `&` ),
complementation (e.g., `not` , `~` ), or any
of the vast multitude of other set theoretic operations? Can we logically
connect simple type hints validating trivial constraints into complex type
hints validating non-trivial constraints via PEP-standardized analogues of
unary and binary operators?
Nope. They don’t exist yet. But that’s okay. You use beartype, which means you don’t have to wait for official Python developers to get there first. You’re already there. …woah
# Type Hint Elision¶
Python’s core type hierarchy conceals an ugly history of secretive backward compatibility. In this subsection, we uncover the two filthiest, flea-infested, backwater corners of the otherwise well-lit atrium that is the Python language – and how exactly you can finalize them. Both obstruct type-checking, readable APIs, and quality assurance in the post-Python 2.7 era.
Guido doesn’t want you to know. But you want to know, don’t you? You are about to enter another dimension, a dimension not only of syntax and semantics but of shame. A journey into a hideous land of annotation wrangling. Next stop… the Beartype Zone. Because guess what?
Booleans are integers. They shouldn’t be. Booleans aren’t integers in most high-level languages. Wait. Are you telling me booleans are literally integers in Python? Surely you jest. That can’t be. You can’t add booleans, can you? What would that even mean if you could? Observe and cower, rigorous data scientists.
```
>>> True + 3.1415
4.141500000000001 # <-- oh. by. god.
>>> isinstance(False, int)
True # <-- when nothing is true, everything is true
```
*
Strings are infinitely recursive sequences of… yup, it’s strings. They shouldn’t be. Strings aren’t infinitely recursive data structures in any other language devised by incautious mortals – high-level or not. Wait. Are you telling me strings are both indistinguishable from full-blown immutable sequences containing arbitrary items and infinitely recurse into themselves like that sickening non-Euclidean Hall of Mirrors I puked all over when I was a kid? Surely you kid. That can’t be. You can’t infinitely index into strings and pass and return the results to and from callables expecting either
`Sequence[Any]` or `Sequence[str]` type hints, can you? Witness and tremble, stricter-than-thou QA evangelists.

```
>>> 'yougottabekiddi—'[0][0][0][0][0][0][0][0][0][0][0][0][0][0][0]
'y' # <-- pretty sure we just broke the world
>>> from collections.abc import Sequence
>>> isinstance("Ph'nglui mglw'nafh Cthu—"[0][0][0][0][0], Sequence)
True # <-- ...curse you, curse you to heck and back
```
When we annotate a callable as accepting an `int` , we never want that
callable to also silently accept a `bool` . Likewise, when we annotate
another callable as accepting a `Sequence[Any]` or `Sequence[str]` , we
never want that callable to also silently accept a `str` . These are
sensible expectations – just not in Python, where madness prevails.
To resolve these counter-intuitive concerns, we need the equivalent of the relative set complement (or difference). We now call this thing… type elision! Sounds pretty hot, right? We know.
# Booleans ≠ Integers¶
Let’s first validate non-boolean integers with a beartype validator effectively declaring a new `int - bool` class (i.e., the subclass of all
integers that are not booleans):
```
# Type hint matching any non-boolean integer. This day all errata die.
IntNonbool = Annotated[int, ~IsInstance[bool]] # <--- bruh
# Type-check zero or more non-boolean integers summing to a non-boolean
# integer. Beartype wills it. So it shall be.
@beartype
def sum_ints(*args: IntNonbool) -> IntNonbool:
'''
I cast thee out, mangy booleans!
You plague these shores no more.
'''
return sum(args)
```
# Strings ≠ Sequences¶
Let’s next validate non-string sequences with beartype validators effectively declaring a new `Sequence - str` class (i.e., the subclass of all
sequences that are not strings):
```
# Type hint matching any non-string sequence. Your day has finally come.
SequenceNonstr = Annotated[Sequence, ~IsInstance[str]] # <--- we doin this
# Type hint matching any non-string sequence *WHOSE ITEMS ARE ALL STRINGS.*
SequenceNonstrOfStr = Annotated[Sequence[str], ~IsInstance[str]]
# Type-check a non-string sequence of arbitrary items coerced into strings
# and then joined on newline to a new string. (Beartype got your back, bro.)
@beartype
def join_objects(my_sequence: SequenceNonstr) -> str:
'''
Your tide of disease ends here, :class:`str` class!
'''
return '\n'.join(map(str, my_sequence)) # <-- no idea how that works
# Type-check a non-string sequence whose items are all strings joined on
# newline to a new string. It isn't much, but it's all you ask.
@beartype
def join_strs(my_sequence: SequenceNonstrOfStr) -> str:
'''
I expectorate thee up, sequence of strings.
'''
return '\n'.join(my_sequence) # <-- do *NOT* do this to a string
```
### Tensor Property Matching¶
Let’s validate the same two-dimensional NumPy array of floats of arbitrary precision as in the lead example above with an efficient declarative validator avoiding the additional stack frame imposed by the functional validator in that example:
```
# Type hint matching only two-dimensional NumPy arrays of floats of
# arbitrary precision. This time, do it faster than anyone has ever
# type-checked NumPy arrays before. (Cue sonic boom, Chuck Yeager.)
import numpy as np
Numpy2DFloatArray = Annotated[np.ndarray,
    IsAttr['ndim', IsEqual[2]] &
    IsAttr['dtype', IsAttr['type', IsSubclass[np.floating]]]
]
```
## Validator Alternatives¶
If the unbridled power of beartype validators leaves you variously queasy, uneasy, and suspicious of our core worldview, beartype also supports third-party type hints like typed NumPy arrays.
Whereas beartype validators are verbose, expressive, and general-purpose, the following hints are terse, inexpressive, and domain-specific. Since beartype internally converts these hints to their equivalent validators, similar caveats apply. Notably, these hints require:
Beartype, which hopefully goes without saying.
### NumPy Type Hints¶
Beartype conditionally supports NumPy type hints (i.e., annotations created by subscripting (indexing) various attributes of the “numpy.typing” subpackage) when these optional runtime dependencies are all satisfied:
* Python ≥ 3.8.0.
* beartype ≥ 0.8.0.
* NumPy ≥ 1.21.0.

Beartype internally converts NumPy type hints into equivalent beartype validators at decoration time. NumPy type hints currently only validate dtypes, a common but limited use case. Beartype validators validate any arbitrary combinations of array constraints – including dtypes, shapes, contents, and… well, anything. Which is a lot. NumPy type hints are thus just syntactic sugar for beartype validators – albeit quasi-portable syntactic sugar also supported by mypy.
Wherever you can, prefer NumPy type hints for portability. Everywhere else, default to beartype validators for generality. Combine them for the best of all possible worlds:
```
# Import the requisite machinery.
from beartype.vale import IsAttr, IsEqual
from numpy import floating
from numpy.typing import NDArray
from typing import Annotated # <--------------- if Python ≥ 3.9.0

# Beartype validator + NumPy type hint matching all two-dimensional NumPy
# arrays of floating-point numbers of any arbitrary precision.
Numpy2DFloatArray = Annotated[NDArray[floating], IsAttr['ndim', IsEqual[2]]]
```
Rejoice! A one-liner solves everything yet again.
# Typed NumPy Arrays¶
Type NumPy arrays by subscripting (indexing) the numpy.typing.NDArray class with one of three possible types of objects:
An array dtype (i.e., instance of the numpy.dtype class).
*
A scalar dtype (i.e., concrete subclass of the numpy.generic abstract base class (ABC)).
*
A scalar dtype ABC (i.e., abstract subclass of the numpy.generic ABC).
Beartype generates fundamentally different type-checking code for these types, complying with both mypy semantics (which behaves similarly) and our userbase (which demands this behaviour). May there be hope for our collective future.
class numpy.typing.NDArray[numpy.dtype]
NumPy array typed by array dtype. A PEP-noncompliant type hint enforcing object equality against any array dtype (i.e., numpy.dtype instance), created by subscripting (indexing) the numpy.typing.NDArray class with that array dtype.
Prefer this variant when validating the exact data type of an array:
```
# Import the requisite machinery.
from beartype import beartype
from numpy import dtype
from numpy.typing import NDArray

# NumPy type hint matching all NumPy arrays of 32-bit big-endian integers,
# semantically equivalent to this beartype validator:
# NumpyInt32BigEndianArray = Annotated[
#     np.ndarray, IsAttr['dtype', IsEqual[dtype('>i4')]]]
NumpyInt32BigEndianArray = NDArray[dtype('>i4')]
```
class numpy.typing.NDArray[numpy.dtype.type]
NumPy array typed by scalar dtype. A PEP-noncompliant type hint enforcing object equality against any scalar dtype (i.e., concrete subclass of the numpy.generic ABC), created by subscripting (indexing) the numpy.typing.NDArray class with that scalar dtype.
Prefer this variant when validating the exact scalar precision of an array:
```
# Import the requisite machinery.
from beartype import beartype
from numpy import float64
from numpy.typing import NDArray

# NumPy type hint matching all NumPy arrays of 64-bit floats, semantically
# equivalent to this beartype validator:
# NumpyFloat64Array = Annotated[
#     np.ndarray, IsAttr['dtype', IsAttr['type', IsEqual[float64]]]]
NumpyFloat64Array = NDArray[float64]
```
Common scalar dtypes include:

* Fixed-precision integer dtypes (e.g., `numpy.int32` , `numpy.int64` ).
* Fixed-precision floating-point dtypes (e.g., `numpy.float32` , `numpy.float64` ).
class numpy.typing.NDArray[type[numpy.dtype.type]]
NumPy array typed by scalar dtype ABC. A PEP-noncompliant type hint enforcing type inheritance against any scalar dtype ABC (i.e., abstract subclass of the numpy.generic ABC), created by subscripting (indexing) the numpy.typing.NDArray class with that ABC.
Prefer this variant when validating only the kind of scalars (without reference to exact precision) in an array:
```
# Import the requisite machinery.
from beartype import beartype
from numpy import floating
from numpy.typing import NDArray

# NumPy type hint matching all NumPy arrays of floats of arbitrary
# precision, equivalent to this beartype validator:
# NumpyFloatArray = Annotated[
#     np.ndarray, IsAttr['dtype', IsAttr['type', IsSubclass[floating]]]]
NumpyFloatArray = NDArray[floating]
```
Common scalar dtype ABCs include:

* numpy.integer, the superclass of all fixed-precision integer dtypes.
* numpy.floating, the superclass of all fixed-precision floating-point dtypes.
```
DOOR: the Decidedly Object-Oriented Runtime-checker
DOOR: it's capitalized, so it matters
```
Enter the DOOR (Decidedly Object-oriented Runtime-checker): beartype’s Pythonic API for introspecting, comparing, and type-checking PEP-compliant type hints in average-case \(O(1)\) time with negligible constants. It’s fast is what we’re saying.
\(O(1)\): it’s just how beartype jiggles.
## DOOR Overview¶
For efficiency, security, and scalability, the beartype codebase is like the Linux kernel. That’s a polite way of saying our code is unreadable gibberish implemented:
* Procedurally, mostly with module-scoped functions. Classes? We don’t need classes where we’re going, which is nowhere you want to go.
* Iteratively, mostly with `while` loops over `tuple` instances. We shouldn’t have admitted that. We are not kidding. We wish we were kidding. Beartype is an echo chamber of `tuple` all the way down. Never do what we do. This is our teaching moment.
DOOR is different. DOOR has competing goals like usability, maintainability, and debuggability. Those things are often valuable to people that live in mythical lands with lavish amenities like potable ground water, functioning electrical grids, and Internet speed in excess of 56k dial-up. To achieve this utopian dream, DOOR is implemented:
* Object-orientedly, with a non-trivial class hierarchy of metaclasses, mixins, and abstract base classes (ABC) nested twenty levels deep defining dunder methods deferring to public methods leveraging utility functions. Nothing really makes sense, but nothing has to. Tests say it works. After all, would tests lie? We will document everything one day.
* Recursively, with methods commonly invoking themselves until the call stack invariably ignites in flames. We are pretty sure we didn’t just type that.
This makes DOOR unsuitable for use inside beartype itself (where ruthless micro-optimizations have beaten up everything else), but optimum for the rest of the world (where rationality, sanity, and business reality reigns in the darker excesses of humanity). This hopefully includes you.
Don’t be like beartype. Use DOOR instead.
## DOOR Procedures¶
```
Type-check anything
against any type hint –
at any time,
anywhere.
```
“Any” is the key here. When the `isinstance()` and `issubclass()` builtins fail to scale, prefer the `beartype.door` procedural API.
### Procedural API¶
* beartype.door.die_if_unbearable(obj: object, hint: object, *, conf: beartype.BeartypeConf = beartype.BeartypeConf()) None [source]¶
*
beartype.roar.BeartypeCallHintViolation – If `obj` violates `hint`.

Runtime type-checking exception raiser. If object `obj`:

* Violates type hint `hint` under configuration `conf`, `die_if_unbearable()` raises a type-checking violation (i.e., human-readable exception).
* Satisfies type hint `hint` under configuration `conf`, `die_if_unbearable()` reduces to a noop (i.e., does nothing bad).
Release the bloodthirsty examples!
```
# Import the requisite machinery.
>>> from beartype.door import die_if_unbearable
>>> from beartype.typing import List, Sequence

# Type-check an object violating a type hint.
>>> die_if_unbearable("My people ate them all!", List[int] | None)
BeartypeDoorHintViolation: Object 'My people ate them all!' violates type
hint list[int] | None, as str 'My people ate them all!' not list or <class
"builtins.NoneType">.

# Type-check multiple objects satisfying multiple type hints.
>>> die_if_unbearable("I'm swelling with patriotic mucus!", str | None)
>>> die_if_unbearable("I'm not on trial here.", Sequence[str])
```
For those familiar with typeguard, this function implements the beartype equivalent of the low-level typeguard.check_type function. For everyone else, pretend you never heard us just namedrop typeguard.
* beartype.door.is_bearable(obj: object, hint: object, *, conf: beartype.BeartypeConf = beartype.BeartypeConf()) bool [source]¶
*
conf (beartype.BeartypeConf) – Beartype configuration. Defaults to the default configuration performing \(O(1)\) type-checking.

* Return bool:

  `True` only if `obj` satisfies `hint`.

Runtime type-checking tester. If object `obj`:

* Satisfies type hint `hint` under configuration `conf`, `is_bearable()` returns `True`.
* Violates type hint `hint` under configuration `conf`, `is_bearable()` returns `False`.
An example paints a thousand docstrings. …what does that even mean?
```
# Import the requisite machinery.
>>> from beartype.door import is_bearable
>>> from beartype.typing import List, Sequence

# Type-check an object violating a type hint.
>>> is_bearable('Stop exploding, you cowards.', List[bool] | None)
False

# Type-check multiple objects satisfying multiple type hints.
>>> is_bearable("Kif, I’m feeling the ‘Captain's itch.’", str | None)
True
>>> is_bearable('I hate these filthy Neutrals, Kif.', Sequence[str])
True
```
`is_bearable()` is a strict superset of the `isinstance()` builtin. `is_bearable()` can thus be safely called wherever `isinstance()` is called with the same exact parameters in the same exact order:

```
# Requisite machinery: I import you.
>>> from beartype.door import is_bearable

# These two statements are semantically equivalent.
>>> is_bearable('I surrender and volunteer for treason.', str)
True
>>> isinstance('I surrender and volunteer for treason.', str)
True

# These two statements are semantically equivalent, too.
>>> is_bearable(b'A moment of weakness is all it takes.', (str, bytes))
True
>>> isinstance(b'A moment of weakness is all it takes.', (str, bytes))
True

# These two statements are semantically equivalent, yet again. *shockface*
>>> is_bearable('Comets: the icebergs of the sky.', bool | None)
False
>>> isinstance('Comets: the icebergs of the sky.', bool | None)
False
```
`is_bearable()` is also a spiritual superset of the `issubclass()` builtin. `is_bearable()` can be safely called wherever `issubclass()` is called by replacing the superclass(es) to be tested against with a `type[{cls}]` or `type[{cls1}] | ... | type[{clsN}]` type hint:

```
# Machinery. It is requisite.
>>> from beartype.door import is_bearable
>>> from beartype.typing import Type
>>> from collections.abc import Awaitable, Collection, Iterable

# These two statements are semantically equivalent.
>>> is_bearable(str, Type[Iterable])
True
>>> issubclass(str, Iterable)
True

# These two statements are semantically equivalent, too.
>>> is_bearable(bytes, Type[Collection] | Type[Awaitable])
True
>>> issubclass(bytes, (Collection, Awaitable))
True

# These two statements are semantically equivalent, yet again. *ohbygods*
>>> is_bearable(bool, Type[str] | Type[float])
False
>>> issubclass(bool, (str, float))
False
```
* beartype.door.is_subhint(subhint: object, superhint: object) bool [source]¶
*
* Parameters:

  * subhint (object) – Type hint to be tested as a subhint.
  * superhint (object) – Type hint to be tested as a superhint.

* Return bool:

  `True` only if `subhint` is a subhint of `superhint`.

Subhint tester. If type hint:

* `subhint` is a subhint of type hint `superhint`, `is_subhint()` returns `True`; else, `is_subhint()` returns `False`.
* `superhint` is a superhint of type hint `subhint`, `is_subhint()` returns `True`; else, `is_subhint()` returns `False`. This is an alternative way of expressing the same relation as the prior condition – just with the jargon reversed. Jargon gonna jargon.

```
# Import us up the machinery.
>>> from beartype.door import is_subhint
>>> from beartype.typing import Any
>>> from collections.abc import Callable, Sequence

# A type hint matching any callable accepting no arguments and returning
# a list is a subhint of a type hint matching any callable accepting any
# arguments and returning a sequence of any types.
>>> is_subhint(Callable[[], list], Callable[..., Sequence[Any]])
True

# A type hint matching any callable accepting no arguments and returning
# a list, however, is *NOT* a subhint of a type hint matching any
# callable accepting any arguments and returning a sequence of integers.
>>> is_subhint(Callable[[], list], Callable[..., Sequence[int]])
False

# Booleans are subclasses and thus subhints of integers.
>>> is_subhint(bool, int)
True

# The converse, however, is *NOT* true.
>>> is_subhint(int, bool)
False

# All classes are subclasses and thus subhints of themselves.
>>> is_subhint(int, int)
True
```
Equivalently,
`is_subhint()` returns `True` only if all of the following conditions are satisfied:
Commensurability. `subhint` and `superhint` are semantically related by conveying broadly similar intentions, enabling these two hints to be reasonably compared. For example:

* `list[int]` and `collections.abc.Sequence[int]` are semantically related. These two hints both convey container semantics. Despite their differing child hints, these two hints are broadly similar enough to be reasonably comparable.
* `list[int]` and `collections.abc.Callable[[], int]` are not semantically related. Whereas the first hint conveys a container semantic, the second hint conveys a callable semantic. Since these two semantics are unrelated, these two hints are dissimilar enough to not be reasonably comparable.
Narrowness. The first hint is either narrower than or semantically equivalent to the second hint. Equivalently:
* Every possible object matched by the first hint is also matched by the second hint.
* In incomprehensible set-theoretic jargon, the size of the countably infinite set of all possible objects matched by the first hint is less than or equal to that of those matched by the second hint.
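In symbols (notation ours, not beartype's): writing \(\llbracket H \rrbracket\) for the set of all possible objects matched by a type hint \(H\), the two conditions combine as

\[
\mathtt{is\_subhint}(A, B) = \mathrm{True} \iff A, B \text{ are commensurable} \;\wedge\; \llbracket A \rrbracket \subseteq \llbracket B \rrbracket.
\]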
`is_subhint()` supports a variety of real-world use cases, including:
* Multiple dispatch. A pure-Python decorator can implement multiple dispatch over multiple overloaded implementations of the same callable by calling this function. An overload of the currently called callable can be dispatched to if the types of the passed parameters are all subhints of the type hints annotating that overload. (See the sketch after this list.)
* Formal verification of API compatibility across version bumps. Automated tooling like linters, continuous integration (CI), `git` hooks, and integrated development environments (IDEs) can raise pre-release alerts prior to accidental publication of API breakage by calling this function. A Python API preserves backward compatibility if each type hint annotating each public class or callable of the current version of that API is a superhint of the type hint annotating the same class or callable of the prior release of that API.
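A minimal sketch of the first use case, built only on `is_subhint()`; the registry, `overload_for()`, and `dispatch()` are hypothetical names of ours, not beartype API:

```
# Minimal multiple-dispatch sketch driven by beartype.door.is_subhint().
from beartype.door import is_subhint

_OVERLOADS = []  # (parameter hints, implementation) pairs, in registration order.

def overload_for(*hints):
    '''Register one overload keyed by the type hints of its parameters.'''
    def decorator(func):
        _OVERLOADS.append((hints, func))
        return func
    return decorator

def dispatch(*args):
    '''Call the first overload whose hints are superhints of the argument types.'''
    for hints, func in _OVERLOADS:
        if len(hints) == len(args) and all(
                is_subhint(type(arg), hint) for arg, hint in zip(args, hints)):
            return func(*args)
    raise TypeError(f'No overload matches {args!r}')

@overload_for(int, int)
def _add_numbers(x, y):
    return x + y

@overload_for(str, str)
def _add_strings(x, y):
    return f'{x} {y}'

assert dispatch(1, 2) == 3
assert dispatch('Spirit', 'Bear') == 'Spirit Bear'
```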
### Procedural Showcase¶
By the power of beartype, you too shall catch all the bugs.
# Detect API Breakage¶
Detect breaking API changes in arbitrary callables via type hints alone in ten lines of code – ignoring imports, docstrings, comments, and blank lines to make us look better.
```
from beartype import beartype
from beartype.door import is_subhint
from beartype.peps import resolve_pep563
from collections.abc import Callable
@beartype
def is_func_api_preserved(func_new: Callable, func_old: Callable) -> bool:
'''
``True`` only if the signature of the first passed callable (presumably
the newest version of some callable to be released) preserves backward
API compatibility with the second passed callable (presumably an older
previously released version of the first passed callable) according to
the PEP-compliant type hints annotating these two callables.
Parameters
----------
func_new: Callable
Newest version of a callable to test for API breakage.
func_old: Callable
Older version of that same callable.
Returns
----------
bool
``True`` only if the ``func_new`` API preserves the ``func_old`` API.
'''
# Resolve all PEP 563-postponed type hints annotating these two callables
# *BEFORE* reasoning with these type hints.
resolve_pep563(func_new)
resolve_pep563(func_old)
# For the name of each annotated parameter (or "return" for an annotated
# return) and the hint annotating that parameter or return for this newer
# callable...
for func_arg_name, func_new_hint in func_new.__annotations__.items():
# Corresponding hint annotating this older callable if any or "None".
func_old_hint = func_old.__annotations__.get(func_arg_name)
# If no corresponding hint annotates this older callable, silently
# continue to the next hint.
if func_old_hint is None:
continue
# Else, a corresponding hint annotates this older callable.
# If this older hint is *NOT* a subhint of this newer hint, this
# parameter or return breaks backward compatibility.
if not is_subhint(func_old_hint, func_new_hint):
return False
# Else, this older hint is a subhint of this newer hint. In this case,
# this parameter or return preserves backward compatibility.
# All annotated parameters and returns preserve backward compatibility.
return True
```
The proof is in the real-world pudding.
```
>>> from numbers import Real
# New and successively older APIs of the same example function.
>>> def new_func(text: str | None, ints: list[Real]) -> int: ...
>>> def old_func(text: str, ints: list[int]) -> bool: ...
>>> def older_func(text: str, ints: list) -> bool: ...
# Does the newest version of that function preserve backward compatibility
# with the next older version?
>>> is_func_api_preserved(new_func, old_func)
True # <-- good. this is good.
# Does the newest version of that function preserve backward compatibility
# with the oldest version?
>>> is_func_api_preserved(new_func, older_func)
False # <-- OH. MY. GODS.
```
In the latter case, the oldest version `older_func()` of that function
ambiguously annotated its `ints` parameter to accept any list rather than
merely a list of numbers. Both the newer version `new_func()` and the next
older version `old_func()` resolve the ambiguity by annotating that parameter
to accept only lists of numbers. Technically, that constitutes API breakage;
users upgrading from the older version of the package providing `older_func()` to the newer version of the package providing `new_func()` could have been
passing lists of non-numbers to `older_func()` . Their code is now broke. Of
course, their code was probably always broke. But they’re now screaming murder
on your issue tracker and all you can say is: “We shoulda used beartype.”

In the former case, `new_func()` relaxes the constraint from `old_func()` that this list contain only integers to accept a list containing both integers and floats. `new_func()` thus preserves backward compatibility with `old_func()`.
Thus was Rome’s API preserved in a day.
## DOOR Classes¶
Introspect and compare type hints with an object-oriented hierarchy of Pythonic classes. When the standard `typing` module has you scraping your
fingernails on the nearest whiteboard in chicken scratch, prefer the `beartype.door` object-oriented API.
You’ve already seen that type hints do not define a usable public Pythonic API. That was by design. Type hints were never intended to be used at runtime. But that’s a bad design. Runtime is all that matters, ultimately. If the app doesn’t run, it’s broke – regardless of what the static type-checker says. Now, beartype breaks a trail through the spiny gorse of unusable PEP standards.
### Object-oriented Cheatsheet¶
Open the locked cathedral of type hints with `beartype.door` : your QA
crowbar that legally pries open all type hints. Cry havoc, the bugbears of war!
```
# This is DOOR. It's a Pythonic API providing an object-oriented interface
# to low-level type hints that *OFFICIALLY* have no API whatsoever.
>>> from beartype.door import TypeHint
# DOOR hint wrapping a PEP 604-compliant type union.
>>> union_hint = TypeHint(int | str | None) # <-- so. it begins.
# DOOR hints have Pythonic public classes -- unlike normal type hints.
>>> type(union_hint)
beartype.door.UnionTypeHint # <-- what madness is this?
# DOOR hints can be detected Pythonically -- unlike normal type hints.
>>> from beartype.door import UnionTypeHint
>>> isinstance(union_hint, UnionTypeHint) # <-- *shocked face*
True
# DOOR hints can be type-checked Pythonically -- unlike normal type hints.
>>> union_hint.is_bearable('The unbearable lightness of type-checking.')
True
>>> union_hint.die_if_unbearable(b'The @beartype that cannot be named.')
beartype.roar.BeartypeDoorHintViolation: Object b'The @beartype that cannot
be named.' violates type hint int | str | None, as bytes b'The @beartype
that cannot be named.' not str, <class "builtins.NoneType">, or int.
# DOOR hints can be iterated Pythonically -- unlike normal type hints.
>>> for child_hint in union_hint: print(child_hint)
TypeHint(<class 'int'>)
TypeHint(<class 'str'>)
TypeHint(<class 'NoneType'>)
# DOOR hints can be indexed Pythonically -- unlike normal type hints.
>>> union_hint[0]
TypeHint(<class 'int'>)
>>> union_hint[-1]
TypeHint(<class 'NoneType'>)
# DOOR hints can be sliced Pythonically -- unlike normal type hints.
>>> union_hint[0:2]
(TypeHint(<class 'int'>), TypeHint(<class 'str'>))
# DOOR hints supports "in" Pythonically -- unlike normal type hints.
>>> TypeHint(int) in union_hint # <-- it's all true.
True
>>> TypeHint(bool) in union_hint # <-- believe it.
False
# DOOR hints are sized Pythonically -- unlike normal type hints.
>>> len(union_hint) # <-- woah.
3
# DOOR hints test as booleans Pythonically -- unlike normal type hints.
>>> if union_hint: print('This type hint has children.')
This type hint has children.
>>> if not TypeHint(tuple[()]): print('But this other type hint is empty.')
But this other type hint is empty.
# DOOR hints support equality Pythonically -- unlike normal type hints.
>>> from typing import Union
>>> union_hint == TypeHint(Union[int, str, None])
True # <-- this is madness.
# DOOR hints support comparisons Pythonically -- unlike normal type hints.
>>> union_hint <= TypeHint(int | str | bool | None)
True # <-- madness continues.
# DOOR hints publish the low-level type hints they wrap.
>>> union_hint.hint
int | str | None # <-- makes sense.
# DOOR hints publish tuples of the original child type hints subscripting
# (indexing) the original parent type hints they wrap -- unlike normal type
# hints, which unreliably publish similar tuples under differing names.
>>> union_hint.args
(int, str, NoneType) # <-- sense continues to be made.
# DOOR hints are semantically self-caching.
>>> TypeHint(int | str | bool | None) is TypeHint(None | bool | str | int)
True # <-- blowing minds over here.
```
`beartype.door` : never leave `typing` without it.
### Object-oriented Overview¶
`TypeHint` wrappers:
* Are immutable, hashable, and thus safely usable both as dictionary keys and set members.
* Support efficient lookup of child type hints – just like dictionaries and sets.
* Support efficient iteration over and random access of child type hints – just like lists and tuples.
* Are partially ordered over the set of all type hints (according to the subhint relation) and safely usable in any algorithm accepting a partial ordering (e.g., topological sort) – see the sketch after this list.
* Guarantee similar performance as `beartype.beartype()` itself. All `TypeHint` methods and properties run in (possibly amortized) constant time with negligible constants.
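A minimal sketch of that partial ordering, using only the comparison operators shown in the cheatsheet; the counting-based ranking is our workaround for the ordering being merely partial, not beartype API:

```
# Minimal sketch: ranking hints from narrowest to widest via the subhint order.
from beartype.door import TypeHint

hints = [TypeHint(object), TypeHint(int), TypeHint(bool), TypeHint(int | str)]

# Rank each hint by how many of the others it is a subhint of: narrower
# hints are subhints of more hints, so they sort first.
ordered = sorted(
    hints, key=lambda hint: sum(hint <= other for other in hints), reverse=True)

print([wrapper.hint for wrapper in ordered])
# bool first (narrowest), object last (widest).
```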
Open the DOOR to a whole new world. Sing along, everybody! “A whole new worl– *choking noises*”
### Object-oriented API¶
* class beartype.door.TypeHint(hint: object)[source]¶
*
hint (object) – Type hint to be introspected.
Type hint introspector, wrapping the passed type hint
`hint` (which, by design, is mostly unusable at runtime) with an object-oriented Pythonic API designed explicitly for runtime use. `TypeHint` wrappers are instantiated in the standard way. Appearances can be deceiving, however. In truth, `TypeHint` is actually an abstract base class (ABC) that magically employs exploitative metaclass trickery to instantiate a concrete subclass of itself appropriate for this particular kind of `hint`. `TypeHint` is thus a type hint introspector factory. What you read next may shock you.

```
>>> from beartype.door import TypeHint
>>> from beartype.typing import Optional, Union
>>> type(TypeHint(str | list))
beartype.door.UnionTypeHint  # <-- UnionTypeHint, I am your father.
>>> type(TypeHint(Union[str, list]))
beartype.door.UnionTypeHint  # <-- NOOOOOOOOOOOOOOOOOOOOOOO!!!!!!!!
>>> type(TypeHint(Optional[str]))
beartype.door.UnionTypeHint  # <-- Search your MRO. You know it to be true.
```

`TypeHint` wrappers cache efficient singletons of themselves. On the first instantiation of `TypeHint` by `hint`, a new instance unique to `hint` is created and cached; on each subsequent instantiation, the previously cached instance is returned. Observe and tremble in ecstasy as your introspection eats less space and time.

```
>>> from beartype.door import TypeHint
>>> TypeHint(list[int]) is TypeHint(list[int])
True  # <-- you caching monster. how could you? we trusted you!
```
`TypeHint` wrappers expose these public read-only properties:
* args¶
*
`Type:` `tuple`
Tuple of the zero or more original child type hints subscripting the original type hint wrapped by this wrapper.
```
>>> from beartype.door import TypeHint
>>> TypeHint(list).args
()  # <-- i believe this
>>> TypeHint(list[int]).args
(int,)  # <-- fair play to you, beartype!
>>> TypeHint(tuple[int, complex]).args
(int, complex)  # <-- the mind is willing, but the code is weak.
```
`TypeHint` wrappers also expose the tuple of the zero or more child type wrappers wrapping these original child type hints with yet more `TypeHint` wrappers. As yet, there exists no comparable property providing this tuple. Instead, this tuple is accessed via dunder methods – including `__iter__()` , `__getitem__()` , and `__len__()` . Simply pass any `TypeHint` wrapper to a standard Python container like `list` , `set` , or `tuple` .
This makes more sense than it seems. Throw us a frickin’ bone here.
```
>>> from beartype.door import TypeHint
>>> tuple(TypeHint(list))
()  # <-- is this the real life? is this just fantasy? ...why not both?
>>> tuple(TypeHint(list[int]))
(TypeHint(<class 'int'>),)  # <-- the abyss is staring back at us here.
>>> tuple(TypeHint(tuple[int, complex]))
(TypeHint(<class 'int'>), TypeHint(<class 'complex'>))  # <-- make the bad documentation go away, beartype
```
* hint¶
*
`Type:` `object`
Original type hint wrapped by this wrapper at instantiation time.
```
>>> from beartype.door import TypeHint
>>> TypeHint(list[int]).hint
list[int]
```
Seriously. That’s it. That’s the property. This isn’t Principia Mathematica. To you who are about to fall asleep on your keyboards and wake up to find your
`git` repositories empty, beartype salutes you.
* is_ignorable¶
*
`Type:` `bool`

`True` only if this type hint is ignorable (i.e., conveys no meaningful semantics despite superficially appearing to do so). While one might expect the set of all ignorable type hints to be both finite and small, one would be wrong. That set is actually countably infinite in size. Countably infinitely many type hints are ignorable. That’s a lot. These include:
* `typing.Any`, by design. Anything is ignorable. You heard it here.
* `object`, the root superclass of all types. All objects are instances of `object`, so `object` conveys no semantic meaning. Much like @leycec on Monday morning, squint when you see `object`.
* The unsubscripted `typing.Optional` singleton, which expands to the implicit `Optional[Any]` type hint under PEP 484. But PEP 484 also stipulates that all `Optional[t]` type hints expand to `Union[t, type(None)]` type hints for arbitrary arguments `t`. So, `Optional[Any]` expands to merely `Union[Any, type(None)]`. Since all unions subscripted by `typing.Any` reduce to merely `typing.Any`, the unsubscripted `typing.Optional` singleton also reduces to merely `typing.Any`. This intentionally excludes the `Optional[type(None)]` type hint, which the standard `typing` module reduces to merely `type(None)`.
* The unsubscripted `typing.Union` singleton, which reduces to `typing.Any` by the same argument.
* Any subscription of `typing.Union` by one or more ignorable type hints. There exists a countably infinite number of such subscriptions, many of which are non-trivial to find by manual inspection. The ignorability of a union is a transitive property propagated “virally” from child to parent type hints. Consider:
  * `Union[Any, bool, str]`. Since `typing.Any` is ignorable, this hint is trivially ignorable by manual inspection.
  * `Union[str, List[int], NewType('MetaType', Annotated[object, 53])]`. Although several child type hints of this union are non-ignorable, the deeply nested `object` child type hint is ignorable by the argument above. It transitively follows that the `Annotated[object, 53]` parent type hint subscripted by `object`, the `typing.NewType` parent type hint aliased to that `typing.Annotated` hint, and the entire union subscripted by that `typing.NewType` are themselves all ignorable as well.
* Any subscription of `typing.Annotated` by one or more ignorable type hints. As with `typing.Union`, there exists a countably infinite number of such subscriptions. See the prior item. Or don’t. You know. It’s all a little boring and tedious, frankly. Are you even reading this? You are, aren’t you? Well, dunk me in a bucket full of honey. Post a discussion thread on the beartype repository for your chance to win a dancing cat emoji today!
* The `typing.Generic` and `typing.Protocol` superclasses, both of which impose no constraints in and of themselves. Since all possible objects satisfy both superclasses, both superclasses are equivalent to the ignorable `object` root superclass: e.g.,

  ```
  >>> from typing import Protocol
  >>> isinstance(object(), Protocol)
  True  # <-- uhh...
  >>> isinstance('wtfbro', Protocol)
  True  # <-- pretty sure you lost me there.
  >>> isinstance(0x696969, Protocol)
  True  # <-- so i'll just be leaving then, shall i?
  ```

* Any subscription of either the `typing.Generic` or `typing.Protocol` superclasses, regardless of whether the child type hints subscripting those superclasses are ignorable or not. Subscripting a type that conveys no meaningful semantics continues to convey no meaningful semantics. [Shocked Pikachu face.] For example, the type hints `typing.Protocol[str]` and `typing.Generic[str]` are both equally ignorable – despite the `str` class being otherwise unignorable in most type hinting contexts.
* And frankly many more. And… now we know why this property exists.
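A minimal sketch exercising this property on a few of the hints just listed; the expected results follow our reading of the rules above rather than verbatim beartype output:

```
# Minimal sketch: probing is_ignorable on representative hints.
from beartype.door import TypeHint
from beartype.typing import Any, Optional, Union

assert TypeHint(Any).is_ignorable                    # anything is ignorable
assert TypeHint(object).is_ignorable                 # the root superclass
assert TypeHint(Union[Any, bool, str]).is_ignorable  # virally ignorable union
assert not TypeHint(Optional[str]).is_ignorable      # actually constrains
```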
* die_if_unbearable(obj: object, *, conf: beartype.BeartypeConf = beartype.BeartypeConf()) None [source]¶
*
beartype.roar.BeartypeCallHintViolation – If
`obj` violates this type hint.
Shorthand for calling the `beartype.door.die_if_unbearable()` function as `die_if_unbearable(obj=obj, hint=self.hint, conf=conf)`. Behold: an example.

```
# This object-oriented approach...
>>> from beartype.door import TypeHint
>>> TypeHint(bytes | None).die_if_unbearable(
...     "You can't lose hope when it's hopeless.")
BeartypeDoorHintViolation: Object "You can't lose hope when it's hopeless."
violates type hint bytes | None, as str "You can't lose hope when it's
hopeless." not bytes or <class "builtins.NoneType">.

# ...is equivalent to this procedural approach.
>>> from beartype.door import die_if_unbearable
>>> die_if_unbearable(
...     obj="You can't lose hope when it's hopeless.", hint=bytes | None)
BeartypeDoorHintViolation: Object "You can't lose hope when it's hopeless."
violates type hint bytes | None, as str "You can't lose hope when it's
hopeless." not bytes or <class "builtins.NoneType">.
```
* is_bearable(obj: object, *, conf: beartype.BeartypeConf = beartype.BeartypeConf()) bool [source]¶
*
conf (beartype.BeartypeConf) – Beartype configuration. Defaults to the default configuration performing \(O(1)\) type-checking.

* Return bool:

  `True` only if `obj` satisfies this type hint.

Shorthand for calling the `beartype.door.is_bearable()` function as `is_bearable(obj=obj, hint=self.hint, conf=conf)`. Awaken the example!

```
# This object-oriented approach...
>>> from beartype.door import TypeHint
>>> TypeHint(int | float).is_bearable(
...     "It's like a party in my mouth and everyone's throwing up.")
False

# ...is equivalent to this procedural approach.
>>> from beartype.door import is_bearable
>>> is_bearable(
...     obj="It's like a party in my mouth and everyone's throwing up.",
...     hint=int | float,
... )
False
```
* is_subhint(superhint: object) bool [source]¶
*
superhint (object) – Type hint to be tested as a superhint.
* Return bool:
*
`True` only if this type hint is a subhint of `superhint` .
Shorthand for calling the `beartype.door.is_subhint()` function as `is_subhint(subhint=self.hint, superhint=superhint)`. I love the smell of examples in the morning.

```
# This object-oriented approach...
>>> from beartype.door import TypeHint
>>> TypeHint(tuple[bool]).is_subhint(tuple[int])
True

# ...is equivalent to this procedural approach.
>>> from beartype.door import is_subhint
>>> is_subhint(subhint=tuple[bool], superhint=tuple[int])
True
```
```
...is that bear growling or is it just me?
— common last words in rural Canada
```
Beartype only raises:
* Beartype-specific exceptions. For your safety and ours, exceptions raised by beartype are easily distinguished from exceptions raised by everybody else. All exceptions raised by beartype are instances of:
  * Public types importable from the `beartype.roar` subpackage.
  * The `beartype.roar.BeartypeException` abstract base class (ABC).
* Disambiguous exceptions. For your sanity and ours, every exception raised by beartype means one thing and one thing only. Beartype never reuses the same exception class to mean two different things – allowing you to trivially catch and handle the exact exception you’re interested in.
Likewise, beartype only emits beartype-specific warnings and disambiguous warnings. Beartype is fastidious to a fault. Error handling is no… exception. punny *or* funny? you decide.
## Exception API¶
Beartype raises fatal exceptions whenever something explodes. Most are self-explanatory – but some assume prior knowledge of arcane type-hinting standards or require non-trivial resolutions warranting further discussion.
When that happens, don’t be the guy that ignores this chapter.
* exception beartype.roar.BeartypeException[source]¶
*
`Superclass(es):` `Exception`
Beartype exception root superclass. All exceptions raised by beartype are guaranteed to be instances of concrete subclasses of this abstract base class (ABC) whose class names strictly match either:
* `Beartype{subclass_name}Violation` for type-checking violations (e.g., `BeartypeCallHintReturnViolation`).
* `Beartype{subclass_name}Exception` for non-type-checking violations (e.g., `BeartypeDecorHintPep3119Exception`).
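Because every one of those subclasses ultimately derives from this ABC, a single `except` clause doubles as a catch-all; a minimal sketch (the `halve()` function is ours):

```
# Minimal sketch: one except clause catches every beartype exception.
from beartype import beartype
from beartype.roar import BeartypeException

@beartype
def halve(number: int) -> float:
    return number / 2

try:
    halve('this is not an int')
except BeartypeException as exc:
    # The concrete class name follows the naming scheme above.
    print(f'Caught {type(exc).__name__}')
```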
* exception beartype.roar.BeartypeDecorException[source]¶
*
`Superclass(es):` `BeartypeException`
Beartype decorator exception superclass. All exceptions raised by the
`@beartype` decorator at decoration time (i.e., while dynamically generating type-checking wrappers for decorated callables and classes) are guaranteed to be instances of concrete subclasses of this abstract base class (ABC). Since decoration-time exceptions are typically raised from module scope early in the lifetime of a Python process, you are unlikely to manually catch and handle decorator exceptions.
A detailed list of subclasses of this ABC is quite inconsequential. Very well. @leycec admits he was too tired to type it all out. @leycec also admits he played exploitative video games all night instead… again. @leycec is grateful nobody reads these API notes. checkmate, readthedocs.
* exception beartype.roar.BeartypeCallException[source]¶
*
`Superclass(es):` `BeartypeException`
Beartype call-time exception superclass. Beartype type-checkers (including `beartype.door.die_if_unbearable()` and `beartype.beartype()`-decorated callables) raise instances of concrete subclasses of this abstract base class (ABC) at call-time – typically when failing a type-check.

All exceptions raised by beartype type-checkers are guaranteed to be instances of this ABC. Since type-checking exceptions are typically raised from function and method scopes later in the lifetime of a Python process, you are much more likely to manually catch and handle instances of this exception type than other types of beartype exceptions. This includes the pivotal `BeartypeCallHintViolation` type, which subclasses this type.
In fact, you’re encouraged to do so. Repeat after Kermode Bear:
“Exceptions are fun, everybody.”
Gotta catch ‘em all!
* exception beartype.roar.BeartypeCallHintException[source]¶
*
`Superclass(es):` `BeartypeCallException`

Beartype type-checking exception superclass. Beartype type-checkers (including `beartype.door.die_if_unbearable()` and `beartype.beartype()`-decorated callables) raise instances of concrete subclasses of this abstract base class (ABC) when failing a type-check at call time – typically due to you passing a parameter or returning a value violating a type hint annotating that parameter or return.
For once, we’re not the ones to blame. The relief in our cubicle is palpable.
* exception beartype.roar.BeartypeCallHintForwardRefException[source]¶
*
`Superclass(es):` `BeartypeCallHintException`
Beartype type-checking forward reference exception. Beartype type-checkers raise instances of this exception type when a forward reference type hint (i.e., string referring to a class that has yet to be defined) erroneously references either:
An attribute that does not exist.
*
An attribute that exists but whose value is not actually a class.
As we gaze forward in time, so too do we glimpse ourselves – unshaven and shabbily dressed – in the rear-view mirror.
```
>>> from beartype import beartype
>>> from beartype.roar import BeartypeCallHintForwardRefException
>>> @beartype
... def i_am_spirit_bear(favourite_foodstuff: 'salmon.of.course') -> None: pass
>>> try:
...     i_am_spirit_bear('Why do you eat all my salmon, Spirit Bear?')
... except BeartypeCallHintForwardRefException as exception:
...     print(exception)
Forward reference "salmon.of.course" unimportable.
```
* exception beartype.roar.BeartypeCallHintViolation[source]¶
*
`Superclass(es):` `BeartypeCallHintException`
Beartype type-checking violation. This is the most important beartype exception you never hope to see – and thus the beartype exception you are most likely to see. When your code explodes at midnight, instances of this exception class were lighting the fuse behind your back.
Beartype type-checkers raise an instance of this exception class when an object to be type-checked violates the type hint annotating that object. Beartype type-checkers include:
* The `beartype.door.die_if_unbearable()` function.
* User-defined functions and methods decorated by the `beartype.beartype()` decorator, which then themselves become beartype type-checkers.
Because type-checking violations are why we are all here, instances of this exception class offer additional read-only public properties to assist you in debugging. Inspect these properties at runtime to resolve any lingering doubts about which coworker(s) you intend to blame in your next twenty Git commits:
* culprits¶
*
Tuple of one or more culprits (i.e., irresponsible objects that violated the type hints annotating those objects during a recent type-check).
Specifically, this property returns either:
* If a standard slow Python container (e.g., `dict`, `list`, `set`, `tuple`) is responsible for this violation, the 2-tuple `(root_culprit, leaf_culprit)` where:
  * `root_culprit` is the outermost such container. This is usually the passed parameter or returned value indirectly violating this type hint.
  * `leaf_culprit` is the innermost item nested in `root_culprit` directly violating this type hint.
* If a non-container (e.g., scalar, class instance) is responsible for this violation, the 1-tuple `(culprit,)` where `culprit` is that non-container.
Let us examine what the latter means for your plucky intern who will do this after fetching more pumpkin spice lattes for The Team™ (currently engrossed in a critical morale-building “Best of 260” Atari 2600 Pong competition):
```
# Import the requisite machinery.
from beartype import beartype
from beartype.roar import BeartypeCallHintViolation

# Arbitrary user-defined classes.
class SpiritBearIGiveYouSalmonToGoAway(object): pass
class SpiritBearIGiftYouHoneyNotToStay(object): pass

# Arbitrary instance of one of these classes.
SPIRIT_BEAR_REFUSE_TO_GO_AWAY = SpiritBearIGiftYouHoneyNotToStay()

# Callable annotated to accept instances of the *OTHER* class.
@beartype
def when_spirit_bear_hibernates_in_your_bed(
    best_bear_den: SpiritBearIGiveYouSalmonToGoAway) -> None: pass

# Call this callable with this invalid instance.
try:
    when_spirit_bear_hibernates_in_your_bed(SPIRIT_BEAR_REFUSE_TO_GO_AWAY)
# *MAGIC HAPPENS HERE*. Catch violations and inspect their "culprits"!
except BeartypeCallHintViolation as violation:
    # Assert that one culprit was responsible for this violation.
    assert len(violation.culprits) == 1

    # The one culprit: don't think we don't see you hiding there!
    culprit = violation.culprits[0]

    # Assert that this culprit is the same instance passed above.
    assert culprit is SPIRIT_BEAR_REFUSE_TO_GO_AWAY
```
Caveats apply. This property makes a good-faith effort to list the most significant culprits responsible for this type-checking violation. In two edge cases beyond our control, this property falls back to listing truncated snapshots of the machine-readable representations of those culprits (e.g., the first 10,000 characters or so of their
`repr()` strings). This safe fallback is triggered for each culprit that:
* Has already been garbage-collected. To avoid memory leaks, this property only weakly (rather than strongly) refers to these culprits and is thus best accessed only where these culprits are accessible. Technically, this property is safely accessible from any context. Practically, this property is most usefully accessed from the `except ...:` block directly catching this violation. Since these culprits may be garbage-collected at any time thereafter, this property cannot be guaranteed to refer to these culprits outside that block. If this property is accessed from any other context and one or more of these culprits have sadly passed away, this property dynamically reduces the corresponding items of this tuple to only the machine-readable representations of those culprits. [1]
* Is a builtin variable-sized C-based object (e.g., `dict`, `int`, `list`, `str`). Long-standing limitations within CPython itself prevent beartype from weakly referring to those objects. Openly riot on the CPython bug tracker if this displeases you as much as it does us.
Let us examine what this means for your malding CTO:
```
# Import the requisite machinery.
from beartype import beartype
from beartype.roar import BeartypeCallHintViolation
from beartype.typing import List

# Callable annotated to accept a standard container.
@beartype
def we_are_all_spirit_bear(
    best_bear_dens: List[List[str]]) -> None: pass

# Standard container deeply violating the above type hint.
SPIRIT_BEAR_DO_AS_HE_PLEASE = [
    [b'Why do you sleep in my pinball room, Spirit Bear?']]

# Call this callable with this invalid container.
try:
    we_are_all_spirit_bear(SPIRIT_BEAR_DO_AS_HE_PLEASE)
# Shoddy magic happens here. Catch violations and try (but fail) to
# inspect the original culprits, because they were containers!
except BeartypeCallHintViolation as violation:
    # Assert that two culprits were responsible for this violation.
    assert len(violation.culprits) == 2

    # Root and leaf culprits. We just made these words up, people.
    root_culprit = violation.culprits[0]
    leaf_culprit = violation.culprits[1]

    # Assert that these culprits are, in fact, just repr() strings.
    assert root_culprit == repr(SPIRIT_BEAR_DO_AS_HE_PLEASE)
    assert leaf_culprit == repr(SPIRIT_BEAR_DO_AS_HE_PLEASE[0][0])
```
We see that beartype correctly identified the root culprit as the passed list of lists of byte-strings (rather than strings) and the leaf culprit as that byte-string. We also see that beartype only returned the
`repr()` of both culprits rather than those culprits. Why? Because CPython prohibits weak references to both lists and byte-strings.
This is why we facepalm ourselves in the morning. We did it this morning. We’ll do it next morning, too. Until the
`weakref` module improves, @leycec’s forehead will be swollen with an angry mass of unsightly red welts that are now festering unbeknownst to his wife.
New in version 0.12.0.
## Warning API¶
Beartype emits non-fatal warnings whenever something looks like it might explode in your lap later… but has yet to do so. Since it is dangerous to go alone, let beartype’s words of anxiety-provoking wisdom be your guide. The codebase you save might be your own.
### PEP 585 Deprecations¶
Beartype may occasionally emit non-fatal PEP 585 deprecation warnings under Python ≥ 3.9 resembling:
```
/home/kumamon/beartype/_util/hint/pep/utilpeptest.py:377:
BeartypeDecorHintPep585DeprecationWarning: PEP 484 type hint
typing.List[int] deprecated by PEP 585 scheduled for removal in the first
Python version released after October 5th, 2025. To resolve this, import
this hint from "beartype.typing" rather than "typing". See this discussion
for further details and alternatives:
https://github.com/beartype/beartype#pep-585-deprecations
```
This is that discussion topic. Let’s dissect this like a mantis shrimp repeatedly punching out giant kraken.
# What Does This Mean?¶
The PEP 585 standard first introduced by Python 3.9.0 deprecated (obsoleted) most of the PEP 484 standard first introduced by Python 3.5.0 in the official `typing` module. All deprecated type hints are slated to “be
removed from the `typing` module in the first Python version released 5
years after the release of Python 3.9.0.” Spoiler: Python 3.9.0 was released on
October 5th, 2020. Altogether, this means that:
Caution
Most of the “typing” module will be removed in 2025 or 2026.
If your codebase currently imports from the `typing` module, most of
those imports will break under an upcoming Python release. This is what beartype
is shouting about. Bad changes are coming to dismantle your working code.
# Are We on the Worst Timeline?¶
Season Eight of Game of Thrones previously answered this question, but let’s try again. You have three options to avert the looming disaster that threatens to destroy everything you hold dear (in ascending order of justice):
* Import from `beartype.typing` instead. The easiest (and best) solution is to globally replace all imports from the standard `typing` module with equivalent imports from our `beartype.typing` module. So:

```
# If you prefer attribute imports, just do this...
from beartype.typing import Dict, FrozenSet, List, Set, Tuple, Type

# ...instead of this.
#from typing import Dict, FrozenSet, List, Set, Tuple, Type

# Or if you prefer module imports, just do this...
from beartype import typing

# ...instead of this.
#import typing
```
The public `beartype.typing` API is a mypy-compliant replacement for the `typing` API offering improved forward compatibility with future Python releases. For example, `beartype.typing.List` is simply an alias for the builtin `list` type under Python ≥ 3.9, but an alias for `typing.List` under older Python versions.
* Drop Python < 3.9. The next easiest (but worst) solution is to brutally drop support for Python < 3.9 by globally replacing all deprecated PEP 484-compliant type hints with equivalent PEP 585-compliant type hints (e.g., `typing.List[int]` with `list[int]`; see the sketch after this list). This is really only ideal for closed-source proprietary projects with a limited userbase. All other projects should prefer saner solutions outlined below.
* Hide warnings. The reprehensible (but understandable) middle-finger way is to just squelch all deprecation warnings with an ignore warning filter targeting the `BeartypeDecorHintPep585DeprecationWarning` category. On the one hand, this will still fail in 2025 or 2026 with fiery explosions and thus only constitutes a temporary workaround at best. On the other hand, this has the obvious advantage of preserving Python < 3.9 support with minimal to no refactoring costs. The two ways to do this have differing tradeoffs depending on who you want to suffer most – your developers or your userbase:

```
# Do it globally for everyone, whether they want you to or not!
# This is the "Make Users Suffer" option.
from beartype.roar import BeartypeDecorHintPep585DeprecationWarning
from warnings import filterwarnings

filterwarnings("ignore", category=BeartypeDecorHintPep585DeprecationWarning)
...

# Do it locally only for you! (Hope you like increasing your
# indentation level in every single codebase module.)
# This is the "Make Yourself Suffer" option.
from beartype.roar import BeartypeDecorHintPep585DeprecationWarning
from warnings import catch_warnings, filterwarnings

with catch_warnings():
    filterwarnings("ignore", category=BeartypeDecorHintPep585DeprecationWarning)
    ...
```
* Type aliases. The hardest (but best) solution is to use type aliases to conditionally annotate callables with either PEP 484 or PEP 585 type hints depending on the major version of the current Python interpreter. Since this is life, the hard way is also the best way – but also hard. Unlike the drop Python < 3.9 approach, this approach preserves backward compatibility with Python < 3.9. Unlike the hide warnings approach, this approach also preserves forward compatibility with Python ≥ 3.14159265. Type aliases means defining a new private `{your_package}._typing` submodule resembling:

```
# In "{your_package}._typing":
from sys import version_info

if version_info >= (3, 9):
    List = list
    Tuple = tuple
    ...
else:
    from typing import List, Tuple, ...
```

Then globally refactor all deprecated PEP 484 imports from `typing` to `{your_package}._typing` instead:

```
# Instead of this...
from typing import List, Tuple

# ...just do this.
from {your_package}._typing import List, Tuple
```
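For concreteness, a minimal before/after sketch of the drop Python < 3.9 option referenced above; the function names are ours:

```
# Before: deprecated PEP 484 spellings imported from "typing".
from typing import Dict, List

def oldest_first(ages: Dict[str, int]) -> List[str]:
    return sorted(ages, key=ages.get, reverse=True)

# After: PEP 585 spellings -- no "typing" import needed at all.
def oldest_first_585(ages: dict[str, int]) -> list[str]:
    return sorted(ages, key=ages.get, reverse=True)
```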
What could be simpler? …gagging noises faintly heard
TIGERr | cran | R | Package ‘TIGERr’
October 12, 2022
Type Package
Title Technical Variation Elimination with Ensemble Learning
Architecture
Version 1.0.0
Author <NAME> [aut, cre], <NAME> [aut], <NAME> [aut], <NAME> [aut],
<NAME> [aut], <NAME> [aut], <NAME> [aut], <NAME> [aut], <NAME> [aut],
<NAME> [aut], <NAME> [aut], <NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Acknowledgments <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
Description The R implementation of TIGER.
TIGER integrates the random forest algorithm into an innovative ensemble learning
architecture. Benefiting from this advanced architecture, TIGER is resilient to
outliers, free from model tuning and less likely to be affected by specific hyperparameters.
TIGER supports targeted and untargeted metabolomics data and is competent to perform both
intra- and inter-batch technical variation removal. TIGER can also be used for cross-kit
adjustment to ensure data obtained from different analytical assays can be effectively
combined and compared.
Reference: Han S. et al. (2022) <doi:10.1093/bib/bbab535>.
License GPL (>= 3)
Depends R (>= 3.5.0)
Imports parallel (>= 2.1.0), pbapply (>= 1.4-3), ppcor (>= 1.1),
randomForest (>= 4.6-14), stats (>= 3.0.0)
BugReports https://github.com/HAN-Siyu/TIGER/issues
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
NeedsCompilation no
Repository CRAN
Date/Publication 2022-01-06 14:00:02 UTC
R topics documented:
compute_RS... 2
compute_targetVa... 3
FF4_q... 5
run_TIGE... 6
select_variabl... 12
compute_RSD Compute RSD (relative standard deviation)
Description
This function computes the RSD (relative standard deviation) of the values in input_data. Missing
values are removed before the computation automatically.
Usage
compute_RSD(input_data)
Arguments
input_data a numeric vector
Details
The RSD in this function is computed by:
sd(input_data, na.rm = TRUE) / mean(input_data, na.rm = TRUE).
Value
The RSD of the values in input_data is computed, as a numeric of length one.
Examples
RSD_1 <- compute_RSD(c(1:10))
data(FF4_qc) # load demo dataset
# RSD of QC:
RSD_2 <- sapply(FF4_qc[FF4_qc$sampleType == "QC", -c(1:5)], compute_RSD)
quantile(RSD_2)
# RSD of different types of QC samples:
# (each metabolite has its own RSD)
RSD_3 <- aggregate(FF4_qc[-c(1:5)], by = list(Type = FF4_qc$sampleType),
FUN = compute_RSD)
compute_targetVal Compute target values for ensemble learning architecture
Description
This function provides an advanced option to calculate the target values of one reference dataset
(i.e. QC_num, numeric values of quality control samples). The generated target values (a list) can be
further passed to argument targetVal_external in function run_TIGER such that TIGER can align
the test_samples with the reference dataset. This is useful for longitudinal dataset correction and
cross-kit adjustment. See the case study section of our original paper for a detailed explanation.
Usage
compute_targetVal(
QC_num,
sampleType,
batchID = NULL,
targetVal_method = c("mean", "median"),
targetVal_batchWise = FALSE,
targetVal_removeOutlier = !targetVal_batchWise,
coerce_numeric = FALSE
)
Arguments
QC_num a numeric data.frame including the metabolite values of quality control (QC)
samples. Missing values and infinite values will not be taken into account. Row:
sample. Column: metabolite variable. See Examples.
sampleType a vector corresponding to QC_num to specify the type of each QC sample. QC
samples of the same type should have the same type name. See Examples.
batchID a vector corresponding to QC_num to specify the batch of each sample. Ignored
if targetVal_batchWise = FALSE. See Examples.
targetVal_method
a character string specifying how the target values are computed. Can be "mean"
(default) or "median". See Details.
targetVal_batchWise
logical. If TRUE, the target values will be computed based on each batch, otherwise,
based on the whole dataset. Setting TRUE might be useful if your dataset
has very obvious batch effects, but this may also make the algorithm less robust.
See Details. Default: FALSE.
targetVal_removeOutlier
logical. If TRUE, outliers will be removed before the computation. Outliers are
determined with the 1.5 * IQR (interquartile range) rule. We recommend turning
this off when the target values are computed based on batches. See Details.
Default: !targetVal_batchWise.
coerce_numeric logical. If TRUE, values in QC_num will be coerced to numeric before the
computation. Columns that cannot be coerced will be removed (with warnings). See
Examples. Default: FALSE.
Details
See run_TIGER.
Value
If targetVal_batchWise = FALSE, the function returns a list of length one containing the target
values computed on the whole dataset.
If targetVal_batchWise = TRUE, a list containing the target values computed on different batches
is returned. The length of the returned list equals the number of batches specified by batchID.
Examples
data(FF4_qc) # load demo dataset
QC_num <- FF4_qc[-c(1:5)] # only contain numeric metabolite values.
# target values computed on the whole dataset:
tarVal_1 <- compute_targetVal(QC_num = QC_num,
sampleType = FF4_qc$sampleType,
batchID = FF4_qc$plateID,
targetVal_method = "mean",
targetVal_batchWise = FALSE,
targetVal_removeOutlier = TRUE)
# target values computed on batches:
tarVal_2 <- compute_targetVal(QC_num = QC_num,
sampleType = FF4_qc$sampleType,
batchID = FF4_qc$plateID,
targetVal_method = "mean",
targetVal_batchWise = TRUE,
targetVal_removeOutlier = FALSE)
# If coerce_numeric = TRUE,
# columns that cannot be coerced to numeric will be removed (with warnings):
tarVal_3 <- compute_targetVal(QC_num = FF4_qc[-c(4:5)],
sampleType = FF4_qc$sampleType,
batchID = FF4_qc$plateID,
targetVal_method = "mean",
targetVal_batchWise = TRUE,
targetVal_removeOutlier = FALSE,
coerce_numeric = TRUE)
identical(tarVal_2, tarVal_3) # identical to tarVal_2
## Not run:
# will throw errors if input data have non-numeric columns
# and coerce_numeric = FALSE:
tarVal_4 <- compute_targetVal(QC_num = FF4_qc,
sampleType = FF4_qc$sampleType,
batchID = FF4_qc$plateID,
targetVal_method = "mean",
targetVal_batchWise = TRUE,
targetVal_removeOutlier = FALSE,
coerce_numeric = FALSE)
## End(Not run)
FF4_qc Accompanying QC samples of KORA FF4 (demo data)
Description
This demo dataset is a data.frame with 232 samples (rows) and 108 variables (columns). The dataset
includes four types of quality control (QC) samples from 29 kit plates:
• QC1 (N = 29, one per plate),
• QC2 (N = 29, one per plate),
• QC3 (N = 29, one per plate),
• QC (N = 145, five per plate).
The columns include sample ID, sample type, plate ID, well position, injection order and the
concentrations of 103 selected targeted metabolites. These QC samples are measured with the
cohort samples of KORA FF4 (Cooperative Health Research in the Augsburg Region, the second
follow-up study, 2013–2014) using the analytical assay Biocrates AbsoluteIDQ® p180
(BIOCRATES Life Sciences AG, Innsbruck, Austria).
In our paper, we used QC as training samples, while QC1, QC2, QC3 and cohort samples were used
as test samples. The cohort data are operated by Helmholtz Zentrum München and available via
KORA platform https://www.helmholtz-munich.de/en/kora/ upon reasonable request. See
Reference for detailed information.
Usage
data(FF4_qc)
Reference
Han S. et al. TIGER: technical variation elimination for metabolomics data using ensemble learning
architecture. Briefings in Bioinformatics (2022) bbab535. doi: 10.1093/bib/bbab535.
run_TIGER Run TIGER to eliminate technical variation
Description
Use TIGER algorithm to eliminate the technical variation in metabolomics data. TIGER supports
targeted and untargeted metabolomics data and is competent to perform both intra- and inter-batch
technical variation removal.
Usage
run_TIGER(
test_samples,
train_samples,
col_sampleID,
col_sampleType,
col_batchID,
col_order = NULL,
col_position = NULL,
targetVal_external = NULL,
targetVal_method = c("mean", "median"),
targetVal_batchWise = FALSE,
targetVal_removeOutlier = !targetVal_batchWise,
selectVar_external = NULL,
selectVar_corType = c("cor", "pcor"),
selectVar_corMethod = c("pearson", "spearman"),
selectVar_minNum = 5,
selectVar_maxNum = 10,
selectVar_batchWise = FALSE,
mtry_percent = seq(0.2, 0.8, 0.2),
nodesize_percent = seq(0.2, 0.8, 0.2),
...,
parallel.cores = 2
)
Arguments
test_samples (required) a data.frame containing the samples to be corrected (for example,
subject samples). This data.frame should contain columns of
• sample ID (required): name or label for each sample,
• sample type (required): indicating the type of each sample,
• batch ID (required): the batch of each sample,
• order information (optional): injection order or temporal information of
each sample,
• position information (optional): well position of each sample,
• metabolite values (required): values to be normalised. Infinite values are
not allowed.
Row: sample. Column: variable. See Examples.
train_samples (required) a data.frame containing the quality control (QC) samples used for
model training. The columns in this data.frame should correspond to the columns
in test_samples, and test_samples and train_samples should have
identical column names.
col_sampleID (required) a character string indicating the name of the column that specifies the
sample ID of each sample. The values in this column will not affect the data
correction process but can act as labels for different samples. See Examples.
col_sampleType (required) a character string indicating the name of the column that specifies the
type (such as QC1, QC2, subject) of each sample. This column can be used to
indicate different kinds of QC samples in train_samples. QC samples of the
same type should have the same type name. See Examples.
col_batchID (required) a character string indicating the name of the column that specifies the
batch ID of each sample. See Examples.
col_order (optional) NULL or a character string indicating the name of the column that
contains the injection order or temporal information (numeric values). This can
explicitly ask the algorithm to capture the technical variation introduced by
injection order, which might be useful when your data have very obvious temporal
drifts. If NULL (default), train_samples and test_samples should have no
column containing injection order information.
col_position (optional) NULL or a character string indicating the name of the column that
contains the well position information (numeric values). This can explicitly
ask the algorithm to capture the technical variation introduced by well position,
which might be useful when the well position has a great impact during data
acquisition. If NULL (default), train_samples and test_samples should have
no column containing well position information.
targetVal_external
(optional) a list generated by function compute_targetVal. See Details.
targetVal_method
a character string specifying how target values are to be computed. Can be
"mean" (default) or "median". Ignored if a list of external target values has
been assigned to targetVal_external.
targetVal_batchWise
logical. If TRUE, the target values will be computed based on each batch, otherwise,
based on the whole dataset. Setting TRUE might be useful if your dataset
has very obvious batch effects, but this may also make the algorithm less robust.
Default: FALSE. Ignored if a list of external target values has been assigned to
targetVal_external.
targetVal_removeOutlier
logical. If TRUE, outliers will be removed before the computation. Outliers
are determined with the 1.5 * IQR (interquartile range) rule. We recommend
turning this off when the target values are computed based on batches. Default:
!targetVal_batchWise. Ignored if a list of external target values has been
assigned to targetVal_external.
selectVar_external
(optional) a list generated by function select_variable. See Details.
selectVar_corType
a character string indicating whether correlation ("cor", default) or partial
correlation ("pcor") is to be used. Can be abbreviated. Ignored if a list of selected
variables has been assigned to selectVar_external. Note: computing partial
correlations of a large dataset can be very time-consuming.
selectVar_corMethod
a character string indicating which correlation coefficient is to be computed.
One of "spearman" (default) or "pearson". Can be abbreviated. Ignored if a
list of selected variables has been assigned to selectVar_external.
selectVar_minNum
an integer specifying the minimum number of the selected metabolite variables
(injection order and well position are not regarded as metabolite variables). If
NULL, there is no limit, but at least 1 variable is selected. Default: 5. Ignored if a list of selected variables
has been assigned to selectVar_external.
selectVar_maxNum
an integer specifying the maximum number of the selected metabolite variables
(injection order and well position are not regarded as metabolite variables). If
NULL, there is no limit beyond the number of available metabolite vari-
ables. Default: 10. Ignored if a list of selected variables has been assigned to
selectVar_external.
selectVar_batchWise
(advanced) logical. Specify whether the variable selection should be performed
based on each batch. Default: FALSE. Ignored if a list of selected variables
has been assigned to selectVar_external. Note: the support of batch-wise
variable selection is provided for data requiring special processing (for example,
data with strong batch effects). But in most cases, batch-wise variable selection
is not necessary. Setting TRUE can make the algorithm less robust.
mtry_percent (advanced) a numeric vector indicating the percentages of selected variables ran-
domly sampled as candidates at each split when training random forest models
(base learners). Note: providing more arguments will include more base learn-
ers into the ensemble model, which will increase the processing time. Default:
seq(0.2, 0.8, 0.2).
nodesize_percent
(advanced) a numeric vector indicating the percentages of sample size used as
the minimum sizes of the terminal nodes in random forest models (base learn-
ers). Note: providing more arguments will include more base learners into the
ensemble model, which will increase the processing time. Default: seq(0.2,
0.8, 0.2).
... (advanced) optional arguments (except mtry and nodesize) to be passed to
randomForest for model training. Arguments mtry and nodesize are deter-
mined by mtry_percent and nodesize_percent. See randomForest and Ex-
amples. Note: providing more arguments will include more base learners into
the ensemble model, which will increase the processing time.
parallel.cores an integer (== -1 or >= 1) specifying the number of cores for parallel computa-
tion. Set to -1 to run with all cores. Default: 2.
Details
TIGER can effectively process the datasets with its default setup. The following hyperparameters
are provided to customise the algorithm and achieve the best possible performance. These hyper-
parameters are also practical for some special purposes (such as cross-kit adjustment, longitudinal
dataset correction) or datasets requiring special processing (for example, data with very strong tem-
poral drifts or batch effects). We recommend users to examine the normalised result with different
metrics, such as RSD (relative standard deviation), MAPE (mean absolute percentage error) and
PCA (principal component analysis), especially when using advanced options of TIGER.
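As a minimal sketch of such a check (using the objects test_samples and test_norm_1
created in the Examples section below, where the first five columns hold sample
metadata), the per-metabolite RSD can be compared before and after correction:

rsd <- function(x) 100 * sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE) # illustrative helper
rsd_raw <- apply(test_samples[-c(1:5)], 2, rsd)
rsd_corrected <- apply(test_norm_1[-c(1:5)], 2, rsd)
summary(rsd_raw - rsd_corrected) # positive values indicate reduced technical variation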
Hyperparameters for target value computation
• targetVal_external
TIGER by default captures and eliminates the technical variation within the input dataset, and
the target values are automatically computed from train_samples. The target values can
also be calculated from a reference dataset using function compute_targetVal and then
passed to this function as an argument. This will enable TIGER to align test_samples
with the reference dataset. In this case, train_samples is still the accompanying QC sam-
ples of test_samples. And argument targetVal_external accepts external target val-
ues (a list). If the list of external target values is provided, values in targetVal_method,
targetVal_batchWise and targetVal_removeOutlier will be ignored.
• targetVal_method
The target values can be the mean or median values of metabolite values. The target values of
different kinds of QC samples are computed separately. "mean" is recommended here, but the
optimal selection can differ for different datasets.
• targetVal_batchWise
The target values can be computed from the whole dataset or from different batches. By
default, the target values are computed based on the whole dataset. Computing based on
batches (targetVal_batchWise = TRUE) is only recommended when the samples have very
strong batch effects. For example, we set this as TRUE when normalising WaveICA’s Amide
dataset in our original paper.
• targetVal_removeOutlier
If the computation is based on the whole dataset (targetVal_batchWise = FALSE), users
can remove the outliers in each metabolite by setting targetVal_removeOutlier as
TRUE. This can weaken the impact of extreme values. If targetVal_batchWise = TRUE,
it is generally not recommended to remove outliers, as we assume the input data have
strong batch effects and contain extreme values; we hope TIGER can take these into
account. Code for checking outliers is adapted from boxplot.stats; a sketch of the
rule is given after this list.
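The following minimal sketch illustrates the 1.5 * IQR rule described above (an
illustrative re-implementation, not TIGER's internal code):

flag_outlier <- function(x) { # TRUE for values outside the 1.5 * IQR fences
  qs <- quantile(x, c(0.25, 0.75), na.rm = TRUE)
  fence <- 1.5 * (qs[2] - qs[1])
  x < qs[1] - fence | x > qs[2] + fence
}
x <- c(rnorm(20), 10) # one obvious extreme value
x[flag_outlier(x)]    # the extreme value is flagged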
Hyperparameters for variable selection
• selectVar_external:
This argument accepts a list of selected variables generated by select_variable. This is
helpful when you want to use the same selected variables to correct several datasets. You can
also pass a self-defined list to this argument, as long as the self-defined list has similar data
structure as the one generated by select_variable.
• selectVar_corType and selectVar_corMethod:
TIGER supports Pearson product-moment correlation ("pearson") and Spearman’s rank cor-
relation ("spearman") to compute correlation coefficients ("cor") or partial correlation coef-
ficients ("por") for variable selection. See cor and pcor for further details.
• selectVar_minNum and selectVar_maxNum:
For an objective metabolite to be corrected, the intersection of its top t highly-correlated
metabolites calculated from training and test samples is selected to train the ensemble model.
The highly-correlated metabolites are the ones with correlation coefficients greater than 0.5
(the objective metabolite itself will not be regarded as its highly-correlated metabolite). Ar-
guments selectVar_minNum and selectVar_maxNum are used to avoid selecting too many or
too few metabolites. Selecting too many metabolites can slow the process, and sometimes
even lower the accuracy. A sketch of this selection rule is given after this list.
• selectVar_batchWise:
Advanced option designed for special cases. Setting it TRUE might be useful when your data
have very obvious batch effects.
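The following minimal sketch illustrates the correlation thresholding and min/max
capping described above (an illustration only, not TIGER's exact implementation,
which also intersects the selections from training and test samples):

pick_vars <- function(cor_vec, min_num = 5, max_num = 10) {
  cor_vec <- sort(abs(cor_vec), decreasing = TRUE) # rank by correlation strength
  n <- min(max_num, max(min_num, sum(cor_vec > 0.5)))
  names(cor_vec)[seq_len(n)]
}
set.seed(1)
m <- matrix(rnorm(200), nrow = 20,
            dimnames = list(NULL, paste0("met", 1:10)))
cor_vec <- cor(m[, 1], m[, -1], method = "spearman")[1, ]
pick_vars(cor_vec) # at least 5 (and at most 10) metabolite names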
Hyperparameters for model construction
• mtry_percent, nodesize_percent and ...:
Advanced options to specify mtry, nodesize and other related arguments in randomForest
for a customised ensemble learning architecture. See Examples.
Value
This function returns a data.frame with the same data structure as the input test_samples, but the
metabolite values are the normalised/corrected ones. NA and zeros in the original test_samples
will not be changed or normalised.
Reference
<NAME>. et al. TIGER: technical variation elimination for metabolomics data using ensemble learning
architecture. Briefings in Bioinformatics (2022) bbab535. doi: 10.1093/bib/bbab535.
Examples
data(FF4_qc) # load demo dataset
# QC as training samples; QC1, QC2 and QC3 as test samples:
train_samples <- FF4_qc[FF4_qc$sampleType == "QC",]
test_samples <- FF4_qc[FF4_qc$sampleType != "QC",]
# col_sampleID includes labels. You can assign names for different samples:
train_samples$sampleID <- "train"
test_samples$sampleID <- "test"
# Use default setting and
# include injection order and well position into feature set:
test_norm_1 <- run_TIGER(test_samples = test_samples,
train_samples = train_samples,
col_sampleID = "sampleID", # input column name
col_sampleType = "sampleType", # input column name
col_batchID = "plateID", # input column name
col_order = "injectionOrder", # input column name
col_position = "wellPosition", # input column name
parallel.cores = 2)
# If the information of injection order and well position is not available,
# or you don't want to use them:
train_data <- train_samples[-c(4:5)] # remove the two columns
test_data <- test_samples[-c(4:5)] # remove the two columns
test_norm_2 <- run_TIGER(test_samples = test_data,
train_samples = train_data,
col_sampleID = "sampleID",
col_sampleType = "sampleType",
col_batchID = "plateID",
col_order = NULL, # set NULL
col_position = NULL, # set NULL
parallel.cores = 2)
# If using external target values and selected variables with
# customised settings:
target_val <- compute_targetVal(QC_num = train_samples[-c(1:5)],
sampleType = train_samples$sampleType,
batchID = train_samples$plateID,
targetVal_method = "median",
targetVal_batchWise = TRUE)
select_var <- select_variable(train_num = train_samples[-c(1:5)],
test_num = test_samples[-c(1:5)],
train_batchID = train_samples$plateID,
test_batchID = test_samples$plateID,
selectVar_corType = "pcor",
selectVar_corMethod = "spearman",
selectVar_minNum = 10,
selectVar_maxNum = 30,
selectVar_batchWise = TRUE)
test_norm_3 <- run_TIGER(test_samples = test_samples,
train_samples = train_samples,
col_sampleID = "sampleID",
col_sampleType = "sampleType",
col_batchID = "plateID",
col_order = "injectionOrder",
col_position = "wellPosition",
targetVal_external = target_val,
selectVar_external = select_var,
parallel.cores = 2)
# The definitions of other hyperparameters correspond to
# randomForest::randomForest().
# If you want to include more hyperparameters into model training,
# put hyperparameter values like this:
mtry_percent <- c(0.4, 0.8)
nodesize_percent <- c(0.4, 0.8)
replace <- c(TRUE, FALSE)
ntree <- c(100, 200, 300)
test_norm_4 <- run_TIGER(test_samples = test_data,
train_samples = train_data,
col_sampleID = "sampleID",
col_sampleType = "sampleType",
col_batchID = "plateID",
mtry_percent = mtry_percent,
nodesize_percent = nodesize_percent,
replace = replace,
ntree = ntree,
parallel.cores = 2)
# test_norm_4 is corrected by the ensemble model consisting of base learners
# trained with (around) 24 different hyperparameter combinations:
expand.grid(mtry_percent, nodesize_percent, replace, ntree)
# Note: mtry and nodesize are calculated by mtry_percent and nodesize_percent,
# duplicated hyperparameter combinations, if any, will be removed.
# Thus, the total number of hyperparameter combinations can be less than 24.
# This is determined by the shape of your input datasets.
select_variable Select variables for ensemble learning architecture
Description
This function provides an advanced option to select metabolite variables from external dataset(s).
The selected variables (as a list) can be further passed to argument selectVar_external in func-
tion run_TIGER for a customised data correction.
Usage
select_variable(
train_num,
test_num = NULL,
train_batchID = NULL,
test_batchID = NULL,
selectVar_corType = c("cor", "pcor"),
selectVar_corMethod = c("spearman", "pearson"),
selectVar_minNum = 5,
selectVar_maxNum = 10,
selectVar_batchWise = FALSE,
coerce_numeric = FALSE
)
Arguments
train_num a numeric data.frame only including the metabolite values of training samples
(can be quality control samples). Information such as injection order or well
position needs to be excluded. Row: sample. Column: metabolite variable. See
Examples.
test_num an optional numeric data.frame including the metabolite values of test sam-
ples (can be subject samples). If provided, the column names of test_num
should correspond to the column names of train_num. Row: sample. Column:
metabolite variable. If NULL, the variables will be selected based on train_num
only. See Examples.
train_batchID NULL or a vector corresponding to train_num to specify the batch of each sam-
ple. Ignored if selectVar_batchWise = FALSE. See Examples.
test_batchID NULL or a vector corresponding to test_num to specify the batch of each sample.
Ignored if selectVar_batchWise = FALSE. See Examples.
selectVar_corType
a character string indicating whether correlation ("cor", default) or partial correlation
("pcor") is to be used. Can be abbreviated. See Details. Note: computing
partial correlations of a large dataset can be very time-consuming.
selectVar_corMethod
a character string indicating which correlation coefficient is to be computed.
One of "spearman" (default) or "pearson". Can be abbreviated. See Details.
selectVar_minNum
an integer specifying the minimum number of the selected variables. If NULL,
there is no limit, but at least 1 variable is selected. See Details. Default: 5.
selectVar_maxNum
an integer specifying the maximum number of the selected variables. If NULL,
there is no limit beyond ncol(train_num) - 1. See Details. Default: 10.
selectVar_batchWise
(advanced) logical. Specify whether the variable selection should be performed
based on each batch. Default: FALSE. Note: if TRUE, the batch ID of each sample
is required. The support of batch-wise variable selection is provided for data
requiring special processing (for example, data with strong batch effects). But
in most cases, batch-wise variable selection is not necessary. Setting TRUE might
make the algorithm less robust. See Details.
coerce_numeric logical. If TRUE, values in train_num and test_num will be coerced to numeric
before the computation. Columns that cannot be coerced will be removed (with
warnings). See Examples. Default: FALSE.
Details
See run_TIGER.
Value
If selectVar_batchWise = FALSE, the function returns a list of length one containing the selected
variables computed on the whole dataset.
If selectVar_batchWise = TRUE, a list containing the selected variables computed on different
batches is returned. The length of the returned list equals the number of batches specified by test_batchID
and/or train_batchID.
Examples
data(FF4_qc) # load demo dataset
# QC as training samples; QC1, QC2 and QC3 as test samples:
train_samples <- FF4_qc[FF4_qc$sampleType == "QC",]
test_samples <- FF4_qc[FF4_qc$sampleType != "QC",]
# Only numeric data of metabolite variables are allowed:
train_num = train_samples[-c(1:5)]
test_num = test_samples[-c(1:5)]
# If the selection is performed on the whole dataset:
# based on training samples only:
selected_var_1 <- select_variable(train_num = train_num,
test_num = NULL,
selectVar_batchWise = FALSE)
# also consider test samples:
selected_var_2 <- select_variable(train_num = train_num,
test_num = test_num,
selectVar_batchWise = FALSE)
# If the selection is based on different batches:
# (In selectVar_batchWise, batch ID is required.)
selected_var_3 <- select_variable(train_num = train_num,
test_num = NULL,
train_batchID = train_samples$plateID,
test_batchID = NULL,
selectVar_batchWise = TRUE)
# If coerce_numeric = TRUE,
# columns that cannot be coerced to numeric will be removed (with warnings):
# (In this example, columns of injection order and well position are excluded.
# Because we don't want to calculate the correlations between metabolites and
# injection order/well position.)
selected_var_4 <- select_variable(train_num = train_samples[-c(4,5)],
train_batchID = train_samples$plateID,
selectVar_batchWise = TRUE,
coerce_numeric = TRUE)
identical(selected_var_3, selected_var_4) # identical to selected_var_3
## Not run:
# will throw errors if input data have non-numeric columns
# and coerce_numeric = FALSE:
selected_var_5 <- select_variable(train_num = train_samples[-c(4,5)],
coerce_numeric = FALSE)
## End(Not run) |
@types/analog-clock | npm | JavaScript | [Installation](#installation)
===
> `npm install --save @types/analog-clock`
[Summary](#summary)
===
This package contains type definitions for analog-clock (<https://github.com/matthewp/analog-clock#readme>).
[Details](#details)
===
Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/analog-clock>.
[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/analog-clock/index.d.ts)
---
```
// Custom HTMLElement for the analog clock
export class AnalogClock extends HTMLElement {
    static observedAttributes: string[];

    // Lifecycle method called when the element is connected to the DOM
    connectedCallback(): void;
    // Lifecycle method called when the element is disconnected from the DOM
    disconnectedCallback(): void;
    // Called when an observed attribute changes
    attributeChangedCallback(attr: string, oldVal: string, newVal: string): void;

    // Getters and setters for time, offset, and dark mode properties
    time: number | undefined;
    offset: number | undefined;
    dark: boolean;

    // Public methods for stopping and starting the clock
    stop(): void;
    start(): void;
}

// Export default as the AnalogClock class
export default AnalogClock;
```
### [Additional Details](#additional-details)
* Last updated: Tue, 17 Oct 2023 22:10:13 GMT
* Dependencies: none
[Credits](#credits)
===
These definitions were written by [ihatecsv](https://github.com/ihatecsv).
Readme
---
### Keywords
none |
trustOptim | cran | R | Package ‘trustOptim’
October 14, 2022
Type Package
Title Trust Region Optimization for Nonlinear Functions with Sparse
Hessians
Version 0.8.7.3
Date 2021-10-07
Maintainer <NAME> <<EMAIL>>
URL https://braunm.github.io/trustOptim/,
https://github.com/braunm/trustOptim/
BugReports https://github.com/braunm/trustOptim/issues
Description Trust region algorithm for nonlinear optimization. Efficient when
the Hessian of the objective function is sparse (i.e., relatively few nonzero
cross-partial derivatives). See Braun, M. (2014) <doi:10.18637/jss.v060.i04>.
License MPL (>= 2.0)
Depends R (>= 3.6)
Suggests testthat, knitr
Imports Matrix (>= 1.2.18), Rcpp (>= 1.0.3), methods
LinkingTo Rcpp, RcppEigen (>= 0.3.3.7.0)
Copyright (c) 2015-2021 <NAME>
Encoding UTF-8
VignetteBuilder knitr
SystemRequirements C++11
RoxygenNote 7.1.2
NeedsCompilation yes
Author <NAME> [aut, cre, cph] (<https://orcid.org/0000-0003-4774-2119>)
Repository CRAN
Date/Publication 2021-10-11 08:10:02 UTC
R topics documented:
binar... 2
binary-dat... 2
trust.opti... 3
trustOpti... 6
binary Binary choice example
Description
Functions for binary choice example in the vignette.
Usage
binary.f(P, data, priors, order.row = FALSE)
binary.grad(P, data, priors, order.row = FALSE)
binary.hess(P, data, priors, order.row = FALSE)
Arguments
P Numeric vector of length (N+1)*k. First N*k elements are heterogeneous coef-
ficients. The remaining k elements are population parameters.
data List of data matrices Y and X, and choice count integer T
priors List of named matrices inv.Omega and inv.Sigma
order.row Determines order of heterogeneous coefficients in parameter vector. Affects
sparsity pattern of Hessian. See vignette.
Details
Hessian is sparse, and returned as a dgcMatrix object
Value
Log posterior density, gradient and Hessian.
binary-data Sample simulated data for binary choice model in vignette
Description
Simulated data. See vignette. Generated from data-raw/binary.R
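A minimal sketch (not run) of loading the data and evaluating the log posterior at a
starting vector, mirroring the construction used in the trust.optim example below:

## Not run:
data(binary)
N <- length(binary$Y)
k <- NROW(binary$X)
priors <- list(inv.Sigma = diag(k), inv.Omega = diag(k))
binary.f(rep(0, (N + 1) * k), data = binary, priors = priors) # log posterior at start
## End(Not run)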
trust.optim Nonlinear optimizers using trust regions.
Description
Run nonlinear minimizer using trust region algorithm with conjugate gradient search directions and
quasi-Hessian updates.
Usage
trust.optim(
x,
fn,
gr,
hs = NULL,
method = c("SR1", "BFGS", "Sparse"),
control = list(),
...
)
Arguments
x A numeric vector of starting values for the optimizer.
fn An R function that takes x as its first argument. Returns the value of the objective
function at x. Note that the optimizer will minimize fn (see function.scale.factor
under control)
gr An R function that takes x as its first argument. Returns a numeric vector that
is the gradient of fn at x. The length of the gradient must be the same as the
length of x. The user must supply this function. If an analytic gradient is not
available, and the method is SR1 or BFGS, the user should consider a numerical
approximation using finite differencing (see the numDeriv package). Do not use
a finite-differenced gradient with the Sparse method. That will cause a world
of hurt.
hs An R function that takes x as its first argument. It returns a Hessian matrix object
of class dgCMatrix (see the Matrix package). This function is called only if the
selected method is Sparse.
method Valid arguments are SR1, BFGS, and Sparse.
control A list containing control parameters for the optimizer. See details.
... Additional arguments passed to fn, gr and hs. All arguments must be named.
Value
List containing the following items:
fval Value of the objective function
solution Parameter vector at the optimum
gradient Gradient at the optimum
hessian Estimate of the Hessian at the optimum (as class symmetricMatrix, returned
only for Sparse method).
iterations Number of iterations before stopping
status A message describing the last state of the iterator
nnz For the Sparse method only, the number of nonzero elements in the lower trian-
gle of the Hessian.
Details
The following sections explain how to use the package as a whole.
Control parameters
The control list should include the following parameters.
start.trust.radius Initial radius of the trust region. Default is 5. If the algorithm returns non-finite
values of the objective function early in the process, try a lower number.
stop.trust.radius Minimum radius of trust region. Algorithm will terminate if radius is below this
value. This is because it may not be possible to get the norm of the gradient smaller than prec,
and this is another way to get the algorithm to stop.
cg.tol tolerance for the conjugate gradient algorithm that is used for the trust region subproblem.
Set it to something very small. Default is sqrt(.Machine$double.eps)
prec Precision for how close the norm of the gradient at the solution should be to zero, before
the algorithm halts. It is possible that the algorithm will not get that far, so it will also stop
when the radius of the trust region is smaller than stop.trust.radius. If the trust region radius
collapses, but the norm of the gradient really isn’t close to zero, then something terrible has
happened.
report.freq An integer. The frequency at which the algorithm will display the current iteration
number or function value, among other things (see report.level). Defaults to 1.
report.level The amount of detail in each report. Defaults to 2.
report.precision The number of significant digits used in each report. Defaults to 5.
report.header.freq The number of lines of iterations before the report column headers are reprinted.
Defaults to 25.
maxit Maximum number of iterations. Defaults to 100.
contract.factor When the algorithm decides to shrink the trust region, it will multiply the trust
radius by this factor. Defaults to 0.5.
expand.factor When the algorithm decides to expand the trust region, it will multiply the trust
radius by this factor. Defaults to 3.
contract.threshold The algorithm will accept a proposed move if the ratio of the actual improve-
ment in the objective function, to the predicted improvement from the trust region subproblem,
is greater than this amount. Otherwise, the trust region will contract. Default is 0.25.
expand.threshold.ap First criterion to determine if the trust region should expand. If the ratio of
the actual and proposed improvements in the objective function is less than this factor, the
algorithm will consider expanding the trust region. See expand.threshold.radius. Default
is 0.8.
expand.threshold.radius If the ratio of the actual and proposed improvement in the objective func-
tion is less than expand.threshold.ap, then, if the normed distance of the proposed move is
greater than expand.threshold.radius, times the current trust region radius, the trust region
will expand. Default is 0.8.
function.scale.factor The algorithm will minimize fn times this factor. If you want to maximize
fn, this value should be negative (usually -1). Default is 1.
precond.refresh.freq Frequency at which the preconditioner for the conjugate gradient estimation
of the trust region subproblem is reestimated. Preconditioners can help the convergence prop-
erties of the algorithm. Default is 1.
preconditioner ID for choice of preconditioner. 0 is the identity matrix (default). For the Sparse
method, 1 is a modified Cholesky preconditioner. For the BFGS method, 1 is the full Cholesky
decomposition. If you select 1 for the SR1 method, the algorithm will use the identity precon-
ditioner instead.
trust.iter Maximum number of conjugate gradient iterations to run when solving the trust region
subproblem. A higher number will lead to more accurate solutions to the subproblem, but may
also lead to longer run times. Defaults to 2000.
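As a brief illustration (not run), a control list combining several of the parameters
described above might look as follows; the values are illustrative, not recommendations:

## Not run:
ctl <- list(
  start.trust.radius = 5,
  prec = sqrt(.Machine$double.eps),
  report.freq = 10,
  report.level = 2,
  maxit = 500,
  function.scale.factor = -1 # maximize instead of minimize
)
opt <- trust.optim(start, fn = binary.f, gr = binary.grad,
                   method = "BFGS", control = ctl,
                   data = binary, priors = priors)
## End(Not run)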
Report levels
The report.level control parameter determines how much information is displayed each time the
algorithm reports the current state. Possible values are
<=0 No information (a quiet run)
1 Current iteration number, and current value of the objective function.
2 Information from level 1, plus the current norm of the gradient and a status message.
3 Information from levels 1 and 2, plus the current normed radius of the trust region.
4 Information from levels 1, 2, and 3, plus information from each estimate of the trust region sub-
problem (number of conjugate gradient iterations and how/why the CG algorithm terminated).
Default level is 2. Levels 3 and 4 are available primarily for debugging purposes.
Stopping criteria
The algorithm will stop when one of the following conditions is met:
• The norm of the gradient, divided by the square root of the number of parameters, is less than
prec.
• The trust region collapses to a radius smaller than machine precision
• The algorithm proposes zero or negative improvement in the objective function (should never
happen)
• The number of iterations reaches the control parameter maxit
If the algorithm appears to have stopped prematurely (i.e., the norm of the gradient is still too large),
then one might just restart the algorithm. For the quasi-Newton algorithms (SR1 and BFGS), this will
refresh the Hessian, and might allow more progress to be made.
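A minimal, self-contained sketch of such a restart (f and g below are an illustrative
quadratic, not part of the package):

f <- function(x) sum((x - 1)^2)
g <- function(x) 2 * (x - 1)
opt1 <- trust.optim(rep(0, 5), fn = f, gr = g, method = "BFGS",
                    control = list(report.level = 0))
if (sqrt(sum(opt1$gradient^2) / length(opt1$solution)) > 1e-6) {
  # restarting from the last iterate refreshes the quasi-Newton Hessian
  opt2 <- trust.optim(opt1$solution, fn = f, gr = g, method = "BFGS",
                      control = list(report.level = 0))
}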
Estimating a sparse Hessian
Sometimes estimating the Hessian is easy (e.g., you have an analytic representation, or you are
using some kind of algorithmic differentiation software). If you do not know the Hessian, but you do
know the sparsity structure, try the sparseHessianFD package. The routines in sparseHessianFD
compute the Hessian using finite differencing, but in a way that exploits the sparsity structure. In
many cases, this can be faster than constructing an analytic Hessian for a large problem (e.g., when
the Hessian has a block-arrow structure with a large number of blocks).
To use the sparseHessianFD package, you need to provide the row and column indices of the non-
zero elements of the lower triangle of the Hessian. This structure cannot change during the course
of the trust.optim routine. Also, you really should provide an analytic gradient. sparseHessianFD
computes finite differences of the gradient, so if the gradient itself is finite-differenced, so much
error is propagated through that the Hessians are nearly worthless close to the optimum.
Of course, sparseHessianFD is useful only for the Sparse method. That said, one may still get
decent performance using these routines even if the Hessian is not sparse, as long as the problem is not too large.
Just treat the Hessian as if it were sparse.
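A sketch (not run) of pairing trust.optim with sparseHessianFD, assuming rows and
cols hold the row and column indices of the non-zero elements of the lower triangle
of the Hessian (see the sparseHessianFD documentation for the exact constructor
interface):

## Not run:
obj <- sparseHessianFD::sparseHessianFD(start, fn = binary.f, gr = binary.grad,
                                        rows = rows, cols = cols,
                                        data = binary, priors = priors)
hs_fun <- function(x, ...) obj$hessian(x) # drop extra arguments passed by trust.optim
opt <- trust.optim(start, fn = binary.f, gr = binary.grad, hs = hs_fun,
                   method = "Sparse", data = binary, priors = priors)
## End(Not run)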
Examples
## Not run:
data(binary)
N <- length(binary$Y)
k <- NROW(binary$X)
start <- rep(0,(N+1)*k)
priors <- list(inv.Sigma = diag(k), inv.Omega = diag(k))
opt <- trust.optim(start, fn=binary.f,
gr = binary.grad,
hs = binary.hess,
method = "Sparse",
control = list(
report.precision=1L,
function.scale.factor=-1
),
data=binary, priors=priors
)
## End(Not run)
trustOptim Trust-region optimization
Description
Nonlinear optimizers using trust regions, with methods optimized for sparse Hessians.
Details
Trust region algorithm for nonlinear optimization. In addition to being more stable and robust than
optim, this package includes methods that are scalable and efficient (in terms of both speed and
memory usage) when the Hessian is sparse.
References
Braun, Michael. 2014. trustOptim: An R Package for Trust Region Optimization with Sparse
Hessians. Journal of Statistical Software 60(4), 1-16. www.jstatsoft.org/v60/i04/.
Nocedal, Jorge, and <NAME>. 2006. Numerical Optimization. Second edition. Springer.
<NAME>. 1983. The Conjugate Gradient Method and Trust Regions in Large Scale Opti-
mization. SIAM Journal on Numerical Analysis 20(3), 626-637. |
pubtatordb | cran | R | Package ‘pubtatordb’
October 14, 2022
Type Package
Title Create and Query a Local 'PubTator' Database
Version 0.1.4
Maintainer <NAME> <<EMAIL>>
Description
'PubTator' <https://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/> is a Na-
tional Center for Biotechnology Information (NCBI) tool that enhances the annotation of arti-
cles on PubMed <https://www.ncbi.nlm.nih.gov/pubmed/>. It makes it possi-
ble to rapidly identify potential relationships between genes or proteins using text mining tech-
niques. In contrast, manually searching for and reading the annotated arti-
cles would be very time consuming. 'PubTator' offers both an online interface and a REST-
ful API; however, neither of these approaches is well suited for frequent, high-throughput anal-
yses. The package 'pubtatordb' provides a set of functions that make it easy for the aver-
age R user to download 'PubTator' annotations, create, and then query a local ver-
sion of the database.
License MIT + file LICENSE
Encoding UTF-8
LazyData true
Suggests covr, testthat, knitr, rmarkdown
Imports DBI, R.utils, RSQLite, assertthat, dplyr, readr
VignetteBuilder knitr
RoxygenNote 7.0.0
NeedsCompilation no
Author <NAME> [aut, cre],
Madigan Army Medical Center - Department of Clinical Investigation
[cph, fnd]
Repository CRAN
Date/Publication 2019-11-22 19:30:02 UTC
R topics documented:
download_p... 2
make_pubtator_sqlite_pat... 3
pt_column... 3
pt_connecto... 4
pt_selec... 4
pt_table... 5
pt_to_sq... 6
pubtator_citation... 7
pubtator_ftp_ur... 7
pubtator_table... 7
download_pt Download PubTator data via ftp.
Description
Download PubTator data via ftp.
Usage
download_pt(pubtator_parent_path, ...)
Arguments
pubtator_parent_path
The path to the directory where the PubTator data folder will be created.
... Additional arguments to dir.create and download.file.
Value
The path to the newly created directory. This can be passed to other functions as the pt_path
argument.
Examples
# Use the full path. The files are large. Writing somewhere other than the
# temp directory is recommended.
download_path <- tempdir()
download_pt(download_path)
make_pubtator_sqlite_path
Make a path to the PubTator sqlite file.
Description
Make a path to the PubTator sqlite file.
Usage
make_pubtator_sqlite_path(pt_path)
Arguments
pt_path A character string indicating the full path of the directory containing the pubtator
gz files to be extracted.
Value
A character string indicating the full path to the sqlite file.
pt_columns List the column names for a table in the PubTator sqlite database
Description
List the column names for a table in the PubTator sqlite database
Usage
pt_columns(db_con, table_name)
Arguments
db_con A connection to the PubTator sqlite database, as created via pt_connector.
table_name The name of the table of interest. Valid tables can be found using pt_tables.
Capitalization does not matter.
Value
A character vector of the column names for a given table.
Examples
db_con <- pt_connector(pt_path)
pt_columns(db_con, "gene")
pt_connector Connect to pubtator.sqlite
Description
Connect to pubtator.sqlite
Usage
pt_connector(pt_path)
Arguments
pt_path A character string indicating the full path of the directory containing the pubtator
gz files to be extracted.
Value
A SQLiteConnection
Examples
pt_connector("D:/Reference_data/PubTator")
pt_select Retrieve data from the PubTator database.
Description
Retrieve data from the PubTator database.
Usage
pt_select(
db_con,
table_name,
columns = NULL,
keys = NULL,
keytype = NULL,
limit = Inf
)
Arguments
db_con A connection to the PubTator sqlite database, as created via pt_connector.
table_name The name of the table of interest. Valid tables can be found using pt_tables.
Capitalization does not matter.
columns A character vector of the names of the columns of interest. Capitalization does
not matter.
keys A vector specifying which values must be in the keytype column to enable re-
trieval. No filtering is performed if keys = NULL.
keytype The column in which the keys should be searched for.
limit The maximum number of rows the query should return. All rows passing filter-
ing (if any) are returned if limit = Inf.
Value
A data.frame.
Examples
db_con <- pt_connector(pt_path)
pt_select(
db_con,
"gene",
columns = c("ENTREZID","Resource","MENTIONS","PMID"),
keys = c("7356", "4199", "7018"),
keytype = "ENTREZID",
limit = 10
)
pt_tables List the tables in the PubTator sqlite database
Description
List the tables in the PubTator sqlite database
Usage
pt_tables(db_con)
Arguments
db_con A connection to the PubTator sqlite database, as created via pt_connector.
Value
A character vector of the names of the tables found in the database.
Examples
db_con <- pt_connector(pt_path)
pt_tables(db_con)
pt_to_sql Create sqlite database from the pubtator data.
Description
Create sqlite database from the pubtator data.
Usage
pt_to_sql(pt_path, skip_behavior = TRUE, remove_behavior = FALSE)
Arguments
pt_path A character string indicating the full path of the directory containing the pubtator
gz files to be extracted.
skip_behavior TRUE/FALSE indicating whether the file should be re-extracted if it has already
been extracted.
remove_behavior
TRUE/FALSE indicating whether the gz files should be removed following suc-
cessful extraction.
Examples
download_path <- tempdir()
current_dir <- getwd()
setwd(download_path)
pt_to_sql("PubTator")
setwd(current_dir)
pubtator_citations See the citations for PubTator
Description
See the citations for PubTator
Usage
pubtator_citations()
Examples
pubtator_citations()
pubtator_ftp_url NCBI’s ftp url definition for PubTator.
Description
NCBI’s ftp url definition for PubTator.
Usage
pubtator_ftp_url()
Value
A character string giving the ftp url for PubTator.
pubtator_tables Table and dataset definitions
Description
Table and dataset definitions
Usage
pubtator_tables()
Value
A character vector where names are table names and values are dataset names. |
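A minimal end-to-end sketch (not run) combining the functions documented above; the
downloads are large, so a persistent location is recommended:

## Not run:
pt_path <- download_pt("D:/Reference_data") # returns the PubTator directory
pt_to_sql(pt_path)                          # extract the gz files and build pubtator.sqlite
db_con <- pt_connector(pt_path)
pt_tables(db_con)                           # list the available tables
## End(Not run)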
csdmpy | readthedoc | SQL | csdmpy:doc v0.5
Welcome to the csdmpy documentation[¶](#welcome-to-the-csdmpy-documentation)
===
---
**About**
The `csdmpy` package is the Python support for the core scientific dataset (CSD) model file exchange-format [1](#f10).
The package is based on the core scientific dataset (CSD) model, which is designed as a building block in the development of a more sophisticated portable scientific dataset file standard.
The CSD model is capable of handling a wide variety of scientific datasets both within and across disciplinary fields.
The main objective of this python package is to facilitate an easy import and export of the CSD model serialized files for Python users. The package utilizes the Numpy library and, therefore, offers end-users the versatility to process or visualize the imported datasets with any third-party package(s)
compatible with Numpy.
---
**View the core scientific dataset model (CSDM) examples gallery.**
---
**Tutorial on generating and serializing CSDM objects from Numpy arrays.**
---
Table of Contents[¶](#table-of-contents)
---
Introduction to CSDM format[¶](#introduction-to-csdm-format)
---
The core scientific dataset (CSD) model is a *light-weight*, *portable*,
*versatile*, and *standalone* data model capable of handling a variety of scientific datasets. The model only encapsulates data values and the minimum metadata to accurately represent a p-component dependent variable,
\((\mathbf{U}_0, ... \mathbf{U}_q, ... \mathbf{U}_{p-1})\),
discretely sampled at M unique points in a d-dimensional coordinate space,
\((\mathbf{X}_0, \mathbf{X}_1, ... \mathbf{X}_k, ... \mathbf{X}_{d-1})\).
The model is not intended to encapsulate any information on how the data might be acquired, processed, or visualized.
The data model is *versatile* in allowing many use cases for most spectroscopy,
diffraction, and imaging techniques. As such the model supports multi-component datasets associated with continuous physical quantities that are discretely sampled in a multi-dimensional space associated with other carefully controlled quantities, for e.g., a mass as a function of temperature, a current as a function of voltage and time, a signal voltage as a function of magnetic field gradient strength, a color image with a red, green, and blue (RGB) light intensity components as a function of two independent spatial dimensions, or the six components of the symmetric second-rank diffusion tensor MRI as a function of three independent spatial dimensions. Additionally, the model supports multiple dependent variables sharing the same \(d\)-dimensional coordinate space. For example, a simultaneous measurement of current and voltage as a function of time,
simultaneous acquisition of air temperature, pressure, wind velocity, and solar-flux as a function of Earth’s latitude and longitude coordinates. We refer to these dependent variables as correlated-datasets.
The CSD model is independent of the hardware,
operating system, application software, programming language, and the object-oriented file-serialization format utilized in serializing the CSD model to the file. Out of numerous file serialization formats, XML, JSON, property list, we chose the data-exchange oriented JSON (JavaScript Object Notation)
file-serialization format because it is human-readable and easily integrable with any number of programming languages and field related application-software.
### CSDM[¶](#csdm)
#### Description[¶](#description)
The root level object of the CSD model.
#### Attributes[¶](#attributes)
| Name | Type | Description |
| --- | --- | --- |
| version | String | A required version number of CSDM file-exchange format. |
| dimensions | [[Dimension](index.html#dimension-uml), …] | A required ordered and unique array of dimension objects. An empty array is a valid value. |
| dependent_variables | [[DependentVariable](index.html#dependent-var-uml), …] | A required array of dependent-variable objects. An empty array is a valid value. |
| tags | [String, …] | An optional list of keywords associated with the dataset. |
| read_only | Boolean | An optional value with default as False. If true, the serialized file is archived. |
| timestamp | String | An optional UTC ISO-8601 format timestamp from when the CSDM-compliant file was last serialized. |
| geographic_coordinate | geographic_coordinate | An optional object with attributes required to describe the location from where the CSDM-compliant file was last serialized. |
| description | String | An optional description of the datasets in the CSD model. |
| application | Generic | An optional generic dictionary object containing application specific metadata describing the CSDM object. |
### Dimension[¶](#dimension)
A generalized object describing a dimension of a multi-dimensional grid/space.
#### Specialized Class[¶](#specialized-class)
[LinearDimension](./linear.html#lineardimension-uml)
[MonotonicDimension](./monotonic.html#monotonicdimension-uml)
[LabeledDimension](./labeled.html#labeleddimension-uml)
#### Attributes[¶](#attributes)
| Name | Type | Description |
| --- | --- | --- |
| type | [DimObjectSubtype](index.html#dimobjectsubtype-uml) | A required enumeration literal with a valid dimension subtype. |
| label | String | An optional label of the dimension. |
| description | String | An optional description of the dimension. |
| application | Generic | An optional generic dictionary object containing application specific metadata describing the dimension. |
### DependentVariable[¶](#dependentvariable)
#### Description[¶](#description)
A generalized object describing a dependent variable of the dataset, which holds an ordered list of p components, indexed as q=0 to p-1, as
()[¶](#equation-csdmodel-uml-dependent-variables-dependent-variable-0)\[[\mathbf{U}_0, ... \mathbf{U}_q, ... \mathbf{U}_{p-1}].\]
#### Specialized Class[¶](#specialized-class)
[InternalDependentVariable](./internal.html#internal-uml)
[ExternalDependentVariable](./external.html#external-uml)
#### Attributes[¶](#attributes)
| Name | Type | Description |
| --- | --- | --- |
| type | [DVObjectSubtype](index.html#dvobjectsubtype-uml) | An enumeration literal with a valid dependent variable subtype. |
| name | String | Name of the dependent variable. |
| unit | String | The unit associated with the physical quantities describing the dependent variable. |
| quantity_name | String | Quantity name associated with the physical quantities describing the dependent variable. |
| numeric_type | [NumericType](index.html#numerictype-uml) | An enumeration literal with a valid numeric type. |
| quantity_type | [QuantityType](index.html#quantitytype-uml) | An enumeration literal with a valid quantity type. |
| component_labels | [String, String, … ] | Ordered array of labels associated with ordered array of components of the dependent variable. |
| sparse_sampling | [SparseSampling](#sparsesampling-uml) | Object with attribute required to describe a sparsely sampled dependent variable components. |
| description | String | Description of the dependent variable. |
| application | Generic | Generic dictionary object containing application specific metadata describing the dependent variable. |
### Enumeration[¶](#enumeration)
#### DimObjectSubtype[¶](#dimobjectsubtype)
An enumeration with literals as the value of the [Dimension](index.html#dimension-uml) objects’
type attribute.
| Literal | Description |
| --- | --- |
| linear | Literal specifying an instance of a [LinearDimension](index.html#lineardimension-uml) object. |
| monotonic | Literal specifying an instance of a [MonotonicDimension](index.html#monotonicdimension-uml) object. |
| labeled | Literal specifying an instance of a [LabeledDimension](index.html#labeleddimension-uml) object. |
#### DVObjectSubtype[¶](#dvobjectsubtype)
An enumeration with literals as the values of the [DependentVariable](index.html#dependent-var-uml)
object’ type attribute.
| Literal | Description |
| --- | --- |
| internal | Literal specifying an instance of an [InternalDependentVariable](index.html#internal-uml) object. |
| external | Literal specifying an instance of an [ExternalDependentVariable](index.html#external-uml) object. |
#### NumericType[¶](#numerictype)
An enumeration with literals as the value of the [DependentVariable](index.html#dependent-var-uml)
objects’ numeric_type attribute.
| Literal | Description |
| --- | --- |
| uint8 | 8-bit unsigned integer |
| uint16 | 16-bit unsigned integer |
| uint32 | 32-bit unsigned integer |
| uint64 | 64-bit unsigned integer |
| int8 | 8-bit signed integer |
| int16 | 16-bit signed integer |
| int32 | 32-bit signed integer |
| int64 | 64-bit signed integer |
| float32 | 32-bit floating point number |
| float64 | 64-bit floating point number |
| complex64 | two 32-bit floating points numbers |
| complex128 | two 64-bit floating points numbers |
#### QuantityType[¶](#quantitytype)
An enumeration with literals as the value of the [DependentVariable](index.html#dependent-var-uml)
objects’ quantity_type attribute. The value is used in interpreting the p-components of the dependent variable.
* **scalar**A dependent variable with \(p=1\) component interpreted as a scalar, \(\mathcal{S}_i=U_{0,i}\).
* **vector_n**A dependent variable with \(p=n\) components interpreted as vector components,
\(\mathcal{V}_i= \left[ U_{0,i}, U_{1,i}, ... U_{n-1,i}\right]\).
* **matrix_n_m**A dependent variable with \(p=mn\) components interpreted as a \(n \times m\) matrix as follows,
()[¶](#equation-csdmodel-uml-enumeration-0)\[\begin{split}M_i = \left[
\begin{array}{cccc}
U_{0,i} & U_{m,i} & ... & U_{(n-1)m,i} \\
U_{1,i} & U_{m+1,i} & ... & U_{(n-1)m+1,i} \\
\vdots & \vdots & \vdots & \vdots \\
U_{m-1,i} & U_{2m-1,i} & ... & U_{nm-1,i}
\end{array}
\right]\end{split}\]
* **symmetric_matrix_n**A dependent variable with \(p=n(n+1)/2\) components interpreted as a matrix symmetric
()[¶](#equation-csdmodel-uml-enumeration-1)\[\begin{split}M^{(s)}_i = \left[
\begin{array}{cccc}
U_{0,i} & U_{1,i} & ... & U_{n-1,i} \\
U_{1,i} & U_{n,i} & ... &U_{2n-2,i} \\
\vdots & \vdots & \vdots & \vdots \\
U_{n-1,i} & U_{2n-2,i} & ... &U_{\frac{n(n+1)}{2}-1,i}
\end{array}
\right]\end{split}\]
* **pixel_n**A dependent variable with \(p=n\) components interpreted as image/pixel components,
\(\mathcal{P}_i= \left[ U_{0,i}, U_{1,i}, ... U_{n-1,i}\right]\).
Here, the terms \(n\) and \(m\) are integers.
### ScalarQuantity[¶](#scalarquantity)
ScalarQuantity is an object composed of a numerical value and any valid SI unit symbol or any number of accepted non-SI unit symbols. It is serialized in the JSON file as a string containing a numerical value followed by the unit symbol,
for example,
* “3.4 m” (SI)
* “2.3 bar” (non-SI)
Installation[¶](#installation)
---
### Requirements[¶](#requirements)
`csdmpy` has the following strict requirements:
* [Python](https://www.python.org) 3.6 or later
* [Numpy](https://numpy.org) 1.17 or later
Other requirements include:
* [requests>=2.21.0](http://docs.python-requests.org/en/master/)
(for downloading files from server)
* [astropy>=3.0](http://www.astropy.org) (for astropy units module)
* [matplotlib>=3.0](https://matplotlib.org) (for rendering plots)
### Installing `csdmpy`[¶](#installing-csdmpy)
#### On Local machine (Using pip)[¶](#on-local-machine-using-pip)
PIP is a package manager for Python packages and is included with python version 3.4 and higher. PIP is the easiest way to install python packages.
```
$ pip install csdmpy
```
If you get a `PermissionError`, it usually means that you do not have the required administrative access to install new packages to your Python installation. In this case, you may consider adding the `--user` option, at the end of the statement, to install the package into your home directory. You can read more about how to do this in the [pip documentation](https://pip.pypa.io/en/stable/user_guide/#user-installs).
```
$ pip install csdmpy --user
```
##### Upgrading to a newer version[¶](#upgrading-to-a-newer-version)
To upgrade, type the following in the terminal/Prompt
```
$ pip install csdmpy -U
```
#### On Google Colab Notebook[¶](#on-google-colab-notebook)
Colaboratory is a Google research project. It is a Jupyter notebook environment that runs entirely in the cloud. Launch a new notebook on
[Colab](http://colab.research.google.com). To install the package, type
```
!pip install csdmpy
```
in the first cell, and execute. All done! You may now start using the library.
Getting started with csdmpy package[¶](#getting-started-with-csdmpy-package)
---
We have put together a set of guidelines for importing the csdmpy package and its related methods and attributes. We encourage users to follow these guidelines to promote consistency.
Import the package using
```
>>> import csdmpy as cp
```
To load a .csdf or a .csdfe file, use the [`load()`](index.html#csdmpy.load)
method of the csdmpy module. In the following example, we load a sample test file.
```
>>> filename = "https://www.ssnmr.org/sites/default/files/CSDM/test/test01.csdf"
>>> testdata1 = cp.load(filename)
```
Here, `testdata1` is an instance of the CSDM class.
At the root level, the [CSDM](index.html#csdm-api) object includes various useful optional attributes that may contain additional information about the dataset. One such useful attribute is the [`description`](index.html#csdmpy.CSDM.description) key, which briefs the end-users on the contents of the dataset. To access the value of this attribute use,
```
>>> testdata1.description
'A simulated sine curve.'
```
### Accessing dimensions and dependent variables of the dataset[¶](#accessing-dimensions-and-dependent-variables-of-the-dataset)
An instance of the CSDM object may include multiple dimensions and dependent variables. Collectively, the dimensions form a multi-dimensional grid system, and the dependent variables populate this grid.
In csdmpy,
dimensions and dependent variables are structured as list objects.
To access these lists, use the [`dimensions`](index.html#csdmpy.CSDM.dimensions) and
[`dependent_variables`](index.html#csdmpy.CSDM.dependent_variables) attribute of the CSDM object,
respectively. For example,
```
>>> x = testdata1.dimensions
>>> y = testdata1.dependent_variables
```
In this example, the dataset contains one dimension and one dependent variable.
You may access the instances of individual dimension and dependent variable by using the proper indexing. For example, the dimension and dependent variable at index 0 may be accessed using `x[0]` and `y[0]`, respectively.
Every instance of the [Dimension](index.html#dim-api) object has its own set of attributes that further describe the respective dimension. For example, a Dimension object may have an optional [`description`](index.html#csdmpy.Dimension.description)
attribute,
```
>>> x[0].description
'A temporal dimension.'
```
Similarly, every instance of the [DependentVariable](index.html#dv-api) object has its own set of attributes. In this example, the
[`description`](index.html#csdmpy.DependentVariable.description)
attribute from the dependent variable is
```
>>> y[0].description
'A response dependent variable.'
```
#### Coordinates along the dimension[¶](#coordinates-along-the-dimension)
Every dimension object contains a list of coordinates associated with every grid index along the dimension. To access these coordinates, use the [`coordinates`](index.html#csdmpy.Dimension.coordinates) attribute of the respective [Dimension](index.html#dim-api) instance. In this example, the coordinates are
```
>>> x[0].coordinates
<Quantity [0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9] s>
```
Note
`x[0].coordinates` returns a
[Quantity](http://docs.astropy.org/en/stable/api/astropy.units.Quantity.html#astropy.units.Quantity)
instance from the
[Astropy](http://docs.astropy.org/en/stable/units/) package.
The csdmpy module utilizes the units library from
[astropy.units](http://docs.astropy.org/en/stable/units/) module to handle physical quantities. The numerical value and the unit of the physical quantities are accessed through the Quantity instance, using the `value` and the `unit` attributes, respectively.
Please refer to the [astropy.units](http://docs.astropy.org/en/stable/units/)
documentation for details.
In the csdmpy module, the `Quantity.value` is a
[Numpy array](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.ndarray.html).
For instance, in the above example, the underlying Numpy array from the coordinates attribute is accessed as
```
>>> x[0].coordinates.value
array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
```
#### Components of the dependent variable[¶](#components-of-the-dependent-variable)
Every dependent variable object has at least one component. The number of components of the dependent variable is determined from the
[`quantity_type`](index.html#csdmpy.DependentVariable.quantity_type) attribute of the dependent variable object. For example, a scalar quantity has one-component, while a vector quantity may have multiple components. To access the components of the dependent variable, use the
[`components`](index.html#csdmpy.DependentVariable.components)
attribute of the respective [DependentVariable](index.html#dv-api) instance. For example,
```
>>> y[0].components
array([[ 0.0000000e+00, 5.8778524e-01, 9.5105654e-01, 9.5105654e-01,
5.8778524e-01, 1.2246469e-16, -5.8778524e-01, -9.5105654e-01,
-9.5105654e-01, -5.8778524e-01]], dtype=float32)
```
The [`components`](index.html#csdmpy.DependentVariable.components) attribute is a Numpy array. Note, the number of dimensions of this array is \(d+1\),
where \(d\) is the number of [Dimension](index.html#dim-api) objects from the
[`dimensions`](index.html#csdmpy.CSDM.dimensions) attribute. The additional dimension in the Numpy array corresponds to the number of components of the dependent variable.
For instance, in this example, there is a single dimension, i.e., \(d=1\)
and, therefore, the value of the
[`components`](index.html#csdmpy.DependentVariable.components)
attribute holds a two-dimensional Numpy array of shape
```
>>> y[0].components.shape
(1, 10)
```
where the first element of the shape tuple, 1, is the number of components of the dependent variable and the second element, 10, is the number of points along the dimension, i.e., `x[0].coordinates`.
### Plotting the dataset[¶](#plotting-the-dataset)
It is always helpful to represent a scientific dataset with visual aids such as a plot or a figure instead of columns of numbers. As such, throughout this documentation, we provide a figure or two for every example dataset.
We make use of Python’s [Matplotlib library](https://matplotlib.org)
for generating these figures. The users may, however, use their favorite plotting library.
The following snippet plots the dataset from this example. Here, the axis_label is an attribute of both Dimension and DependentVariable instances, and the name is an attribute of the DependentVariable instance.
```
>>> import matplotlib.pyplot as plt
>>> plt.figure(figsize=(5, 3.5))
>>> plt.plot(x[0].coordinates, y[0].components[0])
>>> plt.xlabel(x[0].axis_label)
>>> plt.ylabel(y[0].axis_label[0])
>>> plt.title(y[0].name)
>>> plt.tight_layout()
>>> plt.show()
```
([Source code](./../pyplot/getting_started.py), [png](./../pyplot/getting_started.png), [hires.png](./../pyplot/getting_started.hires.png), [pdf](./../pyplot/getting_started.pdf))
See also
[CSDM](index.html#csdm-api), [Dimension](index.html#dim-api), [DependentVariable](index.html#dv-api),
[Quantity](http://docs.astropy.org/en/stable/api/astropy.units.Quantity.html#astropy.units.Quantity),
[numpy array](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.ndarray.html),
[Matplotlib library](https://matplotlib.org)
Example Gallery[¶](#example-gallery)
---
In this section, we present illustrative examples for importing files serialized with the CSD model, using the csdmpy package.
Because the CSD model allows multi-dimensional datasets with multiple dependent variables, we use a shorthand notation of \(d\mathrm{D}\{p\}\) to indicate that a dataset has a \(p\)-component dependent variable defined on a \(d\)-dimensional coordinate grid.
In the case of correlated datasets, the number of components in each dependent variable is given as a list within the curly braces, i.e.,
\(d\mathrm{D}\{p_0, p_1, p_2, ...\}\).
### Scalar, 1D{1} datasets[¶](#scalar-1d-1-datasets)
The 1D{1} datasets are one dimensional, \(d=1\), with one single-component,
\(p=1\), dependent variable. These datasets are the most common, and we,
therefore, provide a few examples from various fields of science.
#### Global Mean Sea Level rise dataset[¶](#global-mean-sea-level-rise-dataset)
The following dataset is the Global Mean Sea Level (GMSL) rise from the late 19th to the Early 21st Century [1](#f0). The
[original dataset](http://www.cmar.csiro.au/sealevel/sl_data_cmar.html) was downloaded as a CSV file and subsequently converted to the CSD model format.
Let’s import this file.
```
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/gmsl/GMSL.csdf"
sea_level = cp.load(filename)
```
The variable filename is a string with the address to the .csdf file.
The [`load()`](index.html#csdmpy.load) method of the csdmpy module reads the file and returns an instance of the [CSDM](index.html#csdm-api) class, in this case, as a variable `sea_level`. For a quick preview of the data structure, use the [`data_structure`](index.html#csdmpy.CSDM.data_structure) attribute of this instance.
```
print(sea_level.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2019-05-21T13:43:00Z",
"tags": [
"Jason-2",
"satellite altimetry",
"mean sea level",
"climate"
],
"description": "Global Mean Sea Level (GMSL) rise from the late 19th to the Early 21st Century.",
"dimensions": [
{
"type": "linear",
"count": 1608,
"increment": "0.08333333333 yr",
"coordinates_offset": "1880.0416666667 yr",
"quantity_name": "time",
"reciprocal": {
"quantity_name": "frequency"
}
}
],
"dependent_variables": [
{
"type": "internal",
"name": "Global Mean Sea Level",
"unit": "mm",
"quantity_name": "length",
"numeric_type": "float32",
"quantity_type": "scalar",
"component_labels": [
"GMSL"
],
"components": [
[
"-183.0, -171.125, ..., 59.6875, 58.5"
]
]
}
]
}
}
```
Warning
The serialized string from the [`data_structure`](index.html#csdmpy.CSDM.data_structure)
attribute is not the same as the JSON serialization on the file.
This attribute is only intended for a quick preview of the data structure and avoids displaying large datasets. Do not use the value of this attribute to save the data to the file. Instead, use the
[`save()`](index.html#csdmpy.CSDM.save) method of the [CSDM](index.html#csdm-api)
class.
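For example, a minimal sketch of serializing this dataset back to a file; the output filename here is arbitrary,
```
sea_level.save("GMSL_copy.csdf")
```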
The tuple of the dimensions and dependent variables, from this example, are
```
x = sea_level.dimensions
y = sea_level.dependent_variables
```
respectively. The coordinates along the dimension and the component of the dependent variable are
```
print(x[0].coordinates)
```
Out:
```
[1880.04166667 1880.125 1880.20833333 ... 2013.79166666 2013.87499999
2013.95833333] yr
```
and
```
print(y[0].components[0])
```
Out:
```
[-183. -171.125 -164.25 ... 66.375 59.6875 58.5 ]
```
respectively.
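As an illustrative aside (not part of the original example), the net sea-level change over the record follows directly from the first and last component values printed above,
```
# 58.5 - (-183.0) = 241.5, i.e., about 241.5 mm of rise over the record.
print(y[0].components[0][-1] - y[0].components[0][0])
```
Out:
```
241.5
```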
**Plotting the data**
Note
The following code is only for illustrative purposes. The users may use any plotting library to visualize their datasets.
```
import matplotlib.pyplot as plt
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
# csdmpy is compatible with matplotlib functions. Use the csdm object as the
# argument of the matplotlib plotting function.
ax.plot(sea_level)
plt.tight_layout()
plt.show()
```
The following is a quick description of the above code. The first line is an import call for the matplotlib functions. The next two lines create a figure and an axes with the csdm projection, which reads the plot metadata directly from the csdm object: for labeling the axes,
it uses the [`axis_label`](index.html#csdmpy.Dimension.axis_label) attribute of both dimension and dependent variable instances and, for the figure title,
the [`name`](index.html#csdmpy.DependentVariable.name) attribute of the dependent variable instance. The `ax.plot()` call then draws the coordinates along the dimension against the component of the dependent variable.
For additional information, refer to [Matplotlib](https://matplotlib.org)
documentation.
See also
[Getting started with csdmpy package](index.html#getting-started)
Citation
[1](#id1)
Church JA, White NJ. Sea-Level Rise from the Late 19th to the Early 21st Century. Surveys in Geophysics. 2011;32:585–602. DOI:10.1007/s10712-011-9119-1.
[`Download Python source code: plot_0_gmsl.py`](_downloads/2f15257c6f0a64007724e64c49572fb6/plot_0_gmsl.py)
[`Download Jupyter notebook: plot_0_gmsl.ipynb`](_downloads/6108c741ecb1196ac1442cd2e18ea68b/plot_0_gmsl.ipynb)
#### Nuclear Magnetic Resonance (NMR) dataset[¶](#nuclear-magnetic-resonance-nmr-dataset)
The following dataset is a \(^{13}\mathrm{C}\) time-domain NMR Bloch decay signal of ethanol.
Let’s load this data file and take a quick look at its data structure. We follow the steps described in the previous example.
```
import matplotlib.pyplot as plt
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/NMR/blochDecay/blochDecay.csdf"
NMR_data = cp.load(filename)
print(NMR_data.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2016-03-12T16:41:00Z",
"geographic_coordinate": {
"altitude": "238.9719543457031 m",
"longitude": "-83.05154573892345 °",
"latitude": "39.97968794964322 °"
},
"tags": [
"13C",
"NMR",
"spectrum",
"ethanol"
],
"description": "A time domain NMR 13C Bloch decay signal of ethanol.",
"dimensions": [
{
"type": "linear",
"count": 4096,
"increment": "0.1 ms",
"coordinates_offset": "-0.3 ms",
"quantity_name": "time",
"reciprocal": {
"coordinates_offset": "-3005.363 Hz",
"origin_offset": "75426328.86 Hz",
"quantity_name": "frequency",
"label": "13C frequency shift"
}
}
],
"dependent_variables": [
{
"type": "internal",
"numeric_type": "complex128",
"quantity_type": "scalar",
"components": [
[
"(-8899.40625-1276.7734375j), (-4606.88037109375-742.4124755859375j), ..., (37.548492431640625+20.156890869140625j), (-193.9228515625-67.06524658203125j)"
]
]
}
]
}
}
```
This particular example illustrates two additional attributes of the CSD model,
namely, the [`geographic_coordinate`](index.html#csdmpy.CSDM.geographic_coordinate) and
[`tags`](index.html#csdmpy.CSDM.tags). The geographic_coordinate describes the location where the CSDM file was last serialized. You may access this attribute through,
```
print(NMR_data.geographic_coordinate)
```
Out:
```
{'altitude': '238.9719543457031 m', 'longitude': '-83.05154573892345 °', 'latitude': '39.97968794964322 °'}
```
The tags attribute is a list of keywords that best describe the dataset. It is accessed through,
```
print(NMR_data.tags)
```
Out:
```
['13C', 'NMR', 'spectrum', 'ethanol']
```
You may add additional tags, if so desired, using the append method of python’s list class, for example,
```
NMR_data.tags.append("Bloch decay")
print(NMR_data.tags)
```
Out:
```
['13C', 'NMR', 'spectrum', 'ethanol', 'Bloch decay']
```
The coordinates along the dimension are
```
x = NMR_data.dimensions
x0 = x[0].coordinates
print(x0)
```
Out:
```
[-3.000e-01 -2.000e-01 -1.000e-01 ... 4.090e+02 4.091e+02 4.092e+02] ms
```
Unlike the previous example, the dependent variable of an NMR measurement is complex-valued. The numeric type of the components of a dependent variable is accessed through the
[`numeric_type`](index.html#csdmpy.DependentVariable.numeric_type) attribute.
```
y = NMR_data.dependent_variables
print(y[0].numeric_type)
```
Out:
```
complex128
```
##### Visualizing the dataset[¶](#visualizing-the-dataset)
In the previous example, we illustrated a matplotlib script for plotting 1D data.
Here, we use the csdm projection for matplotlib axes, a supplementary interface from csdmpy for plotting 1D and 2D datasets.
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(NMR_data.real, label="real")
ax.plot(NMR_data.imag, label="imag")
plt.grid()
plt.tight_layout()
plt.show()
```
##### Reciprocal dimension object[¶](#reciprocal-dimension-object)
When observing the dimension instance of NMR_data,
```
print(x[0].data_structure)
```
Out:
```
{
"type": "linear",
"count": 4096,
"increment": "0.1 ms",
"coordinates_offset": "-0.3 ms",
"quantity_name": "time",
"reciprocal": {
"coordinates_offset": "-3005.363 Hz",
"origin_offset": "75426328.86 Hz",
"quantity_name": "frequency",
"label": "13C frequency shift"
}
}
```
notice the reciprocal keyword. The `reciprocal`
attribute is useful for datasets that frequently transform to a reciprocal domain,
such as the NMR dataset. The value of the reciprocal attribute is the reciprocal object, which contains metadata for describing the reciprocal coordinates, such as the coordinates_offset, origin_offset of the reciprocal dimension.
You may perform a Fourier transform to visualize the NMR spectrum. Use the
[`fft()`](index.html#csdmpy.CSDM.fft) method on the csdm object `NMR_data` as follows
```
fft_NMR_data = NMR_data.fft()
```
By default, the unit associated with a dimension after FFT is the reciprocal of the unit associated with the dimension before FFT. In this case, the dimension unit after FFT is Hz. NMR datasets are often visualized on a dimensionless frequency scale, in ppm. To convert the dimension's unit to ppm, use,
```
fft_NMR_data.dimensions[0].to("ppm", "nmr_frequency_ratio")
# plot of the frequency domain data after FFT.
fig, ax = plt.subplots(1, 2, figsize=(8, 3), subplot_kw={"projection": "csdm"})
ax[0].plot(fft_NMR_data.real, label="real")
plt.grid()
ax[1].plot(fft_NMR_data.imag, label="imag")
plt.grid()
plt.tight_layout()
plt.show()
```
In the above plot, the plot metadata is taken from the reciprocal object such as the x-axis label.
To return to the time-domain signal, once again use the [`fft()`](index.html#csdmpy.CSDM.fft) method on the `fft_NMR_data` object. The CSDM object's
`complex_fft` attribute determines whether an FFT or an inverse FFT operation is applied.
```
NMR_data_2 = fft_NMR_data.fft()
# plot of the time domain data after inverse FFT.
fig, ax = plt.subplots(1, 2, figsize=(8, 3), subplot_kw={"projection": "csdm"})
ax[0].plot(NMR_data_2.real, label="real")
plt.grid()
ax[1].plot(NMR_data_2.imag, label="imag")
plt.grid()
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_1_NMR_bloch.py`](_downloads/30b7d295d2c6b7a25967271a75e63c55/plot_1_NMR_bloch.py)
[`Download Jupyter notebook: plot_1_NMR_bloch.ipynb`](_downloads/629690b7b87e002c83b241e260aa4018/plot_1_NMR_bloch.ipynb)
#### Electron Paramagnetic Resonance (EPR) dataset[¶](#electron-paramagnetic-resonance-epr-dataset)
The following simulated
[EPR dataset](http://wwwchem.uwimona.edu.jm/spectra/index.html)
was originally obtained as a JCAMP-DX file and subsequently converted to the CSD model file-format. The data structure of this dataset follows,
```
import matplotlib.pyplot as plt
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/EPR/AmanitaMuscaria_base64.csdf"
EPR_data = cp.load(filename)
print(EPR_data.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2015-02-26T16:41:00Z",
"description": "A Electron Paramagnetic Resonance simulated dataset.",
"dimensions": [
{
"type": "linear",
"count": 298,
"increment": "4.0 G",
"coordinates_offset": "2750.0 G",
"quantity_name": "magnetic flux density",
"reciprocal": {
"quantity_name": "electrical mobility"
}
}
],
"dependent_variables": [
{
"type": "internal",
"name": "Amanita.muscaria",
"numeric_type": "float32",
"quantity_type": "scalar",
"component_labels": [
"Intensity Derivative"
],
"components": [
[
"0.067, 0.136, ..., -0.035, -0.137"
]
]
}
]
}
}
```
and the corresponding plot.
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(EPR_data)
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_2_EPR.py`](_downloads/b18ab2a9e7a47f95a5b32a94ef3ff9dd/plot_2_EPR.py)
[`Download Jupyter notebook: plot_2_EPR.ipynb`](_downloads/bc73e7e756936ea914e597fae71f1fec/plot_2_EPR.ipynb)
#### Gas Chromatography dataset[¶](#gas-chromatography-dataset)
The following
[Gas Chromatography dataset](http://wwwchem.uwimona.edu.jm/spectra/index.html)
was obtained as a JCAMP-DX file, and subsequently converted to the CSD model file-format. The data structure of the gas chromatography dataset follows,
```
import matplotlib.pyplot as plt
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/GC/cinnamon_base64.csdf"
GCData = cp.load(filename)
print(GCData.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2011-12-16T12:24:10Z",
"description": "A Gas Chromatography dataset of cinnamon stick.",
"dimensions": [
{
"type": "linear",
"count": 6001,
"increment": "0.0034 min",
"quantity_name": "time",
"reciprocal": {
"quantity_name": "frequency"
}
}
],
"dependent_variables": [
{
"type": "internal",
"name": "Headspace from cinnamon stick",
"numeric_type": "float32",
"quantity_type": "scalar",
"component_labels": [
"monotonic"
],
"components": [
[
"48453.0, 48444.0, ..., 48040.0, 48040.0"
]
]
}
]
}
}
```
and the corresponding plot
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(GCData)
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_3_GS.py`](_downloads/d10cd5195b28f122da2b2d2ddff55b21/plot_3_GS.py)
[`Download Jupyter notebook: plot_3_GS.ipynb`](_downloads/9cb8af6d1a150e5252e6f30323c105b5/plot_3_GS.ipynb)
#### Fourier Transform Infrared Spectroscopy (FTIR) dataset[¶](#fourier-transform-infrared-spectroscopy-ftir-dataset)
The following
[FTIR dataset](http://wwwchem.uwimona.edu.jm/spectra/index.html),
was obtained as a JCAMP-DX file, and subsequently converted to the CSD model file-format. The data structure of the FTIR dataset follows,
```
import matplotlib.pyplot as plt
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/ir/caffeine_base64.csdf"
FTIR_data = cp.load(filename)
print(FTIR_data.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2019-07-01T21:03:42Z",
"description": "An IR spectrum of caffeine.",
"dimensions": [
{
"type": "linear",
"count": 1842,
"increment": "1.930548614883216 cm^-1",
"coordinates_offset": "449.41 cm^-1",
"quantity_name": "wavenumber",
"reciprocal": {
"quantity_name": "length"
}
}
],
"dependent_variables": [
{
"type": "internal",
"name": "Caffeine",
"numeric_type": "float32",
"quantity_type": "scalar",
"component_labels": [
"Transmittance"
],
"components": [
[
"99.31053, 99.08212, ..., 100.22944, 100.22944"
]
]
}
]
}
}
```
and the corresponding plot.
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(FTIR_data)
ax.invert_xaxis()
plt.tight_layout()
plt.show()
```
Because an FTIR spectrum is conventionally displayed with the wavenumber axis decreasing from left to right, the `invert_xaxis()` call in the above script reverses the horizontal axis of the plot.
[`Download Python source code: plot_4_FTIR.py`](_downloads/e7315c2ff19a9113dd46914b9638dc6c/plot_4_FTIR.py)
[`Download Jupyter notebook: plot_4_FTIR.ipynb`](_downloads/93be990b9e0352421d5832d38f5938cf/plot_4_FTIR.ipynb)
#### Ultraviolet–visible (UV-vis) dataset[¶](#ultravioletvisible-uv-vis-dataset)
The following
[UV-vis dataset](http://wwwchem.uwimona.edu.jm/spectra/index.html)
was obtained as a JCAMP-DX file, and subsequently converted to the CSD model file-format. The data structure of the UV-vis dataset follows,
```
import matplotlib.pyplot as plt
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/UV-vis/benzeneVapour_base64.csdf"
UV_data = cp.load(filename)
print(UV_data.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2014-09-30T11:16:33Z",
"description": "A UV-vis spectra of benzene vapours.",
"dimensions": [
{
"type": "linear",
"count": 4001,
"increment": "0.01 nm",
"coordinates_offset": "230.0 nm",
"quantity_name": "length",
"label": "wavelength",
"reciprocal": {
"quantity_name": "wavenumber"
}
}
],
"dependent_variables": [
{
"type": "internal",
"name": "Vapour of Benzene",
"numeric_type": "float32",
"quantity_type": "scalar",
"component_labels": [
"Absorbance"
],
"components": [
[
"0.25890622, 0.25923702, ..., 0.16814752, 0.16786034"
]
]
}
]
}
}
```
and the corresponding plot
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(UV_data)
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_5_UV-vis.py`](_downloads/2ace467373af2d3284b725b1c2a02845/plot_5_UV-vis.py)
[`Download Jupyter notebook: plot_5_UV-vis.ipynb`](_downloads/840e320c9fa46d272b2149ec074058b9/plot_5_UV-vis.ipynb)
#### Mass spectrometry (sparse) dataset[¶](#mass-spectrometry-sparse-dataset)
The following mass spectrometry data of acetone is an example of a sparse dataset.
Here, the CSDM data file holds a sparse dependent variable. Upon import, the components of the dependent variable sparsely populate the coordinate grid. The remaining unpopulated coordinates are assigned a zero value.
```
import matplotlib.pyplot as plt
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/MassSpec/acetone.csdf"
mass_spec = cp.load(filename)
print(mass_spec.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2019-06-23T17:53:26Z",
"description": "MASS spectrum of acetone",
"dimensions": [
{
"type": "linear",
"count": 51,
"increment": "1.0",
"coordinates_offset": "10.0",
"label": "m/z"
}
],
"dependent_variables": [
{
"type": "internal",
"name": "acetone",
"numeric_type": "float32",
"quantity_type": "scalar",
"component_labels": [
"relative abundance"
],
"components": [
[
"0.0, 0.0, ..., 10.0, 0.0"
]
]
}
]
}
}
```
Here, the coordinates along the dimension are
```
print(mass_spec.dimensions[0].coordinates)
```
Out:
```
[10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27.
28. 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 39. 40. 41. 42. 43. 44. 45.
46. 47. 48. 49. 50. 51. 52. 53. 54. 55. 56. 57. 58. 59. 60.]
```
and the corresponding components of the dependent variable,
```
print(mass_spec.dependent_variables[0].components[0])
```
Out:
```
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 9. 9. 49. 0. 0. 79. 1000. 19. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
270. 10. 0.]
```
Note, only eight values were listed in the dependent variable’s components attribute in the .csdf file. The remaining component values were set to zero.
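A quick Numpy check (an illustrative addition, not part of the original example) confirms the sparse population,
```
import numpy as np

# Only the eight values serialized in the sparse file are nonzero.
print(np.count_nonzero(mass_spec.dependent_variables[0].components[0]))
```
Out:
```
8
```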
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(mass_spec)
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_6_Mass.py`](_downloads/9e2a4873713d2b6ff288886d6b3a0346/plot_6_Mass.py)
[`Download Jupyter notebook: plot_6_Mass.ipynb`](_downloads/d6a0dd7e0c7e17393049c501b9e22b00/plot_6_Mass.ipynb)
### Scalar, 2D{1} datasets[¶](#scalar-2d-1-datasets)
The 2D{1} datasets are two dimensional, \(d=2\), with one single-component dependent variable, \(p=1\). Following are some 2D{1} example datasets from various scientific fields expressed in CSDM format.
#### Astronomy dataset[¶](#astronomy-dataset)
The following dataset is a new observation of the Bubble Nebula acquired by [The Hubble Heritage Team](https://archive.stsci.edu/prepds/heritage/bubble/introduction.html),
in February 2016. The original dataset was obtained in the FITS format and subsequently converted to the CSD model file-format. For the convenience of illustration, we have downsampled the original dataset.
Let’s load the .csdf file and look at its data structure.
```
import matplotlib.pyplot as plt
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/BubbleNebula/Bubble_nebula.csdf"
bubble_nebula = cp.load(filename)
print(bubble_nebula.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"timestamp": "2020-01-04T01:43:31Z",
"description": "The dataset is a new observation of the Bubble Nebula acquired by The Hubble Heritage Team, in February 2016.",
"dimensions": [
{
"type": "linear",
"count": 1024,
"increment": "-0.0002581136196 °",
"coordinates_offset": "350.311874957 °",
"quantity_name": "plane angle",
"label": "Right Ascension"
},
{
"type": "linear",
"count": 1024,
"increment": "0.0001219957797701109 °",
"coordinates_offset": "61.12851494969163 °",
"quantity_name": "plane angle",
"label": "Declination"
}
],
"dependent_variables": [
{
"type": "internal",
"name": "Bubble Nebula, 656nm",
"numeric_type": "float32",
"quantity_type": "scalar",
"components": [
[
"0.0, 0.0, ..., 0.0, 0.0"
]
]
}
]
}
}
```
Here, the variable `bubble_nebula` is an instance of the [CSDM](index.html#csdm-api)
class. From the data structure, one finds two dimensions, labeled as
*Right Ascension* and *Declination*, and one single-component dependent variable named *Bubble Nebula, 656nm*.
Let’s get the tuple of the dimension and dependent variable instances from the `bubble_nebula` instance following,
```
x = bubble_nebula.dimensions
y = bubble_nebula.dependent_variables
```
There are two dimension instances in `x`. Let’s look at the coordinates along each dimension, using the
[`coordinates`](index.html#csdmpy.Dimension.coordinates) attribute of the respective instances.
```
print(x[0].coordinates[:10])
```
Out:
```
[350.31187496 350.31161684 350.31135873 350.31110062 350.3108425
350.31058439 350.31032628 350.31006816 350.30981005 350.30955193] deg
```
```
print(x[1].coordinates[:10])
```
Out:
```
[61.12851495 61.12863695 61.12875894 61.12888094 61.12900293 61.12912493
61.12924692 61.12936892 61.12949092 61.12961291] deg
```
Here, we only print the first ten coordinates along the respective dimensions.
The component of the dependent variable is accessed through the
[`components`](index.html#csdmpy.DependentVariable.components) attribute.
```
y00 = y[0].components[0]
```
**Visualize the dataset**
```
from matplotlib.colors import LogNorm
plt.figure(figsize=(6, 4.5))
ax = plt.subplot(projection="csdm")
ax.imshow(bubble_nebula, norm=LogNorm(vmin=7.5e-3, clip=True), aspect="auto")
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_0_astronomy.py`](_downloads/d64374736dfd5d31b246ff983b2fe70f/plot_0_astronomy.py)
[`Download Jupyter notebook: plot_0_astronomy.ipynb`](_downloads/2020e1296499fe7ab16b06d333bbc333/plot_0_astronomy.ipynb)
#### Nuclear Magnetic Resonance (NMR) dataset[¶](#nuclear-magnetic-resonance-nmr-dataset)
The following example is a \(^{29}\mathrm{Si}\) NMR time-domain saturation recovery measurement of a highly siliceous zeolite ZSM-12.
Usually, the spin recovery measurements are acquired over a rectilinear grid where the measurements along one of the dimensions are non-uniform and span several orders of magnitude. In this example, we illustrate the use of monotonic dimensions for describing such datasets.
Let’s load the file.
```
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/NMR/satrec/satRec.csdf"
NMR_2D_data = cp.load(filename)
print(NMR_2D_data.description)
```
Out:
```
A 29Si NMR magnetization saturation recovery measurement of highly siliceous zeolite ZSM-12.
```
The tuples of the dimension and dependent variable instances from the
`NMR_2D_data` instance are
```
x = NMR_2D_data.dimensions
y = NMR_2D_data.dependent_variables
```
respectively. There are two dimension instances in this example with respective dimension data structures as
```
print(x[0].data_structure)
```
Out:
```
{
"type": "linear",
"count": 1024,
"increment": "80.0 µs",
"coordinates_offset": "-41.04 ms",
"quantity_name": "time",
"label": "t2",
"description": "A full echo echo acquisition along the t2 dimension using a Hahn echo.",
"reciprocal": {
"coordinates_offset": "-8766.0626 Hz",
"origin_offset": "79578822.26200001 Hz",
"quantity_name": "frequency",
"label": "29Si frequency shift"
}
}
```
and
```
print(x[1].data_structure)
```
Out:
```
{
"type": "monotonic",
"coordinates": [
"1 s",
"5 s",
"10 s",
"20 s",
"40 s",
"80 s"
],
"quantity_name": "time",
"label": "t1",
"reciprocal": {
"quantity_name": "frequency"
}
}
```
respectively. The first dimension is uniformly sampled, as indicated by the linear subtype, while the second dimension is non-uniformly sampled, as indicated by the monotonic subtype. The coordinates along the respective dimensions are
```
x0 = x[0].coordinates
print(x0)
```
Out:
```
[-41040. -40960. -40880. ... 40640. 40720. 40800.] us
```
```
x1 = x[1].coordinates
print(x1)
```
Out:
```
[ 1. 5. 10. 20. 40. 80.] s
```
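You may also verify the two sampling subtypes directly through the `type` attribute of the respective dimension objects (an illustrative addition, not part of the original example),
```
print(x[0].type, x[1].type)
```
Out:
```
linear monotonic
```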
Notice that the unit of `x0` is microseconds. It might be convenient to convert the unit to milliseconds. To do so, use the
[`to()`](index.html#csdmpy.Dimension.to) method of the respective
[Dimension](index.html#dim-api) instance as follows,
```
x[0].to("ms")
x0 = x[0].coordinates
print(x0)
```
Out:
```
[-41.04 -40.96 -40.88 ... 40.64 40.72 40.8 ] ms
```
As before, the components of the dependent variable are accessed using the
[`components`](index.html#csdmpy.DependentVariable.components) attribute.
```
y00 = y[0].components[0]
```
**Visualize the dataset**
The [`plot()`](index.html#csdmpy.plot) method is a very basic supplementary function for quick visualization of 1D and 2D datasets. You may use this function to plot the data from this example; however, we use the following script to visualize the data with projections onto the respective dimensions.
```
import matplotlib.pyplot as plt
from matplotlib.image import NonUniformImage
import numpy as np
# Set the extents of the image.
# To set the independent variable coordinates at the center of each image
# pixel, subtract and add half the sampling interval from the first
# and the last coordinate, respectively, of the linearly sampled
# dimension, i.e., x0.
si = x[0].increment
extent = (
(x0[0] - 0.5 * si).to("ms").value,
(x0[-1] + 0.5 * si).to("ms").value,
x1[0].value,
x1[-1].value,
)
# Create a 2x2 subplot grid. The subplot at the lower-left corner is for
# the image intensity plot. The subplots at the top-left and bottom-right
# are for the data slice at the horizontal and vertical cross-section,
# respectively. The subplot at the top-right corner is empty.
fig, axi = plt.subplots(
2, 2, gridspec_kw={"width_ratios": [4, 1], "height_ratios": [1, 4]}
)
# The image subplot quadrant.
# Add an image over a rectilinear grid. Here, only the real part of the
# data values is used.
ax = axi[1, 0]
im = NonUniformImage(ax, interpolation="nearest", extent=extent, cmap="bone_r")
im.set_data(x0, x1, y00.real / y00.real.max())
# Set up the grid lines.
ax.images.append(im)
for i in range(x1.size):
ax.plot(x0, np.ones(x0.size) * x1[i], "k--", linewidth=0.5)
ax.grid(axis="x", color="k", linestyle="--", linewidth=0.5, which="both")
# Setup the axes, add the axes labels, and the figure title.
ax.set_xlim([extent[0], extent[1]])
ax.set_ylim([extent[2], extent[3]])
ax.set_xlabel(x[0].axis_label)
ax.set_ylabel(x[1].axis_label)
ax.set_title(y[0].name)
# Add the horizontal data slice to the top-left subplot.
ax0 = axi[0, 0]
top = y00[-1].real
ax0.plot(x0, top, "k", linewidth=0.5)
ax0.set_xlim([extent[0], extent[1]])
ax0.set_ylim([top.min(), top.max()])
ax0.axis("off")
# Add the vertical data slice to the bottom-right subplot.
ax1 = axi[1, 1]
right = y00[:, 513].real
ax1.plot(right, x1, "k", linewidth=0.5)
ax1.set_ylim([extent[2], extent[3]])
ax1.set_xlim([right.min(), right.max()])
ax1.axis("off")
# Add the colorbar and the component label.
cbar = fig.colorbar(im, ax=ax1)
cbar.ax.set_ylabel(y[0].axis_label[0])
# Turn off the axis system for the top-right subplot.
axi[0, 1].axis("off")
plt.tight_layout(pad=0.0, w_pad=0.0, h_pad=0.0)
plt.subplots_adjust(wspace=0.025, hspace=0.05)
plt.show()
```
[`Download Python source code: plot_1_NMR_satrec.py`](_downloads/fce74e4a9bdd9a76acba48491535c512/plot_1_NMR_satrec.py)
[`Download Jupyter notebook: plot_1_NMR_satrec.ipynb`](_downloads/819683f505eab2ea7e6125af07f8aa2c/plot_1_NMR_satrec.ipynb)
#### Transmission Electron Microscopy (TEM) dataset[¶](#transmission-electron-microscopy-tem-dataset)
The following [TEM dataset](https://doi.org/10.1371/journal.pbio.1000502) is a section of an early larval brain of *Drosophila melanogaster* used in the analysis of neuronal microcircuitry. The dataset was obtained from the [TrakEM2 tutorial](http://www.ini.uzh.ch/~acardona/data.html) and subsequently converted to the CSD model file-format.
Let’s import the CSD model data-file and look at its data structure.
```
import matplotlib.pyplot as plt
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/TEM/TEM.csdf"
TEM = cp.load(filename)
print(TEM.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2016-03-12T16:41:00Z",
"description": "TEM image of the early larval brain of Drosophila melanogaster used in the analysis of neuronal microcircuitry.",
"dimensions": [
{
"type": "linear",
"count": 512,
"increment": "4.0 nm",
"quantity_name": "length",
"reciprocal": {
"quantity_name": "wavenumber"
}
},
{
"type": "linear",
"count": 512,
"increment": "4.0 nm",
"quantity_name": "length",
"reciprocal": {
"quantity_name": "wavenumber"
}
}
],
"dependent_variables": [
{
"type": "internal",
"numeric_type": "uint8",
"quantity_type": "scalar",
"components": [
[
"126, 107, ..., 164, 171"
]
]
}
]
}
}
```
This dataset consists of two linear dimensions and one single-component dependent variable. The tuple of the dimension and the dependent variable instances from this example are
```
x = TEM.dimensions
y = TEM.dependent_variables
```
and the respective coordinates (viewed only for the first ten coordinates),
```
print(x[0].coordinates[:10])
```
Out:
```
[ 0. 4. 8. 12. 16. 20. 24. 28. 32. 36.] nm
```
```
print(x[1].coordinates[:10])
```
Out:
```
[ 0. 4. 8. 12. 16. 20. 24. 28. 32. 36.] nm
```
For convenience, let’s convert the coordinates from nm to µm using the
[`to()`](index.html#csdmpy.Dimension.to) method of the respective [Dimension](index.html#dim-api) instance,
```
x[0].to("µm")
x[1].to("µm")
```
and plot the data.
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(TEM, aspect="auto")
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_2_TEM.py`](_downloads/dce4e60ea7a7c7e2726fbafdb1467b35/plot_2_TEM.py)
[`Download Jupyter notebook: plot_2_TEM.ipynb`](_downloads/ed032c73ab16c10936851d1d2de6b079/plot_2_TEM.ipynb)
#### Labeled Dataset[¶](#labeled-dataset)
The CSD model also supports labeled dimensions. In the following example, we present a mixed linear and labeled two-dimensional dataset representing the population of countries as a function of time. The dataset is obtained from
[The World Bank](https://data.worldbank.org/indicator/SP.POP.TOTL?view=chart).
Import the csdmpy module and load the dataset.
```
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/labeled/population.csdf"
labeled_data = cp.load(filename)
```
The tuple of dimension and dependent variable objects from the `labeled_data` instance are
```
x = labeled_data.dimensions
y = labeled_data.dependent_variables
```
Since one of the dimensions is a labeled dimension, let’s make use of the
[`type`](index.html#csdmpy.Dimension.type) attribute of the dimension instances to find out which dimension is labeled.
```
print(x[0].type)
```
Out:
```
linear
```
```
print(x[1].type)
```
Out:
```
labeled
```
Here, the second dimension is the labeled dimension with [1](#f1)
```
print(x[1].count)
```
Out:
```
263
```
labels, where the first five labels are
```
print(x[1].labels[:5])
```
Out:
```
['Aruba' 'Afghanistan' 'Angola' 'Albania' 'Andorra']
```
Note
For labeled dimensions, the [`coordinates`](index.html#csdmpy.Dimension.coordinates)
attribute is an alias of the [`labels`](index.html#csdmpy.Dimension.labels)
attribute.
```
print(x[1].coordinates[:5])
```
Out:
```
['Aruba' 'Afghanistan' 'Angola' 'Albania' 'Andorra']
```
The coordinates along the first dimension, viewed up to the first ten points, are
```
print(x[0].coordinates[:10])
```
Out:
```
[1960. 1961. 1962. 1963. 1964. 1965. 1966. 1967. 1968. 1969.] yr
```
**Plotting the dataset**
You may plot this dataset however you like. Here, we use a bar graph to represent the population of countries in the year 2017. The data corresponding to this year is a cross-section of the dependent variable at index 57 along the `x[0]` dimension.
```
print(x[0].coordinates[57])
```
Out:
```
2017.0 yr
```
To keep the plot simple, we only plot the first 20 country labels along the `x[1]` dimension.
```
import matplotlib.pyplot as plt
import numpy as np
x_data = x[1].coordinates[:20]
x_pos = np.arange(20)
y_data = y[0].components[0][:20, 57]
plt.bar(x_data, y_data, align="center", alpha=0.5)
plt.xticks(x_pos, x_data, rotation=90)
plt.ylabel(y[0].axis_label[0])
plt.yscale("log")
plt.title(y[0].name)
plt.tight_layout()
plt.show()
```
Footnotes
[1](#id1)
In the CSD model, the attribute count is only valid for the
[LinearDimension](index.html#lineardimension-uml). In csdmpy, however, the
[`count`](index.html#csdmpy.Dimension.count) attribute is valid for all dimension objects and returns an integer with the number of grid points along the dimension.
[`Download Python source code: plot_3_labeled.py`](_downloads/7a1605fcf627d622aa73f22f5ea8d090/plot_3_labeled.py)
[`Download Jupyter notebook: plot_3_labeled.ipynb`](_downloads/2d800a3aa200d1b9ee943ca131056889/plot_3_labeled.ipynb)
### Vector datasets[¶](#vector-datasets)
#### Vector, 1D{2} dataset[¶](#vector-1d-2-dataset)
The 1D{2} datasets are one-dimensional, \(d=1\), with a two-component dependent variable, \(p=2\). Such datasets are common in weather forecasting, for example, the wind velocity at a location predicted as a function of time.
The following is an example of a simulated 1D vector field dataset.
```
import matplotlib.pyplot as plt
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/vector/1D_vector.csdf"
vector_data = cp.load(filename)
print(vector_data.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2019-02-12T10:00:00Z",
"dimensions": [
{
"type": "linear",
"count": 10,
"increment": "1.0 m",
"quantity_name": "length",
"reciprocal": {
"quantity_name": "wavenumber"
}
}
],
"dependent_variables": [
{
"type": "internal",
"numeric_type": "float32",
"quantity_type": "vector_2",
"components": [
[
"0.6907923, 0.31292602, ..., 0.40570852, 0.7005596"
],
[
"0.5603441, 0.06866818, ..., 0.48200375, 0.15077808"
]
]
}
]
}
}
```
The tuple of the dimension and dependent variable instances from this example are
```
x = vector_data.dimensions
y = vector_data.dependent_variables
```
with coordinates
```
print(x[0].coordinates)
```
Out:
```
[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] m
```
In this example, the components of the dependent variable are vectors as seen from the
[`quantity_type`](index.html#csdmpy.DependentVariable.quantity_type)
attribute of the corresponding dependent variable instance.
```
print(y[0].quantity_type)
```
Out:
```
vector_2
```
From the value vector_2, vector indicates a vector dataset, while 2 indicates the number of vector components.
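The two components stack along the first axis of the [`components`](index.html#csdmpy.DependentVariable.components) array, giving a shape of (number of components, number of grid points),
```
print(y[0].components.shape)
```
Out:
```
(2, 10)
```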
**Visualizing the dataset**
```
plt.figure(figsize=(5, 3.5))
cp.plot(vector_data)
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_0_vector.py`](_downloads/1908bbcd2c3418d564250abcc4ac8795/plot_0_vector.py)
[`Download Jupyter notebook: plot_0_vector.ipynb`](_downloads/2775efc387eedd4690a5c5c507ac4216/plot_0_vector.ipynb)
#### Vector, 2D{2} dataset[¶](#vector-2d-2-dataset)
The 2D{2} datasets are two-dimensional, \(d=2\),
with one two-component dependent variable, \(p=2\).
The following is an example of a simulated electric field vector dataset of a dipole as a function of two linearly sampled spatial dimensions.
```
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/vector/electric_field/electric_field_base64.csdf"
vector_data = cp.load(filename)
print(vector_data.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2014-09-30T11:16:33Z",
"description": "A simulated electric field dataset from an electric dipole.",
"dimensions": [
{
"type": "linear",
"count": 64,
"increment": "0.0625 cm",
"coordinates_offset": "-2.0 cm",
"quantity_name": "length",
"label": "x",
"reciprocal": {
"quantity_name": "wavenumber"
}
},
{
"type": "linear",
"count": 64,
"increment": "0.0625 cm",
"coordinates_offset": "-2.0 cm",
"quantity_name": "length",
"label": "y",
"reciprocal": {
"quantity_name": "wavenumber"
}
}
],
"dependent_variables": [
{
"type": "internal",
"name": "Electric field lines",
"unit": "C^-1 * N",
"quantity_name": "electric field strength",
"numeric_type": "float32",
"quantity_type": "vector_2",
"components": [
[
"3.7466873e-07, 3.3365018e-07, ..., 3.5343004e-07, 4.0100363e-07"
],
[
"1.6129676e-06, 1.6765767e-06, ..., 1.846712e-06, 1.7754871e-06"
]
]
}
]
}
}
```
The tuple of the dimension and dependent variable instances from this example are
```
x = vector_data.dimensions
y = vector_data.dependent_variables
```
with the respective coordinates (viewed only up to five values), as
```
print(x[0].coordinates[:5])
```
Out:
```
[-2. -1.9375 -1.875 -1.8125 -1.75 ] cm
```
```
print(x[1].coordinates[:5])
```
Out:
```
[-2. -1.9375 -1.875 -1.8125 -1.75 ] cm
```
The components of the dependent variable are vector components as seen from the [`quantity_type`](index.html#csdmpy.DependentVariable.quantity_type)
attribute of the corresponding dependent variable instance.
```
print(y[0].quantity_type)
```
Out:
```
vector_2
```
**Visualizing the dataset**
Let’s visualize the vector data using the *streamplot* method from the matplotlib package. Before we can visualize it, however, an initial processing step is needed. We use the Numpy library for processing.
```
import numpy as np
X, Y = np.meshgrid(x[0].coordinates, x[1].coordinates)  # (x, y) coordinate pairs
U, V = y[0].components[0], y[0].components[1]  # U and V are the components
R = np.sqrt(U**2 + V**2)  # The magnitude of the vector
R /= R.min()  # Scaled magnitude of the vector
Rlog = np.log10(R)  # Scaled magnitude of the vector on a log scale
```
In the above steps, we calculate the X-Y grid points along with a scaled magnitude of the vector dataset. The magnitude is scaled such that the minimum value is one. Next, calculate the log of the scaled magnitude to visualize the intensity on a logarithmic scale.
And now, the streamplot vector plot
```
import matplotlib.pyplot as plt
plt.streamplot(
X.value, Y.value, U, V, density=1, linewidth=Rlog, color=Rlog, cmap="viridis"
)
plt.xlim([x[0].coordinates[0].value, x[0].coordinates[-1].value])
plt.ylim([x[1].coordinates[0].value, x[1].coordinates[-1].value])
# Set axes labels and figure title.
plt.xlabel(x[0].axis_label)
plt.ylabel(x[1].axis_label)
plt.title(y[0].name)
# Set grid lines.
plt.grid(color="gray", linestyle="--", linewidth=0.5)
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_1_vector.py`](_downloads/30afa342803f1d58bd256b3da9103c06/plot_1_vector.py)
[`Download Jupyter notebook: plot_1_vector.ipynb`](_downloads/8b807e477befacdee7997c05848920e9/plot_1_vector.ipynb)
### Tensor datasets[¶](#tensor-datasets)
#### Diffusion tensor MRI, 3D{6} dataset[¶](#diffusion-tensor-mri-3d-6-dataset)
The following is an example of a 3D{6} diffusion tensor MRI dataset with three spatial dimensions, \(d=3\), and one dependent variable with six components, \(p=6\). For illustration, we have reduced the size of the dataset.
The complete diffusion tensor MRI dataset, in the CSDM format, is available
[online](https://osu.box.com/shared/static/i7pwedo7sjabzr9qfn5q2gnjffqabp0p.csdf).
The original dataset [1](#f1) is also available.
Let’s import the CSDM data-file and look at its data structure.
```
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/tensor/human_brain/brain_MRI_reduced_example.csdf"
diff_mri = cp.load(filename)
```
There are three linear dimensions in this dataset, corresponding to the x, y, and z spatial dimensions,
```
x = diff_mri.dimensions
print(x[0].label, x[1].label, x[2].label)
```
Out:
```
x y z
```
and one six-component dependent variable holding the diffusion tensor components.
Because the diffusion tensor is a symmetric second-rank tensor, we only need six tensor components. The components of the tensor are ordered as
```
y = diff_mri.dependent_variables
print(y[0].component_labels)
```
Out:
```
['dxx', 'dxy', 'dxz', 'dyy', 'dyz', 'dzz']
```
The symmetric matrix information is also found with the
[`quantity_type`](index.html#csdmpy.DependentVariable.quantity_type) attribute,
```
print(y[0].quantity_type)
```
Out:
```
symmetric_matrix_3
```
which implies a 3x3 symmetric matrix.
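As an illustrative sketch (not part of the original example), the six stored components suffice to reconstruct the full 3x3 symmetric tensor at any voxel; the voxel index below is arbitrary,
```
import numpy as np

# Pick an arbitrary voxel; each component array spans the three spatial dimensions.
voxel = (0, 0, 0)
dxx, dxy, dxz, dyy, dyz, dzz = (y[0].components[n][voxel] for n in range(6))

# Assemble the symmetric 3x3 diffusion tensor at the chosen voxel.
tensor = np.array([
    [dxx, dxy, dxz],
    [dxy, dyy, dyz],
    [dxz, dyz, dzz],
])
```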
**Visualize the dataset**
In the following, we visualize the isotropic diffusion coefficient, that is, the average of the \(d_{xx}\), \(d_{yy}\), and \(d_{zz}\) tensor components.
Since it’s a three-dimensional dataset, we’ll visualize the projections onto the three dimensions.
```
# the isotropic diffusion coefficient.
# component at index 0 = dxx
# component at index 3 = dyy
# component at index 5 = dzz
isotropic_diffusion = (y[0].components[0] + y[0].components[3] + y[0].components[5]) / 3
```
In the following, we use certain features of the csdmpy module.
Please refer to [Generating CSDM objects](index.html#generating-csdm-objects) for further details.
```
# Create a new csdm object from the isotropic diffusion coefficient array.
new_csdm = cp.as_csdm(isotropic_diffusion, quantity_type="scalar")
# Add the dimensions from `diff_mri` object to the `new_csdm` object.
for i, dim in enumerate(x):
new_csdm.dimensions[i] = dim
```
Now, we can plot the projections of the isotropic diffusion coefficients along the respective dimensions as
```
import matplotlib.pyplot as plt
# projection along the x-axis.
plt.figure(figsize=(5, 4))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(new_csdm.sum(axis=0), cmap="gray_r", origin="upper", aspect="auto")
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
```
# projection along the y-axis.
plt.figure(figsize=(5, 4))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(new_csdm.sum(axis=1), cmap="gray_r", origin="upper", aspect="auto")
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
```
# projection along the z-axis.
plt.figure(figsize=(5, 4))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(new_csdm.sum(axis=2), cmap="gray_r", origin="upper", aspect="auto")
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
Citation
[1](#id1)
Diffusion tensor MRI [data](http://www.sci.utah.edu/~gk/DTI-data/); 2000.
[`Download Python source code: plot_0_3D_diff_tensor_mri.py`](_downloads/953a8859d5f7b26c5c6f21241826a872/plot_0_3D_diff_tensor_mri.py)
[`Download Jupyter notebook: plot_0_3D_diff_tensor_mri.ipynb`](_downloads/fe9ffda1ab6b2ec409cc0dda49bddd56/plot_0_3D_diff_tensor_mri.ipynb)
### Pixel datasets[¶](#pixel-datasets)
#### Image, 2D{3} datasets[¶](#image-2d-3-datasets)
The 2D{3} dataset is two dimensional, \(d=2\), with a single three-component dependent variable, \(p=3\).
A common example from this subset is perhaps the RGB image dataset.
An RGB image dataset has two spatial dimensions and one dependent variable with three components corresponding to the red, green, and blue color intensities.
The following is an example of an RGB image dataset.
```
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/image/raccoon_image.csdf"
ImageData = cp.load(filename)
print(ImageData.data_structure)
```
Out:
```
{
"csdm": {
"version": "1.0",
"read_only": true,
"timestamp": "2016-03-12T16:41:00Z",
"tags": [
"racoon",
"image",
"<NAME>"
],
"description": "An RBG image of a raccoon face.",
"dimensions": [
{
"type": "linear",
"count": 1024,
"increment": "1.0",
"label": "horizontal index"
},
{
"type": "linear",
"count": 768,
"increment": "1.0",
"label": "vertical index"
}
],
"dependent_variables": [
{
"type": "internal",
"name": "raccoon",
"numeric_type": "uint8",
"quantity_type": "pixel_3",
"component_labels": [
"red",
"green",
"blue"
],
"components": [
[
"121, 138, ..., 119, 118"
],
[
"112, 129, ..., 155, 154"
],
[
"131, 148, ..., 93, 92"
]
]
}
]
}
}
```
The tuple of the dimension and dependent variable instances from
the `ImageData` instance are
```
x = ImageData.dimensions
y = ImageData.dependent_variables
```
respectively. There are two dimensions, and the coordinates along each dimension are
```
print("x0 =", x[0].coordinates[:10])
```
Out:
```
x0 = [0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
```
```
print("x1 =", x[1].coordinates[:10])
```
Out:
```
x1 = [0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
```
respectively, where only the first ten coordinates along each dimension are displayed.
The dependent variable is the image data, as also seen from the
[`quantity_type`](index.html#csdmpy.DependentVariable.quantity_type) attribute of the corresponding [DependentVariable](index.html#dv-api) instance.
```
print(y[0].quantity_type)
```
Out:
```
pixel_3
```
From the value pixel_3, pixel indicates pixel data, while 3 indicates the number of pixel components.
As usual, the components of the dependent variable are accessed through the [`components`](index.html#csdmpy.DependentVariable.components) attribute.
To access the individual components, use the appropriate array indexing.
For example,
```
print(y[0].components[0])
```
Out:
```
[[121 138 153 ... 119 131 139]
[ 89 110 130 ... 118 134 146]
[ 73 94 115 ... 117 133 144]
...
[ 87 94 107 ... 120 119 119]
[ 85 95 112 ... 121 120 120]
[ 85 97 111 ... 120 119 118]]
```
will return an array with the first component of all data values. In this case,
the component corresponds to the red color intensity, as indicated by the corresponding component label. The label corresponding to the component array is accessed through the
[`component_labels`](index.html#csdmpy.DependentVariable.component_labels)
attribute with appropriate indexing, that is
```
print(y[0].component_labels[0])
```
Out:
```
red
```
To avoid displaying a large output, we print, as an example, the shape of each component array (using the Numpy array's shape attribute) for the three components along with their respective labels.
```
print(y[0].component_labels[0], y[0].components[0].shape)
```
Out:
```
red (768, 1024)
```
```
print(y[0].component_labels[1], y[0].components[1].shape)
```
Out:
```
green (768, 1024)
```
```
print(y[0].component_labels[2], y[0].components[2].shape)
```
Out:
```
blue (768, 1024)
```
The shape (768, 1024) corresponds to the number of points along each of the dimensions.
Note
In this example, since there is only one dependent variable, the index of y is set to zero, which is `y[0]`. The indices for the
[`components`](index.html#csdmpy.DependentVariable.components) and the
[`component_labels`](index.html#csdmpy.DependentVariable.component_labels),
on the other hand, span the number of components.
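As a sketch (an illustrative addition, not part of the original example), the three component arrays can also be stacked into a conventional (height, width, 3) RGB array with Numpy,
```
import numpy as np

# Stack red, green, and blue along the last axis, as most image libraries expect.
rgb = np.stack([y[0].components[i] for i in range(3)], axis=-1)
print(rgb.shape)
```
Out:
```
(768, 1024, 3)
```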
Now, to visualize the dataset as an RGB image,
```
import matplotlib.pyplot as plt
ax = plt.subplot(projection="csdm")
ax.imshow(ImageData, origin="upper")
plt.tight_layout()
plt.show()
```
[`Download Python source code: plot_0_image.py`](_downloads/1972d9160a0bd68b72cbb7910abe048a/plot_0_image.py)
[`Download Jupyter notebook: plot_0_image.ipynb`](_downloads/67b09843f1acd1ba3deb951db66cbbeb/plot_0_image.ipynb)
### Correlated datasets[¶](#correlated-datasets)
The Core Scientific Dataset Model (CSDM) supports multiple dependent variables that share the same \(d\)-dimensional coordinate grid, where
\(d \ge 0\).
We refer to such datasets as correlated datasets.
Following are a few examples of correlated datasets.
#### Scatter, 0D{1,1} dataset[¶](#scatter-0d-1-1-dataset)
We start with a 0D{1,1} correlated dataset, that is, a dataset without a coordinate grid. A 0D{1,1} dataset has no dimensions, d = 0, and two single-component dependent variables.
In the following example [1](#f3), the two correlated dependent variables are the \(^{29}\text{Si}\) - \(^{29}\text{Si}\) nuclear spin couplings,
\(^2J\), across a Si-O-Si linkage, and the s-character product on the O and two Si along the Si-O bond across the Si-O-Si linkage.
Let’s import the dataset.
```
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/correlatedDataset/0D_dataset/J_vs_s.csdf"
zero_d_dataset = cp.load(filename)
```
Since the dataset has no dimensions, the value of the
[`dimensions`](index.html#csdmpy.CSDM.dimensions) attribute of the [`CSDM`](index.html#csdmpy.CSDM)
class is an empty tuple,
```
print(zero_d_dataset.dimensions)
```
Out:
```
[]
```
The [`dependent_variables`](index.html#csdmpy.CSDM.dependent_variables) attribute, however, holds two dependent-variable objects. The data structure from the two dependent variables is
```
print(zero_d_dataset.dependent_variables[0].data_structure)
```
Out:
```
{
"type": "internal",
"name": "Gaussian computed J-couplings",
"unit": "Hz",
"quantity_name": "frequency",
"numeric_type": "float32",
"quantity_type": "scalar",
"component_labels": [
"J-coupling"
],
"components": [
[
"-1.87378, -1.42918, ..., 25.1742, 26.0608"
]
]
}
```
and
```
print(zero_d_dataset.dependent_variables[1].data_structure)
```
Out:
```
{
"type": "internal",
"name": "product of s-characters",
"unit": "%",
"numeric_type": "float32",
"quantity_type": "scalar",
"component_labels": [
"s-character product"
],
"components": [
[
"0.8457453, 0.8534185, ..., 1.5277092, 1.5289451"
]
]
}
```
respectively.
**Visualizing the dataset**
The correlation plot of the dependent-variables from the dataset is shown below.
```
import matplotlib.pyplot as plt
y0 = zero_d_dataset.dependent_variables[0]
y1 = zero_d_dataset.dependent_variables[1]
plt.scatter(y1.components[0], y0.components[0], s=2, c="k")
plt.xlabel(y1.axis_label[0])
plt.ylabel(y0.axis_label[0])
plt.tight_layout()
plt.show()
```
Citation
[1](#id1)
<NAME>, <NAME>, <NAME>, <NAME>. Correlating geminal couplings to structure in framework silicates. Phys Chem Chem Phys.
2018;20:562–571. DOI:10.1039/C7CP06486A
#### Meteorological, 2D{1,1,2,1,1} dataset[¶](#meteorological-2d-1-1-2-1-1-dataset)
The following dataset is obtained from [NOAA/NCEP Global Forecast System (GFS)
Atmospheric Model](https://coastwatch.pfeg.noaa.gov/erddap/griddap/NCEP_Global_Best.graph?ugrd10m[(2017-09-17T12:00:00Z)][(-4.5):(52.0)][(275.0):(331.5)]&.draw=surface&.vars=longitude%7Clatitude%7Cugrd10m&.colorBar=%7C%7C%7C%7C%7C&.bgColor=0xffccccff)
and subsequently converted to the CSD model file-format.
The dataset consists of two spatial dimensions describing the geographical coordinates of the earth's surface, and five dependent variables: 1) surface temperature, 2) air temperature at 2 m, 3) relative humidity, and
4) air pressure at sea level as four scalar quantity_type dependent variables, and 5) wind velocity as a two-component vector quantity_type dependent variable.
Let’s import the csdmpy module and load this dataset.
```
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/correlatedDataset/forecast/NCEI.csdf"
multi_dataset = cp.load(filename)
```
The tuple of dimension and dependent variable objects from
`multi_dataset` instance are
```
x = multi_dataset.dimensions
y = multi_dataset.dependent_variables
```
The dataset contains two dimension objects representing the longitude and latitude of the earth's surface. The labels along the respective dimensions are
```
print(x[0].label)
```
Out:
```
longitude
```
```
print(x[1].label)
```
Out:
```
latitude
```
There are a total of five dependent variables stored in this dataset. The first dependent variable is the surface air temperature. The data structure of this dependent variable is
```
print(y[0].data_structure)
```
Out:
```
{
"type": "internal",
"description": "The label 'tmpsfc' is the standard attribute name for 'surface air temperature'.",
"name": "Surface temperature",
"unit": "K",
"quantity_name": "temperature",
"numeric_type": "float64",
"quantity_type": "scalar",
"component_labels": [
"tmpsfc - surface air temperature"
],
"components": [
[
"292.8152160644531, 293.0152282714844, ..., 301.8152160644531, 303.8152160644531"
]
]
}
```
If you have followed all previous examples, the above data structure should be self-explanatory.
We will use the following snippet to plot the dependent variables of scalar quantity_type.
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
def plot_scalar(yx):
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
# Set the extents of the image plot.
extent = [
x[0].coordinates[0].value,
x[0].coordinates[-1].value,
x[1].coordinates[0].value,
x[1].coordinates[-1].value,
]
# Add the image plot.
im = ax.imshow(yx.components[0], origin="lower", extent=extent, cmap="coolwarm")
# Add a colorbar.
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = fig.colorbar(im, cax)
cbar.ax.set_ylabel(yx.axis_label[0])
# Set up the axes label and figure title.
ax.set_xlabel(x[0].axis_label)
ax.set_ylabel(x[1].axis_label)
ax.set_title(yx.name)
# Set up the grid lines.
ax.grid(color="k", linestyle="--", linewidth=0.5)
plt.tight_layout()
plt.show()
```
Now to plot the data from the dependent variable.
```
plot_scalar(y[0])
```
Similarly, other dependent variables with their respective plots are
```
print(y[1].name)
```
Out:
```
Air temperature at 2m
```
```
plot_scalar(y[1])
```
```
print(y[3].name)
```
Out:
```
Relative humidity
```
```
plot_scalar(y[3])
```
```
print(y[4].name)
```
Out:
```
Air pressure at sea level
```
```
plot_scalar(y[4])
```
Notice, we skipped the dependent variable at index two. The reason is that this particular dependent variable is a vector dataset,
```
print(y[2].quantity_type)
```
Out:
```
vector_2
```
```
print(y[2].name)
```
Out:
```
Wind velocity
```
which represents the wind velocity, and requires a vector visualization routine. To visualize the vector data, we use the matplotlib quiver plot.
```
def plot_vector(yx):
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
magnitude = np.sqrt(yx.components[0] ** 2 + yx.components[1] ** 2)
cf = ax.quiver(
x[0].coordinates,
x[1].coordinates,
yx.components[0],
yx.components[1],
magnitude,
pivot="middle",
cmap="inferno",
)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = fig.colorbar(cf, cax)
cbar.ax.set_ylabel(yx.name + " / " + str(yx.unit))
ax.set_xlim([x[0].coordinates[0].value, x[0].coordinates[-1].value])
ax.set_ylim([x[1].coordinates[0].value, x[1].coordinates[-1].value])
# Set axes labels and figure title.
ax.set_xlabel(x[0].axis_label)
ax.set_ylabel(x[1].axis_label)
ax.set_title(yx.name)
# Set grid lines.
ax.grid(color="gray", linestyle="--", linewidth=0.5)
plt.tight_layout()
plt.show()
```
```
plot_vector(y[2])
```
#### Astronomy, 2D{1,1,1} dataset (Creating image composition)[¶](#astronomy-2d-1-1-1-dataset-creating-image-composition)
Often, images in astronomy are a composition of datasets measured at different wavelengths over an area of the sky. In this example, we illustrate the use of the CSDM file-format and the csdmpy module beyond just reading a CSDM-compliant file. We'll use these datasets to compose an image
using NumPy arrays.
The following example is the data from the Eagle Nebula acquired at three different wavelengths and serialized as a CSDM compliant file.
Import the csdmpy module and load the dataset.
```
import csdmpy as cp
domain = "https://www.ssnmr.org/sites/default/files/CSDM"
filename = f"{domain}/EagleNebula/eagleNebula_base64.csdf"
eagle_nebula = cp.load(filename)
```
Let’s get the tuple of dimension and dependent variable objects from the `eagle_nebula` instance.
```
x = eagle_nebula.dimensions
y = eagle_nebula.dependent_variables
```
Before we compose an image, let’s take a look at the individual dependent variables from the dataset. The three dependent variables correspond to signal acquisition at 502 nm, 656 nm, and 673 nm, respectively. This information is also listed in the
[`name`](index.html#csdmpy.DependentVariable.name) attribute of the respective dependent variable instances,
```
print(y[0].name)
```
Out:
```
Eagle Nebula acquired @ 502 nm
```
```
print(y[1].name)
```
Out:
```
Eagle Nebula acquired @ 656 nm
```
```
print(y[2].name)
```
Out:
```
Eagle Nebula acquired @ 673 nm
```
##### Data Visualization[¶](#data-visualization)
For convenience, let's view this CSDM object with three dependent variables as three CSDM objects, each with a single dependent variable. We use the split() method.
```
data0, data1, data2 = eagle_nebula.split()
```
Here, `data0`, `data1`, and `data2` contain the dependent variables at indexes 0, 1, and 2 of the `eagle_nebula` object, respectively. Let's plot the data from these dependent variables.
```
import matplotlib.pyplot as plt
_, ax = plt.subplots(3, 1, figsize=(6, 14), subplot_kw={"projection": "csdm"})
ax[0].imshow(data0 / data0.max(), cmap="bone", vmax=0.1, aspect="auto")
ax[1].imshow(data1 / data1.max(), cmap="bone", vmax=0.1, aspect="auto")
ax[2].imshow(data2 / data2.max(), cmap="bone", vmax=0.1, aspect="auto")
plt.tight_layout()
plt.show()
```
##### Image composition[¶](#image-composition)
```
import numpy as np
```
For the image composition, we assign the dependent variable at index zero as the blue channel, index one as the green channel, and index two as the red channel of an RGB image. Start with creating an empty array to hold the RGB dataset.
```
shape = y[0].components[0].shape + (3,)
image = np.empty(shape, dtype=np.float64)
```
Here, `image` is the variable we use for storing the composition. Add the respective dependent variables to the designated color channel in the
`image` array,
```
image[..., 0] = y[2].components[0] / y[2].components[0].max()  # red channel
image[..., 1] = y[1].components[0] / y[1].components[0].max()  # green channel
image[..., 2] = y[0].components[0] / y[0].components[0].max()  # blue channel
```
Following the intensity plot of the individual dependent variables, see the above figures, it is evident that the component intensity from `y[1]` and,
therefore, the green channel dominates the other two. If we plot the `image` data, the image will be saturated with green intensity. To attain a color-balanced image, we arbitrarily scale the intensities from the three channels. You may choose any scaling factor. Each scaling factor will produce a different composition. In this example, we use the following,
```
image[..., 0] = np.clip(image[..., 0] * 65.0, 0, 1)  # red channel
image[..., 1] = np.clip(image[..., 1] * 7.50, 0, 1)  # green channel
image[..., 2] = np.clip(image[..., 2] * 75.0, 0, 1)  # blue channel
```
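As a side note, instead of hand-tuned factors, you could derive a per-channel scale from each channel's own intensity statistics. The following sketch is one data-driven alternative to the manual factors above; it is not part of the original example and would replace the clipping step.
```
# Scale each channel so that its 99th-percentile intensity maps to 1, then clip.
for k in range(3):
    image[..., k] = np.clip(image[..., k] / np.percentile(image[..., k], 99), 0, 1)
```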
Now to plot this composition.
```
# Set the extents of the image plot.
extent = [
x[0].coordinates[0].value,
x[0].coordinates[-1].value,
x[1].coordinates[0].value,
x[1].coordinates[-1].value,
]
# add figure
plt.imshow(image, origin="lower", extent=extent)
plt.xlabel(x[0].axis_label)
plt.ylabel(x[1].axis_label)
plt.title("composition")
plt.tight_layout()
plt.show()
```
### Sparse datasets[¶](#sparse-datasets)
#### Sparse along one dimension, 2D{1,1} dataset[¶](#sparse-along-one-dimension-2d-1-1-dataset)
The following is an example [1](#f2) of a 2D{1,1} sparse dataset with two dimensions,
\(d=2\), and two, \(p=2\), sparse single-component dependent variables,
where each component is sparsely sampled along one dimension. The dataset is from a hypercomplex acquisition of an NMR experiment.
Let’s import the CSD model data-file.
```
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/sparse/iglu_1d.csdf"
sparse_1d = cp.load(filename)
```
There are two linear dimensions and two single-component sparse dependent variables.
The tuple of the dimension and the dependent variable instances are
```
x = sparse_1d.dimensions
y = sparse_1d.dependent_variables
```
The coordinates, viewed only for the first ten coordinates, are
```
print(x[0].coordinates[:10])
```
Out:
```
[ 0. 192. 384. 576. 768. 960. 1152. 1344. 1536. 1728.] us
```
```
print(x[1].coordinates[:10])
```
Out:
```
[ 0. 192. 384. 576. 768. 960. 1152. 1344. 1536. 1728.] us
```
Converting the coordinates to ms.
```
x[0].to("ms")
x[1].to("ms")
```
**Visualizing the dataset**
```
import matplotlib.pyplot as plt
# split the CSDM object with two dependent variables into two CSDM objects with single
# dependent variables.
cos, sin = sparse_1d.split()
# cosine data
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
cb = ax.contourf(cos.real)
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
```
# sine data
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
cb = ax.contourf(sin.real)
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
Citation
[1](#id1)
<NAME>, <NAME>., Fast Forward Maximum entropy reconstruction of sparsely sampled data., J Magn Reson. 2012, 223, 164-169.
doi: 10.1016/j.jmr.2012.07.002
#### Sparse along two dimensions, 2D{1,1} dataset[¶](#sparse-along-two-dimensions-2d-1-1-dataset)
The following is an example [1](#f2) of a 2D{1,1} sparse dataset with two dimensions,
\(d=2\), and two, \(p=2\), sparse single-component dependent variables,
where each component is sparsely sampled along two dimensions. The dataset is from a hypercomplex acquisition of an NMR experiment.
Let’s import the CSD model data-file and look at its data structure.
```
import csdmpy as cp
filename = "https://www.ssnmr.org/sites/default/files/CSDM/sparse/iglu_2d.csdf"
sparse_2d = cp.load(filename)
```
There are two linear dimensions and two single-component sparse dependent variables.
The tuple of the dimension and the dependent variable instances are
```
x = sparse_2d.dimensions
y = sparse_2d.dependent_variables
```
The coordinates, viewed only for the first ten coordinates, are
```
print(x[0].coordinates[:10])
```
Out:
```
[ 0. 192. 384. 576. 768. 960. 1152. 1344. 1536. 1728.] us
```
```
print(x[1].coordinates[:10])
```
Out:
```
[ 0. 192. 384. 576. 768. 960. 1152. 1344. 1536. 1728.] us
```
Converting the coordinates to ms.
```
x[0].to("ms")
x[1].to("ms")
```
**Visualize the dataset**
```
import matplotlib.pyplot as plt
# split the CSDM object with two dependent variables into two CSDM objects with single
# dependent variables.
cos, sin = sparse_2d.split()
# cosine data
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
cb = ax.contourf(cos.real)
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
```
# sine data
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
cb = ax.contourf(sin.real)
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
Citation
[1](#id1)
<NAME>, <NAME>., Fast Forward Maximum entropy reconstruction of sparsely sampled data., J Magn Reson. 2012, 223, 164-169.
doi: 10.1016/j.jmr.2012.07.002
Serializing CSDM object to file[¶](#serializing-csdm-object-to-file)
---
An instance of a [CSDM](index.html#csdm-api) object is serialized as a csdf/csdfe JSON-format file with the [`save()`](index.html#csdmpy.CSDM.save) method.
When serializing the dependent-variable from the CSDM object to the data-file,
the csdmpy module uses the value of the dependent variable’s
[`encoding`](index.html#csdmpy.DependentVariable.encoding) attribute to determine the encoding type of the serialized data. There are three encoding types for the dependent variables:
* `none`
* `base64`
* `raw`
Note
By default, all instances of
[`DependentVariable`](index.html#csdmpy.DependentVariable) from a
[`CSDM`](index.html#csdmpy.CSDM) object are serialized as base64 strings.
For the following examples, consider `data` as an instance of the
[`CSDM`](index.html#csdmpy.CSDM) class.
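You can inspect a dependent variable's current encoding; per the note above, it defaults to base64:
```
>>> print(data.dependent_variables[0].encoding)
base64
```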
To serialize a dependent variable with a given encoding type, set the value of its encoding attribute to the respective encoding. For example,
**As `none` encoding**
```
>>> data.dependent_variables[0].encoding = "none"
>>> data.save('my_file.csdf')
```
The above code will serialize the dependent variable at index zero to a JSON file, my_file.csdf, where each component of the dependent variable is serialized as an array of JSON numbers.
**As `base64` encoding**
```
>>> data.dependent_variables[0].encoding = "base64"
>>> data.save('my_file.csdf')
```
The above code will serialize the dependent variable at index zero to a JSON file, my_file.csdf, where each component of the dependent variable is serialized as a base64 string.
**As `raw` encoding**
```
>>> data.dependent_variables[0].encoding = "raw"
>>> data.save('my_file.csdfe')
```
The above code will serialize the metadata from the dependent variable at index zero to a JSON file, my_file.csdfe, which includes a link to an external file where the components of the respective dependent variable are serialized as a binary array. The binary file is named my_file_0.dat, where my_file is the filename from the argument of the save method, and 0 is the index number of the dependent variable from the CSDM object.
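As an illustration of this naming convention, the following sketch (assuming a writable working directory) lists the files produced by a raw-encoded save:
```
>>> import os
>>> data.dependent_variables[0].encoding = "raw"
>>> data.save('my_file.csdfe')
>>> sorted(f for f in os.listdir('.') if f.startswith('my_file'))
['my_file.csdfe', 'my_file_0.dat']
```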
**Multiple encoding types**
In the case of multiple dependent variables, you may choose to serialize each dependent variable with a different encoding, for example,
```
>>> my_data.dependent_variables[0].encoding = "raw"
>>> my_data.dependent_variables[1].encoding = "base64"
>>> my_data.dependent_variables[2].encoding = "none"
>>> my_data.dependent_variables[3].encoding = "base64"
>>> my_data.save('my_file.csdfe')
```
In the above example, `my_data` is a CSDM object containing four
[`DependentVariable`](index.html#csdmpy.DependentVariable) objects. Here, we serialize the dependent variable at index two with `none`,
the dependent variables at indexes one and three with `base64`,
and the dependent variable at index zero with `raw` encoding, respectively.
Note
Because one of the dependent variables, the one at index zero in the above example, is set to be serialized with an external subtype (the raw encoding), the corresponding file should be saved with a .csdfe extension.
Using csdmpy objects[¶](#using-csdmpy-objects)
---
The csdmpy module is not just designed for deserializing and serializing the .csdf or .csdfe files. It can also be used to create new datasets,
a feature that is most useful when converting datasets to CSDM compliant files.
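For instance, a minimal end-to-end sketch of such a conversion (the individual pieces, `as_csdm()`, dimension scaling, and `save()`, are covered in the sections below; the filename is illustrative):
```
>>> import numpy as np
>>> import csdmpy as cp
>>> signal = np.random.rand(50)  # raw data from some acquisition
>>> data = cp.as_csdm(signal)  # wrap the array as a 1D{1} csdm object
>>> data.dimensions[0] *= cp.ScalarQuantity("0.5 ms")  # attach a physical dimension
>>> data.save('converted_dataset.csdf')  # serialize to a CSDM-compliant file
```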
### Generating Dimension objects[¶](#generating-dimension-objects)
#### LinearDimension[¶](#lineardimension)
A LinearDimension is where the coordinates are regularly spaced along the dimension. This type of dimension is frequently encountered in many scientific datasets. There are several ways to generate LinearDimension.
**Using the** [`Dimension`](index.html#csdmpy.Dimension) **class.**
```
>>> import csdmpy as cp
>>> x = cp.Dimension(
... type="linear",
... count=10,
... increment="0.1 s",
... label="time",
... description="A temporal dimension.",
... )
>>> print(x)
LinearDimension([0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9] s)
```
**Using the** [`LinearDimension`](index.html#csdmpy.LinearDimension) **class.**
```
>>> import csdmpy as cp
>>> x1 = cp.LinearDimension(
... count=10, increment="0.1 s", label="time", description="A temporal dimension."
... )
>>> print(x1)
LinearDimension([0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9] s)
```
**Using NumPy array**
You may also create a LinearDimension object from a one-dimensional NumPy array using the [`as_dimension()`](index.html#csdmpy.as_dimension) method.
```
>>> import numpy as np
>>> array = np.arange(10) * 0.1
>>> x2 = cp.as_dimension(array)
>>> print(x2)
LinearDimension([0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9])
```
Note, the Dimension object `x2` is dimensionless. You can create a physical dimension by either providing an appropriate unit as the argument to the
[`as_dimension()`](index.html#csdmpy.as_dimension) method,
```
>>> x3 = cp.as_dimension(array, unit="s")
>>> print(x3)
LinearDimension([0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9] s)
```
or appropriately multiplying the dimension object `x2` with a
`ScalarQuantity`.
```
>>> x2 *= cp.ScalarQuantity("s")
>>> print(x2)
LinearDimension([0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9] s)
```
The coordinates of the `x2` LinearDimension object are
```
>>> x2.coordinates
<Quantity [0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9] s>
```
where `x2.coordinates` is a [Quantity](http://docs.astropy.org/en/stable/api/astropy.units.Quantity.html#astropy.units.Quantity)
array. The value and the unit of the quantity instance are
```
>>> # To access the numpy array
>>> numpy_array = x.coordinates.value
>>> print("numpy array =", numpy_array)
numpy array = [0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]
>>> # To access the astropy.unit
>>> unit = x.coordinates.unit
>>> print("unit =", unit)
unit = s
```
respectively.
Note
When generating LinearDimension objects from a NumPy array, the array must be one-dimensional and regularly spaced.
```
>>> cp.as_dimension(np.arange(20).reshape(2, 10))
ValueError: Cannot convert a 2 dimensional array to a Dimension object.
```
#### MonotonicDimension[¶](#monotonicdimension)
A MonotonicDimension is one where the coordinates along the dimension are sampled monotonically, that is, they are either strictly increasing or strictly decreasing. As with the LinearDimension, there are several ways to generate a MonotonicDimension.
**Using the** [`Dimension`](index.html#csdmpy.Dimension) **class.**
```
>>> import csdmpy as cp
>>> x = cp.Dimension(
... type="monotonic",
... coordinates=[
... "10ns",
... "100ns",
... "1µs",
... "10µs",
... "100µs",
... "1ms",
... "10ms",
... "100ms",
... "1s",
... "10s",
... ],
... )
>>> print(x)
MonotonicDimension([1.e+01 1.e+02 1.e+03 1.e+04 1.e+05 1.e+06 1.e+07 1.e+08 1.e+09 1.e+10] ns)
```
**Using the** [`MonotonicDimension`](index.html#csdmpy.MonotonicDimension) **class.**
```
>>> import numpy as np
>>> array = np.asarray(
... [
... -0.28758166,
... -0.22712233,
... -0.19913859,
... -0.17235106,
... -0.1701172,
... -0.10372635,
... -0.01817061,
... 0.05936719,
... 0.18141424,
... 0.34758913,
... ]
... )
>>> x = cp.MonotonicDimension(coordinates=array) * cp.ScalarQuantity("cm")
>>> print(x)
MonotonicDimension([-0.28758166 -0.22712233 -0.19913859 -0.17235106 -0.1701172 -0.10372635
-0.01817061 0.05936719 0.18141424 0.34758913] cm)
```
In the above example, we generate a dimensionless MonotonicDimension from the NumPy array and then scale its dimensionality by multiplying the object with an appropriate `ScalarQuantity`.
**From numpy arrays.**
Use the [`as_dimension()`](index.html#csdmpy.as_dimension) method to convert a NumPy array into a Dimension object.
```
>>> numpy_array = 10 ** (np.arange(10) / 10)
>>> x_dim = cp.as_dimension(numpy_array, unit="A")
>>> print(x_dim)
MonotonicDimension([1. 1.25892541 1.58489319 1.99526231 2.51188643 3.16227766
3.98107171 5.01187234 6.30957344 7.94328235] A)
```
When generating a MonotonicDimension object from a NumPy array, the array must be monotonic, that is, either strictly increasing or decreasing.
An exception will be raised otherwise.
```
>>> numpy_array = np.random.rand(10)
>>> x_dim = cp.as_dimension(numpy_array)
Exception: Invalid array for Dimension object.
```
#### LabeledDimension[¶](#labeleddimension)
A LabeledDimension is one where the coordinates along the dimension are string labels. You can similarly generate a labeled dimension.
**Using the** [`Dimension`](index.html#csdmpy.Dimension) **class.**
```
>>> import csdmpy as cp
>>> x = cp.Dimension(type="labeled", labels=["The", "great", "circle"])
>>> print(x)
LabeledDimension(['The' 'great' 'circle'])
```
**Using the** [`LabeledDimension`](index.html#csdmpy.LabeledDimension) **class.**
```
>>> x = cp.LabeledDimension(labels=["The", "great", "circle"])
>>> print(x)
LabeledDimension(['The' 'great' 'circle'])
```
**From numpy arrays or python list.**
Use the [`as_dimension()`](index.html#csdmpy.as_dimension) method to convert a NumPy array or a Python list into a Dimension object.
```
>>> array = ["The", "great", "circle"]
>>> x = cp.as_dimension(array)
>>> print(x)
LabeledDimension(['The' 'great' 'circle'])
```
### Generating DependentVariable objects[¶](#generating-dependentvariable-objects)
A DependentVariable is where the responses of the multi-dimensional dataset reside. There are two types of DependentVariable objects, internal and external. In this section, we show how to generate DependentVariable objects of both types.
#### InternalDependentVariable[¶](#internaldependentvariable)
##### Single component dependent variable[¶](#single-component-dependent-variable)
**Using the** [`DependentVariable`](index.html#csdmpy.DependentVariable) **class.**
```
>>> dv1 = cp.DependentVariable(
... type="internal",
... quantity_type="scalar",
... components=np.arange(10000),
... unit="J",
... description="A sample internal dependent variable.",
... )
>>> print(dv1)
DependentVariable(
[[ 0 1 2 ... 9997 9998 9999]] J, quantity_type=scalar, numeric_type=int64)
```
**Using NumPy array**
Use the [`as_dependent_variable()`](index.html#csdmpy.as_dependent_variable) method to convert a NumPy array into a DependentVariable object. Note, this method returns a view of the NumPy array as the DependentVariable object.
```
>>> dv1 = cp.as_dependent_variable(np.arange(10000).astype(np.complex64), unit="J")
>>> print(dv1)
DependentVariable(
[[0.000e+00+0.j 1.000e+00+0.j 2.000e+00+0.j ... 9.997e+03+0.j
9.998e+03+0.j 9.999e+03+0.j]] J, quantity_type=scalar, numeric_type=complex64)
```
You may additionally provide the quantity_type for the dependent variable,
```
>>> dv2 = cp.as_dependent_variable(
... np.arange(10000).astype(np.complex64), quantity_type="pixel_1"
... )
>>> print(dv2)
DependentVariable(
[[0.000e+00+0.j 1.000e+00+0.j 2.000e+00+0.j ... 9.997e+03+0.j
9.998e+03+0.j 9.999e+03+0.j]], quantity_type=pixel_1, numeric_type=complex64)
```
##### Multi-component dependent variable[¶](#multi-component-dependent-variable)
To generate a multi-component DependentVariable object, add an appropriate quantity_type value, see [QuantityType](index.html#quantitytype-uml) for details.
**Using the** [`DependentVariable`](index.html#csdmpy.DependentVariable) **class.**
```
>>> dv1 = cp.DependentVariable(
... type="internal",
... quantity_type="vector_2",
... components=np.arange(10000),
... unit="J",
... description="A sample internal dependent variable.",
... )
>>> print(dv1)
DependentVariable(
[[ 0 1 2 ... 4997 4998 4999]
[5000 5001 5002 ... 9997 9998 9999]] J, quantity_type=vector_2, numeric_type=int64)
```
The above example generates a two-component dependent variable.
**Using NumPy array**
```
>>> dv1 = cp.as_dependent_variable(
... np.arange(9000).astype(np.complex64), unit="m/s", quantity_type="symmetric_matrix_3"
... )
>>> print(dv1)
DependentVariable(
[[0.000e+00+0.j 1.000e+00+0.j 2.000e+00+0.j ... 1.497e+03+0.j
1.498e+03+0.j 1.499e+03+0.j]
[1.500e+03+0.j 1.501e+03+0.j 1.502e+03+0.j ... 2.997e+03+0.j
2.998e+03+0.j 2.999e+03+0.j]
[3.000e+03+0.j 3.001e+03+0.j 3.002e+03+0.j ... 4.497e+03+0.j
4.498e+03+0.j 4.499e+03+0.j]
[4.500e+03+0.j 4.501e+03+0.j 4.502e+03+0.j ... 5.997e+03+0.j
5.998e+03+0.j 5.999e+03+0.j]
[6.000e+03+0.j 6.001e+03+0.j 6.002e+03+0.j ... 7.497e+03+0.j
7.498e+03+0.j 7.499e+03+0.j]
[7.500e+03+0.j 7.501e+03+0.j 7.502e+03+0.j ... 8.997e+03+0.j
8.998e+03+0.j 8.999e+03+0.j]] m / s, quantity_type=symmetric_matrix_3, numeric_type=complex64)
```
The above example generates a six-component dependent variable.
Note
For multi-component DependentVariable objects, the size of the NumPy array must be an integer multiple of the total number of components.
```
>>> d1 = cp.as_dependent_variable(np.arange(127), quantity_type="pixel_2")
ValueError: cannot reshape array of size 127 into shape (2,63)
```
Notice in the above examples, we use a one-dimensional NumPy array to generate a DependentVariable object. If a multi-dimensional NumPy array is given as the argument, the array will be raveled (flattened) before returning the DependentVariable object. Note, in the core scientific dataset model, the DependentVariable objects only contain information about the number of components and not the dimensions. For example, consider the following.
```
>>> d2 = cp.as_dependent_variable(
... np.arange(6000).reshape(10, 20, 30), quantity_type="vector_2"
... )
>>> print(d2)
DependentVariable(
[[ 0 1 2 ... 2997 2998 2999]
[3000 3001 3002 ... 5997 5998 5999]], quantity_type=vector_2, numeric_type=int64)
```
Here, a three-dimensional NumPy array is given as the argument with a quantity_type of vector_2. The DependentVariable object generated from this array contains two components, obtained by appropriately flattening the input array.
#### ExternalDependentVariable[¶](#externaldependentvariable)
The ExternalDependentVariable objects are generated similarly to the InternalDependentVariable objects. The only difference is that the components of the dependent variable are stored in an external file, either locally or at a remote address.
**Using the** [`DependentVariable`](index.html#csdmpy.DependentVariable) **class.**
```
>>> dv = cp.DependentVariable(
... type="external",
... quantity_type="scalar",
... unit="J",
... components_url="address to the binary file.",
... numeric_type="int64",
... description="A sample internal dependent variable.",
... )
```
A DependentVariable of type external is useful for data serialization. When used with csdmpy, all instances of external dependent variable objects are set as internal after the components are downloaded from the components_url.
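For example, after loading a file that stores an external dependent variable (the filename is illustrative), the type reads as internal:
```
>>> data = cp.load('my_file.csdfe')  # file serialized with an external dependent variable
>>> print(data.dependent_variables[0].type)
internal
```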
### Generating CSDM objects[¶](#generating-csdm-objects)
#### An empty csdm object[¶](#an-empty-csdm-object)
To create a new empty csdm object, import the csdmpy module and create a new instance of the CSDM class following,
```
>>> import csdmpy as cp
>>> new_data = cp.new(description="A new test dataset")
```
The [`new()`](index.html#csdmpy.new) method returns an instance of the CSDM class with zero dimensions and zero dependent variables, i.e., a 0D{0} dataset.
In the above example, this instance is assigned to the `new_data` variable.
Optionally, a description may also be provided as an argument of the
[`new()`](index.html#csdmpy.new) method.
The data structure from the above example is
```
>>> print(new_data.data_structure)
{
"csdm": {
"version": "1.0",
"description": "A new test dataset"
}
}
```
#### From a NumPy array[¶](#from-a-numpy-array)
Perhaps the easiest way to generate a csdm object is to convert a NumPy array holding the dataset into a csdm object using the [`as_csdm()`](index.html#csdmpy.as_csdm) method,
which returns a view of the array as a CSDM object.
Here, the NumPy array becomes the dependent variable of the CSDM object of the given quantity_type.
Unlike the [`as_dependent_variable()`](index.html#csdmpy.as_dependent_variable) method, however, the
[`as_csdm()`](index.html#csdmpy.as_csdm) method retains the shape of the Numpy array and uses this information to generate the dimensions of the CSDM object. By default,
the dimensions are of a linear subtype with unit increment. Consider the following example.
```
>>> array = np.arange(30).reshape(3, 10)
>>> csdm_obj = cp.as_csdm(array)
>>> print(csdm_obj)
CSDM(
DependentVariable(
[[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]]], quantity_type=scalar, numeric_type=int64),
LinearDimension([0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]),
LinearDimension([0. 1. 2.])
)
```
Here, a two-dimensional NumPy array of shape (3, 10) is given as the argument of the [`as_csdm()`](index.html#csdmpy.as_csdm) method. The resulting CSDM object, `csdm_obj`,
contains a 2D{1} dataset, with two linear dimensions of unit increment, having 10 and 3 points, respectively, and a single one-component dependent variable of quantity_type scalar.
Note
The order of the dimensions in the CSDM object is the reverse of the order of axes from the corresponding Numpy array. Thus, the dimension at index 0 of the CSDM object is the last axis of the Numpy array.
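A quick check of this ordering with the `array` and `csdm_obj` from above:
```
>>> array.shape  # NumPy array shape
(3, 10)
>>> csdm_obj.shape  # CSDM shape is the reverse
(10, 3)
```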
You may additionally provide a quantity type as the argument of the
[`as_csdm()`](index.html#csdmpy.as_csdm) method. When the quantity type requires more than one component, see [QuantityType](index.html#quantitytype-uml), the first axis of the NumPy array must be the number of components. For example,
```
>>> csdm_obj1 = cp.as_csdm(array, quantity_type="pixel_3")
>>> print(csdm_obj1)
CSDM(
DependentVariable(
[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]], quantity_type=pixel_3, numeric_type=int64),
LinearDimension([0. 1. 2. 3. 4. 5. 6. 7. 8. 9.])
)
```
Here, the `csdm_obj1` object is a 1D{3} dataset with a single three-component dependent variable. In this case, the length of the NumPy array along axis 0, i.e., 3, is consistent with the number of components required by the quantity type pixel_3. The remaining axes of the NumPy array are used in generating the dimensions of the csdm object. In this example, this corresponds to a single dimension of linear type with 10 points.
The following example generates a 3D{2} vector dataset. Here, the first axis of the four-dimensional Numpy array is the components of the vector dataset, and the remaining three axes become the respective dimensions.
```
>>> array2 = np.arange(12000).reshape(2, 30, 20, 10)
>>> csdm_obj2 = cp.as_csdm(array2, quantity_type="vector_2")
>>> print(len(csdm_obj2.dimensions), len(csdm_obj2.dependent_variables[0].components))
3 2
```
An exception will be raised if the quantity_type and the number of points along the first axis of the NumPy array are inconsistent, for example,
```
>>> csdm_obj_err = cp.as_csdm(array, quantity_type='vector_2')
ValueError: Expecting exactly 2 components for quantity type, `vector_2`, found 3.
Make sure `array.shape[0]` is equal to the number of components supported by vector_2.
```
Note
Only a csdm object with a single dependent variable may be created from a NumPy array.
To add more dependent variables to the CSDM object, see [Adding DependentVariable objects to CSDM object](index.html#adding-dv).
### Adding Dimension objects to CSDM object[¶](#adding-dimension-objects-to-csdm-object)
There are three subtypes of Dimension objects,
* LinearDimension
* MonotonicDimension
* LabeledDimension
**Using an instance of the Dimension class**
Please read the topic [Generating Dimension objects](index.html#generate-dimension-objects) for details on how to generate an instance of the Dimension class. Once created, use the dimensions to generate a CSDM object.
```
>>> linear_dim = cp.LinearDimension(count=10, increment="0.1 C/V")
>>> new_data = cp.CSDM(dimensions=[linear_dim])
>>> print(new_data)
CSDM(
LinearDimension([0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9] C / V)
)
```
**Using Python’s dictionary objects**
When using python dictionaries, the key-value pairs of the dictionary must be a valid collection for the given Dimension subtype. For example,
```
>>> # dictionary representation of a linear dimension.
>>> d0 = {
... "type": "linear",
... "description": "This is a linear dimension",
... "count": 5,
... "increment": "0.1 rad",
... }
>>> # dictionary representation of a monotonic dimension.
>>> d1 = {
... "type": "monotonic",
... "description": "This is a monotonic dimension",
... "coordinates": ["1 m/s", "2 cm/s", "4 mm/s"],
... }
>>> # dictionary representation of a labeled dimension.
>>> d2 = {
... "type": "labeled",
... "description": "This is a labeled dimension",
... "labels": ["Cu", "Ag", "Au"],
... }
>>> # add the dictionaries to the CSDM object.
>>> new_data = cp.CSDM(dimensions=[d0, d1, d2])
>>> print(new_data)
CSDM(
LinearDimension([0. 0.1 0.2 0.3 0.4] rad),
MonotonicDimension([1. 0.02 0.004] m / s),
LabeledDimension(['Cu' 'Ag' 'Au'])
)
```
### Adding DependentVariable objects to CSDM object[¶](#adding-dependentvariable-objects-to-csdm-object)
There are two subtypes of DependentVariable class:
* **InternalDependentVariable**:
We refer to an instance of the DependentVariable as *internal* when the components of the dependent variable are listed along with the other metadata specifying the dependent variable.
* **ExternalDependentVariable**:
We refer to an instance of the DependentVariable as *external* when the components of the dependent variable are stored in an external file as binary data either locally or at a remote server.
**Using an instance of the DependentVariable class**
Please read the topic [Generating DependentVariable objects](index.html#generate-dependent-variable-objects) for details on how to generate an instance of the DependentVariable class. Once created,
use the dependent variables to generate a CSDM object.
```
>>> dv = cp.as_dependent_variable(np.arange(10))
>>> new_data = cp.CSDM(dependent_variables=[dv])
>>> print(new_data)
CSDM(
DependentVariable(
[[0 1 2 3 4 5 6 7 8 9]], quantity_type=scalar, numeric_type=int64)
)
```
**Using Python’s dictionary objects**
When using python dictionaries, the key-value pairs of the dictionary must be a valid collection for the given DependentVariable subtype. For example,
```
>>> dv0 = {
... "type": "internal",
... "quantity_type": "scalar",
... "description": "This is an internal scalar dependent variable",
... "unit": "cm",
... "components": np.arange(50),
... }
>>> dv1 = {
... "type": "internal",
... "quantity_type": "vector_2",
... "description": "This is an internal vector dependent variable",
... "unit": "cm",
... "components": np.arange(100),
... }
>>> new_data = cp.CSDM(dependent_variables=[dv0, dv1])
>>> print(new_data)
CSDM(
DependentVariable(
[[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49]] cm, quantity_type=scalar, numeric_type=int64),
DependentVariable(
[[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49]
[50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73
74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97
98 99]] cm, quantity_type=vector_2, numeric_type=int64)
)
```
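A CSDM object may, of course, be constructed with both dimensions and dependent variables at once; a minimal sketch (sizes chosen so the number of grid points matches the component length):
```
>>> dim = cp.LinearDimension(count=10, increment="1 s")
>>> dv = cp.as_dependent_variable(np.arange(10))
>>> new_data = cp.CSDM(dimensions=[dim], dependent_variables=[dv])
```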
Interacting with csdmpy objects[¶](#interacting-with-csdmpy-objects)
---
### Interacting with Dimension objects[¶](#interacting-with-dimension-objects)
#### LinearDimension[¶](#lineardimension)
There are several attributes and methods associated with the LinearDimension,
each controlling the coordinates along the dimension. The following section demonstrates the effect of these attributes and methods on the coordinates of the LinearDimension.
```
>>> import csdmpy as cp
>>> x = cp.LinearDimension(
... count=10, increment="0.1 s", label="time", description="A temporal dimension."
... )
>>> print(x)
LinearDimension([0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9] s)
```
##### Attributes[¶](#attributes)
[`type`](index.html#csdmpy.Dimension.type): This attribute returns the type of the instance.
```
>>> print(x.type)
linear
```
**The attributes that modify the coordinates**
[`count`](index.html#csdmpy.Dimension.count): The number of points along the dimension.
```
>>> print("number of points =", x.count)
number of points = 10
```
To update the number of points, update the value of this attribute,
```
>>> x.count = 12
>>> print("new number of points =", x.count)
new number of points = 12
>>> print("new coordinates =", x.coordinates)
new coordinates = [0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1. 1.1] s
```
[`increment`](index.html#csdmpy.Dimension.increment)
```
>>> print("old increment =", x.increment)
old increment = 0.1 s
>>> x.increment = "10 s"
>>> print("new increment =", x.increment)
new increment = 10.0 s
>>> print("new coordinates =", x.coordinates)
new coordinates = [ 0. 10. 20. 30. 40. 50. 60. 70. 80. 90. 100. 110.] s
```
[`coordinates_offset`](index.html#csdmpy.Dimension.coordinates_offset)
```
>>> print("old reference offset =", x.coordinates_offset)
old reference offset = 0.0 s
>>> x.coordinates_offset = "1 s"
>>> print("new reference offset =", x.coordinates_offset)
new reference offset = 1.0 s
>>> print("new coordinates =", x.coordinates)
new coordinates = [ 1. 11. 21. 31. 41. 51. 61. 71. 81. 91. 101. 111.] s
```
[`origin_offset`](index.html#csdmpy.Dimension.origin_offset)
```
>>> print("old origin offset =", x.origin_offset)
old origin offset = 0.0 s
>>> x.origin_offset = "1 day"
>>> print("new origin offset =", x.origin_offset)
new origin offset = 1.0 d
>>> print("new coordinates =", x.coordinates)
new coordinates = [ 1. 11. 21. 31. 41. 51. 61. 71. 81. 91. 101. 111.] s
```
The last operation updates the value of the origin offset; however,
the coordinates remain unaffected. This is because the
[`coordinates`](index.html#csdmpy.Dimension.coordinates) attribute refers to the reference coordinates. You may access the absolute coordinates through the
[`absolute_coordinates`](index.html#csdmpy.Dimension.absolute_coordinates) attribute, which adds the origin offset (here, 1 day = 86400 s) to the reference coordinates.
```
>>> print("absolute coordinates =", x.absolute_coordinates)
absolute coordinates = [86401. 86411. 86421. 86431. 86441. 86451. 86461. 86471. 86481. 86491.
86501. 86511.] s
```
**The attributes that modify the order of coordinates**
[`complex_fft`](index.html#csdmpy.Dimension.complex_fft): If true, orders the coordinates along the dimension according to the output of a complex Fast Fourier Transform (FFT) routine.
```
>>> print("old coordinates =", x.coordinates)
old coordinates = [ 1. 11. 21. 31. 41. 51. 61. 71. 81. 91. 101. 111.] s
>>> x.complex_fft = True
>>> print("new coordinates =", x.coordinates)
new coordinates = [-59. -49. -39. -29. -19. -9. 1. 11. 21. 31. 41. 51.] s
```
**Other attributes**
[`period`](index.html#csdmpy.Dimension.period): The period of the dimension.
```
>>> print("old period =", x.period)
old period = inf s
>>> x.period = "10 s"
>>> print("new period =", x.period)
new period = 10.0 s
```
[`quantity_name`](index.html#csdmpy.Dimension.quantity_name): Returns the quantity name.
```
>>> print("quantity name is", x.quantity_name)
quantity name is time
```
[`label`](index.html#csdmpy.Dimension.label)
```
>>> x.label
'time'
>>> x.label = "t1"
>>> x.label
't1'
```
[`axis_label`](index.html#csdmpy.Dimension.axis_label): Returns a formatted string for labeling the axis.
```
>>> x.label
't1'
>>> x.axis_label
't1 / (s)'
```
##### Methods[¶](#methods)
[`to()`](index.html#csdmpy.Dimension.to):
This method is used for unit conversions.
```
>>> print("old unit =", x.coordinates.unit)
old unit = s
>>> print("old coordinates =", x.coordinates)
old coordinates = [-59. -49. -39. -29. -19. -9. 1. 11. 21. 31. 41. 51.] s
>>> ## unit conversion
>>> x.to("min")
>>> print("new coordinates =", x.coordinates)
new coordinates = [-0.98333333 -0.81666667 -0.65 -0.48333333 -0.31666667 -0.15
0.01666667 0.18333333 0.35 0.51666667 0.68333333 0.85 ] min
```
Note
In the above examples, the coordinates are ordered according to the FFT output order, based on the previous set of operations.
The argument of this method is a string containing the unit, in this case,
min, whose dimensionality must be consistent with the dimensionality of the coordinates. An exception will be raised otherwise.
```
>>> x.to("km/s")
Exception: The unit 'km / s' (speed) is inconsistent with the unit 'min' (time).
```
##### Changing the dimensionality[¶](#changing-the-dimensionality)
You may scale the dimension object by multiplying the object with the appropriate ScalarQuantity, as follows,
```
>>> print(x)
LinearDimension([-0.98333333 -0.81666667 -0.65 -0.48333333 -0.31666667 -0.15
0.01666667 0.18333333 0.35 0.51666667 0.68333333 0.85 ] min)
>>> x *= cp.ScalarQuantity("m/s")
>>> print(x)
LinearDimension([-59. -49. -39. -29. -19. -9. 1. 11. 21. 31. 41. 51.] m)
```
#### MonotonicDimension[¶](#monotonicdimension)
There are several attributes and methods associated with a MonotonicDimension,
controlling the coordinates along the dimension. The following section demonstrates the effect of these attributes and methods on the coordinates.
```
>>> import numpy as np
>>> array = np.asarray(
... [
... -0.28758166,
... -0.22712233,
... -0.19913859,
... -0.17235106,
... -0.1701172,
... -0.10372635,
... -0.01817061,
... 0.05936719,
... 0.18141424,
... 0.34758913,
... ]
... )
>>> x = cp.MonotonicDimension(coordinates=array) * cp.ScalarQuantity("cm")
```
##### Attributes[¶](#id1)
The following are the attributes of the [`MonotonicDimension`](index.html#csdmpy.MonotonicDimension)
instance.
[`type`](index.html#csdmpy.Dimension.type): This attribute returns the type of the instance.
```
>>> print(x.type)
monotonic
```
**The attributes that modify the coordinates**
[`count`](index.html#csdmpy.Dimension.count): The number of points along the dimension.
```
>>> print("number of points =", x.count)
number of points = 10
```
You may update the number of points with this attribute; however, you can only lower it.
```
>>> x.count = 6
>>> print("new number of points =", x.count)
new number of points = 6
>>> print(x.coordinates)
[-0.28758166 -0.22712233 -0.19913859 -0.17235106 -0.1701172 -0.10372635] cm
```
[`origin_offset`](index.html#csdmpy.Dimension.origin_offset)
```
>>> print("old origin offset =", x.origin_offset)
old origin offset = 0.0 cm
>>> x.origin_offset = "1 km"
>>> print("new origin offset =", x.origin_offset)
new origin offset = 1.0 km
>>> print(x.coordinates)
[-0.28758166 -0.22712233 -0.19913859 -0.17235106 -0.1701172 -0.10372635] cm
```
The last operation updates the value of the origin offset; however,
the value of the `coordinates` attribute remains unchanged.
This is because the `coordinates` refer to the reference coordinates.
The absolute coordinates are accessed through the `absolute_coordinates`
attribute.
```
>>> print("absolute coordinates =", x.absolute_coordinates)
absolute coordinates = [99999.71241834 99999.77287767 99999.80086141 99999.82764894
99999.8298828 99999.89627365] cm
```
**Other attributes**
[`label`](index.html#csdmpy.Dimension.label)
```
>>> x.label = "t1"
>>> print("new label =", x.label)
new label = t1
```
[`period`](index.html#csdmpy.Dimension.period)
```
>>> print("old period =", x.period)
old period = inf cm
>>> x.period = "10 m"
>>> print("new period =", x.period)
new period = 10.0 m
```
[`quantity_name`](index.html#csdmpy.Dimension.quantity_name): Returns the quantity name.
```
>>> print("quantity is", x.quantity_name)
quantity is length
```
##### Methods[¶](#id2)
[`to()`](index.html#csdmpy.Dimension.to)
This method is used for unit conversions. For example,
```
>>> print("old unit =", x.coordinates.unit)
old unit = cm
>>> print("old coordinates =", x.coordinates)
old coordinates = [-0.28758166 -0.22712233 -0.19913859 -0.17235106 -0.1701172 -0.10372635] cm
>>> ## unit conversion
>>> x.to("mm")
>>> print("new coordinates =", x.coordinates)
new coordinates = [-2.8758166 -2.2712233 -1.9913859 -1.7235106 -1.701172 -1.0372635] mm
```
The argument of this method is a unit, in this case, ‘mm’, whose dimensionality must be consistent with the dimensionality of the coordinates. An exception will be raised otherwise,
```
>>> x.to("km/s")
Exception("Validation Failed: The unit 'km / s' (speed) is inconsistent with the unit 'mm' (length).")
```
##### Changing the dimensionality[¶](#id3)
You may scale the dimension object by multiplying the object with the appropriate ScalarQuantity, as follows,
```
>>> print(x)
MonotonicDimension([-2.8758166 -2.2712233 -1.9913859 -1.7235106 -1.701172 -1.0372635] mm)
>>> x *= cp.ScalarQuantity("2 s/mm")
>>> print(x)
MonotonicDimension([-0.57516332 -0.45424466 -0.39827718 -0.34470212 -0.3402344 -0.2074527 ] cm s / mm)
```
### Interacting with CSDM objects[¶](#interacting-with-csdm-objects)
#### Basic math operations[¶](#basic-math-operations)
The csdm object supports basic mathematical operations such as additive and multiplicative operations.
Note
All operations applied to or involving the csdm objects apply only to the components of the dependent variables within the csdm object. These operations do not apply to the dimensions within the csdm object.
Consider the following csdm data object.
```
>>> arr1 = np.arange(6, dtype=np.float32).reshape(2, 3)
>>> csdm_obj1 = cp.as_csdm(arr1)
>>> # converting the dimension to proper physical dimensions.
>>> csdm_obj1.dimensions[0] *= cp.ScalarQuantity("2.64 m")
>>> csdm_obj1.dimensions[0].coordinates_offset = "1 km"
>>> # converting the dimension to proper physical dimensions.
>>> csdm_obj1.dimensions[1] *= cp.ScalarQuantity("10 µs")
>>> csdm_obj1.dimensions[1].coordinates_offset = "-0.5 ms"
>>> print(csdm_obj1)
CSDM(
DependentVariable(
[[[0. 1. 2.]
[3. 4. 5.]]], quantity_type=scalar, numeric_type=float32),
LinearDimension([1000. 1002.64 1005.28] m),
LinearDimension([-500. -490.] us)
)
```
##### Additive operations involving a scalar[¶](#additive-operations-involving-a-scalar)
**Example 1**
```
>>> csdm_obj1 += np.pi
>>> print(csdm_obj1)
CSDM(
DependentVariable(
[[[3.1415927 4.141593 5.141593 ]
[6.141593 7.141593 8.141593 ]]], quantity_type=scalar, numeric_type=float32),
LinearDimension([1000. 1002.64 1005.28] m),
LinearDimension([-500. -490.] us)
)
```
**Example 2**
```
>>> csdm_obj2 = csdm_obj1 + (2 - 4j)
>>> print(csdm_obj2)
CSDM(
DependentVariable(
[[[ 5.141593-4.j 6.141593-4.j 7.141593-4.j]
[ 8.141593-4.j 9.141593-4.j 10.141593-4.j]]], quantity_type=scalar, numeric_type=complex64),
LinearDimension([1000. 1002.64 1005.28] m),
LinearDimension([-500. -490.] us)
)
```
##### Multiplicative operations involving scalar / ScalarQuantity[¶](#multiplicative-operations-involving-scalar-scalarquantity)
**Example 3**
```
>>> csdm_obj1 = cp.as_csdm(np.ones(6).reshape(2, 3))
>>> csdm_obj2 = csdm_obj1 * 4.693
>>> print(csdm_obj2)
CSDM(
DependentVariable(
[[[4.693 4.693 4.693]
[4.693 4.693 4.693]]], quantity_type=scalar, numeric_type=float64),
LinearDimension([0. 1. 2.]),
LinearDimension([0. 1.])
)
```
**Example 4**
```
>>> csdm_obj2 = csdm_obj1 * 3j / 2.4
>>> print(csdm_obj2)
CSDM(
DependentVariable(
[[[0.+1.25j 0.+1.25j 0.+1.25j]
[0.+1.25j 0.+1.25j 0.+1.25j]]], quantity_type=scalar, numeric_type=complex128),
LinearDimension([0. 1. 2.]),
LinearDimension([0. 1.])
)
```
You may change the dimensionality of the dependent variables by multiplying the csdm object with the appropriate scalar quantity, for example,
**Example 5**
```
>>> csdm_obj1 *= cp.ScalarQuantity("3.23 m")
>>> print(csdm_obj1)
CSDM(
DependentVariable(
[[[3.23 3.23 3.23]
[3.23 3.23 3.23]]] m, quantity_type=scalar, numeric_type=float64),
LinearDimension([0. 1. 2.]),
LinearDimension([0. 1.])
)
```
**Example 6**
```
>>> csdm_obj1 /= cp.ScalarQuantity("3.23 m")
>>> print(csdm_obj1)
CSDM(
DependentVariable(
[[[1. 1. 1.]
[1. 1. 1.]]], quantity_type=scalar, numeric_type=float64),
LinearDimension([0. 1. 2.]),
LinearDimension([0. 1.])
)
```
##### Additive operations involving two csdm objects[¶](#additive-operations-involving-two-csdm-objects)
The additive operations are supported between two csdm objects only when the two objects have identical sets of Dimension objects and DependentVariable objects with the same dimensionality. For example,
**Example 7**
```
>>> csdm1 = cp.as_csdm(np.ones((2, 3)), unit="m/s")
>>> csdm2 = cp.as_csdm(np.ones((2, 3)), unit="cm/s")
>>> csdm_obj = csdm1 + csdm2
>>> print(csdm_obj)
CSDM(
DependentVariable(
[[[1.01 1.01 1.01]
[1.01 1.01 1.01]]] m / s, quantity_type=scalar, numeric_type=float64),
LinearDimension([0. 1. 2.]),
LinearDimension([0. 1.])
)
```
An exception will be raised if the DependentVariable objects of the two csdm objects have different dimensionality.
**Example 8**
```
>>> csdm1 = cp.as_csdm(np.ones((2, 3)), unit="m/s")
>>> csdm2 = cp.as_csdm(np.ones((2, 3)))
>>> csdm_obj = csdm1 + csdm2
Exception: Cannot operate on dependent variables with physical types: speed and dimensionless.
```
Similarly, an exception will be raised if the dimension objects of the two csdm objects are different.
**Example 9**
```
>>> csdm1 = cp.as_csdm(np.ones((2, 3)), unit="m/s")
>>> csdm1.dimensions[1] = cp.MonotonicDimension(coordinates=["1 ms", "1 s"])
>>> csdm2 = cp.as_csdm(np.ones((2, 3)), unit="cm/s")
>>> csdm_obj = csdm1 + csdm2
Exception: Cannot operate on CSDM objects with different dimensions.
```
#### Basic Slicing and Indexing[¶](#basic-slicing-and-indexing)
The CSDM objects support NumPy basic slicing and indexing and follow the same rules as the NumPy array. Consider the following 3D{1} csdm object.
```
>>> csdm1 = cp.as_csdm(np.zeros((5, 10, 20)), unit="s")
>>> csdm1.dimensions[0] = cp.as_dimension(np.arange(20) * 0.5 + 4.3, unit="kg")
>>> csdm1.dimensions[1] = cp.as_dimension([1, 2, 3, 5, 7, 11, 13, 17, 19, 23], unit="mm")
>>> csdm1.dimensions[2] = cp.LabeledDimension(labels=list("abcde"))
>>> print(csdm1.shape)
(20, 10, 5)
>>> print(csdm1.dimensions)
[LinearDimension(count=20, increment=0.5 kg, coordinates_offset=4.3 kg, quantity_name=mass),
MonotonicDimension(coordinates=[ 1. 2. 3. 5. 7. 11. 13. 17. 19. 23.] mm, quantity_name=length, reciprocal={'quantity_name': 'wavenumber'}),
LabeledDimension(labels=['a', 'b', 'c', 'd', 'e'])]
```
The above object `csdm1` has three dimensions, each with different dimensionality and dimension type.
To retrieve a sub-grid of this 3D{1} dataset, use the NumPy indexing scheme.
**Example 10**
```
>>> sub_csdm = csdm1[0]
>>> print(sub_csdm.shape)
(10, 5)
>>> print(sub_csdm.dimensions)
[MonotonicDimension(coordinates=[ 1. 2. 3. 5. 7. 11. 13. 17. 19. 23.] mm, quantity_name=length, reciprocal={'quantity_name': 'wavenumber'}),
LabeledDimension(labels=['a', 'b', 'c', 'd', 'e'])]
```
The above example returns a 2D{1} cross-section of the 3D{1} dataset, corresponding to index 0 along the first dimension of the `csdm1`
object, as a new `sub_csdm` csdm object. The two dimensions in `sub_csdm` are the MonotonicDimension and LabeledDimension.
**Example 11**
```
>>> sub_csdm = csdm1[::5, 2::2, :]
>>> print(sub_csdm.shape)
(4, 4, 5)
>>> print(sub_csdm.dimensions)
[LinearDimension(count=4, increment=2.5 kg, coordinates_offset=4.3 kg, quantity_name=mass),
MonotonicDimension(coordinates=[ 3. 7. 13. 19.] mm, quantity_name=length, reciprocal={'quantity_name': 'wavenumber'}),
LabeledDimension(labels=['a', 'b', 'c', 'd', 'e'])]
```
The above example returns a 3D{1} dataset, `sub_csdm`, which contains a sub-grid of the 3D{1} dataset from `csdm1`. In `sub_csdm`, the first dimension is a sub-grid of the first dimension from the `csdm1` object,
where only every fifth grid point is selected. Similarly, the second dimension of the `sub_csdm` object is sampled from the second dimension of the
`csdm1` object, where every second grid point is selected, starting with the entry at grid index two. The third dimension of the `sub_csdm` object is the same as the third dimension of the `csdm1` object. The values of the corresponding linear, monotonic, and labeled dimensions are adjusted accordingly.
For example, notice the count and increment attributes of the linear dimension in the `sub_csdm` object: the stride of five turns the original increment of 0.5 kg into 2.5 kg and reduces the count from 20 to 4.
**Example 12**
```
>>> sub_csdm = csdm1[::5, 2::2, -3::-1]
>>> print(sub_csdm.shape)
(4, 4, 3)
>>> print(sub_csdm.dimensions)
[LinearDimension(count=4, increment=2.5 kg, coordinates_offset=4.3 kg, quantity_name=mass),
MonotonicDimension(coordinates=[ 3. 7. 13. 19.] mm, quantity_name=length, reciprocal={'quantity_name': 'wavenumber'}),
LabeledDimension(labels=['c', 'b', 'a'])]
```
The above example is similar to the previous examples, except that the third dimension is indexed in reverse, starting at the third index from the end.
See also
[Basic Slicing and Indexing](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing-and-indexing)
#### Support for Numpy methods[¶](#support-for-numpy-methods)
In most cases, the csdm object may be used as if it were a NumPy array.
See the list of all supported [Supported NumPy functions](index.html#numpy-support).
##### Methods that only operate on dimensionless dependent variables[¶](#method-that-only-operate-on-dimensionless-dependent-variables)
**Example 13**
```
>>> csdm_obj1 = cp.as_csdm(10 ** (np.arange(10) / 10))
>>> new_csdm1 = np.log10(csdm_obj1)
>>> print(new_csdm1)
CSDM(
DependentVariable(
[[0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]], quantity_type=scalar, numeric_type=float64),
LinearDimension([0. 1. 2. 3. 4. 5. 6. 7. 8. 9.])
)
```
**Example 14**
```
>>> new_csdm2 = np.cos(2 * np.pi * new_csdm1)
>>> print(new_csdm2)
CSDM(
DependentVariable(
[[ 1. 0.80901699 0.30901699 -0.30901699 -0.80901699 -1.
-0.80901699 -0.30901699 0.30901699 0.80901699]], quantity_type=scalar, numeric_type=float64),
LinearDimension([0. 1. 2. 3. 4. 5. 6. 7. 8. 9.])
)
```
**Example 15**
```
>>> new_csdm2 = np.exp(new_csdm1 * cp.ScalarQuantity("K"))
ValueError: Cannot apply `exp` to quantity with physical type `temperature`.
```
An exception is raised for csdm objects with non-dimensionless dependent variables.
##### Methods that are independent of the dependent variable dimensionality[¶](#method-that-are-independent-of-the-dependent-variable-dimensionality)
**Example 16**
```
>>> new_csdm2 = np.square(new_csdm1 * cp.ScalarQuantity("K"))
>>> print(new_csdm2)
CSDM(
DependentVariable(
[[0. 0.01 0.04 0.09 0.16 0.25 0.36 0.49 0.64 0.81]] K2, quantity_type=scalar, numeric_type=float64),
LinearDimension([0. 1. 2. 3. 4. 5. 6. 7. 8. 9.])
)
```
**Example 17**
```
>>> new_csdm1 = np.sqrt(new_csdm2)
>>> print(new_csdm1)
CSDM(
DependentVariable(
[[0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]] K, quantity_type=scalar, numeric_type=float64),
LinearDimension([0. 1. 2. 3. 4. 5. 6. 7. 8. 9.])
)
```
##### Dimension reduction methods[¶](#dimension-reduction-methods)
**Example 18**
```
>>> csdm1 = cp.as_csdm(np.ones((10, 20, 30)), unit="µG")
>>> csdm1.shape
(30, 20, 10)
>>> new = np.sum(csdm1, axis=1)
>>> new.shape
(30, 10)
>>> print(new.dimensions)
[LinearDimension(count=30, increment=1.0),
LinearDimension(count=10, increment=1.0)]
```
**Example 19**
```
>>> csdm1 = cp.as_csdm(np.ones((10, 20, 30)), unit="µG")
>>> csdm1.shape
(30, 20, 10)
>>> new = np.sum(csdm1, axis=(1, 2))
>>> new.shape
(30,)
>>> print(new.dimensions)
[LinearDimension(count=30, increment=1.0)]
```
**Example 20**
```
>>> minimum = np.min(new_csdm1)
>>> print(minimum)
0.0 K
>>> np.min(new_csdm1) == new_csdm1.min()
True
```
Note
See the list of all supported [Supported NumPy functions](index.html#numpy-support).
Plotting CSDM object with matplotlib[¶](#plotting-csdm-object-with-matplotlib)
---
As you may have noticed by now, a CSDM object holds basic metadata such as the label,
unit, and physical quantity of the dimensions and dependent-variables, which is enough to visualize the CSDM datasets on proper coordinate axes. In the following section, we illustrate how you may use the CSDM object with the matplotlib plotting library.
When plotting CSDM objects with matplotlib, we make use of the CSDM object’s metadata to produce a [matplotlib Axes](https://matplotlib.org/api/axes_api.html) object with basic formatting, such as the coordinate axis labels, dependent variable labels, and legends. You may further customize your figures. Please refer to the
[matplotlib documentation](https://matplotlib.org/index.html) for further details.
To enable plotting CSDM objects with matplotlib, add a `projection="csdm"` to the matplotlib’s Axes instance, as follows,
```
ax = plt.subplot(projection="csdm")
# now add the matplotlib plotting functions to this axes.
# ax.plot(csdm_object) or
# ax.imshow(csdm_object) ... etc
```
See the following examples.
### 1D CSDM objects with `plot()|scatter()`[¶](#d-csdm-objects-with-plot-scatter)
#### 1D{1} datasets[¶](#d-1-datasets)
```
import matplotlib.pyplot as plt
import numpy as np
import csdmpy as cp
# Create a test 1D{1} dataset.
# Step-1: Create dimension objects.
x = cp.as_dimension(np.arange(10) * 0.1 + 15, unit="s", label="t1")
# Step-2: Create dependent variable objects.
y = cp.as_dependent_variable(np.random.rand(10), unit="cm", name="test-0")
# Step-3: Create the CSDM object with Dimension and Dependent variable objects.
csdm = cp.CSDM(dimensions=[x], dependent_variables=[y])
# Plot
plt.figure(figsize=(5, 3.5))
# create the axes with `projection="csdm"`
ax = plt.subplot(projection="csdm")
# use matplotlib plot function with csdm object.
ax.plot(csdm)
plt.tight_layout()
plt.show()
```
([Source code](../../pyplot/oneD_plot.py), [png](../../pyplot/oneD_plot_00_00.png), [hires.png](../../pyplot/oneD_plot_00_00.hires.png), [pdf](../../pyplot/oneD_plot_00_00.pdf))
```
# Scatter
plt.figure(figsize=(5, 3.5))
# create the axes with `projection="csdm"`
ax = plt.subplot(projection="csdm")
# use matplotlib plot function with csdm object.
ax.scatter(csdm, marker="x", color="red")
plt.tight_layout()
plt.show()
```
([png](../../pyplot/oneD_plot_01_00.png), [hires.png](../../pyplot/oneD_plot_01_00.hires.png), [pdf](../../pyplot/oneD_plot_01_00.pdf))
#### 1D{1, 1, …} datasets[¶](#d-1-1-datasets)
##### Plotting on the same Axes[¶](#plotting-on-the-same-axes)
When multiple single-component dependent variables are present within the CSDM object,
the data from all dependent-variables is plotted on the same axes. The name of each dependent variable is displayed within the legend.
##### Plotting on separate Axes[¶](#plotting-on-separate-axes)
To plot the data from individual dependent variables onto separate axes, use the
[`split()`](index.html#csdmpy.CSDM.split) method to first split the CSDM object with n dependent variables into n CSDM objects with single dependent variables, and then plot them separately.
```
import matplotlib.pyplot as plt
import numpy as np
import csdmpy as cp
# Create a test 1D{1, 1, 1, 1, 1} dataset.
# Step-1: Create dimension objects.
x = cp.as_dimension(np.arange(40) * 0.5 - 10, unit="µm", label="x")
# Step-2: Create dependent variable objects.
units = ["cm", "s", "m/s", ""]
y = [
cp.as_dependent_variable(np.random.rand(40) + 10, unit=units[i], name=f"test-{i}")
for i in range(4)
]
# Step-3: Create the CSDM object with Dimension and Dependent variable objects.
csdm = cp.CSDM(dimensions=[x], dependent_variables=y)
# Plot
plt.figure(figsize=(5, 3.5))
# create the axes with `projection="csdm"`
ax = plt.subplot(projection="csdm")
# use matplotlib plot function with csdm object.
ax.plot(csdm)
plt.title("Data plotted on the same figure")
plt.tight_layout()
plt.show()
```
([Source code](../../pyplot/oneD111_plot.py), [png](../../pyplot/oneD111_plot_00_00.png), [hires.png](../../pyplot/oneD111_plot_00_00.hires.png), [pdf](../../pyplot/oneD111_plot_00_00.pdf))
```
# The plot on separate axes
# Split the CSDM object into multiple single dependent-variable CSDM objects.
sub_type = csdm.split()
# create the axes with `projection="csdm"`
_, ax = plt.subplots(2, 2, figsize=(8, 6), subplot_kw={"projection": "csdm"})
# now use matplotlib plot function with csdm object.
ax[0, 0].plot(sub_type[0])
ax[0, 1].plot(sub_type[1])
ax[1, 0].plot(sub_type[2])
ax[1, 1].plot(sub_type[3])
plt.title("Data plotted separately")
plt.tight_layout()
plt.show()
```
([png](../../pyplot/oneD111_plot_01_00.png), [hires.png](../../pyplot/oneD111_plot_01_00.hires.png), [pdf](../../pyplot/oneD111_plot_01_00.pdf))
### 2D CSDM objects with `imshow()|contour()|contourf()`[¶](#d-csdm-objects-with-imshow-contour-contourf)
#### 2D{1} datasets[¶](#d-1-datasets)
```
import matplotlib.pyplot as plt
import numpy as np
import csdmpy as cp
# Create a test 2D{1} dataset.
# Step-1: Create dimension objects.
x1 = cp.as_dimension(np.arange(10) * 0.1 + 15, unit="s", label="t1")
x2 = cp.as_dimension(np.arange(10) * 12.5, unit="s", label="t2")
# Step-2: Create dependent variable objects.
y = cp.as_dependent_variable(np.diag(np.ones(10)), name="body-diagonal")
# Step-3: Create the CSDM object with Dimension and Dependent variable objects.
csdm = cp.CSDM(dimensions=[x1, x2], dependent_variables=[y])
# Plot imshow
plt.figure(figsize=(5, 3.5))
# create the axes with `projection="csdm"`
ax = plt.subplot(projection="csdm")
# use matplotlib imshow function with csdm object.
ax.imshow(csdm, origin="upper", aspect="auto")
plt.tight_layout()
plt.show()
```
([Source code](../../pyplot/twoD_plot.py), [png](../../pyplot/twoD_plot_00_00.png), [hires.png](../../pyplot/twoD_plot_00_00.hires.png), [pdf](../../pyplot/twoD_plot_00_00.pdf))
```
# Plot contour
plt.figure(figsize=(5, 3.5))
# create the axes with `projection="csdm"`
ax = plt.subplot(projection="csdm")
# use matplotlib contour function with csdm object.
ax.contour(csdm)
plt.tight_layout()
plt.show()
```
([png](../../pyplot/twoD_plot_01_00.png), [hires.png](../../pyplot/twoD_plot_01_00.hires.png), [pdf](../../pyplot/twoD_plot_01_00.pdf))
#### 2D{1, 1, ..} datasets[¶](#d-1-1-datasets)
##### Plotting on the same Axes[¶](#plotting-on-the-same-axes)
When multiple single-component dependent variables are present within the CSDM object,
the data from all dependent-variables is plotted on the same axes. The name of each dependent variable is displayed along the color bar.
```
import matplotlib.pyplot as plt
import numpy as np
import csdmpy as cp
# Create a test 2D{1, 1} dataset.
# Step-1: Create dimension objects.
x1 = cp.as_dimension(np.arange(10) * 0.1 + 15, unit="s", label="t1")
x2 = cp.as_dimension(np.arange(10) * 12.5, unit="s", label="t2")
# Step-2: Create dependent variable objects.
y1 = cp.as_dependent_variable(np.diag(np.ones(10)), name="body-diagonal")
y2 = cp.as_dependent_variable(np.diag(np.ones(5), 5), name="off-body-diagonal")
# Step-3: Create the CSDM object with Dimension and Dependent variable objects.
csdm = cp.CSDM(dimensions=[x1, x2], dependent_variables=[y1, y2])
# Plot imshow
plt.figure(figsize=(5, 3.5))
# create the axes with `projection="csdm"`
ax = plt.subplot(projection="csdm")
# use matplotlib imshow function with csdm object.
ax.imshow(csdm, origin="upper", aspect="auto", cmaps=["Blues", "Reds"], alpha=0.5)
plt.tight_layout()
plt.show()
```
([Source code](../../pyplot/twoD111_plot.py), [png](../../pyplot/twoD111_plot_00_00.png), [hires.png](../../pyplot/twoD111_plot_00_00.hires.png), [pdf](../../pyplot/twoD111_plot_00_00.pdf))
```
# Plot contourf
plt.figure(figsize=(5, 3.5))
# create the axes with `projection="csdm"`
ax = plt.subplot(projection="csdm")
# use matplotlib contourf function with csdm object.
ax.contourf(csdm, cmaps=["Blues", "Reds"], alpha=0.5)
plt.tight_layout()
plt.show()
```
([png](../../pyplot/twoD111_plot_01_00.png), [hires.png](../../pyplot/twoD111_plot_01_00.hires.png), [pdf](../../pyplot/twoD111_plot_01_00.pdf))
##### Plotting on separate Axes[¶](#plotting-on-separate-axes)
To plot the data from individual dependent variables onto separate axes, use the
[`split()`](index.html#csdmpy.CSDM.split) method to first split the CSDM object with n dependent variables into n CSDM objects with single dependent variables, and then plot them separately.
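The following is a minimal sketch (reusing the two-dependent-variable `csdm` object from the example above; the figure size is illustrative):
```
# Split the CSDM object into multiple single dependent-variable CSDM objects.
sub_type = csdm.split()
# create the axes with `projection="csdm"`
_, ax = plt.subplots(1, 2, figsize=(8, 3.5), subplot_kw={"projection": "csdm"})
# now use matplotlib imshow function with each single dependent-variable csdm object.
ax[0].imshow(sub_type[0], origin="upper", aspect="auto")
ax[1].imshow(sub_type[1], origin="upper", aspect="auto")
plt.tight_layout()
plt.show()
```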
Tutorial examples on generating CSDM datasets[¶](#tutorial-examples-on-generating-csdm-datasets)
---
### 1D Datasets[¶](#d-datasets)
[1D{1} datasets](index.html#sphx-glr-auto-tutorials-1d-datasets-plot-0-1d-py)[¶](#id2)
Note
Click [here](#sphx-glr-download-auto-tutorials-1d-datasets-plot-0-1d-py)
to download the full example code
#### 1D{1} datasets[¶](#d-1-datasets)
In the following example, we illustrate how one can convert a NumPy array into a CSDM object. Start by importing the NumPy and csdmpy libraries.
```
import matplotlib.pyplot as plt
import numpy as np
import csdmpy as cp
```
Let’s generate a 1D NumPy array as our dataset.
```
test_data = np.zeros(500)
test_data[250] = 1
```
Create a DependentVariable object from the numpy object.
```
dv = cp.as_dependent_variable(test_data, unit="%")
```
Create the corresponding dimension object. Here, we create a LinearDimension object.
```
dim = cp.LinearDimension(count=500, increment="1 m")
```
Creating the CSDM object.
```
csdm_object = cp.CSDM(dependent_variables=[dv], dimensions=[dim])
```
Plot of the dataset.
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(csdm_object)
plt.tight_layout()
plt.show()
```
To serialize the file, use the save method.
```
csdm_object.save("1D_1_dataset.csdf")
```
**Total running time of the script:** ( 0 minutes 0.111 seconds)
[`Download Python source code: plot_0_1D.py`](_downloads/0cd08094963ce1bd26d42b9b5dafd653/plot_0_1D.py)
[`Download Jupyter notebook: plot_0_1D.ipynb`](_downloads/99d4af0f90d0c254aa20a3e5d7e3de70/plot_0_1D.ipynb)
[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.github.io)
[1D{1,1} datasets](index.html#sphx-glr-auto-tutorials-1d-datasets-plot-1-1d-py)[¶](#id3)
Note
Click [here](#sphx-glr-download-auto-tutorials-1d-datasets-plot-1-1d-py)
to download the full example code
#### 1D{1,1} datasets[¶](#d-1-1-datasets)
In the following example, we illustrate how one can convert a NumPy array into a CSDM object. Start by importing the NumPy and csdmpy libraries.
```
import matplotlib.pyplot as plt
import numpy as np
import csdmpy as cp
```
Let’s generate two 1D NumPy arrays as the dependent variables of our dataset.
```
test_data1 = np.zeros(500)
test_data1[250] = 1
test_data2 = np.zeros(500)
test_data2[150] = 1
```
Create the two DependentVariable objects from the numpy objects.
```
dv1 = cp.as_dependent_variable(test_data1, unit="%")
dv2 = cp.as_dependent_variable(test_data2, unit="J")
```
Create the corresponding dimension object. Here, we create a LinearDimension object.
```
dim = cp.LinearDimension(count=500, increment="43 cm", coordinates_offset="-0.1 km")
```
Creating the CSDM object.
```
csdm_object = cp.CSDM(dependent_variables=[dv1, dv2], dimensions=[dim])
```
Plot of the dataset.
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(csdm_object)
plt.tight_layout()
plt.show()
```
To serialize the file, use the save method.
```
csdm_object.save("1D_11_dataset.csdf")
```
**Total running time of the script:** ( 0 minutes 0.150 seconds)
[`Download Python source code: plot_1_1D.py`](_downloads/ee8cc1cd2bf9c74b2ff4ce41ca7a2f43/plot_1_1D.py)
[`Download Jupyter notebook: plot_1_1D.ipynb`](_downloads/4fa29ac69847f647b404fa8d009cfdb6/plot_1_1D.ipynb)
[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.github.io)
### 2D Datasets[¶](#sphx-glr-auto-tutorials-2d-datasets)
[2D{1} dataset with two linear dimensions](index.html#sphx-glr-auto-tutorials-2d-datasets-plot-0-2d-py)[¶](#id4)
Note
Click [here](#sphx-glr-download-auto-tutorials-2d-datasets-plot-0-2d-py)
to download the full example code
#### 2D{1} dataset with two linear dimensions[¶](#d-1-dataset-with-two-linear-dimensions)
In the following example, we illustrate how one can convert a NumPy array into a CSDM object. Start by importing the NumPy and csdmpy libraries.
```
import matplotlib.pyplot as plt
import numpy as np
import csdmpy as cp
```
Let’s generate a 2D NumPy array of random numbers as our dataset.
```
data = np.random.rand(65536).reshape(256, 256)
```
Create the DependentVariable object from the numpy object.
```
dv = cp.as_dependent_variable(data, unit="Pa")
```
Create the two Dimension objects.
```
d0 = cp.LinearDimension(
count=256, increment="15.23 µs", coordinates_offset="-1.95 ms", label="t1"
)
d1 = cp.LinearDimension(
count=256, increment="10 cm", coordinates_offset="-5 m", label="x2"
)
```
Here, `d0` and `d1` are LinearDimension objects, each with 256 points and with increments of 15.23 µs and 10 cm, respectively.
Creating the CSDM object.
```
csdm_object = cp.CSDM(dependent_variables=[dv], dimensions=[d0, d1])
print(csdm_object.dimensions)
```
Out:
```
[LinearDimension(count=256, increment=15.23 µs, coordinates_offset=-1.95 ms, quantity_name=time, label=t1, reciprocal={'quantity_name': 'frequency'}),
LinearDimension(count=256, increment=10.0 cm, coordinates_offset=-5.0 m, quantity_name=length, label=x2, reciprocal={'quantity_name': 'wavenumber'})]
```
Plot of the dataset.
```
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(csdm_object, aspect="auto")
plt.colorbar(cb, ax=ax)
plt.tight_layout()
plt.show()
```
To serialize the file, use the save method.
```
csdm_object.save("2D_1_dataset.csdf")
```
**Total running time of the script:** ( 0 minutes 0.202 seconds)
[`Download Python source code: plot_0_2D.py`](_downloads/9753e49a8f1948f877f06b1819ebcd75/plot_0_2D.py)
[`Download Jupyter notebook: plot_0_2D.ipynb`](_downloads/593dd24c1df33e8dfa502a0b4a2d4f2d/plot_0_2D.ipynb)
[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.github.io)
[2D{1} dataset with linear and monotonic dimensions](index.html#sphx-glr-auto-tutorials-2d-datasets-plot-1-2d-py)[¶](#id5)
Note
Click [here](#sphx-glr-download-auto-tutorials-2d-datasets-plot-1-2d-py)
to download the full example code
#### 2D{1} dataset with linear and monotonic dimensions[¶](#d-1-dataset-with-linear-and-monotonic-dimensions)
In the following example, we illustrate how one can convert a NumPy array into a CSDM object. Start by importing the NumPy and csdmpy libraries.
```
import matplotlib.pyplot as plt
import numpy as np
import csdmpy as cp
```
Let’s generate a 2D NumPy array of random numbers as our dataset.
```
data = np.random.rand(8192).reshape(32, 256)
```
Create the DependentVariable object from the numpy object.
```
dv = cp.as_dependent_variable(data, unit="J/(mol K)")
```
Create the two Dimension objects.
```
d0 = cp.LinearDimension(
count=256, increment="15.23 µs", coordinates_offset="-1.95 ms", label="t1"
)
```
Here, `d0` is a LinearDimension with 256 points and a 15.23 µs increment. You may similarly set the second dimension as a LinearDimension; however, in this example, let’s set it as a MonotonicDimension.
```
array = 10 ** (np.arange(32) / 8)
d1 = cp.as_dimension(array, unit="µs", label="t2")
```
The variable `array` is a NumPy array that is uniformly sampled on a log scale. To convert this array into a Dimension object, we use the
[`as_dimension()`](index.html#csdmpy.as_dimension) method.
Creating the CSDM object.
```
csdm_object = cp.CSDM(dependent_variables=[dv], dimensions=[d0, d1])
print(csdm_object.dimensions)
```
Out:
```
[LinearDimension(count=256, increment=15.23 µs, coordinates_offset=-1.95 ms, quantity_name=time, label=t1, reciprocal={'quantity_name': 'frequency'}),
MonotonicDimension(coordinates=[1.00000000e+00 1.33352143e+00 1.77827941e+00 2.37137371e+00
3.16227766e+00 4.21696503e+00 5.62341325e+00 7.49894209e+00
1.00000000e+01 1.33352143e+01 1.77827941e+01 2.37137371e+01
3.16227766e+01 4.21696503e+01 5.62341325e+01 7.49894209e+01
1.00000000e+02 1.33352143e+02 1.77827941e+02 2.37137371e+02
3.16227766e+02 4.21696503e+02 5.62341325e+02 7.49894209e+02
1.00000000e+03 1.33352143e+03 1.77827941e+03 2.37137371e+03
3.16227766e+03 4.21696503e+03 5.62341325e+03 7.49894209e+03] us, quantity_name=time, label=t2, reciprocal={'quantity_name': 'frequency'})]
```
Plot of the dataset.
```
plt.figure(figsize=(5, 3.5))
cp.plot(csdm_object)
plt.tight_layout()
plt.show()
```
To serialize the file, use the save method.
```
csdm_object.save("2D_1_dataset.csdf")
```
**Total running time of the script:** ( 0 minutes 0.181 seconds)
[`Download Python source code: plot_1_2D.py`](_downloads/b1c0896e9e0679ee0621c4f5c3bcb1c7/plot_1_2D.py)
[`Download Jupyter notebook: plot_1_2D.ipynb`](_downloads/678ee606fbe6a3bae467183c6a310831/plot_1_2D.ipynb)
[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.github.io)
[`Download all examples in Python source code: auto_tutorials_python.zip`](_downloads/cab7a090c4183ca69dc0cd84d3b04413/auto_tutorials_python.zip)
[`Download all examples in Jupyter notebooks: auto_tutorials_jupyter.zip`](_downloads/97a1de59bce682890841bb846e3dd09c/auto_tutorials_jupyter.zip)
[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.github.io)
An emoji 😁 example[¶](#an-emoji-example)
---
Let’s make use of what we learned so far and create a simple 1D{1} dataset.
To make it interesting, let’s create an emoji dataset.
Start by importing the csdmpy package.
```
>>> import csdmpy as cp
```
Create a labeled dimension. Here, we make use of a python dictionary.
```
>>> x = dict(type="labeled", labels=["🍈", "🍉", "🍋", "🍌", "🥑", "🍍"])
```
The above python dictionary contains two keys. The type key identifies the dimension as a labeled dimension while the labels key holds an array of labels. In this example, the labels are emojis. Add this dictionary to the list of dimensions.
Next, create a dependent variable. Similarly, set up a python dictionary corresponding to the dependent variable object.
```
>>> y = dict(
... type="internal",
... numeric_type="float32",
... quantity_type="scalar",
... components=[[0.5, 0.25, 1, 2, 1, 0.25]],
... )
```
Here, the python dictionary contains the type, numeric_type, quantity_type, and components keys. The value of the components key holds an array of data values corresponding to the labels from the labeled dimension.
Create a csdm object from the dimensions and dependent variables and we have a 😂 dataset…
```
>>> fun_data = cp.CSDM(
... dimensions=[x], dependent_variables=[y], description="An emoji dataset"
... )
>>> print(fun_data.data_structure)
{
"csdm": {
"version": "1.0",
"description": "An emoji dataset",
"dimensions": [
{
"type": "labeled",
"labels": [
"🍈",
"🍉",
"🍋",
"🍌",
"🥑",
"🍍"
]
}
],
"dependent_variables": [
{
"type": "internal",
"numeric_type": "float32",
"quantity_type": "scalar",
"components": [
[
"0.5, 0.25, ..., 1.0, 0.25"
]
]
}
]
}
}
```
To serialize this file, use the [`save()`](index.html#csdmpy.CSDM.save) method of the fun_data instance as
```
>>> fun_data.dependent_variables[0].encoding = "base64"
>>> fun_data.save("my_file.csdf")
```
In the above code, the components from the
[`dependent_variables`](index.html#csdmpy.CSDM.dependent_variables) attribute at index zero are encoded as base64 strings before serializing to the my_file.csdf file.
You may also save the components as a binary file, in which case, the file is serialized with a .csdfe file extension.
```
>>> fun_data.dependent_variables[0].encoding = "raw"
>>> fun_data.save("my_file_raw.csdfe")
```
API-Reference[¶](#api-reference)
---
### csdmpy[¶](#csdmpy)
csdmpy is a python package for importing and exporting files serialized with the core scientific dataset model file-format. The package supports a
\(p\)-component dependent variable,
\(\mathbf{U} \equiv \{\mathbf{U}_{0}, \ldots,\mathbf{U}_{q},
\ldots,\mathbf{U}_{p-1} \}\), which is discretely sampled at \(M\) unique points in a \(d\)-dimensional space
\((\mathbf{X}_0, \ldots \mathbf{X}_k, \ldots \mathbf{X}_{d-1})\). Besides,
the package also supports multiple dependent variables,
\(\mathbf{U}_i\), sharing the same \(d\)-dimensional space.
Here, every dataset is an instance of the [CSDM](index.html#csdm-api) class, which holds a list of dimensions and dependent variables. Every dimension,
\(\mathbf{X}_k\), is an instance of the [Dimension](index.html#dim-api) class, while every dependent variable, \(\mathbf{U}_i\), is an instance of the
[DependentVariable](index.html#dv-api) class.
#### Methods[¶](#methods)
Methods Summary
| [`parse_dict`](#csdmpy.parse_dict) | Parse a CSDM compliant python dictionary and return a CSDM object. |
| [`load`](#csdmpy.load) | Loads a .csdf/.csdfe file and returns an instance of the [CSDM](index.html#csdm-api) class. |
| [`loads`](#csdmpy.loads) | Loads a JSON serialized string as a CSDM object. |
| [`new`](#csdmpy.new) | Creates a new instance of the [CSDM](index.html#csdm-api) class containing a 0D{0} dataset. |
| [`as_dimension`](#csdmpy.as_dimension) | Generate and return a Dimension object from a 1D numpy array. |
| [`as_dependent_variable`](#csdmpy.as_dependent_variable) | Generate and return a DependentVariable object from a 1D or 2D numpy array. |
| [`as_csdm`](#csdmpy.as_csdm) | Generate and return a view of the nD numpy array as a csdm object. |
| [`plot`](#csdmpy.plot) | A supplementary function for plotting basic 1D and 2D datasets only. |
Method Documentation
csdmpy.parse_dict(*dictionary*)[[source]](_modules/csdmpy.html#parse_dict)[¶](#csdmpy.parse_dict)
Parse a CSDM compliant python dictionary and return a CSDM object.
Parameters
**dictionary** – A CSDM compliant python dictionary.
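Example (a minimal sketch, assuming a bare but CSDM-compliant dictionary; the keys mirror the data_structure layout shown elsewhere in this document):
```
>>> py_dict = {"csdm": {"version": "1.0", "dimensions": [], "dependent_variables": []}}
>>> csdm_object = cp.parse_dict(py_dict)
```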
csdmpy.load(*filename=None*, *application=False*, *verbose=False*)[[source]](_modules/csdmpy.html#load)[¶](#csdmpy.load)
Loads a .csdf/.csdfe file and returns an instance of the [CSDM](index.html#csdm-api) class.
The file must be a JSON serialization of the CSD Model.
Example
```
>>> data1 = cp.load('local_address/file.csdf')
>>> data2 = cp.load('url_address/file.csdf')
```
Parameters
* **filename** (*str*) – A local or a remote address to the .csdf or .csdfe file.
* **application** (*bool*) – If true, the application metadata from the application that last serialized the file will be imported. Default is False.
* **verbose** (*bool*) – If the filename is a URL, this option will show the progress bar for the file download status, when True.
Returns A CSDM instance.
csdmpy.loads(*string*)[[source]](_modules/csdmpy.html#loads)[¶](#csdmpy.loads)
Loads a JSON serialized string as a CSDM object.
Parameters
**string** – A JSON serialized CSDM string.
Returns A CSDM object.
Example
```
>>> object_from_string = cp.loads(cp.new('A test dump').dumps())
>>> print(object_from_string.data_structure)
{
"csdm": {
"version": "1.0",
"timestamp": "2019-10-21T20:33:17Z",
"description": "A test dump",
"dimensions": [],
"dependent_variables": []
}
}
```
csdmpy.new(*description=''*)[[source]](_modules/csdmpy.html#new)[¶](#csdmpy.new)
Creates a new instance of the [CSDM](index.html#csdm-api) class containing a 0D{0} dataset.
Parameters
**description** (*str*) – A string describing the csdm object. This is optional.
Example
```
>>> import csdmpy as cp
>>> empty_data = cp.new(description='Testing Testing 1 2 3')
>>> print(empty_data.data_structure)
{
"csdm": {
"version": "1.0",
"description": "Testing Testing 1 2 3"
}
}
```
Returns A CSDM instance.
csdmpy.as_csdm(*array*, *unit=''*, *quantity_type='scalar'*)[[source]](_modules/csdmpy.html#as_csdm)[¶](#csdmpy.as_csdm)
Generate and return a view of the nD numpy array as a csdm object.
The nD array is the dependent variable of the csdm object of the given quantity type. The shape of the nD array is used to generate Dimension objects of the linear subtype.
Parameters
* **array** – The nD numpy array.
* **unit** – The unit for the dependent variable. The default is empty string.
* **quantity_type** – The quantity type of the dependent variable.
Example
```
>>> array = np.arange(30).reshape(3, 10)
>>> csdm_obj = cp.as_csdm(array)
>>> print(csdm_obj)
CSDM(
DependentVariable(
[[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]]], quantity_type=scalar, numeric_type=int64),
LinearDimension([0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]),
LinearDimension([0. 1. 2.])
)
```
csdmpy.as_dimension(*array*, *unit=''*, *type=None*, ***kwargs*)[[source]](_modules/csdmpy/dimension.html#as_dimension)[¶](#csdmpy.as_dimension)
Generate and return a Dimension object from a 1D numpy array.
Parameters
* **array** – A 1D numpy array.
* **unit** – The unit of the coordinates along the dimension.
* **type** – The dimension type. Valid values are linear, monotonic, labeled, or None. If the value is None, let us decide. The default value is None.
* **kwargs** – Additional keyword arguments from the Dimension class.
Example
```
>>> array = np.arange(15)*0.5
>>> dim_object = cp.as_dimension(array)
>>> print(dim_object)
LinearDimension([0. 0.5 1. 1.5 2. 2.5 3. 3.5 4. 4.5 5. 5.5 6. 6.5 7. ])
```
```
>>> array = ['The', 'great', 'circle']
>>> dim_object = cp.as_dimension(array, label='in the sky')
>>> print(dim_object)
LabeledDimension(['The' 'great' 'circle'])
```
csdmpy.as_dependent_variable(*array*, ***kwargs*)[[source]](_modules/csdmpy/dependent_variable.html#as_dependent_variable)[¶](#csdmpy.as_dependent_variable)
Generate and return a DependentVariable object from a 1D or 2D numpy array.
Parameters
* **array** – A 1D or 2D numpy array.
* **kwargs** – Additional keyword arguments from the DependentVariable class.
Example
```
>>> array = np.arange(1e4).astype(np.complex128)
>>> dim_object = cp.as_dependent_variable(array)
>>> print(dim_object)
DependentVariable(
[[0.000e+00+0.j 1.000e+00+0.j 2.000e+00+0.j ... 9.997e+03+0.j
9.998e+03+0.j 9.999e+03+0.j]], quantity_type=scalar, numeric_type=complex128)
```
csdmpy.plot(*csdm_object*, *reverse_axis=None*, *range=None*, ***kwargs*)[[source]](_modules/csdmpy.html#plot)[¶](#csdmpy.plot)
A supplementary function for plotting basic 1D and 2D datasets only.
Parameters
* **csdm_object** – The CSDM object.
* **reverse_axis** – An ordered array of booleans specifying which dimensions will be displayed on a reverse axis.
* **range** – A list of minimum and maximum coordinates along the dimensions. The range along each dimension is given as [min, max]
* **kwargs** – Additional keyword arguments are used in matplotlib plotting functions.
We implement the following matplotlib methods for the one and two-dimensional datasets.
+ The 1D{1} scalar dataset uses the plt.plot() method.
+ The 1D{2} vector dataset uses the plt.quiver() method.
+ The 2D{1} scalar dataset uses the plt.imshow() method if both
dimensions have a linear subtype. If either dimension is
monotonic, the plt.NonUniformImage() method is used instead.
+ The 2D{2} vector dataset uses the plt.quiver() method.
+ The 2D{3} pixel dataset uses the plt.imshow() method, treating the pixel dataset
as an RGB image.
Returns A matplotlib figure instance.
Example
```
>>> cp.plot(data_object)
```
### CSDM[¶](#csdm)
*class* csdmpy.CSDM(*filename=''*, *version=None*, *description=''*, ***kwargs*)[[source]](_modules/csdmpy/csdm.html#CSDM)[¶](#csdmpy.CSDM)
Bases: `object`
Create an instance of a CSDM class.
This class is based on the root CSDM object of the core scientific dataset
(CSD) model. The class is a composition of the [DependentVariable](index.html#dv-api) and
[Dimension](index.html#dim-api) instances, where an instance of the [DependentVariable](index.html#dv-api) class describes a \(p\)-component dependent variable, and an instance of the
[Dimension](index.html#dim-api) class describes a dimension of a \(d\)-dimensional space. Additional attributes of this class are listed below.
Attributes Summary
| [`version`](#csdmpy.CSDM.version) | Version number of the CSD model on file. |
| [`description`](#csdmpy.CSDM.description) | Description of the dataset. |
| [`read_only`](#csdmpy.CSDM.read_only) | If True, the data-file is serialized as read only, otherwise, False. |
| [`tags`](#csdmpy.CSDM.tags) | List of tags attached to the dataset. |
| [`timestamp`](#csdmpy.CSDM.timestamp) | Timestamp from when the file was last serialized. |
| [`geographic_coordinate`](#csdmpy.CSDM.geographic_coordinate) | Geographic coordinate, if present, from where the file was last serialized. |
| [`dimensions`](#csdmpy.CSDM.dimensions) | Tuple of the [Dimension](index.html#dim-api) instances. |
| [`x`](#csdmpy.CSDM.x) | Alias for the dimensions attribute. |
| [`dependent_variables`](#csdmpy.CSDM.dependent_variables) | Tuple of the [DependentVariable](index.html#dv-api) instances. |
| [`y`](#csdmpy.CSDM.y) | Alias for the dependent_variables attribute. |
| [`application`](#csdmpy.CSDM.application) | Application metadata dictionary of the CSDM object. |
| [`data_structure`](#csdmpy.CSDM.data_structure) | Json serialized string describing the CSDM class instance. |
| [`filename`](#csdmpy.CSDM.filename) | Local file address of the current file. |
Methods summary
| [`dict`](#csdmpy.CSDM.dict) | Serialize the [CSDM](#csdm-api) instance as a python dictionary. |
| [`to_dict`](#csdmpy.CSDM.to_dict) | Alias to the dict() method of the class. |
| [`dumps`](#csdmpy.CSDM.dumps) | Serialize the [CSDM](#csdm-api) instance as a JSON data-exchange string. |
| [`astype`](#csdmpy.CSDM.astype) | Return a copy of the CSDM object by converting the numeric type of each dependent variable’s components to the given value. |
| [`save`](#csdmpy.CSDM.save) | Serialize the [CSDM](#csdm-api) instance as a JSON data-exchange file. |
| [`copy`](#csdmpy.CSDM.copy) | Create a copy of the current CSDM instance. |
| [`split`](#csdmpy.CSDM.split) | View of the dependent-variables as individual csdm objects. |
Numpy compatible attributes summary
| [`real`](#csdmpy.CSDM.real) | Return a csdm object with only the real part of the dependent variable components. |
| [`imag`](#csdmpy.CSDM.imag) | Return a csdm object with only the imaginary part of the dependent variable components. |
| [`shape`](#csdmpy.CSDM.shape) | Return the count along each dimension of the csdm object. |
| [`size`](#csdmpy.CSDM.size) | Return the size of the dependent_variable components. |
| [`T`](#csdmpy.CSDM.T) | Return a csdm object with a transpose of the dataset. |
Numpy compatible method summary
| [`max`](#csdmpy.CSDM.max) | Return a csdm object of the maximum dependent variable component along a given axis. |
| [`min`](#csdmpy.CSDM.min) | Return a csdm object of minimum dependent variable component along a given axis. |
| [`clip`](#csdmpy.CSDM.clip) | Clip the dependent variable components between the min and max values. |
| [`conj`](#csdmpy.CSDM.conj) | Return a complex conjugate of the csdm object. |
| [`round`](#csdmpy.CSDM.round) | Rounds a csdm object to the given decimals. |
| [`sum`](#csdmpy.CSDM.sum) | Return a csdm object sum over a given axis. |
| [`mean`](#csdmpy.CSDM.mean) | Return a csdm object mean over a given axis. |
| [`var`](#csdmpy.CSDM.var) | Return a csdm object variance over a given axis. |
| [`std`](#csdmpy.CSDM.std) | Return a csdm object standard deviation over a given axis. |
| [`prod`](#csdmpy.CSDM.prod) | Return a csdm object product over a given axis. |
Attributes documentation
version[¶](#csdmpy.CSDM.version)
Version number of the CSD model on file.
description[¶](#csdmpy.CSDM.description)
Description of the dataset. The default value is an empty string.
Example
```
>>> print(data.description)
A simulated sine curve.
```
Returns A string of UTF-8 allowed characters describing the dataset.
Raises
**TypeError** – When the assigned value is not a string.
read_only[¶](#csdmpy.CSDM.read_only)
If True, the data-file is serialized as read only, otherwise, False.
By default, the [CSDM](#csdm-api) object loads a copy of the .csdf(e) file,
irrespective of the value of the read_only attribute. The value of this attribute may be toggled at any time after the file import.
When serializing the .csdf(e) file, if the value of the read_only attribute is found True, the file will be serialized as read only.
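For example, a minimal sketch (assuming `data` is a previously created CSDM instance):
```
>>> data.read_only = True # toggle the flag
>>> data.save("my_file.csdf") # serialized as read only
```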
tags[¶](#csdmpy.CSDM.tags)
List of tags attached to the dataset.
timestamp[¶](#csdmpy.CSDM.timestamp)
Timestamp from when the file was last serialized. This attribute is read only.
The timestamp is a string representation of the Coordinated Universal Time (UTC) formatted according to the iso-8601 standard.
Raises
**AttributeError** – When the attribute is modified.
geographic_coordinate[¶](#csdmpy.CSDM.geographic_coordinate)
Geographic coordinate, if present, from where the file was last serialized.
This attribute is read-only.
The geographic coordinates correspond to the location where the file was last serialized. If present, the geographic coordinates are described with three attributes, the required latitude and longitude, and an optional altitude.
Raises
**AttributeError** – When the attribute is modified.
dimensions[¶](#csdmpy.CSDM.dimensions)
Tuple of the [Dimension](index.html#dim-api) instances.
x[¶](#csdmpy.CSDM.x)
Alias for the dimensions attribute.
dependent_variables[¶](#csdmpy.CSDM.dependent_variables)
Tuple of the [DependentVariable](index.html#dv-api) instances.
y[¶](#csdmpy.CSDM.y)
Alias for the dependent_variables attribute.
application[¶](#csdmpy.CSDM.application)
Application metadata dictionary of the CSDM object.
```
>>> print(data.application)
None
```
By default, the application attribute is an empty object, that is,
the application metadata stored by the previous application is ignored upon file import.
The application metadata may, however, be retained with a request via the [`load()`](index.html#csdmpy.load) method. This feature may be useful to related applications where application metadata might contain additional information.
The attribute may be updated with a python dictionary.
The application attribute is where an application can place its own application-specific metadata as a python dictionary object, using a reverse domain name notation string as the attribute key, for example,
Example
```
>>> data.application = {
... "com.example.myApp" : {
... "myApp_key": "myApp_metadata"
... }
... }
>>> print(data.application)
{'com.example.myApp': {'myApp_key': 'myApp_metadata'}}
```
Returns Python dictionary object with the application metadata.
data_structure[¶](#csdmpy.CSDM.data_structure)
Json serialized string describing the CSDM class instance.
The data_structure attribute is only intended for a quick preview of the dataset. The JSON serialized string from this attribute avoids displaying large datasets. Do not use the value of this attribute to save the data to a file; instead, use the [`save()`](#csdmpy.CSDM.save)
method of the instance.
Raises
**AttributeError** – When modified.
filename[¶](#csdmpy.CSDM.filename)
Local file address of the current file.
Numpy compatible attributes documentation
real[¶](#csdmpy.CSDM.real)
Return a csdm object with only the real part of the dependent variable components.
imag[¶](#csdmpy.CSDM.imag)
Return a csdm object with only the imaginary part of the dependent variable components.
shape[¶](#csdmpy.CSDM.shape)
Return the count along each dimension of the csdm object.
size[¶](#csdmpy.CSDM.size)
Return the size of the dependent_variable components.
T[¶](#csdmpy.CSDM.T)
Return a csdm object with a transpose of the dataset.
Methods documentation
dict(*update_timestamp=False*, *read_only=False*)[[source]](_modules/csdmpy/csdm.html#CSDM.dict)[¶](#csdmpy.CSDM.dict)
Serialize the [CSDM](#csdm-api) instance as a python dictionary.
Parameters
* **update_timestamp** (*bool*) – If True, timestamp is updated to current time.
* **read_only** (*bool*) – If true, the read_only flag is set true.
Example
```
>>> data.dict()['csdm']['version']
'1.0'
```
to_dict(*update_timestamp=False*, *read_only=False*)[[source]](_modules/csdmpy/csdm.html#CSDM.to_dict)[¶](#csdmpy.CSDM.to_dict)
Alias to the dict() method of the class.
dumps(*update_timestamp=False*, *read_only=False*, ***kwargs*)[[source]](_modules/csdmpy/csdm.html#CSDM.dumps)[¶](#csdmpy.CSDM.dumps)
Serialize the [CSDM](#csdm-api) instance as a JSON data-exchange string.
Parameters
* **update_timestamp** (*bool*) – If True, timestamp is updated to current time.
* **read_only** (*bool*) – If true, the file is serialized as read_only.
Example
```
>>> data.dumps()[:63] # first 63 characters
'{"csdm": {"version": "1.0", "timestamp": "1994-11-05T13:15:30Z"'
```
save(*filename=''*, *read_only=False*, *output_device=None*, *indent=0*)[[source]](_modules/csdmpy/csdm.html#CSDM.save)[¶](#csdmpy.CSDM.save)
Serialize the [CSDM](#csdm-api) instance as a JSON data-exchange file.
There are two types of file serialization extensions, .csdf and
.csdfe. In the CSD model, when every instance of the DependentVariable objects from a CSDM class has an internal subtype, the corresponding CSDM instance is serialized with a .csdf file extension.
If any single DependentVariable instance has an external subtype, the CSDM instance is serialized with a .csdfe file extension.
The two different file extensions are used to alert the end user to the possible deserialization error associated with the .csdfe file extension should the external data file become inaccessible.
In csdmpy, however, irrespective of the dependent variable subtypes from the serialized JSON file, by default, all instances of the DependentVariable class are treated as internal after import.
Therefore, when serialized, the CSDM object should be stored as a .csdf file.
To store a file as a .csdfe file, the user must set the value of the [`encoding`](index.html#csdmpy.DependentVariable.encoding)
attribute from the dependent variables to `raw`.
In which case, a binary file named filename_i.dat will be generated, where \(i\) is the index of the corresponding dependent variable.
The parameter filename is an argument of this method.
Note
Only dependent variables with `encoding="raw"` will be serialized to a binary file.
Parameters
* **filename** (*str*) – The filename of the serialized file.
* **read_only** (*bool*) – If true, the file is serialized as read_only.
* **output_device** (*object*) – Object where the data is written. If provided,
the argument filename becomes irrelevant.
Example
```
>>> data.save('my_file.csdf')
```
to_list()[[source]](_modules/csdmpy/csdm.html#CSDM.to_list)[¶](#csdmpy.CSDM.to_list)
Return the dimension coordinates and dependent variable components as a list of numpy arrays. For multiple dependent variables, the components of each dependent variable are appended in the order of the dependent variables.
For example,
* A 2D{1} will be packed as \([x_{0}, x_{1}, y_{0,0}]\)
* A 2D{3} will be packed as \([x_{0}, x_{1}, y_{0,0}, y_{0,1}, y_{0,2}]\)
* A 1D{1,2} will be packed as \([x_{0}, y_{0,0}, y_{1,0}, y_{1,1}]\)
where \(x_i\) represents the \(i^\text{th}\) dimension and
\(y_{i,j}\) represents the \(j^\text{th}\) component of the
\(i^\text{th}\) dependent variable.
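A minimal usage sketch (here `data` is a hypothetical 1D{1} CSDM object, so the list packs as \([x_{0}, y_{0,0}]\)):
```
>>> # unpack the dimension coordinates and the single component
>>> x0, y00 = data.to_list()
```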
astype(*numeric_type*)[[source]](_modules/csdmpy/csdm.html#CSDM.astype)[¶](#csdmpy.CSDM.astype)
Return a copy of the CSDM object by converting the numeric type of each dependent variable’s components to the given value.
Parameters
**numeric_type** – A numpy dtype or a string with a valid numeric type
Example
```
>>> data_32 = data_64.astype('float32')
```
copy()[[source]](_modules/csdmpy/csdm.html#CSDM.copy)[¶](#csdmpy.CSDM.copy)
Create a copy of the current CSDM instance.
Returns A CSDM instance.
Example
```
>>> data2 = data.copy()
```
split()[[source]](_modules/csdmpy/csdm.html#CSDM.split)[¶](#csdmpy.CSDM.split)
View of the dependent-variables as individual csdm objects.
Returns A list of CSDM objects, each with one dependent variable. The objects are returned as a view.
Example
```
>>> # data contains two dependent variables
>>> d1, d2 = data.split()
```
transpose()[[source]](_modules/csdmpy/csdm.html#CSDM.transpose)[¶](#csdmpy.CSDM.transpose)
Return a transpose of the CSDM object.
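Example (a minimal sketch; `data` is a hypothetical CSDM object, and per the attributes summary above this is equivalent to the `T` attribute):
```
>>> data_transposed = data.transpose()
```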
fft(*axis=0*)[[source]](_modules/csdmpy/csdm.html#CSDM.fft)[¶](#csdmpy.CSDM.fft)
Perform an FFT along the dimension at index axis, for linear dimensions, assuming the Nyquist-Shannon relation.
Parameters
**axis** – dimension index along which the FFT is performed.
The FFT method uses the [`complex_fft`](index.html#csdmpy.Dimension.complex_fft) attribute of the Dimension object to decide whether a forward or inverse Fourier transform is performed. If the value of the complex_fft is True, an inverse FFT is performed, otherwise a forward FFT.
For FFT process, this function is equivalent to performing
```
phase = np.exp(-2j * np.pi * coordinates_offset * reciprocal_coordinates)
x_fft = np.fft.fftshift(np.fft.fft(x)) * phase
```
over all components for every dependent variable.
Similarly, for inverse FFT process, this function is equivalent to performing
```
phase = np.exp(2j * np.pi * reciprocal_coordinates_offset * coordinates)
x = np.fft.ifft(np.fft.ifftshift(x_fft * phase))
```
over all components for every dependent variable.
Returns A CSDM object with the Fourier Transform data.
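Example (a minimal sketch; `time_data` is a hypothetical CSDM object whose dimension at index 0 is linear):
```
>>> # `time_data` is a hypothetical CSDM object with a linear time dimension
>>> freq_data = time_data.fft(axis=0)
```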
Numpy compatible method documentation
max(*axis=None*)[[source]](_modules/csdmpy/csdm.html#CSDM.max)[¶](#csdmpy.CSDM.max)
Return a csdm object of the maximum dependent variable component along a given axis.
Parameters
**axis** – An integer or None or a tuple of m integers corresponding to the dimension index/indices along which the operation is performed. If None,
the output is over all dimensions per dependent variable.
Returns A CSDM object with m dimensions removed, or a numpy array when axis is None.
Example
```
>>> data.max()
<Quantity 0.95105654>
```
min(*axis=None*)[[source]](_modules/csdmpy/csdm.html#CSDM.min)[¶](#csdmpy.CSDM.min)
Return a csdm object of minimum dependent variable component along a given axis.
Parameters
**axis** – An integer or None or a tuple of m integers corresponding to the dimension index/indices along which the operation is performed. If None, the output is over all dimensions per dependent variable.
Returns A CSDM object with m dimensions removed, or a list when axis is None.
clip(*min=None*, *max=None*)[[source]](_modules/csdmpy/csdm.html#CSDM.clip)[¶](#csdmpy.CSDM.clip)
Clip the dependent variable components between the min and max values.
Parameters
* **min** – The minimum clip value.
* **max** – The maximum clip value.
Returns A CSDM object with values clipped between min and max.
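Example (a minimal sketch; `data` is a hypothetical CSDM object with real-valued components):
```
>>> # clip all component values to the interval [0, 0.5]
>>> clipped = data.clip(min=0, max=0.5)
```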
conj()[[source]](_modules/csdmpy/csdm.html#CSDM.conj)[¶](#csdmpy.CSDM.conj)
Return a complex conjugate of the csdm object.
round(*decimals=0*)[[source]](_modules/csdmpy/csdm.html#CSDM.round)[¶](#csdmpy.CSDM.round)
Rounds a csdm object to the given decimals.
sum(*axis=None*)[[source]](_modules/csdmpy/csdm.html#CSDM.sum)[¶](#csdmpy.CSDM.sum)
Return a csdm object sum over a given axis.
Parameters
**axis** – An integer or None or a tuple of m integers corresponding to the dimension index/indices along which the operation is performed. If None,
the output is over all dimensions per dependent variable.
Returns A CSDM object with m dimensions removed, or a list when axis is None.
mean(*axis=None*)[[source]](_modules/csdmpy/csdm.html#CSDM.mean)[¶](#csdmpy.CSDM.mean)
Return a csdm object mean over a given axis.
Parameters
**axis** – An integer or None or a tuple of m integers corresponding to the dimension index/indices along which the operation is performed. If None,
the output is over all dimensions per dependent variable.
Returns A CSDM object with m dimensions removed, or a list when axis is None.
var(*axis=None*)[[source]](_modules/csdmpy/csdm.html#CSDM.var)[¶](#csdmpy.CSDM.var)
Return a csdm object variance over a given axis.
Parameters
**axis** – An integer or None or a tuple of m integers corresponding to the dimension index/indices along which the operation is performed. If None,
the output is over all dimensions per dependent variable.
Returns A CSDM object with m dimensions removed, or a list when axis is None.
std(*axis=None*)[[source]](_modules/csdmpy/csdm.html#CSDM.std)[¶](#csdmpy.CSDM.std)
Return a csdm object standard deviation over a given axis.
Parameters
**axis** – An integer or None or a tuple of m integers corresponding to the dimension index/indices along which the operation is performed.
If None, the output is over all dimensions per dependent variable.
Returns A CSDM object with m dimensions removed, or a list when axis is None.
prod(*axis=None*)[[source]](_modules/csdmpy/csdm.html#CSDM.prod)[¶](#csdmpy.CSDM.prod)
Return a csdm object product over a given axis.
Parameters
**axis** – An integer or None or a tuple of m integers corresponding to the dimension index/indices along which the operation is performed.
If None, the output is over all dimensions per dependent variable.
Returns A CSDM object with m dimensions removed, or a list when axis is None.
### Dimension[¶](#dimension)
#### LinearDimension[¶](#lineardimension)
*class* csdmpy.LinearDimension(*count*, *increment*, *complex_fft=False*, ***kwargs*)[[source]](_modules/csdmpy/dimension/linear.html#LinearDimension)[¶](#csdmpy.LinearDimension)
Bases: `BaseQuantitativeDimension`
LinearDimension class.
Generates an object representing a physical dimension whose coordinates are uniformly sampled along a grid dimension. See [LinearDimension](index.html#lineardimension-uml) for details.
*property* complex_fft[¶](#csdmpy.LinearDimension.complex_fft)
If True, orders the coordinates according to FFT output order.
*property* coordinates[¶](#csdmpy.LinearDimension.coordinates)
Return the coordinates along the dimensions.
*property* count[¶](#csdmpy.LinearDimension.count)
Total number of points along the linear dimension.
dict()[[source]](_modules/csdmpy/dimension/linear.html#LinearDimension.dict)[¶](#csdmpy.LinearDimension.dict)
Return the LinearDimension as a python dictionary.
get_nmr_reference_offset()[[source]](_modules/csdmpy/dimension/linear.html#LinearDimension.get_nmr_reference_offset)[¶](#csdmpy.LinearDimension.get_nmr_reference_offset)
Calculate reference offset for NMR datasets.
*property* increment[¶](#csdmpy.LinearDimension.increment)
Increment along the linear dimension.
reciprocal_coordinates()[[source]](_modules/csdmpy/dimension/linear.html#LinearDimension.reciprocal_coordinates)[¶](#csdmpy.LinearDimension.reciprocal_coordinates)
Return reciprocal coordinates assuming Nyquist-Shannon theorem.
reciprocal_increment()[[source]](_modules/csdmpy/dimension/linear.html#LinearDimension.reciprocal_increment)[¶](#csdmpy.LinearDimension.reciprocal_increment)
Return reciprocal increment assuming Nyquist-Shannon theorem.
*property* type[¶](#csdmpy.LinearDimension.type)
Return the type of the dimension.
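A minimal construction sketch (the count, increment, and label values are illustrative):
```
>>> x = cp.LinearDimension(count=10, increment="0.1 s", label="t1")
```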
#### MonotonicDimension[¶](#monotonicdimension)
*class* csdmpy.MonotonicDimension(*coordinates*, ***kwargs*)[[source]](_modules/csdmpy/dimension/monotonic.html#MonotonicDimension)[¶](#csdmpy.MonotonicDimension)
Bases: `BaseQuantitativeDimension`
Monotonic grid dimension.
Generates an object representing a physical dimension whose coordinates are monotonically sampled along a grid dimension. See [MonotonicDimension](index.html#monotonicdimension-uml)
for details.
*property* coordinates[¶](#csdmpy.MonotonicDimension.coordinates)
Return the coordinates along the dimensions.
*property* coordinates_offset[¶](#csdmpy.MonotonicDimension.coordinates_offset)
Value at index zero, \(c_k\), along the dimension.
*property* count[¶](#csdmpy.MonotonicDimension.count)
Total number of points along the monotonic dimension.
dict()[[source]](_modules/csdmpy/dimension/monotonic.html#MonotonicDimension.dict)[¶](#csdmpy.MonotonicDimension.dict)
Return the MonotonicDimension as a python dictionary.
*property* type[¶](#csdmpy.MonotonicDimension.type)
Return the type of the dimension.
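A minimal construction sketch (the coordinate values are illustrative):
```
>>> x = cp.MonotonicDimension(coordinates=["10 µs", "100 µs", "1 ms", "10 ms"])
```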
#### LabeledDimension[¶](#labeleddimension)
*class* csdmpy.LabeledDimension(*labels*, *label=''*, *description=''*, *application=None*, ***kwargs*)[[source]](_modules/csdmpy/dimension/labeled.html#LabeledDimension)[¶](#csdmpy.LabeledDimension)
Bases: `BaseDimension`
A labeled dimension.
Generates an object representing a non-physical dimension whose coordinates are labels. See [LabeledDimension](index.html#labeleddimension-uml) for details.
*property* coordinates[¶](#csdmpy.LabeledDimension.coordinates)
Return the coordinates along the dimensions. This is an alias for labels.
*property* count[¶](#csdmpy.LabeledDimension.count)
Total number of labels along the dimension.
dict()[[source]](_modules/csdmpy/dimension/labeled.html#LabeledDimension.dict)[¶](#csdmpy.LabeledDimension.dict)
Return the LabeledDimension as a python dictionary.
is_quantitative()[[source]](_modules/csdmpy/dimension/labeled.html#LabeledDimension.is_quantitative)[¶](#csdmpy.LabeledDimension.is_quantitative)
Return True, if the dimension is quantitative, otherwise False.
Returns A Boolean.
*property* labels[¶](#csdmpy.LabeledDimension.labels)
Return a list of labels along the dimension.
*property* type[¶](#csdmpy.LabeledDimension.type)
Return the type of the dimension.
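A minimal construction sketch (the labels are illustrative):
```
>>> x = cp.LabeledDimension(labels=["Cu", "Ag", "Au"])
```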
*class* csdmpy.Dimension(**args*, ***kwargs*)[[source]](_modules/csdmpy/dimension.html#Dimension)[¶](#csdmpy.Dimension)
Bases: `object`
Dimension class.
An instance of this class describes a dimension of a multi-dimensional system.
In version 1.0 of the CSD model, there are three subtypes of the Dimension class:
* [LinearDimension](index.html#lineardimension-uml),
* [MonotonicDimension](index.html#monotonicdimension-uml), and
* [LabeledDimension](index.html#labeleddimension-uml).
**Creating an instance of a dimension object**
There are two ways of creating a new instance of a Dimension class.
*From a python dictionary containing valid keywords.*
```
>>> from csdmpy import Dimension
>>> dimension_dictionary = {
... "type": "linear",
... "description": "test",
... "increment": "5 G",
... "count": 10,
... "coordinates_offset": "10 mT",
... "origin_offset": "10 T",
... }
>>> x = Dimension(dimension_dictionary)
```
Here, dimension_dictionary is the python dictionary.
*From valid keyword arguments.*
```
>>> x = Dimension(
... type="linear",
... description="test",
... increment="5 G",
... count=10,
... coordinates_offset="10 mT",
... origin_offset="10 T",
... )
```
Attributes Summary
| [`type`](#csdmpy.Dimension.type) | The dimension subtype. |
| [`description`](#csdmpy.Dimension.description) | Brief description of the dimension object. |
| [`application`](#csdmpy.Dimension.application) | Application metadata dictionary of the dimension object. |
| [`coordinates`](#csdmpy.Dimension.coordinates) | Coordinates, \({\bf X}_k\), along the dimension. |
| [`coords`](#csdmpy.Dimension.coords) | Alias for the coordinates attribute. |
| [`absolute_coordinates`](#csdmpy.Dimension.absolute_coordinates) | Absolute coordinates, \(\bf X_k^{\rm{abs}}\), along the dimension. |
| [`count`](#csdmpy.Dimension.count) | Number of coordinates, \(N_k \ge 1\), along the dimension. |
| [`increment`](#csdmpy.Dimension.increment) | Increment along a linear dimension. |
| [`coordinates_offset`](#csdmpy.Dimension.coordinates_offset) | Offset corresponding to the zero of the indexes array, \(\mathbf{J}_k\). |
| [`origin_offset`](#csdmpy.Dimension.origin_offset) | Origin offset, \(o_k\), along the dimension. |
| [`complex_fft`](#csdmpy.Dimension.complex_fft) | If true, the coordinates are ordered as the output of a complex fft. |
| [`quantity_name`](#csdmpy.Dimension.quantity_name) | Quantity name associated with the physical quantities specifying dimension. |
| [`label`](#csdmpy.Dimension.label) | Label associated with the dimension. |
| [`labels`](#csdmpy.Dimension.labels) | Ordered list of labels along the Labeled dimension. |
| [`period`](#csdmpy.Dimension.period) | Period of the dimension. |
| [`axis_label`](#csdmpy.Dimension.axis_label) | Formatted string for displaying label along the dimension axis. |
| [`data_structure`](#csdmpy.Dimension.data_structure) | JSON serialized string describing the Dimension class instance. |
Methods Summary
| [`to`](#csdmpy.Dimension.to) | Convert the coordinates along the dimension to the unit, unit. |
| [`dict`](#csdmpy.Dimension.dict) | Return Dimension object as a python dictionary. |
| [`to_dict`](#csdmpy.Dimension.to_dict) | Alias to the dict() method of the class. |
| [`is_quantitative`](#csdmpy.Dimension.is_quantitative) | Return True if the dependent variable is quantitative. |
| [`copy`](#csdmpy.Dimension.copy) | Return a copy of the Dimension object. |
| [`reciprocal_coordinates`](#csdmpy.Dimension.reciprocal_coordinates) | Return reciprocal coordinates assuming Nyquist-Shannon theorem. |
| [`reciprocal_increment`](#csdmpy.Dimension.reciprocal_increment) | Return reciprocal increment assuming Nyquist-Shannon theorem. |
Attributes Documentation
type[¶](#csdmpy.Dimension.type)
The dimension subtype.
There are three *valid* subtypes of Dimension class. The valid literals are given by the [DimObjectSubtype](index.html#dimobjectsubtype-uml) enumeration.
```
>>> print(x.type)
linear
```
Returns A string with a valid dimension subtype.
Raises
**AttributeError** – When the attribute is modified.
description[¶](#csdmpy.Dimension.description)
Brief description of the dimension object.
The default value is an empty string, ‘’. The attribute may be modified, for example,
```
>>> print(x.description)
This is a test
>>> x.description = "This is a test dimension."
```
Returns A string of UTF-8 allowed characters describing the dimension.
Raises
**TypeError** – When the assigned value is not a string.
application[¶](#csdmpy.Dimension.application)
Application metadata dictionary of the dimension object.
```
>>> print(x.application)
None
```
The application attribute is where an application can place its metadata as a python dictionary object using a reverse domain name notation string as the attribute key, for example,
```
>>> x.application = {"com.example.myApp": {"myApp_key": "myApp_metadata"}}
>>> print(x.application)
{'com.example.myApp': {'myApp_key': 'myApp_metadata'}}
```
Returns A python dictionary containing dimension application metadata.
coordinates[¶](#csdmpy.Dimension.coordinates)
Coordinates, \({\bf X}_k\), along the dimension.
Example
```
>>> print(x.coordinates)
[100. 105. 110. 115. 120. 125. 130. 135. 140. 145.] G
```
For linear dimensions, the order of the coordinates also depends on the value of the [`complex_fft`](#csdmpy.Dimension.complex_fft) attribute.
For example, when the value of the complex_fft attribute is True,
the coordinates are
```
>>> x.complex_fft = True
>>> print(x.coordinates)
[ 75. 80. 85. 90. 95. 100. 105. 110. 115. 120.] G
```
Returns A Quantity array of coordinates for quantitative dimensions, i.e. linear and monotonic.
Returns A Numpy array for labeled dimensions.
Raises
**AttributeError** – For dimensions with subtype linear.
coords[¶](#csdmpy.Dimension.coords)
Alias for the coordinates attribute.
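A minimal check of the alias, assuming numpy is imported as np as in the other examples in this section:
```
>>> np.all(x.coords == x.coordinates)
True
```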
absolute_coordinates[¶](#csdmpy.Dimension.absolute_coordinates)
Absolute coordinates, \(\bf X_k^{\rm{abs}}\), along the dimension.
This attribute is only *valid* for quantitative dimensions, that is,
linear and monotonic dimensions. The absolute coordinates are given as
\[\mathbf{X}_k^\mathrm{abs} = \mathbf{X}_k + o_k \mathbf{1}\]
where \(\mathbf{X}_k\) are the coordinates along the dimension and
\(o_k\) is the [`origin_offset`](#csdmpy.Dimension.origin_offset).
For example, consider
```
>>> print(x.origin_offset)
10.0 T
>>> print(x.coordinates[:5])
[100. 105. 110. 115. 120.] G
```
then the absolute coordinates are
```
>>> print(x.absolute_coordinates[:5])
[100100. 100105. 100110. 100115. 100120.] G
```
For linear dimensions, the order of the absolute_coordinates further depends on the value of the
[`complex_fft`](#csdmpy.Dimension.complex_fft) attribute. For example, when the value of the complex_fft attribute is True,
the absolute coordinates are
```
>>> x.complex_fft = True
>>> print(x.absolute_coordinates[:5])
[100075. 100080. 100085. 100090. 100095.] G
```
Returns A Quantity array of absolute coordinates for quantitative dimensions, i.e., linear and monotonic.
Raises
**AttributeError** – For labeled dimensions.
count[¶](#csdmpy.Dimension.count)
Number of coordinates, \(N_k \ge 1\), along the dimension.
Example
```
>>> print(x.count)
10
>>> x.count = 5
```
Returns An Integer specifying the number of coordinates along the dimension.
Raises
**TypeError** – When the assigned value is not an integer.
increment[¶](#csdmpy.Dimension.increment)
Increment along a linear dimension.
The attribute is only valid for Dimension instances with the subtype linear. When assigning a value, the dimensionality of the value must be consistent with the dimensionality of other members specifying the dimension.
Example
```
>>> print(x.increment)
5.0 G
>>> x.increment = "0.1 G"
>>> print(x.coordinates)
[100. 100.1 100.2 100.3 100.4 100.5 100.6 100.7 100.8 100.9] G
```
Returns A Quantity instance with the increment along the dimension.
Raises
* **AttributeError** – For dimension with subtypes other than linear.
* **TypeError** – When the assigned value is not a string containing a quantity
or a Quantity object.
coordinates_offset[¶](#csdmpy.Dimension.coordinates_offset)
Offset corresponding to the zero of the indexes array, \(\mathbf{J}_k\).
When assigning a value, the dimensionality of the value must be consistent with the dimensionality of the other members specifying the dimension.
Example
```
>>> print(x.coordinates_offset)
10.0 mT
>>> x.coordinates_offset = "0 T"
>>> print(x.coordinates)
[ 0. 5. 10. 15. 20. 25. 30. 35. 40. 45.] G
```
The attribute is invalid for labeled dimensions.
Returns A Quantity instance with the coordinates offset.
Raises
* **AttributeError** – For labeled dimensions.
* **TypeError** – When the assigned value is not a string containing a quantity
or a Quantity object.
origin_offset[¶](#csdmpy.Dimension.origin_offset)
Origin offset, \(o_k\), along the dimension.
When assigning a value, the dimensionality of the value must be consistent with the dimensionality of other members specifying the dimension.
Example
```
>>> print(x.origin_offset)
10.0 T
>>> x.origin_offset = "1e5 G"
```
The origin offset only affects the absolute_coordinates along the dimension.
This attribute is invalid for labeled dimensions.
Returns A Quantity instance with the origin offset.
Raises
* **AttributeError** – For labeled dimensions.
* **TypeError** – When the assigned value is not a string containing a quantity
or a Quantity object.
complex_fft[¶](#csdmpy.Dimension.complex_fft)
If true, the coordinates are ordered as the output of a complex fft.
This attribute is only valid for the Dimension instances with linear subtype.
The value of this attribute is a boolean specifying if the coordinates along the dimension are evaluated as the output of a complex fast Fourier transform
(FFT) routine.
For example, consider the following Dimension object,
```
>>> test = Dimension(type="linear", increment="1", count=10)
>>> test.complex_fft
False
>>> print(test.coordinates)
[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
>>> test.complex_fft = True
>>> print(test.coordinates)
[-5. -4. -3. -2. -1. 0. 1. 2. 3. 4.]
```
Returns A Boolean.
Raises
**TypeError** – When the assigned value is not a boolean.
quantity_name[¶](#csdmpy.Dimension.quantity_name)
Quantity name associated with the physical quantities specifying the dimension.
The attribute is invalid for the labeled dimension.
```
>>> print(x.quantity_name)
magnetic flux density
```
Returns A string with the quantity name.
Raises
* **AttributeError** – For labeled dimensions.
* **NotImplementedError** – When assigning a value.
label[¶](#csdmpy.Dimension.label)
Label associated with the dimension.
Example
```
>>> print(x.label)
field strength
>>> x.label = 'magnetic field strength'
```
Returns A string containing the label.
Raises
**TypeError** – When the assigned value is not a string.
labels[¶](#csdmpy.Dimension.labels)
Ordered list of labels along the Labeled dimension.
Consider the following labeled dimension,
```
>>> x2 = Dimension(type="labeled", labels=["Cu", "Ag", "Au"])
```
then the labels along the labeled dimension are
```
>>> print(x2.labels)
['Cu' 'Ag' 'Au']
```
Note
For Labeled dimension, the [`coordinates`](#csdmpy.Dimension.coordinates)
attribute is an alias of [`labels`](#csdmpy.Dimension.labels)
attribute. For example,
```
>>> np.all(x2.coordinates == x2.labels)
True
```
In the above example, `x2` is an instance of the [Dimension](#dim-api) class with labeled subtype.
Returns A Numpy array with labels along the dimension.
Raises
**AttributeError** – For dimensions with subtype other than labeled.
period[¶](#csdmpy.Dimension.period)
Period of the dimension.
The default value of the period is infinity, i.e., the dimension is non-periodic.
Example
```
>>> print(x.period)
inf G
>>> x.period = '1 T'
```
To assign a dimension as non-periodic, one of the following may be used,
```
>>> x.period = "1/0 T"
>>> x.period = "infinity µT"
>>> x.period = "∞ G"
```
Attention
The physical quantity of the period must be consistent with other physical quantities specifying the dimension.
Returns A Quantity instance with the period of the dimension.
Raises
* **AttributeError** – For labeled dimensions.
* **TypeError** – When the assigned value is not a string containing a quantity
or a Quantity object.
axis_label[¶](#csdmpy.Dimension.axis_label)
Formatted string for displaying label along the dimension axis.
This attribute is not a part of the original core scientific dataset model; however, it is a convenient supplementary attribute that provides a formatted string ready for labeling dimension axes.
For quantitative dimensions, this attribute returns a string,
label / unit, if the label is a non-empty string, otherwise,
quantity_name / unit. Here
[`quantity_name`](#csdmpy.Dimension.quantity_name) and
[`label`](#csdmpy.Dimension.label) are the attributes of the
[Dimension](#dim-api) instances, and unit is the unit associated with the coordinates along the dimension. For example,
```
>>> x.label
'field strength'
>>> x.axis_label
'field strength / (G)'
```
For labeled dimensions, this attribute returns label.
Returns A formatted string of label.
Raises
**AttributeError** – When assigned a value.
data_structure[¶](#csdmpy.Dimension.data_structure)
JSON serialized string describing the Dimension class instance.
This supplementary attribute is useful for a quick preview of the dimension object. The attribute cannot be modified.
```
>>> print(x.data_structure)
{
"type": "linear",
"count": 10,
"increment": "5.0 G",
"coordinates_offset": "10.0 mT",
"origin_offset": "10.0 T",
"quantity_name": "magnetic flux density",
"label": "field strength",
"description": "This is a test",
"reciprocal": {
"quantity_name": "electrical mobility"
}
}
```
Returns A json serialized string of the dimension object.
Raises
**AttributeError** – When modified.
Method Documentation
to(*unit=''*, *equivalencies=None*)[[source]](_modules/csdmpy/dimension.html#Dimension.to)[¶](#csdmpy.Dimension.to)
Convert the coordinates along the dimension to the unit, unit.
This method is a wrapper of the to method from the
[Quantity](http://docs.astropy.org/en/stable/api/astropy.units.Quantity.html#astropy.units.Quantity.to) class and is only valid for physical dimensions.
Example
```
>>> print(x.coordinates)
[100. 105. 110. 115. 120. 125. 130. 135. 140. 145.] G
>>> x.to('mT')
>>> print(x.coordinates)
[10. 10.5 11. 11.5 12. 12.5 13. 13.5 14. 14.5] mT
```
Parameters
**unit** – A string containing a unit with the same dimensionality as the coordinates along the dimension.
Raises
**AttributeError** – For labeled dimensions.
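The optional equivalencies argument supports conversions beyond simple scale factors. For instance, per the v0.1.5 changelog entry later in this document, a frequency LinearDimension (here a hypothetical object named dimension) converts to an NMR dimensionless frequency ratio:
```
>>> dimension.to('ppm', 'nmr_frequency_ratio')
```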
dict()[[source]](_modules/csdmpy/dimension.html#Dimension.dict)[¶](#csdmpy.Dimension.dict)
Return Dimension object as a python dictionary.
Example
```
>>> x.dict()
{'type': 'linear', 'description': 'This is a test', 'count': 10,
'increment': '5.0 G', 'coordinates_offset': '10.0 mT',
'origin_offset': '10.0 T', 'quantity_name': 'magnetic flux density',
'label': 'field strength'}
```
to_dict()[[source]](_modules/csdmpy/dimension.html#Dimension.to_dict)[¶](#csdmpy.Dimension.to_dict)
Alias to the dict() method of the class.
is_quantitative()[[source]](_modules/csdmpy/dimension.html#Dimension.is_quantitative)[¶](#csdmpy.Dimension.is_quantitative)
Return True if the dimension is quantitative.
Example
```
>>> x.is_quantitative()
True
```
copy()[[source]](_modules/csdmpy/dimension.html#Dimension.copy)[¶](#csdmpy.Dimension.copy)
Return a copy of the Dimension object.
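A short sketch of the expected behavior; the equality comparison assumes the `__eq__` support noted in the v0.2.0 changelog:
```
>>> x_copy = x.copy()
>>> x_copy == x # equal attribute values (assumes Dimension implements __eq__)
True
>>> x_copy is x # but a distinct object
False
```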
reciprocal_coordinates()[[source]](_modules/csdmpy/dimension.html#Dimension.reciprocal_coordinates)[¶](#csdmpy.Dimension.reciprocal_coordinates)
Return reciprocal coordinates assuming Nyquist-Shannon theorem.
reciprocal_increment()[[source]](_modules/csdmpy/dimension.html#Dimension.reciprocal_increment)[¶](#csdmpy.Dimension.reciprocal_increment)
Return reciprocal increment assuming Nyquist-Shannon theorem.
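As a hedged sketch of the expected relation, a linear dimension with increment \(\Delta x\) and count \(N\) should have a reciprocal increment of \(1/(N \Delta x)\) under the Nyquist-Shannon theorem; the exact return value noted below is an assumption:
```
>>> dim = Dimension(type="linear", increment="1 s", count=10)
>>> dim.reciprocal_increment() # expected 1 / (10 * 1 s) = 0.1 Hz (assumption)
```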
### DependentVariable[¶](#dependentvariable)
*class* csdmpy.DependentVariable(**args*, ***kwargs*)[[source]](_modules/csdmpy/dependent_variable.html#DependentVariable)[¶](#csdmpy.DependentVariable)
Bases: `object`
Create an instance of the DependentVariable class.
The instance of this class represents a dependent variable, \(\mathbf{U}\).
A dependent variable holds \(p\)-component data values, where \(p>0\)
is an integer. For example, a scalar is single-component (\(p=1\)),
a vector may have up to n-components (\(p=n\)),
while a second-rank symmetric tensor has six unique components (\(p=6\)).
**Creating a new dependent variable.**
There are two ways of creating a new instance of a DependentVariable class.
*From a python dictionary containing valid keywords.*
```
>>> from csdmpy import DependentVariable
>>> import numpy as np
>>> numpy_array = np.arange(30).reshape(3, 10).astype(np.float32)
>>> dependent_variable_dictionary = {
... "type": "internal",
... "components": numpy_array,
... "name": "star",
... "unit": "W s",
... "quantity_name": "energy",
... "quantity_type": "pixel_3",
... }
>>> y = DependentVariable(dependent_variable_dictionary)
```
Here, dependent_variable_dictionary is the python dictionary.
*From valid keyword arguments.*
```
>>> y = DependentVariable(
... type="internal",
... name="star",
... unit="W s",
... quantity_type="pixel_3",
... components=numpy_array,
... )
```
Attributes Summary
| [`type`](#csdmpy.DependentVariable.type) | The dependent variable subtype. |
| [`description`](#csdmpy.DependentVariable.description) | Brief description of the dependent variables. |
| [`application`](#csdmpy.DependentVariable.application) | Application metadata of the DependentVariable object. |
| [`name`](#csdmpy.DependentVariable.name) | Name of the dependent variable. |
| [`unit`](#csdmpy.DependentVariable.unit) | Unit associated with the dependent variable. |
| [`quantity_name`](#csdmpy.DependentVariable.quantity_name) | Quantity name of physical quantities associated with the dependent variable. |
| [`encoding`](#csdmpy.DependentVariable.encoding) | The encoding method used in representing the dependent variable. |
| [`numeric_type`](#csdmpy.DependentVariable.numeric_type) | The numeric type of the component values from the dependent variable. |
| [`quantity_type`](#csdmpy.DependentVariable.quantity_type) | Quantity type of the dependent variable. |
| [`component_labels`](#csdmpy.DependentVariable.component_labels) | List of labels corresponding to the components of the dependent variable. |
| [`components`](#csdmpy.DependentVariable.components) | Component array of the dependent variable. |
| [`components_url`](#csdmpy.DependentVariable.components_url) | URL where the data components of the dependent variable are stored. |
| [`axis_label`](#csdmpy.DependentVariable.axis_label) | List of formatted string labels for each component of the dependent variable. |
| [`data_structure`](#csdmpy.DependentVariable.data_structure) | Json serialized string describing the DependentVariable class instance. |
Methods Summary
| [`to`](#csdmpy.DependentVariable.to) | Convert the unit of the dependent variable to the given unit. |
| [`dict`](#csdmpy.DependentVariable.dict) | Return DependentVariable object as a python dictionary. |
| [`to_dict`](#csdmpy.DependentVariable.to_dict) | Alias to the dict() method of the class. |
| [`copy`](#csdmpy.DependentVariable.copy) | Return a copy of the DependentVariable object. |
Attributes Documentation
type[¶](#csdmpy.DependentVariable.type)
The dependent variable subtype.
There are two *valid* subtypes of the DependentVariable class with the following enumeration literals,
`internal`
`external`
corresponding to the Internal and External subclasses. By default, all instances of the DependentVariable class are assigned as internal upon import. The user may update the value of this attribute, at any time, with a string containing a valid type literal, for example,
```
>>> print(y.type)
internal
>>> y.type = "external"
```
When type is external, the data values from the corresponding dependent variable are serialized to an external file within the same directory as the
.csdfe file.
Returns A string with a valid dependent variable subtype.
Raises
**ValueError** – When an invalid value is assigned.
description[¶](#csdmpy.DependentVariable.description)
Brief description of the dependent variables.
The default value is an empty string, ‘’.
```
>>> print(y.description)
A test image
>>> y.description = "A test pixel_3 image"
>>> print(y.description)
A test pixel_3 image
```
Returns A string of UTF-8 allowed characters describing the dependent variable.
Raises
**TypeError** – When the assigned value is not a string.
application[¶](#csdmpy.DependentVariable.application)
Application metadata of the DependentVariable object.
```
>>> print(y.application)
None
```
The application attribute is where an application can place its own application-specific metadata as a python dictionary object, using a reverse domain name notation string as the attribute key, for example,
```
>>> y.application = {"com.example.myApp": {"myApp_key": "myApp_metadata"}}
>>> print(y.application)
{'com.example.myApp': {'myApp_key': 'myApp_metadata'}}
```
Please refer to the Core Scientific Dataset Model article for details.
Returns A python dictionary containing dependent variable application metadata.
name[¶](#csdmpy.DependentVariable.name)
Name of the dependent variable.
```
>>> y.name
'star'
>>> y.name = "rock star"
```
Returns A string containing the name of the dependent variable.
Raises
**TypeError** – When the assigned value is not a string.
unit[¶](#csdmpy.DependentVariable.unit)
Unit associated with the dependent variable.
Note
The attribute cannot be modified. To convert the unit, use the
[`to()`](#csdmpy.DependentVariable.to) method of the class instance.
```
>>> y.unit Unit("s W")
```
Returns A Unit object from astropy.unit package.
Raises
**AttributeError** – When assigned a value.
quantity_name[¶](#csdmpy.DependentVariable.quantity_name)
Quantity name of physical quantities associated with the dependent variable.
```
>>> y.quantity_name
'energy'
```
Returns A string with the quantity name associated with the physical quantities of the dependent variable.
Raises
**NotImplementedError** – When assigning a value.
encoding[¶](#csdmpy.DependentVariable.encoding)
The encoding method used in representing the dependent variable.
The value of this attribute determines the method used when serializing or deserializing the data values to and from the file. Currently, there are three valid encoding methods:
`raw`
`base64`
`none`
A value, raw, means that the data values are serialized as binary data.
The value, base64, implies that the data values are serialized as base64 strings, while the value none refers to text-based serialization.
By default, the encoding attribute of all dependent variable objects is set to base64 after import. The user may update this attribute, at any time, with a string containing a *valid* encoding literal, for example,
```
>>> y.encoding = "base64"
```
The value of this attribute will be used in serializing the data to the file,
when using the [`save()`](index.html#csdmpy.CSDM.save) method.
Returns A string with a valid encoding type.
Raises
**ValueError** – If an invalid encoding value is assigned.
numeric_type[¶](#csdmpy.DependentVariable.numeric_type)
The numeric type of the component values from the dependent variable.
There are currently twelve *valid* numeric types in the core scientific dataset model.
| `uint8` | `int8` | `float32` | `complex64` |
| `uint16` | `int16` | `float64` | `complex128` |
| `uint32` | `int32` | | |
| `uint64` | `int64` | | |
In addition, csdmpy accepts any valid type object, such as int, float, or
np.complex64, as long as the type is consistent with the above twelve entries.
When assigning a valid value, this attribute updates the dtype of the Numpy array from the corresponding [`components`](#csdmpy.DependentVariable.components)
attribute.
```
>>> y.numeric_type
'float32'
>>> print(y.components)
[[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
[10. 11. 12. 13. 14. 15. 16. 17. 18. 19.]
[20. 21. 22. 23. 24. 25. 26. 27. 28. 29.]]
>>> y.numeric_type = "complex64"
>>> print(y.components[:, :5])
[[ 0.+0.j 1.+0.j 2.+0.j 3.+0.j 4.+0.j]
[10.+0.j 11.+0.j 12.+0.j 13.+0.j 14.+0.j]
[20.+0.j 21.+0.j 22.+0.j 23.+0.j 24.+0.j]]
>>> y.numeric_type = float # python type object
>>> print(y.components[:, :5])
[[ 0. 1. 2. 3. 4.]
[10. 11. 12. 13. 14.]
[20. 21. 22. 23. 24.]]
```
Returns A string with a valid numeric type.
Raises
**ValueError** – If an invalid numeric type value is assigned.
quantity_type[¶](#csdmpy.DependentVariable.quantity_type)
Quantity type of the dependent variable.
There are currently five *valid* quantity types,
`scalar`
`vector_n`
`pixel_n`
`matrix_n_m`
`symmetric_matrix_n`
where n and m are integers. The value of the attribute is modified with a string containing a *valid* quantity type.
```
>>> y.quantity_type
'pixel_3'
>>> y.quantity_type = "vector_3"
```
Returns A string with a valid quantity type.
Raises
**ValueError** – If an invalid value is assigned.
component_labels[¶](#csdmpy.DependentVariable.component_labels)
List of labels corresponding to the components of the dependent variable.
```
>>> y.component_labels
['', '', '']
```
To update the component_labels, assign an array of strings with the same number of elements as the number of components.
```
>>> y.component_labels = ["channel 0", "channel 1", "channel 2"]
```
The individual labels are accessed with proper indexing, for example,
```
>>> y.component_labels[2]
'channel 2'
```
Returns A list of component label strings.
Raises
**TypeError** – When the assigned value is not an array of strings.
components[¶](#csdmpy.DependentVariable.components)
Component array of the dependent variable.
The value of this attribute, \(\mathbb{U}\), is a Numpy array of shape \((p \times N_{d-1} \times ... N_1 \times N_0)\) where
\(p\) is the number of components, and \(N_k\) is the number of points from the \(k^\mathrm{th}\) [Dimension](index.html#dim-api) object.
Note
The shape of the components Numpy array,
\((p \times N_{d-1} \times ... N_1 \times N_0)\), is the reverse of the shape of the components array,
\((N_0 \times N_1 \times ... N_{d-1} \times p)\), from the CSD model.
This is because the CSD model utilizes a column-major order to shape the components array relative to the order of the dimensions, while Numpy utilizes a row-major order.
The dimensionality of this Numpy array is \(d+1\), where \(d\)
is the number of dimension objects; the zeroth axis, with \(p\) points, holds the components.
This attribute can only be updated when the shape of the new array is the same as the shape of the components array.
For example,
```
>>> print(y.components.shape)
(3, 10)
>>> y.numeric_type
'float32'
```
is a three-component dependent variable with ten data values per component. The numeric type of the data values, in this example, is float32. To update the components array, assign an array of shape (3, 10) to the components attribute. In the following example,
we assign a Numpy array,
```
>>> y.components = np.linspace(0, 256, 30, dtype="u1").reshape(3, 10)
>>> y.numeric_type
'uint8'
```
Notice that the value of the numeric_type attribute is automatically updated based on the dtype of the Numpy array; in this case, from
*float32* to *uint8*.
In this other example,
```
>>> try:
... y.components = np.random.rand(1,10).astype('u1')
... except ValueError as e:
... print(e)
The shape of the `ndarray`, `(1, 10)`, is inconsistent with the shape of the components array, `(3, 10)`.
```
a ValueError is raised because the shape of the input array (1, 10)
is not consistent with the shape of the components array, (3, 10).
Returns A Numpy array of components.
Raises
**ValueError** – When assigning an array whose shape is not consistent with
the shape of the components array.
components_url[¶](#csdmpy.DependentVariable.components_url)
URL where the data components of the dependent variable are stored.
This attribute is only informative and cannot be modified. Its value is a string containing the local or remote address of the file where the data values are stored. The attribute is only valid for dependent variables with type external.
Returns A string containing the URL.
Raises
**AttributeError** – When assigned a value.
axis_label[¶](#csdmpy.DependentVariable.axis_label)
List of formatted string labels for each component of the dependent variable.
This attribute is not a part of the original core scientific dataset model; however, it is a convenient supplementary attribute that provides formatted strings ready for labeling the components of the dependent variable.
The string at index i is formatted as component_labels[i] / unit if component_labels[i] is a non-empty string, otherwise, quantity_name / unit.
Here, quantity_name, component_labels, and unit are attributes of the DependentVariable instance. For example,
```
>>> y.axis_label
['energy / (s W)', 'energy / (s W)', 'energy / (s W)']
```
Returns A list of formatted component label strings.
Raises
**AttributeError** – When assigned a value.
data_structure[¶](#csdmpy.DependentVariable.data_structure)
Json serialized string describing the DependentVariable class instance.
This supplementary attribute is useful for a quick preview of the dependent variable object. For convenience, the values from the components attribute are truncated to the first and the last two numbers per component.
The encoding keyword is also hidden from this view.
```
>>> print(y.data_structure)
{
"type": "internal",
"description": "A test image",
"name": "star",
"unit": "s * W",
"quantity_name": "energy",
"numeric_type": "float32",
"quantity_type": "pixel_3",
"components": [
[
"0.0, 1.0, ..., 8.0, 9.0"
],
[
"10.0, 11.0, ..., 18.0, 19.0"
],
[
"20.0, 21.0, ..., 28.0, 29.0"
]
]
}
```
Returns A json serialized string of the dependent variable object.
Raises
**AttributeError** – When modified.
Method Documentation
to(*unit*)[[source]](_modules/csdmpy/dependent_variable.html#DependentVariable.to)[¶](#csdmpy.DependentVariable.to)
Convert the unit of the dependent variable to the given unit.
Parameters
**unit** – A string containing a unit with the same dimensionality as the components of the dependent variable.
```
>>> y.unit Unit("s W")
>>> print(y.components[0, 5])
5.0
>>> y.to("mJ")
>>> y.unit Unit("mJ")
>>> print(y.components[0, 5])
5000.0
```
Note
This method is a wrapper of the to method from the [Quantity](http://docs.astropy.org/en/stable/api/astropy.units.Quantity.html#astropy.units.Quantity.to) class.
dict()[[source]](_modules/csdmpy/dependent_variable.html#DependentVariable.dict)[¶](#csdmpy.DependentVariable.dict)
Return DependentVariable object as a python dictionary.
Example
```
>>> y.dict()
{'type': 'internal', 'description': 'A test image', 'name': 'star',
'unit': 's * W', 'quantity_name': 'energy', 'encoding': 'none',
'numeric_type': 'float32', 'quantity_type': 'pixel_3',
'components': [[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0],
[10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0],
[20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0]]}
```
to_dict()[[source]](_modules/csdmpy/dependent_variable.html#DependentVariable.to_dict)[¶](#csdmpy.DependentVariable.to_dict)
Alias to the dict() method of the class.
copy()[[source]](_modules/csdmpy/dependent_variable.html#DependentVariable.copy)[¶](#csdmpy.DependentVariable.copy)
Return a copy of the DependentVariable object.
### Statistics[¶](#statistics)
Methods Summary
| [`integral`](#csdmpy.statistics.integral) | Evaluate the integral of the dependent variables over all dimensions. |
| [`mean`](#csdmpy.statistics.mean) | Evaluate the mean coordinate of a dependent variable along each dimension. |
| [`var`](#csdmpy.statistics.var) | Evaluate the variance of the dependent variables along each dimension. |
| [`std`](#csdmpy.statistics.std) | Evaluate the standard deviation of the dependent variables along each dimension. |
Method Documentation
csdmpy.statistics.integral(*csdm*)[[source]](_modules/csdmpy/statistics.html#integral)[¶](#csdmpy.statistics.integral)
Evaluate the integral of the dependent variables over all dimensions.
Parameters
**csdm** – A csdm object.
Returns A list of integrals corresponding to the list of the dependent variables. If only one dependent variable is present, return a quantity instead.
Example
```
>>> import csdmpy.statistics as stat
>>> x = np.arange(100) * 2 - 100.0
>>> gauss = np.exp(-((x - 5.) ** 2) / (2 * 4. ** 2))
>>> csdm = cp.as_csdm(gauss, unit='T')
>>> csdm.dimensions[0] = cp.as_dimension(x, unit="m")
>>> stat.integral(csdm)
<Quantity 10.0265131 m T>
```
csdmpy.statistics.mean(*csdm*)[[source]](_modules/csdmpy/statistics.html#mean)[¶](#csdmpy.statistics.mean)
Evaluate the mean coordinate of a dependent variable along each dimension.
Parameters
**csdm** – A csdm object.
Returns A list of tuples, where each tuple represents the mean coordinates of the dependent variables. If only one dependent variable is present, return a tuple of coordinates instead.
Example
```
>>> stat.mean(csdm)
(<Quantity 5. m>,)
```
csdmpy.statistics.var(*csdm*)[[source]](_modules/csdmpy/statistics.html#var)[¶](#csdmpy.statistics.var)
Evaluate the variance of the dependent variables along each dimension.
Parameters
**csdm** – A csdm object.
Returns A list of tuples, where each tuple is the variance along the dimensions of the dependent variables. If only one dependent variable is present, return a tuple instead.
Example
```
>>> stat.var(csdm)
(<Quantity 16. m2>,)
```
csdmpy.statistics.std(*csdm*)[[source]](_modules/csdmpy/statistics.html#std)[¶](#csdmpy.statistics.std)
Evaluate the standard deviation of the dependent variables along each dimension.
Parameters
**csdm** – A csdm object.
Returns A list of tuples, where each tuple is the standard deviation along the dimensions of the dependent variables. If only one dependent variable is present, return a tuple instead.
Example
```
>>> stat.std(csdm)
(<Quantity 4. m>,)
```
### CSDMAxes[¶](#csdmaxes)
*class* csdmpy.helper_functions.CSDMAxes(*fig*, *rect*, *facecolor=None*, *frameon=True*, *sharex=None*, *sharey=None*, *label=''*, *xscale=None*, *yscale=None*, *box_aspect=None*, ***kwargs*)[[source]](_modules/csdmpy/helper_functions.html#CSDMAxes)[¶](#csdmpy.helper_functions.CSDMAxes)
Bases: [`Axes`](https://matplotlib.org/stable/api/axes_api.html#matplotlib.axes.Axes)
A custom CSDM data plot axes.
Methods Summary
| [`plot`](#csdmpy.helper_functions.CSDMAxes.plot) | Generate a figure axes using the plot method from the matplotlib library. |
| [`scatter`](#csdmpy.helper_functions.CSDMAxes.scatter) | Generate a figure axes using the scatter method from the matplotlib library. |
| [`imshow`](#csdmpy.helper_functions.CSDMAxes.imshow) | Generate a figure axes using the imshow method from the matplotlib library. |
| [`contour`](#csdmpy.helper_functions.CSDMAxes.contour) | Generate a figure axes using the contour method from the matplotlib library. |
| [`contourf`](#csdmpy.helper_functions.CSDMAxes.contourf) | Generate a figure axes using the contourf method from the matplotlib library. |
Method Documentation
plot(*csdm*, **args*, ***kwargs*)[[source]](_modules/csdmpy/helper_functions.html#CSDMAxes.plot)[¶](#csdmpy.helper_functions.CSDMAxes.plot)
Generate a figure axes using the plot method from the matplotlib library.
Apply to all 1D datasets with single-component dependent-variables. For multiple dependent variables, the data from individual dependent-variables is plotted on the same figure.
Parameters
* **csdm** – A CSDM object of a one-dimensional dataset.
* **kwargs** – Additional keyword arguments for the matplotlib plot() method.
Example
```
>>> ax = plt.subplot(projection='csdm')
>>> ax.plot(csdm_object)
>>> plt.show()
```
scatter(*csdm*, **args*, ***kwargs*)[[source]](_modules/csdmpy/helper_functions.html#CSDMAxes.scatter)[¶](#csdmpy.helper_functions.CSDMAxes.scatter)
Generate a figure axes using the scatter method from the matplotlib library.
Apply to all 1D datasets with single-component dependent-variables. For multiple dependent variables, the data from individual dependent-variables is plotted on the same figure.
Parameters
* **csdm** – A CSDM object of a one-dimensional dataset.
* **kwargs** – Additional keyword arguments for the matplotlib plot() method.
Example
```
>>> ax = plt.subplot(projection='csdm')
>>> ax.scatter(csdm_object)
>>> plt.show()
```
imshow(*csdm*, *origin='lower'*, **args*, ***kwargs*)[[source]](_modules/csdmpy/helper_functions.html#CSDMAxes.imshow)[¶](#csdmpy.helper_functions.CSDMAxes.imshow)
Generate a figure axes using the imshow method from the matplotlib library.
Apply to all 2D datasets with either single-component (scalar),
three-components (pixel_3), or four-components (pixel_4) dependent-variables.
For single-component (scalar) dependent-variable, a colormap image is produced.
For three-components (pixel_3) dependent-variable, an RGB image is produced.
For four-components (pixel_4) dependent-variable, an RGBA image is produced.
For multiple dependent variables, the data from individual dependent-variables is plotted on the same figure.
Parameters
* **csdm** – A CSDM object of a two-dimensional dataset with scalar, pixel_3, or pixel_4 quantity_type dependent variable.
* **origin** – The matplotlib origin argument. In matplotlib, the default is
‘upper’. In csdmpy, however, the default is ‘lower’.
* **kwargs** – Additional keyword arguments for the matplotlib imshow() method.
Example
```
>>> ax = plt.subplot(projection='csdm')
>>> ax.imshow(csdm_object)
>>> plt.show()
```
contour(*csdm*, **args*, ***kwargs*)[[source]](_modules/csdmpy/helper_functions.html#CSDMAxes.contour)[¶](#csdmpy.helper_functions.CSDMAxes.contour)
Generate a figure axes using the contour method from the matplotlib library.
Apply to all 2D datasets with single-component (scalar) dependent-variables.
For multiple dependent variables, the data from individual dependent-variables is plotted on the same figure.
Parameters
* **csdm** – A CSDM object of a two-dimensional dataset with scalar dependent variable.
* **kwargs** – Additional keyword arguments for the matplotlib contour() method.
Example
```
>>> ax = plt.subplot(projection='csdm')
>>> ax.contour(csdm_object)
>>> plt.show()
```
contourf(*csdm*, **args*, ***kwargs*)[[source]](_modules/csdmpy/helper_functions.html#CSDMAxes.contourf)[¶](#csdmpy.helper_functions.CSDMAxes.contourf)
Generate a figure axes using the contourf method from the matplotlib library.
Apply to all 2D datasets with single-component (scalar) dependent-variables.
For multiple dependent variables, the data from individual dependent-variables is plotted on the same figure.
Parameters
* **csdm** – A CSDM object of a two-dimensional dataset with scalar dependent variable.
* **kwargs** – Additional keyword arguments for the matplotlib contourf() method.
Example
```
>>> ax = plt.subplot(projection='csdm')
>>> ax.contourf(csdm_object)
>>> plt.show()
```
### Numpy methods[¶](#numpy-methods)
#### Supported NumPy functions[¶](#supported-numpy-functions)
The csdm object supports the use of NumPy functions, as
```
>>> y = np.func(x)
```
where `x` and `y` are the csdm objects, and `func` is any one of the following functions. These functions apply to each component of the dependent variables from a given csdm object, x.
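For instance, a minimal sketch with a dimensionless dataset, reusing the cp.as_csdm helper that appears in the statistics examples earlier in this section:
```
>>> import numpy as np
>>> import csdmpy as cp
>>> x = cp.as_csdm(np.linspace(0, np.pi, 5)) # dimensionless components
>>> y = np.sin(x) # a new csdm object holding the sine of each component
```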
Trigonometric functions
The trigonometric functions apply to the components of the dependent variables from a csdm object.
Note
The components must be dimensionless quantities.
A list of supported trigonometric functions.[¶](#id1)
| Functions | Description |
| --- | --- |
| [sin](https://docs.scipy.org/doc/numpy/reference/generated/numpy.sin.html#numpy.sin) | Apply sine to the components of the dependent variables |
| [cos](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cos.html#numpy.cos) | Apply cosine to the components of the dependent variables |
| [tan](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tan.html#numpy.tan) | Apply tangent to the components of the dependent variables |
| [arcsin](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arcsin.html#numpy.arcsin) | Apply inverse sine to the components of the dependent variables |
| [arccos](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arccos.html#numpy.arccos) | Apply inverse cosine to the components of the dependent variables |
| [arctan](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arctan.html#numpy.arctan) | Apply inverse tangent to the components of the dependent variables |
| [sinh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.sinh.html#numpy.sinh) | Apply hyperbolic sine to the components of the dependent variables |
| [cosh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cosh.html#numpy.cosh) | Apply hyperbolic cosine to the components of the dependent variables |
| [tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html#numpy.tanh) | Apply hyperbolic tangent to the components of the dependent variables |
| [arcsinh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arcsinh.html#numpy.arcsinh) | Apply inverse hyperbolic sine to the components of the dependent variables |
| [arccosh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arccosh.html#numpy.arccosh) | Apply inverse hyperbolic cosine to the components of the dependent variables |
| [arctanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arctanh.html#numpy.arctanh) | Apply inverse hyperbolic tangent to the components of the dependent variables |
Mathematical operations
The following mathematical functions apply to the components of the dependent variables from a csdm object.
Note
The components must be dimensionless quantities.
A list of supported mathematical functions.[¶](#id2)
| Functions | Description |
| --- | --- |
| [exp](https://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html#numpy.exp) | Calculate the exponential of the components of the dependent variables. |
| [expm1](https://docs.scipy.org/doc/numpy/reference/generated/numpy.expm1.html#numpy.expm1) | Apply \(e^x - 1\), where x are the components of the dependent variables. |
| [exp2](https://docs.scipy.org/doc/numpy/reference/generated/numpy.exp2.html#numpy.exp2) | Calculate \(2^x\), where x are the components of the dependent variables. |
| [log](https://docs.scipy.org/doc/numpy/reference/generated/numpy.log.html#numpy.log) | Calculate natural logarithm of the components of the dependent variables. |
| [log1p](https://docs.scipy.org/doc/numpy/reference/generated/numpy.log1p.html#numpy.log1p) | Calculate natural logarithm plus one on the components of the dependent variables. |
| [log2](https://docs.scipy.org/doc/numpy/reference/generated/numpy.log2.html#numpy.log2) | Calculate base-2 logarithm of the components of the dependent variables. |
| [log10](https://docs.scipy.org/doc/numpy/reference/generated/numpy.log10.html#numpy.log10) | Calculate base-10 logarithm of the components of the dependent variables. |
The following mathematical functions apply to the components of the dependent variables from a csdm object irrespective of the components’ dimensionality.
Arithmetic operations[¶](#id3)
| Functions | Description |
| --- | --- |
| [reciprocal](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reciprocal.html#numpy.reciprocal) | Return element-wise reciprocal. |
| [positive](https://docs.scipy.org/doc/numpy/reference/generated/numpy.positive.html#numpy.positive) | Return element-wise numerical positive. |
| [negative](https://docs.scipy.org/doc/numpy/reference/generated/numpy.negative.html#numpy.negative) | Return element-wise numerical negative. |
Miscellaneous[¶](#id4)
| Functions | Description |
| --- | --- |
| [sqrt](https://docs.scipy.org/doc/numpy/reference/generated/numpy.sqrt.html#numpy.sqrt) | Return element-wise non-negative square-root. |
| [cbrt](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cbrt.html#numpy.cbrt) | Return element-wise cube-root. |
| [square](https://docs.scipy.org/doc/numpy/reference/generated/numpy.square.html#numpy.square) | Return element-wise square. |
| [absolute](https://docs.scipy.org/doc/numpy/reference/generated/numpy.absolute.html#numpy.absolute) | Return element-wise absolute value. |
| [fabs](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fabs.html#numpy.fabs) | Return element-wise absolute value. |
| [sign](https://docs.scipy.org/doc/numpy/reference/generated/numpy.sign.html#numpy.sign) | Return element-wise sign of the values. |
Handling complex numbers[¶](#id5)
| Functions | Description |
| --- | --- |
| [angle](https://docs.scipy.org/doc/numpy/reference/generated/numpy.angle.html#numpy.angle) | Return element-wise angle of a complex value. |
| [real](https://docs.scipy.org/doc/numpy/reference/generated/numpy.real.html#numpy.real) | Return element-wise real part of a complex value. |
| [imag](https://docs.scipy.org/doc/numpy/reference/generated/numpy.imag.html#numpy.imag) | Return element-wise imaginary part of a complex value. |
| [conj](https://docs.scipy.org/doc/numpy/reference/generated/numpy.conj.html#numpy.conj) | Return element-wise conjugate. |
| [conjugate](https://docs.scipy.org/doc/numpy/reference/generated/numpy.conjugate.html#numpy.conjugate) | Return element-wise conjugate. |
Sums, products, differences[¶](#id6)
| Functions | Description |
| --- | --- |
| [prod](https://docs.scipy.org/doc/numpy/reference/generated/numpy.prod.html#numpy.prod) | Return the product of the components of a dependent variable along a dimension. |
| [sum](https://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html#numpy.sum) | Return the sum of the components of a dependent variable along a dimension. |
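For example, a hedged sketch of a reduction on a hypothetical two-dimensional dataset, assuming np.sum dispatches to the CSDM reduction listed above:
```
>>> csdm_2d = cp.as_csdm(np.ones((4, 6))) # hypothetical 2D dataset
>>> projection = np.sum(csdm_2d, axis=0) # csdm object with one fewer dimension
```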
Rounding[¶](#id7)
| Functions | Description |
| --- | --- |
| [rint](https://docs.scipy.org/doc/numpy/reference/generated/numpy.rint.html#numpy.rint) | Round elements to the nearest integer. |
| [around](https://docs.scipy.org/doc/numpy/reference/generated/numpy.around.html#numpy.around) | Round elements to the given number of decimals. |
| [round](https://docs.scipy.org/doc/numpy/reference/generated/numpy.round_.html#numpy.round_) | Round elements to the given number of decimals. |
Other functions
* min
* max
* mean
* var
* std
#### Dimension specific Apodization methods[¶](#dimension-specific-apodization-methods)
The following methods of the form
\[y = f(a x),\]
where \(a\) is the function argument, and \(x\) are the coordinates along the dimension, apodize the components of the dependent variables along the respective dimensions. The dimensionality of \(a\) must be the reciprocal of that of \(x\).
The resulting CSDM object has the same number of dimensions as the original object.
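As an illustration, a minimal sketch of exponential apodization along a time dimension; the argument \(a\) carries the reciprocal dimensionality (Hz versus s), and the constructed objects here are assumptions for demonstration:
```
>>> import csdmpy as cp
>>> import numpy as np
>>> csdm = cp.as_csdm(np.ones(10)) # hypothetical one-dimensional dataset
>>> csdm.dimensions[0] = cp.as_dimension(np.arange(10), unit="s")
>>> apodized = cp.apodize.exp(csdm, arg="-0.1 Hz", dimension=0)
```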
Method Summary
| [`sin`](#csdmpy.apodize.sin)(csdm, arg[, dimension]) | Apodize the components along the dimension with \(\sin(a x)\). |
| [`cos`](#csdmpy.apodize.cos)(csdm, arg[, dimension]) | Apodize the components along the dimension with \(\cos(a x)\). |
| [`tan`](#csdmpy.apodize.tan)(csdm, arg[, dimension]) | Apodize the components along the dimension with \(\tan(a x)\). |
| [`arcsin`](#csdmpy.apodize.arcsin)(csdm, arg[, dimension]) | Apodize the components along the dimension with \(\arcsin(a x)\). |
| [`arccos`](#csdmpy.apodize.arccos)(csdm, arg[, dimension]) | Apodize the components along the dimension with \(\arccos(a x)\). |
| [`arctan`](#csdmpy.apodize.arctan)(csdm, arg[, dimension]) | Apodize the components along the dimension with \(\arctan(a x)\). |
| [`exp`](#csdmpy.apodize.exp)(csdm, arg[, dimension]) | Apodize the components along the dimension with \(\exp(a x)\). |
Method Documentation
csdmpy.apodize.sin(*csdm*, *arg*, *dimension=0*)[¶](#csdmpy.apodize.sin)
Apodize the components along the dimension with \(\sin(a x)\).
Parameters
* **csdm** – A CSDM object.
* **arg** – String or Quantity object. The function argument \(a\).
* **dimension** – An integer or tuple of m integers corresponding to the index/indices of the dimensions along which the sine of the dependent variable components is performed.
Returns A CSDM object with the same number of dimensions as the original csdm object.
csdmpy.apodize.cos(*csdm*, *arg*, *dimension=0*)[¶](#csdmpy.apodize.cos)
Apodize the components along the dimension with \(\cos(a x)\).
Parameters
* **csdm** – A CSDM object.
* **arg** – String or Quantity object. The function argument \(a\).
* **dimension** – An integer or tuple of m integers corresponding to the index/indices of the dimensions along which the cosine of the dependent variable components is performed.
Returns A CSDM object with the same number of dimensions as the original csdm object.
csdmpy.apodize.tan(*csdm*, *arg*, *dimension=0*)[¶](#csdmpy.apodize.tan)
Apodize the components along the dimension with \(\tan(a x)\).
Parameters
* **csdm** – A CSDM object.
* **arg** – String or Quantity object. The function argument \(a\).
* **dimension** – An integer or tuple of m integers corresponding to the index/indices of the dimensions along which the tangent of the dependent variable components is performed.
Returns A CSDM object with the same number of dimensions as the original csdm object.
csdmpy.apodize.arcsin(*csdm*, *arg*, *dimension=0*)[¶](#csdmpy.apodize.arcsin)
Apodize the components along the dimension with \(\arcsin(a x)\).
Parameters
* **csdm** – A CSDM object.
* **arg** – String or Quantity object. The function argument \(a\).
* **dimension** – An integer or tuple of m integers corresponding to the index/indices of the dimensions along which the inverse sine of the dependent variable components is performed.
Returns A CSDM object with the same number of dimensions as the original csdm object.
csdmpy.apodize.arccos(*csdm*, *arg*, *dimension=0*)[¶](#csdmpy.apodize.arccos)
Apodize the components along the dimension with \(\arccos(a x)\).
Parameters
* **csdm** – A CSDM object.
* **arg** – String or Quantity object. The function argument \(a\).
* **dimension** – An integer or tuple of m integers corresponding to the index/indices of the dimensions along which the inverse cosine of the dependent variable components is performed.
Returns A CSDM object with the same number of dimensions as the original csdm object.
csdmpy.apodize.arctan(*csdm*, *arg*, *dimension=0*)[¶](#csdmpy.apodize.arctan)
Apodize the components along the dimension with \(\arctan(a x)\).
Parameters
* **csdm** – A CSDM object.
* **arg** – String or Quantity object. The function argument \(a\).
* **dimension** – An integer or tuple of m integers corresponding to the index/indices of the dimensions along which the inverse tangent of the dependent variable components is performed.
Returns A CSDM object with the same number of dimensions as the original csdm object.
csdmpy.apodize.exp(*csdm*, *arg*, *dimension=0*)[¶](#csdmpy.apodize.exp)
Apodize the components along the dimension with \(\exp(a x)\).
Parameters
* **csdm** – A CSDM object.
* **arg** – String or Quantity object. The function argument \(a\).
* **dimension** – An integer or tuple of m integers corresponding to the index/indices of the dimensions along which the exp of the dependent variable components is performed.
Returns A CSDM object with the same number of dimensions as the original csdm object.
Changelog[¶](#changelog)
---
### v0.5.0[¶](#v0-5-0)
#### What’s new[¶](#what-s-new)
* Add support for `np.cumsum`, `np.cumprod`, `np.argmin`, `np.argmax` functions to CSDM objects.
#### Bugfix[¶](#bugfix)
* Bugfix involving plot of datasets with dependent-variable quantity type of vector_1 or pixel_1.
* Bugfix when assigning DimensionList/DependentVariableList to the CSDM dimensions and dependent_variables attribute #45
* Bugfix in CSDM object serializing when using Astropy.units v4.0 and higher. #44
* Bugfix for incorrect class name. #39
#### Deprecated[¶](#deprecated)
* add_x, add_y functions are removed.
### v0.4.1[¶](#v0-4-1)
Patch update for the CSDM dimension’s `quantity_name` attribute value when using units from astropy>=4.3.
### v0.4[¶](#v0-4)
#### What’s new[¶](#id1)
* The `add_dimension` and `add_dependent_variable` methods from the CSDM class are deprecated.
#### Bugfix[¶](#id2)
* Fixed error in calculating the nmr dimensionless frequency ratio (ppm) when dimension.complex_fft=False
### v0.3.5[¶](#v0-3-5)
* Fix the missing library error from pip installation.
### v0.3.4[¶](#v0-3-4)
#### Changes[¶](#changes)
* Image and Contour plots of csdm objects no longer draw a colorbar. A colorbar can be requested separately using plt.colorbar().
### v0.3.3[¶](#v0-3-3)
#### What’s new![¶](#id3)
* Add `size` method to the CSDM object.
* Added aliases for the csdm keywords that are short and easy for coding. The following is the list of aliases:
+ dependent_variables -> y
+ dimensions -> x
+ add_dependent_variable -> add_y
+ add_dimension -> add_x
+ coordinates -> coords
#### Bug fixes[¶](#bug-fixes)
* Fixed bug causing a false error when reading sparse datasets.
### v0.3.2[¶](#v0-3-2)
#### Bug fixes[¶](#id4)
* Bugfix in fft method when applied to multi-dimensional CSDM objects.
* Added new tutorial examples.
### v0.3.1[¶](#v0-3-1)
#### Bug fixes[¶](#id5)
* Bugfix regarding the phase multiplier for the `CSDM.fft()` methods where an incorrect phase was multiplied to the signal vector.
### v0.3.0[¶](#v0-3-0)
#### What’s new![¶](#id6)
* Support for `matplotlib.pyplot` functions from `CSDM` objects.
+ `plot`,
+ `scatter`,
+ `imshow`,
+ `contour`, and
+ `contourf`
Now you can directly plot CSDM objects as an argument to the above matplotlib methods.
### v0.2.2[¶](#v0-2-2)
#### Bug fixes[¶](#id7)
* Fixed bug where the metadata from the `csdm.application` key was not serialized to the file when using `csdm.save()` method.
* Fixed a bug where the transpose of a CSDM object failed to retain the quantity_type information after the transpose.
#### Other changes[¶](#other-changes)
* Add a new diffusion tensor MRI dataset to the example gallery.
* Added `dict()` as an alias to the `to_dict()` method for all objects.
* Added an alias of the `cp.plot()` function to the CSDM object as the
`plot()` method.
### v0.2.1[¶](#v0-2-1)
#### What’s new![¶](#id8)
* Add `reciprocal_coordinates()` and `reciprocal_increment()` methods to the LinearDimension class.
* Added `fft()` function to the CSDM class.
* Added `transpose()` method to the CSDM class.
### v0.2.0[¶](#v0-2-0)
#### What’s new![¶](#id9)
* Added following methods to the `CSDM` class:
+ `__eq__()` for all classes
+ `__add__()` = Adds two csdm objects.
+ `__iadd__()` = Adds two csdm objects in-place.
+ `__sub__()` = Subtracts two csdm objects.
+ `__isub__()` = Subtracts two csdm objects in-place.
+ `__mul__()` = Multiplies the components of the csdm object by a scalar.
+ `__imul__()` = Multiplies the components of the csdm object by a scalar in-place.
+ `__truediv__()` = Divides the components of the csdm object by a scalar.
+ `__itruediv__()` = Divides the components of the csdm object by a scalar in-place.
+ `split()` = Split the dependent-variables into individual csdm objects.
* Support for Numpy dimension reduction functions
+ `sum()`: Sum along a given dimension.
+ `prod()`: Product along a given dimension.
* Support for Numpy ufunc functions:
+ `sin`, `cos`, `tan`, `arcsin`, `arccos`, `arctan`, `sinh`, `cosh`,
`tanh`, `arcsinh`, `arccosh`, `arctanh`, `exp`, `exp2`, `log`,
`log2`, `log10`, `expm1`, `log1p`, `negative`, `positive`, `square`,
`absolute`, `fabs`, `rint`, `sign`, `conj`, `conjugate`, `sqrt`,
`cbrt`, `reciprocal`
* Added apodization functions.
+ `sin`, `cos`, `tan`, `arcsin`, `arccos`, `arctan`, `exp`
#### Bug fixes[¶](#id10)
* Fixed a bug in `cp.plot()` method.
### v0.1.5[¶](#v0-1-5)
* Added method to convert the frequency dimension to nmr dimensionless frequency ratio with syntax, `dimension.to('ppm', 'nmr_frequency_ratio')`, where dimension is a LinearDimension object.
* The `csdmpy.plot()` method also displays the dimension index on the axis label.
### v0.1.4[¶](#v0-1-4)
* Added `to_dict()` method to the CSDM, Dimension, and DependentVariable objects.
### v0.1.3[¶](#v0-1-3)
* Fixed warning message when physical quantity name is not found in the astropy units package.
* Added dumps and loads functions to dump and load the data model as a json serialized string, respectively, without serializing it to a file.
### v0.0.11 to v0.1.2[¶](#v0-0-11-to-v0-1-2)
* Add a required unsigned_integer_type for SparseSampling dimension.
* Fixed minor bugs.
* Added a tags attribute to the CSDmodel object.
* Changed ‘sampling_interval’ key to ‘count’.
* Changed ‘quantity’ key to ‘quantity_name’.
* Changed ‘index_zero_value’ key to ‘coordinates_offset’.
* Changed ‘fft_output_order’ key to ‘complex_fft’.
* Renamed IndependentVariable class to Dimension.
* Renamed LinearlySpacedDimension class to LinearDimension.
* Renamed ArbitrarilySpacedDimension class to MonotonicDimension.
* Added a reciprocal attribute to LinearDimension and MonotonicDimension classes.
* Removed the reverse attribute from all Dimension classes.
* Changed ‘sampling_interval’ keyword to ‘increment’.
* Changed ‘reference_offset’ keyword to ‘index_zero_value’.
* Changed ‘linear_spacing’ literal to ‘linear’.
* Changed ‘arbitrarily_sampled’ literal to ‘monotonic’.
* Changed the definition of the coordinates for the LinearDimension from
\[X^\text{ref} = m_k J_k - c_k {\bf 1}\]
to
\[X^\text{ref} = m_k J_k + c_k {\bf 1},\]
where \(c_k\) is the reference offset, \(m_k\) is the increment, and
\(J_k\) is the set of integer indices along the dimension.
* Added ‘description’ key to ‘Dimension’, ‘DependentVariable’ and ‘CSDM’ object.
* Changed ‘CSDM’ keyword to ‘csdm’
* Changed ‘FFT_output_order’ keyword to ‘fft_output_order’
* Changed ‘components_URL’ keyword to ‘components_url’
---
Citations[¶](#citations)
---
[1](#id1)
<NAME>., <NAME>., <NAME>., <NAME>. (2020) Core Scientific Dataset Model: A lightweight and portable model and file format for multi-dimensional scientific data.
[PLOS ONE 15(1): e0225953.](https://doi.org/10.1371/journal.pone.0225953)
Media coverage[¶](#media-coverage)
---
[Chemists develop a new format for sharing scientific data (in French).](https://inc.cnrs.fr/fr/cnrsinfo/des-chimistes-elaborent-un-nouveau-format-pour-le-partage-de-donnees-scientifiques)
[Simplifying how scientists share data.](https://www.technology.org/2020/01/03/simplifying-how-scientists-share-data/)
Indices and tables[¶](#indices-and-tables)
---
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html) |
SuperLearner | cran | R | Package ‘SuperLearner’
July 18, 2023
Type Package
Title Super Learner Prediction
Version 2.0-28.1
Date 2021-05-04
Maintainer <NAME> <<EMAIL>>
Description Implements the super learner prediction method and contains a
library of prediction algorithms to be used in the super learner.
License GPL-3
URL https://github.com/ecpolley/SuperLearner
Depends R (>= 2.14.0), nnls, gam (>= 1.15)
Imports cvAUC
Suggests arm, bartMachine, biglasso, bigmemory, caret, class,
devtools, e1071, earth, extraTrees, gbm, genefilter, ggplot2,
glmnet, ipred, KernelKnn, kernlab, knitr, lattice, LogicReg,
MASS, mlbench, nloptr, nnet, party, polspline, prettydoc,
quadprog, randomForest, ranger, RhpcBLASctl, ROCR, rmarkdown,
rpart, SIS, speedglm, spls, sva, testthat, xgboost (>= 0.6)
LazyLoad yes
VignetteBuilder knitr, rmarkdown
RoxygenNote 6.0.1
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [aut],
<NAME> [aut],
<NAME> [ctb],
<NAME> [aut, ths]
Repository CRAN
Date/Publication 2023-07-18 11:46:38 UTC
R topics documented:
create.Learner
create.SL.xgboost
CV.SuperLearner
CVFolds
listWrappers
plot.CV.SuperLearner
predict.SL.bartMachine
predict.SL.biglasso
predict.SL.extraTrees
predict.SL.glm
predict.SL.glmnet
predict.SL.kernelKnn
predict.SL.ksvm
predict.SL.lda
predict.SL.lm
predict.SL.qda
predict.SL.ranger
predict.SL.speedglm
predict.SL.speedlm
predict.SL.xgboost
predict.SuperLearner
recombineCVSL
recombineSL
SampleSplitSuperLearner
SL.bartMachine
SL.biglasso
SL.cforest
SL.extraTrees
SL.glm
SL.glmnet
SL.kernelKnn
SL.ksvm
SL.lda
SL.lm
SL.qda
SL.ranger
SL.speedglm
SL.speedlm
SL.xgboost
summary.CV.SuperLearner
SuperLearner
SuperLearner.control
SuperLearner.CV.control
SuperLearnerNews
trimLogit
write.method.template
write.screen.template
write.SL.template
create.Learner Factory for learner wrappers
Description
Create custom learners and/or a sequence of learners with hyperparameter combinations defined
over a grid.
Usage
create.Learner(base_learner, params = list(), tune = list(),
env = parent.frame(), name_prefix = base_learner, detailed_names = F,
verbose = F)
Arguments
base_learner Character string of the learner function that will be customized.
params List with parameters to customize.
tune List of hyperparameter settings that will define custom learners.
env Environment in which to create the functions. Defaults to the current
environment (e.g. often the global environment).
name_prefix The prefix string for the name of each function that is generated.
detailed_names Set to T to have the function names include the parameter configurations.
verbose Display extra details.
Value
Returns a list with expanded tuneGrid and the names of the created functions.
Examples
## Not run:
# Create a randomForest learner with ntree set to 1000 rather than the
# default of 500.
create_rf = create.Learner("SL.randomForest", list(ntree = 1000))
create_rf
sl = SuperLearner(Y = Y, X = X, SL.library = create_rf$names, family = binomial())
sl
# Clean up global environment.
rm(list = create_rf$names)
# Create a randomForest learner that optimizes over mtry
create_rf = create.Learner("SL.randomForest",
tune = list(mtry = round(c(1, sqrt(ncol(X)), ncol(X)))))
create_rf
sl = SuperLearner(Y = Y, X = X, SL.library = create_rf$names, family = binomial())
sl
# Clean up global environment.
rm(list = create_rf$names)
# Optimize elastic net over alpha, with a custom environment and detailed names.
learners = new.env()
create_enet = create.Learner("SL.glmnet", env = learners, detailed_names = T,
tune = list(alpha = seq(0, 1, length.out=5)))
create_enet
# List the environment to review what functions were created.
ls(learners)
# We can simply list the environment to specify the library.
sl = SuperLearner(Y = Y, X = X, SL.library = ls(learners), family = binomial(), env = learners)
sl
## End(Not run)
create.SL.xgboost Factory for XGBoost SL wrappers
Description
Create multiple configurations of XGBoost learners based on the desired combinations of hyperpa-
rameters.
Usage
create.SL.xgboost(tune = list(ntrees = c(1000), max_depth = c(4), shrinkage =
c(0.1), minobspernode = c(10)), detailed_names = F, env = .GlobalEnv,
name_prefix = "SL.xgb")
Arguments
tune List of hyperparameter settings to test. If specified, each hyperparameter will
need to be defined.
detailed_names Set to T to have the function names include the parameter configurations.
env Environment in which to create the SL.xgboost functions. Defaults to the global
environment.
name_prefix The prefix string for the name of each function that is generated.
Examples
# Create a new environment to store the learner functions.
# This keeps the global environment organized.
sl_env = new.env()
# Create 2 * 2 * 1 * 3 = 12 combinations of hyperparameters.
tune = list(ntrees = c(100, 500), max_depth = c(1, 2), minobspernode = 10,
shrinkage = c(0.1, 0.01, 0.001))
# Generate a separate learner for each combination.
xgb_grid = create.SL.xgboost(tune = tune, env = sl_env)
# Review the function configurations.
xgb_grid
# Attach the environment so that the custom learner functions can be accessed.
attach(sl_env)
## Not run:
sl = SuperLearner(Y = Y, X = X, SL.library = xgb_grid$names)
## End(Not run)
detach(sl_env)
CV.SuperLearner Function to get V-fold cross-validated risk estimate for super learner
Description
Function to get V-fold cross-validated risk estimate for super learner. This function simply splits
the data into V folds and then calls SuperLearner. Most of the arguments are passed directly to
SuperLearner.
Usage
CV.SuperLearner(Y, X, V = NULL, family = gaussian(), SL.library,
method = "method.NNLS", id = NULL, verbose = FALSE,
control = list(saveFitLibrary = FALSE), cvControl = list(),
innerCvControl = list(),
obsWeights = NULL, saveAll = TRUE, parallel = "seq", env = parent.frame())
Arguments
Y The outcome.
X The covariates.
V The number of folds for CV.SuperLearner. This argument will be deprecated
and moved into cvControl. If both V and cvControl set the number of
cross-validation folds, an error message will appear. The recommendation is to
use cvControl. This is not the number of folds for SuperLearner. The number
of folds for SuperLearner is controlled with innerCvControl.
family Currently allows gaussian or binomial to describe the error distribution. Link
function information will be ignored and should be contained in the method
argument below.
SL.library Either a character vector of prediction algorithms or a list containing character
vectors. See details below for examples on the structure. A list of functions
included in the SuperLearner package can be found with listWrappers().
method A list (or a function to create a list) containing details on estimating the coeffi-
cients for the super learner and the model to combine the individual algorithms
in the library. See ?method.template for details. Currently, the built in options
are either "method.NNLS" (the default), "method.NNLS2", "method.NNloglik",
"method.CC_LS", "method.CC_nloglik", or "method.AUC". NNLS and NNLS2
are non-negative least squares based on the Lawson-Hanson algorithm and the
dual method of Goldfarb and Idnani, respectively. NNLS and NNLS2 will work
for both gaussian and binomial outcomes. NNloglik is a non-negative binomial
likelihood maximization using the BFGS quasi-Newton optimization method.
NN* methods are normalized so weights sum to one. CC_LS uses Goldfarb and
Idnani’s quadratic programming algorithm to calculate the best convex combi-
nation of weights to minimize the squared error loss. CC_nloglik calculates the
convex combination of weights that minimize the negative binomial log like-
lihood on the logistic scale using the sequential quadratic programming algo-
rithm. AUC, which only works for binary outcomes, uses the Nelder-Mead
method via the optim function to minimize rank loss (equivalent to maximizing
AUC).
id Optional cluster identification variable. For the cross-validation splits, id forces
observations in the same cluster to be in the same validation fold. id is passed
to the prediction and screening algorithms in SL.library, but be sure to check the
individual wrappers as many of them ignore the information.
verbose Logical; TRUE for printing progress during the computation (helpful for debug-
ging).
control A list of parameters to control the estimation process. Parameters include saveFitLibrary
and trimLogit. See SuperLearner.control for details.
cvControl A list of parameters to control the outer cross-validation process. The outer
cross-validation is the sample splitting for evaluating the SuperLearner. Parame-
ters include V, stratifyCV, shuffle and validRows. See SuperLearner.CV.control
for details.
innerCvControl A list of lists of parameters to control the inner cross-validation process. It
should have V elements in the list, each a valid cvControl list. If only a single
value is given, it is replicated across all folds. The inner cross-validation lists are the values
passed to each of the V SuperLearner calls. Parameters include V, stratifyCV,
shuffle and validRows. See SuperLearner.CV.control for details.
obsWeights Optional observation weights variable. As with id above, obsWeights is passed
to the prediction and screening algorithms, but many of the built in wrappers
ignore (or can’t use) the information. If you are using observation weights,
make sure the library you specify uses the information.
saveAll Logical; Should the entire SuperLearner object be saved for each fold?
parallel Options for parallel computation of the V-fold step. Use "seq" (the default) for
sequential computation. parallel = 'multicore' to use mclapply for the V-
fold step (but note that SuperLearner() will still be sequential). The default
for mclapply is to check the mc.cores option, and if not set to default to 2
cores. Be sure to set options()$mc.cores to the desired number of cores if
you don’t want the default. Or parallel can be the name of a snow cluster and
will use parLapply for the V-fold step. For both multicore and snow, the inner
SuperLearner calls will be sequential.
env Environment containing the learner functions. Defaults to the calling environ-
ment.
Details
The SuperLearner function builds an estimator, but does not contain an estimate of the performance
of the estimator. Various methods exist for estimator performance evaluation. If you are familiar
with the super learner algorithm, it should be no surprise we recommend using cross-validation to
evaluate the honest performance of the super learner estimator. The function CV.SuperLearner
computes the usual V-fold cross-validated risk estimate for the super learner (and all algorithms in
SL.library for comparison).
Value
An object of class CV.SuperLearner (a list) with components:
call The matched call.
AllSL If saveAll = TRUE, a list with output from each call to SuperLearner, otherwise
NULL.
SL.predict The predicted values from the super learner when each particular row was part
of the validation fold.
discreteSL.predict
The traditional cross-validated selector. Picks the algorithm with the smallest
cross-validated risk (in super learner terms, gives that algorithm coefficient 1
and all others 0).
whichDiscreteSL
A list of length V. The elements in the list are the algorithm that had the smallest
cross-validated risk estimate for that fold.
library.predict
A matrix with the predicted values from each algorithm in SL.library. The
columns are the algorithms in SL.library and the rows represent the predicted
values when that particular row was in the validation fold (i.e. not used to fit
that estimator).
coef A matrix with the coefficients for the super learner on each fold. The columns
are the algorithms in SL.library the rows are the folds.
folds A list containing the row numbers for each validation fold.
V Number of folds for CV.SuperLearner.
libraryNames A character vector with the names of the algorithms in the library. The format is
’predictionAlgorithm_screeningAlgorithm’ with ’_All’ used to denote the pre-
diction algorithm run on all variables in X.
SL.library Returns SL.library in the same format as the argument with the same name
above.
method A list with the method functions.
Y The outcome
Author(s)
<NAME> <<EMAIL>>
See Also
SuperLearner
Examples
## Not run:
set.seed(23432)
## training set
n <- 500
p <- 50
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
colnames(X) <- paste("X", 1:p, sep="")
X <- data.frame(X)
Y <- X[, 1] + sqrt(abs(X[, 2] * X[, 3])) + X[, 2] - X[, 3] + rnorm(n)
## build Library and run Super Learner
SL.library <- c("SL.glm", "SL.randomForest", "SL.polymars", "SL.mean")
test <- CV.SuperLearner(Y = Y, X = X, V = 10, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS")
test
summary(test)
## Look at the coefficients across folds
coef(test)
# Example with specifying cross-validation options for both
# CV.SuperLearner (cvControl) and the internal SuperLearners (innerCvControl)
test <- CV.SuperLearner(Y = Y, X = X, SL.library = SL.library,
cvControl = list(V = 10, shuffle = FALSE),
innerCvControl = list(list(V = 5)),
verbose = TRUE, method = "method.NNLS")
## examples with snow
library(parallel)
cl <- makeCluster(2, type = "PSOCK") # can use different types here
clusterSetRNGStream(cl, iseed = 2343)
testSNOW <- CV.SuperLearner(Y = Y, X = X, SL.library = SL.library, method = "method.NNLS",
parallel = cl)
summary(testSNOW)
stopCluster(cl)
## End(Not run)
CVFolds Generate list of row numbers for each fold in the cross-validation
Description
Generate list of row numbers for each fold in the cross-validation. CVFolds is used in the SuperLearner
to create the cross-validation splits.
Usage
CVFolds(N, id, Y, cvControl)
Arguments
N Sample size
id Optional cluster id variable. If present, all observations in the same cluster will
always be in the same split.
Y outcome
cvControl Control parameters for the cross-validation step. See SuperLearner.CV.control
for details.
Value
validRows A list of length V where each element in the list is a vector with the row numbers
of the corresponding validation sample.
Author(s)
<NAME> <<EMAIL>>
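A minimal usage sketch (not from the package manual; the fold contents below depend
on the random seed):
library(SuperLearner)
set.seed(1)
Y <- rbinom(40, 1, 0.5)
folds <- CVFolds(N = 40, id = NULL, Y = Y,
                 cvControl = SuperLearner.CV.control(V = 5, stratifyCV = TRUE))
length(folds) # 5 validation folds
folds[[1]]    # row numbers held out in the first fold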
listWrappers list all wrapper functions in SuperLearner
Description
List all wrapper functions in SuperLearner package
Usage
listWrappers(what = "both")
Arguments
what What list to return. Can be both for both prediction algorithms and screening al-
gorithms, SL for the prediction algorithms, screen for the screening algorithms,
method for the estimation method details, or anything else will return a list of all
(exported) functions in the SuperLearner package. Additional wrapper func-
tions are available at https://github.com/ecpolley/SuperLearnerExtra.
Value
Invisible character vector with all exported functions in the SuperLearner package
Author(s)
<NAME> <<EMAIL>>
See Also
SuperLearner
Examples
listWrappers(what = "SL")
listWrappers(what = "screen")
plot.CV.SuperLearner Graphical display of the V-fold CV risk estimates
Description
The function plots the V-fold cross-validated risk estimates for the super learner, the discrete super
learner and each algorithm in the library. By default the estimates will be sorted and include an
asymptotic 95% confidence interval.
Usage
## S3 method for class 'CV.SuperLearner'
plot(x, package = "ggplot2", constant = qnorm(0.975), sort = TRUE, ...)
Arguments
x The output from CV.SuperLearner.
package Either "ggplot2" or "lattice". The package selected must be available.
constant A numeric value. The confidence interval is defined as p +/- constant * se, where
p is the point estimate and se is the standard error. The default is the quantile of
the standard normal corresponding to a 95% CI.
sort Logical. Should the rows in the plot be sorted from the smallest to the largest
point estimate. If FALSE, then the order is super learner, discrete super learner,
then the estimators in SL.library.
... Additional arguments for summary.CV.SuperLearner
Details
see summary.CV.SuperLearner for details on how the estimates are computed
Value
Returns the plot (either a ggplot2 object (class ggplot) or a lattice object (class trellis))
Author(s)
<NAME> <<EMAIL>>
See Also
summary.CV.SuperLearner and CV.SuperLearner
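A minimal sketch (assumes ggplot2 is installed and Y, X, and SL.library defined as in
the CV.SuperLearner examples):
## Not run:
cvsl <- CV.SuperLearner(Y = Y, X = X, V = 3, SL.library = SL.library)
plot(cvsl, package = "ggplot2", sort = TRUE)
## End(Not run)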
predict.SL.bartMachine
bartMachine prediction
Description
bartMachine prediction
Usage
## S3 method for class 'SL.bartMachine'
predict(object, newdata, family, X = NULL,
Y = NULL, ...)
Arguments
object SuperLearner object
newdata Dataframe to predict the outcome
family "gaussian" for regression, "binomial" for binary classification. (Not used)
X Covariate dataframe (not used)
Y Outcome variable (not used)
... Additional arguments (not used)
predict.SL.biglasso Prediction wrapper for SL.biglasso
Description
Prediction wrapper for SL.biglasso objects.
Usage
## S3 method for class 'SL.biglasso'
predict(object, newdata, ...)
Arguments
object SL.biglasso object
newdata Dataframe to generate predictions
... Unused additional arguments
See Also
SL.biglasso biglasso predict.biglasso
predict.SL.extraTrees extraTrees prediction on new data
Description
extraTrees prediction on new data
Usage
## S3 method for class 'SL.extraTrees'
predict(object, newdata, family, ...)
Arguments
object Model fit object from SuperLearner
newdata Dataframe
family Binomial or gaussian
... Any remaining arguments (not used).
predict.SL.glm Prediction for SL.glm
Description
Prediction for SL.glm
Usage
## S3 method for class 'SL.glm'
predict(object, newdata, ...)
Arguments
object SL.glm object
newdata Dataframe to generate predictions
... Unused additional arguments
See Also
SL.glm glm predict.glm SL.speedglm
predict.SL.glmnet Prediction for an SL.glmnet object
Description
Prediction for the glmnet wrapper.
Usage
## S3 method for class 'SL.glmnet'
predict(object, newdata, remove_extra_cols = T,
add_missing_cols = T, ...)
Arguments
object Result object from SL.glmnet
newdata Dataframe or matrix that will generate predictions.
remove_extra_cols
Remove any extra columns in the new data that were not part of the original
model.
add_missing_cols
Add any columns from original data that do not exist in the new data, and set
values to 0.
... Any additional arguments (not used).
See Also
SL.glmnet
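A small sketch of the column-matching arguments on simulated data (assumes glmnet is
installed; the variable names are illustrative):
## Not run:
set.seed(1)
n <- 100
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
Y <- X$x1 + rnorm(n)
sl <- SuperLearner(Y = Y, X = X, family = gaussian(), SL.library = "SL.glmnet")
# newdata has an extra column (x3) and is missing a training column (x2):
newX <- data.frame(x1 = rnorm(5), x3 = rnorm(5))
# x3 is dropped (remove_extra_cols) and x2 is filled in with zeros (add_missing_cols)
predict(sl, newdata = newX)$pred
## End(Not run)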
predict.SL.kernelKnn Prediction for SL.kernelKnn
Description
Prediction for SL.kernelKnn
Usage
## S3 method for class 'SL.kernelKnn'
predict(object, newdata, ...)
Arguments
object SL.kernelKnn object
newdata Dataframe to generate predictions
... Unused additional arguments
predict.SL.ksvm Prediction for SL.ksvm
Description
Prediction for SL.ksvm
Usage
## S3 method for class 'SL.ksvm'
predict(object, newdata, family, coupler = "minpair", ...)
Arguments
object SL.ksvm object
newdata Dataframe to generate predictions
family Gaussian or binomial
coupler Coupling method used in the multiclass case, can be one of minpair or pkpd (see
kernlab package for details). For future usage.
... Unused additional arguments
See Also
SL.ksvm ksvm predict.ksvm
predict.SL.lda Prediction wrapper for SL.lda
Description
Prediction wrapper for SL.lda
Usage
## S3 method for class 'SL.lda'
predict(object, newdata, prior = object$object$prior,
dimen = NULL, method = "plug-in", ...)
Arguments
object SL.lda object
newdata Dataframe to generate predictions
prior The prior probabilities of the classes, by default the proportions in the training
set or what was set in the call to lda.
dimen the dimension of the space to be used. If this is less than min(p, ng-1), only the
first dimen discriminant components are used (except for method="predictive"),
and only those dimensions are returned in x.
method This determines how the parameter estimation is handled. With "plug-in" (the
default) the usual unbiased parameter estimates are used and assumed to be cor-
rect. With "debiased" an unbiased estimator of the log posterior probabilities is
used, and with "predictive" the parameter estimates are integrated out using a
vague prior.
... Unused additional arguments
See Also
SL.lda lda predict.lda
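A sketch (not from the package manual) of reaching these arguments through
predict.SuperLearner, whose ... is passed on to the predict.SL.* functions; assumes
MASS and mlbench are installed:
## Not run:
data(PimaIndiansDiabetes2, package = "mlbench")
d <- na.omit(PimaIndiansDiabetes2)
Y <- as.numeric(d$diabetes == "pos")
X <- subset(d, select = -diabetes)
set.seed(1)
sl <- SuperLearner(Y, X, family = binomial(), SL.library = c("SL.mean", "SL.lda"))
# method = "debiased" is forwarded to predict.SL.lda
predict(sl, newdata = head(X), method = "debiased")$pred
## End(Not run)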
predict.SL.lm Prediction for SL.lm
Description
Prediction for SL.lm
Usage
## S3 method for class 'SL.lm'
predict(object, newdata, ...)
Arguments
object SL.lm object
newdata Dataframe to generate predictions
... Unused additional arguments
See Also
SL.lm lm predict.lm SL.speedlm
predict.SL.qda Prediction wrapper for SL.qda
Description
Prediction wrapper for SL.qda
Usage
## S3 method for class 'SL.qda'
predict(object, newdata, prior = object$object$prior,
dimen = NULL, method = "plug-in", ...)
Arguments
object SL.qda object
newdata Dataframe to generate predictions
prior The prior probabilities of the classes, by default the proportions in the training
set or what was set in the call to qda.
dimen the dimension of the space to be used. If this is less than min(p, ng-1), only the
first dimen discriminant components are used (except for method="predictive"),
and only those dimensions are returned in x.
method This determines how the parameter estimation is handled. With "plug-in" (the
default) the usual unbiased parameter estimates are used and assumed to be cor-
rect. With "debiased" an unbiased estimator of the log posterior probabilities is
used, and with "predictive" the parameter estimates are integrated out using a
vague prior.
... Unused additional arguments
See Also
SL.qda qda predict.qda
predict.SL.ranger Prediction wrapper for ranger random forests
Description
Prediction wrapper for SL.ranger objects.
Usage
## S3 method for class 'SL.ranger'
predict(object, newdata, family, num.threads = 1,
verbose = object$verbose, ...)
Arguments
object SL.ranger object
newdata Dataframe to generate predictions
family Gaussian or binomial
num.threads Number of threads used for parallelization
verbose If TRUE output additional information during execution.
... Unused additional arguments
See Also
SL.ranger ranger predict.ranger
predict.SL.speedglm Prediction for SL.speedglm
Description
Prediction for SL.speedglm
Usage
## S3 method for class 'SL.speedglm'
predict(object, newdata, ...)
Arguments
object SL.speedglm object
newdata Dataframe to generate predictions
... Unused additional arguments
See Also
SL.speedglm speedglm predict.speedglm
predict.SL.speedlm Prediction for SL.speedlm
Description
Prediction for SL.speedlm, a fast lm()
Usage
## S3 method for class 'SL.speedlm'
predict(object, newdata, ...)
Arguments
object SL.speedlm object
newdata Dataframe to generate predictions
... Unused additional arguments
See Also
SL.speedlm speedlm predict.speedlm SL.speedglm
predict.SL.xgboost XGBoost prediction on new data
Description
XGBoost prediction on new data
Usage
## S3 method for class 'SL.xgboost'
predict(object, newdata, family, ...)
Arguments
object Model fit object from SuperLearner
newdata Dataframe that will be converted to an xgb.DMatrix
family Binomial or gaussian
... Any remaining arguments (not used).
predict.SuperLearner Predict method for SuperLearner object
Description
Obtains predictions on a new data set from a SuperLearner fit. May require the original data if one
of the library algorithms uses the original data in its predict method.
Usage
## S3 method for class 'SuperLearner'
predict(object, newdata, X = NULL, Y = NULL,
onlySL = FALSE, ...)
Arguments
object Fitted object from SuperLearner
newdata New X values for prediction
X Original data set used to fit object, if needed by fit object.
Y Original outcome used to fit object, if needed by fit object.
onlySL Logical. If TRUE, only compute predictions for algorithms with non-zero coef-
ficients in the super learner object. Default is FALSE (computes predictions for
all algorithms in library).
... Additional arguments passed to the predict.SL.* functions
Details
If newdata is omitted the predicted values from object are returned. Each algorithm in the Super
Learner library needs to have a corresponding prediction function with “predict.” prefixed onto
the algorithm name (e.g. predict.SL.glm for SL.glm).
Value
pred Predicted values from Super Learner fit
library.predict
Predicted values for each algorithm in library
Author(s)
<NAME> <<EMAIL>>
See Also
SuperLearner
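A minimal sketch with simulated data (the learners chosen here are illustrative):
set.seed(1)
n <- 100
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
Y <- X$x1 + rnorm(n)
sl <- SuperLearner(Y = Y, X = X, family = gaussian(),
                   SL.library = c("SL.mean", "SL.glm"))
new_obs <- data.frame(x1 = c(-1, 0, 1), x2 = c(0, 0, 0))
pred <- predict(sl, newdata = new_obs, onlySL = TRUE)
pred$pred            # super learner predictions
pred$library.predict # per-algorithm predictions (only non-zero-weight learners
                     # are computed when onlySL = TRUE)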
recombineCVSL Recombine a CV.SuperLearner fit using a new metalearning method
Description
Function to re-compute the V-fold cross-validated risk estimate for super learner using a new met-
alearning method. This function takes as input an existing CV.SuperLearner fit and applies the
recombineSL fit to each of the V Super Learner fits.
Usage
recombineCVSL(object, method = "method.NNloglik", verbose = FALSE,
saveAll = TRUE, parallel = "seq")
Arguments
object Fitted object from CV.SuperLearner.
method A list (or a function to create a list) containing details on estimating the coeffi-
cients for the super learner and the model to combine the individual algorithms
in the library. See ?method.template for details. Currently, the built in options
are either "method.NNLS" (the default), "method.NNLS2", "method.NNloglik",
"method.CC_LS", "method.CC_nloglik", or "method.AUC". NNLS and NNLS2
are non-negative least squares based on the Lawson-Hanson algorithm and the
dual method of Goldfarb and Idnani, respectively. NNLS and NNLS2 will work
for both gaussian and binomial outcomes. NNloglik is a non-negative binomial
likelihood maximization using the BFGS quasi-Newton optimization method.
NN* methods are normalized so weights sum to one. CC_LS uses Goldfarb and
Idnani’s quadratic programming algorithm to calculate the best convex combi-
nation of weights to minimize the squared error loss. CC_nloglik calculates the
convex combination of weights that minimize the negative binomial log like-
lihood on the logistic scale using the sequential quadratic programming algo-
rithm. AUC, which only works for binary outcomes, uses the Nelder-Mead
method via the optim function to minimize rank loss (equivalent to maximizing
AUC).
verbose logical; TRUE for printing progress during the computation (helpful for debug-
ging).
saveAll Logical; Should the entire SuperLearner object be saved for each fold?
parallel Options for parallel computation of the V-fold step. Use "seq" (the default) for
sequential computation. parallel = 'multicore' to use mclapply for the V-
fold step (but note that SuperLearner() will still be sequential). Or parallel
can be the name of a snow cluster and will use parLapply for the V-fold step.
For both multicore and snow, the inner SuperLearner calls will be sequential.
Details
The function recombineCVSL computes the usual V-fold cross-validated risk estimate for the super
learner (and all algorithms in SL.library for comparison), using a newly specified metalearning
method. The weights for each algorithm in SL.library are re-estimated using the new metalearner,
however the base learner fits are not regenerated, so this function saves a lot of computation time
as opposed to using the CV.SuperLearner function with a new method argument. The output is
identical to the output from the CV.SuperLearner function.
Value
An object of class CV.SuperLearner (a list) with components:
call The matched call.
AllSL If saveAll = TRUE, a list with output from each call to SuperLearner, otherwise
NULL.
SL.predict The predicted values from the super learner when each particular row was part
of the validation fold.
discreteSL.predict
The traditional cross-validated selector. Picks the algorithm with the smallest
cross-validated risk (in super learner terms, gives that algorithm coefficient 1
and all others 0).
whichDiscreteSL
A list of length V. The elements in the list are the algorithm that had the smallest
cross-validated risk estimate for that fold.
library.predict
A matrix with the predicted values from each algorithm in SL.library. The
columns are the algorithms in SL.library and the rows represent the predicted
values when that particular row was in the validation fold (i.e. not used to fit
that estimator).
coef A matrix with the coefficients for the super learner on each fold. The columns
are the algorithms in SL.library the rows are the folds.
folds A list containing the row numbers for each validation fold.
V Number of folds for CV.SuperLearner.
libraryNames A character vector with the names of the algorithms in the library. The format is
’predictionAlgorithm_screeningAlgorithm’ with ’_All’ used to denote the pre-
diction algorithm run on all variables in X.
SL.library Returns SL.library in the same format as the argument with the same name
above.
method A list with the method functions.
Y The outcome
Author(s)
<NAME> <<EMAIL>>
See Also
recombineSL
Examples
## Not run:
# Binary outcome example adapted from SuperLearner examples
set.seed(1)
N <- 200
X <- matrix(rnorm(N*10), N, 10)
X <- as.data.frame(X)
Y <- rbinom(N, 1, plogis(.2*X[, 1] + .1*X[, 2] - .2*X[, 3] +
.1*X[, 3]*X[, 4] - .2*abs(X[, 4])))
SL.library <- c("SL.glmnet", "SL.glm", "SL.knn", "SL.gam", "SL.mean")
# least squares loss function
set.seed(1) # for reproducibility
cvfit_nnls <- CV.SuperLearner(Y = Y, X = X, V = 10, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS", family = binomial())
cvfit_nnls$coef
# SL.glmnet_All SL.glm_All SL.knn_All SL.gam_All SL.mean_All
# 1 0.0000000 0.00000000 0.000000000 0.4143862 0.5856138
# 2 0.0000000 0.00000000 0.304802397 0.3047478 0.3904498
# 3 0.0000000 0.00000000 0.002897533 0.5544075 0.4426950
# 4 0.0000000 0.20322642 0.000000000 0.1121891 0.6845845
# 5 0.1743973 0.00000000 0.032471026 0.3580624 0.4350693
# 6 0.0000000 0.00000000 0.099881535 0.3662309 0.5338876
# 7 0.0000000 0.00000000 0.234876082 0.2942472 0.4708767
# 8 0.0000000 0.06424676 0.113988158 0.5600208 0.2617443
# 9 0.0000000 0.00000000 0.338030342 0.2762604 0.3857093
# 10 0.3022442 0.00000000 0.294226204 0.1394534 0.2640762
# negative log binomial likelihood loss function
cvfit_nnloglik <- recombineCVSL(cvfit_nnls, method = "method.NNloglik")
cvfit_nnloglik$coef
# SL.glmnet_All SL.glm_All SL.knn_All SL.gam_All SL.mean_All
# 1 0.0000000 0.0000000 0.00000000 0.5974799 0.40252010
# 2 0.0000000 0.0000000 0.31177345 0.6882266 0.00000000
# 3 0.0000000 0.0000000 0.01377469 0.8544238 0.13180152
# 4 0.0000000 0.1644188 0.00000000 0.2387919 0.59678930
# 5 0.2142254 0.0000000 0.00000000 0.3729426 0.41283197
# 6 0.0000000 0.0000000 0.00000000 0.5847150 0.41528502
# 7 0.0000000 0.0000000 0.47538172 0.5080311 0.01658722
# 8 0.0000000 0.0000000 0.00000000 1.0000000 0.00000000
# 9 0.0000000 0.0000000 0.45384961 0.2923480 0.25380243
# 10 0.3977816 0.0000000 0.27927906 0.1606384 0.16230097
# If we use the same seed as the original `cvfit_nnls`, then
# the recombineCVSL and CV.SuperLearner results will be identical
# however, the recombineCVSL version will be much faster since
# it doesn't have to re-fit all the base learners, V times each.
set.seed(1)
cvfit_nnloglik2 <- CV.SuperLearner(Y = Y, X = X, V = 10, SL.library = SL.library,
verbose = TRUE, method = "method.NNloglik", family = binomial())
cvfit_nnloglik2$coef
# SL.glmnet_All SL.glm_All SL.knn_All SL.gam_All SL.mean_All
# 1 0.0000000 0.0000000 0.00000000 0.5974799 0.40252010
# 2 0.0000000 0.0000000 0.31177345 0.6882266 0.00000000
# 3 0.0000000 0.0000000 0.01377469 0.8544238 0.13180152
# 4 0.0000000 0.1644188 0.00000000 0.2387919 0.59678930
# 5 0.2142254 0.0000000 0.00000000 0.3729426 0.41283197
# 6 0.0000000 0.0000000 0.00000000 0.5847150 0.41528502
# 7 0.0000000 0.0000000 0.47538172 0.5080311 0.01658722
# 8 0.0000000 0.0000000 0.00000000 1.0000000 0.00000000
# 9 0.0000000 0.0000000 0.45384961 0.2923480 0.25380243
# 10 0.3977816 0.0000000 0.27927906 0.1606384 0.16230097
## End(Not run)
recombineSL Recombine a SuperLearner fit using a new metalearning method
Description
The recombineSL function takes an existing SuperLearner fit and a new metalearning method and
returns a new SuperLearner fit with updated base learner weights.
Usage
recombineSL(object, Y, method = "method.NNloglik", verbose = FALSE)
Arguments
object Fitted object from SuperLearner.
Y The outcome in the training data set. Must be a numeric vector.
method A list (or a function to create a list) containing details on estimating the coeffi-
cients for the super learner and the model to combine the individual algorithms
in the library. See ?method.template for details. Currently, the built in options
are either "method.NNLS" (the default), "method.NNLS2", "method.NNloglik",
"method.CC_LS", "method.CC_nloglik", or "method.AUC". NNLS and NNLS2
are non-negative least squares based on the Lawson-Hanson algorithm and the
dual method of Goldfarb and Idnani, respectively. NNLS and NNLS2 will work
for both gaussian and binomial outcomes. NNloglik is a non-negative binomial
likelihood maximization using the BFGS quasi-Newton optimization method.
NN* methods are normalized so weights sum to one. CC_LS uses Goldfarb and
Idnani’s quadratic programming algorithm to calculate the best convex combi-
nation of weights to minimize the squared error loss. CC_nloglik calculates the
convex combination of weights that minimize the negative binomial log like-
lihood on the logistic scale using the sequential quadratic programming algo-
rithm. AUC, which only works for binary outcomes, uses the Nelder-Mead
method via the optim function to minimize rank loss (equivalent to maximizing
AUC).
verbose logical; TRUE for printing progress during the computation (helpful for debug-
ging).
Details
recombineSL re-fits the super learner prediction algorithm using a new metalearning method. The
weights for each algorithm in SL.library are re-estimated using the new metalearner, however the
base learner fits are not regenerated, so this function saves a lot of computation time as opposed
to using the SuperLearner function with a new method argument. The output is identical to the
output from the SuperLearner function.
Value
call The matched call.
libraryNames A character vector with the names of the algorithms in the library. The format is
’predictionAlgorithm_screeningAlgorithm’ with ’_All’ used to denote the pre-
diction algorithm run on all variables in X.
SL.library Returns SL.library in the same format as the argument with the same name
above.
SL.predict The predicted values from the super learner for the rows in newX.
coef Coefficients for the super learner.
library.predict
A matrix with the predicted values from each algorithm in SL.library for the
rows in newX.
Z The Z matrix (the cross-validated predicted values for each algorithm in SL.library).
cvRisk A numeric vector with the V-fold cross-validated risk estimate for each algo-
rithm in SL.library. Note that this does not contain the CV risk estimate for
the SuperLearner, only the individual algorithms in the library.
family Returns the family value from above
fitLibrary A list with the fitted objects for each algorithm in SL.library on the full train-
ing data set.
varNames A character vector with the names of the variables in X.
validRows A list containing the row numbers for the V-fold cross-validation step.
method A list with the method functions.
whichScreen A logical matrix indicating which variables passed each screening algorithm.
control The control list.
cvControl The cvControl list.
errorsInCVLibrary
A logical vector indicating if any algorithms experienced an error within the CV
step.
errorsInLibrary
A logical vector indicating if any algorithms experienced an error on the full
data.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>, <NAME>., <NAME>. and <NAME>. (2008) Super Learner, Statistical Applications
of Genetics and Molecular Biology, 6, article 25.
Examples
## Not run:
# Binary outcome example adapted from SuperLearner examples
set.seed(1)
N <- 200
X <- matrix(rnorm(N*10), N, 10)
X <- as.data.frame(X)
Y <- rbinom(N, 1, plogis(.2*X[, 1] + .1*X[, 2] - .2*X[, 3] +
.1*X[, 3]*X[, 4] - .2*abs(X[, 4])))
SL.library <- c("SL.glmnet", "SL.glm", "SL.knn", "SL.gam", "SL.mean")
# least squares loss function
set.seed(1) # for reproducibility
fit_nnls <- SuperLearner(Y = Y, X = X, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS", family = binomial())
fit_nnls
# Risk Coef
# SL.glmnet_All 0.2439433 0.01293059
# SL.glm_All 0.2461245 0.08408060
# SL.knn_All 0.2604000 0.09600353
# SL.gam_All 0.2471651 0.40761918
# SL.mean_All 0.2486049 0.39936611
# negative log binomial likelihood loss function
fit_nnloglik <- recombineSL(fit_nnls, Y = Y, method = "method.NNloglik")
fit_nnloglik
# Risk Coef
# SL.glmnet_All 0.6815911 0.1577228
# SL.glm_All 0.6918926 0.0000000
# SL.knn_All Inf 0.0000000
# SL.gam_All 0.6935383 0.6292881
# SL.mean_All 0.6904050 0.2129891
# If we use the same seed as the original `fit_nnls`, then
# the recombineSL and SuperLearner results will be identical
# however, the recombineSL version will be much faster since
# it doesn't have to re-fit all the base learners.
set.seed(1)
fit_nnloglik2 <- SuperLearner(Y = Y, X = X, SL.library = SL.library,
verbose = TRUE, method = "method.NNloglik", family = binomial())
fit_nnloglik2
# Risk Coef
# SL.glmnet_All 0.6815911 0.1577228
# SL.glm_All 0.6918926 0.0000000
# SL.knn_All Inf 0.0000000
# SL.gam_All 0.6935383 0.6292881
# SL.mean_All 0.6904050 0.2129891
## End(Not run)
SampleSplitSuperLearner
Super Learner Prediction Function
Description
A Prediction Function for the Super Learner. The SuperLearner function takes a training set pair
(X,Y) and returns the predicted values based on a validation set. SampleSplitSuperLearner uses
sample split validation whereas SuperLearner uses V-fold cross-validation.
Usage
SampleSplitSuperLearner(Y, X, newX = NULL, family = gaussian(), SL.library,
method = "method.NNLS", id = NULL, verbose = FALSE,
control = list(), split = 0.8, obsWeights = NULL)
Arguments
Y The outcome in the training data set. Must be a numeric vector.
X The predictor variables in the training data set, usually a data.frame.
newX The predictor variables in the validation data set. The structure should match X.
If missing, uses X for newX.
SL.library Either a character vector of prediction algorithms or a list containing character
vectors. See details below for examples on the structure. A list of functions
included in the SuperLearner package can be found with listWrappers().
verbose logical; TRUE for printing progress during the computation (helpful for debug-
ging).
family Currently allows gaussian or binomial to describe the error distribution. Link
function information will be ignored and should be contained in the method
argument below.
method A list (or a function to create a list) containing details on estimating the coeffi-
cients for the super learner and the model to combine the individual algorithms
in the library. See ?method.template for details. Currently, the built in options
are either "method.NNLS" (the default), "method.NNLS2", "method.NNloglik",
"method.CC_LS", or "method.CC_nloglik". NNLS and NNLS2 are non-negative
least squares based on the Lawson-Hanson algorithm and the dual method of
Goldfarb and Idnani, respectively. NNLS and NNLS2 will work for both gaus-
sian and binomial outcomes. NNloglik is a non-negative binomial likelihood
maximization using the BFGS quasi-Newton optimization method. NN* meth-
ods are normalized so weights sum to one. CC_LS uses Goldfarb and Idnani’s
quadratic programming algorithm to calculate the best convex combination of
weights to minimize the squared error loss. CC_nloglik calculates the convex
combination of weights that minimize the negative binomial log likelihood on
the logistic scale using the sequential quadratic programming algorithm.
id Optional cluster identification variable. For the cross-validation splits, id forces
observations in the same cluster to be in the same validation fold. id is passed
to the prediction and screening algorithms in SL.library, but be sure to check the
individual wrappers as many of them ignore the information.
obsWeights Optional observation weights variable. As with id above, obsWeights is passed
to the prediction and screening algorithms, but many of the built in wrappers
ignore (or can’t use) the information. If you are using observation weights,
make sure the library you specify uses the information.
control A list of parameters to control the estimation process. Parameters include saveFitLibrary
and trimLogit. See SuperLearner.control for details.
split Either a single value between 0 and 1 indicating the fraction of the samples for
the training split. A value of 0.8 will randomly assign 80 percent of the samples
to the training split and the other 20 percent to the validation split. Alternatively,
split can be a numeric vector with the row numbers of X corresponding to the
validation split. All other rows not in the vector will be considered in the training
split.
Details
SuperLearner fits the super learner prediction algorithm. The weights for each algorithm in
SL.library are estimated, along with the fit of each algorithm.
The prescreening algorithms first rank the variables in X based on either a univariate
regression p-value or the randomForest variable importance. A subset of the variables in X is
selected based on a pre-defined cut-off. With this subset of the X variables, the algorithms in
SL.library are then fit.
The SuperLearner package contains a few prediction and screening algorithm wrappers. The full
list of wrappers can be viewed with listWrappers(). The design of the SuperLearner package is
such that the user can easily add their own wrappers. We also maintain a website with additional
examples of wrapper functions at https://github.com/ecpolley/SuperLearnerExtra.
Value
call The matched call.
libraryNames A character vector with the names of the algorithms in the library. The format is
’predictionAlgorithm_screeningAlgorithm’ with ’_All’ used to denote the pre-
diction algorithm run on all variables in X.
SL.library Returns SL.library in the same format as the argument with the same name
above.
SL.predict The predicted values from the super learner for the rows in newX.
coef Coefficients for the super learner.
library.predict
A matrix with the predicted values from each algorithm in SL.library for the
rows in newX.
Z The Z matrix (the cross-validated predicted values for each algorithm in SL.library).
cvRisk A numeric vector with the V-fold cross-validated risk estimate for each algo-
rithm in SL.library. Note that this does not contain the CV risk estimate for
the SuperLearner, only the individual algorithms in the library.
family Returns the family value from above
fitLibrary A list with the fitted objects for each algorithm in SL.library on the full train-
ing data set.
varNames A character vector with the names of the variables in X.
validRows A list containing the row numbers for the V-fold cross-validation step.
method A list with the method functions.
whichScreen A logical matrix indicating which variables passed each screening algorithm.
control The control list.
split The split value.
errorsInCVLibrary
A logical vector indicating if any algorithms experienced an error within the CV
step.
errorsInLibrary
A logical vector indicating if any algorithms experienced an error on the full
data.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>, <NAME>., <NAME>. and <NAME>. (2008) Super Learner, Statistical Applications
of Genetics and Molecular Biology, 6, article 25.
Examples
## Not run:
## simulate data
set.seed(23432)
## training set
n <- 500
p <- 50
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
colnames(X) <- paste("X", 1:p, sep="")
X <- data.frame(X)
Y <- X[, 1] + sqrt(abs(X[, 2] * X[, 3])) + X[, 2] - X[, 3] + rnorm(n)
## test set
m <- 1000
newX <- matrix(rnorm(m*p), nrow = m, ncol = p)
colnames(newX) <- paste("X", 1:p, sep="")
newX <- data.frame(newX)
newY <- newX[, 1] + sqrt(abs(newX[, 2] * newX[, 3])) + newX[, 2] -
newX[, 3] + rnorm(m)
# generate Library and run Super Learner
SL.library <- c("SL.glm", "SL.randomForest", "SL.gam",
"SL.polymars", "SL.mean")
test <- SampleSplitSuperLearner(Y = Y, X = X, newX = newX, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS")
test
# library with screening
SL.library <- list(c("SL.glmnet", "All"), c("SL.glm", "screen.randomForest",
"All", "screen.SIS"), "SL.randomForest", c("SL.polymars", "All"), "SL.mean")
test <- SuperLearner(Y = Y, X = X, newX = newX, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS")
test
# binary outcome
set.seed(1)
N <- 200
X <- matrix(rnorm(N*10), N, 10)
X <- as.data.frame(X)
Y <- rbinom(N, 1, plogis(.2*X[, 1] + .1*X[, 2] - .2*X[, 3] +
.1*X[, 3]*X[, 4] - .2*abs(X[, 4])))
SL.library <- c("SL.glmnet", "SL.glm", "SL.knn", "SL.mean")
# least squares loss function
test.NNLS <- SampleSplitSuperLearner(Y = Y, X = X, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS", family = binomial())
test.NNLS
## End(Not run)
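A sketch of the explicit form of the split argument, passing validation row numbers
directly (continuing the binary-outcome data above):
## Not run:
set.seed(2)
valid_rows <- sample(N, 40)
test.split <- SampleSplitSuperLearner(Y = Y, X = X, SL.library = SL.library,
                                      family = binomial(), method = "method.NNLS",
                                      split = valid_rows)
test.split
## End(Not run)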
SL.bartMachine Wrapper for bartMachine learner
Description
Support bayesian additive regression trees via the bartMachine package.
Usage
SL.bartMachine(Y, X, newX, family, obsWeights, id, num_trees = 50,
num_burn_in = 250, verbose = F, alpha = 0.95, beta = 2, k = 2,
q = 0.9, nu = 3, num_iterations_after_burn_in = 1000, ...)
Arguments
Y Outcome variable
X Covariate dataframe
newX Optional dataframe to predict the outcome
family "gaussian" for regression, "binomial" for binary classification
obsWeights Optional observation-level weights (supported but not tested)
id Optional id to group observations from the same unit (not used currently).
num_trees The number of trees to be grown in the sum-of-trees model.
num_burn_in Number of MCMC samples to be discarded as "burn-in".
verbose Prints information about progress of the algorithm to the screen.
alpha Base hyperparameter in tree prior for whether a node is nonterminal or not.
beta Power hyperparameter in tree prior for whether a node is nonterminal or not.
k For regression, k determines the prior probability that E(Y|X) is contained in the
interval (y_min, y_max), based on a normal distribution. For example, when
k=2, the prior probability is 95%. For classification, k determines the prior
probability that E(Y|X) is between (-3,3). Note that a larger value of k results in
more shrinkage and a more conservative fit.
q Quantile of the prior on the error variance at which the data-based estimate is
placed. Note that the larger the value of q, the more aggressive the fit as you
are placing more prior weight on values lower than the data-based estimate. Not
used for classification.
nu Degrees of freedom for the inverse chi^2 prior. Not used for classification.
num_iterations_after_burn_in
Number of MCMC samples to draw from the posterior distribution of f(x).
... Additional arguments (not used)
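A minimal sketch (assumes the bartMachine package and a working rJava setup):
## Not run:
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
# Subset rows and use only 2 folds to speed up the example.
row_subset = sample(nrow(X), 50)
sl = SuperLearner(Y[row_subset], X[row_subset, ], family = gaussian(),
                  cvControl = list(V = 2),
                  SL.library = c("SL.mean", "SL.bartMachine"))
sl
## End(Not run)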
SL.biglasso SL wrapper for biglasso
Description
SL wrapper for biglasso
Usage
SL.biglasso(Y, X, newX, family, obsWeights, penalty = "lasso",
alg.logistic = "Newton", screen = "SSR", alpha = 1, nlambda = 100,
eval.metric = "default", ncores = 1, nfolds = 5, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Gaussian or binomial
obsWeights Observation-level weights
penalty The penalty to be applied to the model. Either "lasso" (default), "ridge", or
"enet" (elastic net).
alg.logistic The algorithm used in logistic regression. If "Newton" then the exact hessian
is used (default); if "MM" then a majorization-minimization algorithm is used
to set an upper-bound on the hessian matrix. This can be faster, particularly in
data-larger-than-RAM case.
screen "SSR" (default) is the sequential strong rule; "SEDPP" is the (sequential) EDPP
rule. "SSR-BEDPP", "SSR-Dome", and "SSR-Slores" are our newly proposed
screening rules which combine the strong rule with a safe rule (BEDPP, Dome
test, or Slores rule). Among the three, the first two are for lasso-penalized linear
regression, and the last one is for lasso-penalized logistic regression. "None" is
to not apply a screening rule.
alpha The elastic-net mixing parameter that controls the relative contribution from the
lasso (l1) and the ridge (l2) penalty.
nlambda The number of lambda values to check. Default is 100.
eval.metric The evaluation metric for the cross-validated error and for choosing optimal
lambda. "default" for linear regression is MSE (mean squared error), for logistic
regression is misclassification error. "MAPE", for linear regression only, is the
Mean Absolute Percentage Error.
ncores The number of cores to use for parallel execution across a cluster created by the
parallel package.
nfolds The number of cross-validation folds. Default is 5.
... Any additional arguments, not currently used.
References
<NAME>, <NAME> (2017). biglasso: Extending Lasso Model Fitting to Big Data.
https://CRAN.R-project.org/package=biglasso.
See Also
predict.SL.biglasso biglasso cv.biglasso predict.biglasso SL.glmnet
Examples
## Not run:
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
# Sample rows to speed up example.
row_subset = sample(nrow(X), 30)
# Subset rows and columns & use only 2 folds to speed up example.
sl = SuperLearner(Y[row_subset], X[row_subset, 1:2, drop = FALSE],
family = gaussian(), cvControl = list(V = 2),
SL.library = "SL.biglasso")
sl
# example for predictions on the full dataset
pred = predict(sl, X)
summary(pred$pred)
## End(Not run)
SL.cforest cforest party
Description
These defaults emulate cforest_unbiased() but allow customization.
Usage
SL.cforest(Y, X, newX, family, obsWeights, id, ntree = 1000,
mtry = max(floor(ncol(X)/3), 1), mincriterion = 0, teststat = "quad",
testtype = "Univ", replace = F, fraction = 0.632, ...)
Arguments
Y Outcome variable
X Covariate dataframe
newX Optional dataframe to predict the outcome
family "gaussian" for regression, "binomial" for binary classification
obsWeights Optional observation-level weights (supported but not tested)
id Optional id to group observations from the same unit (not used currently).
ntree Number of trees
mtry Number of randomly selected features per node
mincriterion See ?cforest_control
teststat See ?cforest_control
testtype See ?cforest_control
replace See ?cforest_control
fraction See ?cforest_control
... Remaining arguments (unused)
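A minimal sketch (assumes the party package is installed):
## Not run:
data(Boston, package = "MASS")
Y = Boston$medv
X = Boston[, -14]
set.seed(1)
row_subset = sample(nrow(X), 50)
sl = SuperLearner(Y[row_subset], X[row_subset, ], family = gaussian(),
                  cvControl = list(V = 2),
                  SL.library = c("SL.mean", "SL.cforest"))
sl
## End(Not run)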
SL.extraTrees extraTrees SuperLearner wrapper
Description
Supports the Extremely Randomized Trees package for SuperLearning, which is a variant of random
forest.
Usage
SL.extraTrees(Y, X, newX, family, obsWeights, id, ntree = 500, mtry = if
(family$family == "gaussian") max(floor(ncol(X)/3), 1) else
floor(sqrt(ncol(X))), nodesize = if (family$family == "gaussian") 5 else 1,
numRandomCuts = 1, evenCuts = FALSE, numThreads = 1, quantile = FALSE,
subsetSizes = NULL, subsetGroups = NULL, tasks = NULL,
probOfTaskCuts = mtry/ncol(X), numRandomTaskCuts = 1, verbose = FALSE,
...)
Arguments
Y Outcome variable
X Covariate dataframe
newX Optional dataframe to predict the outcome
family "gaussian" for regression, "binomial" for binary classification.
obsWeights Optional observation-level weights (supported but not tested)
id Optional id to group observations from the same unit (not used currently).
ntree Number of trees (default 500).
mtry Number of features tested at each node. Default is ncol(x) / 3 for regression and
sqrt(ncol(x)) for classification.
nodesize The size of leaves of the tree. Default is 5 for regression and 1 for classification.
numRandomCuts the number of random cuts for each (randomly chosen) feature (default 1, which
corresponds to the official ExtraTrees method). The higher the number of cuts
the higher the chance of a good cut.
evenCuts if FALSE then cutting thresholds are uniformly sampled (default). If TRUE
then the range is split into even intervals (the number of intervals is numRan-
domCuts) and a cut is uniformly sampled from each interval.
numThreads the number of CPU threads to use (default is 1).
quantile if TRUE then quantile regression is performed (default is FALSE), only for re-
gression data. Then use predict(et, newdata, quantile=k) to make predictions for
k quantile.
subsetSizes subset size (one integer) or subset sizes (vector of integers, requires subset-
Groups), if supplied every tree is built from a random subset of size subsetSizes.
NULL means no subsetting, i.e. all samples are used.
subsetGroups list specifying subset group for each sample: from samples in group g, each tree
will randomly select subsetSizes[g] samples.
tasks vector of tasks, integers from 1 and up. NULL if no multi-task learning. (untested)
probOfTaskCuts probability of performing task cut at a node (default mtry / ncol(x)). Used only
if tasks is specified. (untested)
numRandomTaskCuts
number of times task cut is performed at a node (default 1). Used only if tasks
is specified. (untested)
verbose Verbosity of model fitting.
... Any remaining arguments (not used).
Details
If Java runs out of memory (java.lang.OutOfMemoryError: Java heap space), then (assuming you
have free memory) you can increase the heap size by running options(java.parameters = "-Xmx2g")
before calling library(extraTrees).
References
<NAME>., <NAME>., & <NAME>. (2006). Extremely randomized trees. Machine learning,
63(1), 3-42.
<NAME>., <NAME>., & <NAME>. (2014). Tree-based ensemble multi-task learning method
for classification and regression. IEICE TRANSACTIONS on Information and Systems, 97(6),
1677-1681.
See Also
extraTrees predict.SL.extraTrees predict.extraTrees
Examples
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
# Sample rows to speed up example.
row_subset = sample(nrow(X), 30)
sl = SuperLearner(Y[row_subset], X[row_subset, ], family = gaussian(),
cvControl = list(V = 2), SL.library = c("SL.mean", "SL.extraTrees"))
print(sl)
SL.glm Wrapper for glm
Description
Wrapper for generalized linear models via glm().
Note that for outcomes bounded by [0, 1] the binomial family can be used in addition to gaussian.
Usage
SL.glm(Y, X, newX, family, obsWeights, model = TRUE, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Gaussian or binomial
obsWeights Observation-level weights
model Whether to save model.matrix of data in fit object. Set to FALSE to save mem-
ory.
... Any remaining arguments, not used.
References
<NAME>. (2015). Applied regression analysis and generalized linear models. Sage Publications.
See Also
predict.SL.glm glm predict.glm SL.speedglm
Examples
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
sl = SuperLearner(Y, X, family = gaussian(),
SL.library = c("SL.mean", "SL.glm"))
print(sl)
SL.glmnet Elastic net regression, including lasso and ridge
Description
Penalized regression using elastic net. Alpha = 0 corresponds to ridge regression and alpha = 1
corresponds to Lasso.
See vignette("glmnet_beta", package = "glmnet") for a nice tutorial on glmnet.
Usage
SL.glmnet(Y, X, newX, family, obsWeights, id, alpha = 1, nfolds = 10,
nlambda = 100, useMin = TRUE, loss = "deviance", ...)
Arguments
Y Outcome variable
X Covariate dataframe
newX Dataframe to predict the outcome
family "gaussian" for regression, "binomial" for binary classification. Untested op-
tions: "multinomial" for multiple classification or "mgaussian" for multiple re-
sponse, "poisson" for non-negative outcome with proportional mean and vari-
ance, "cox".
obsWeights Optional observation-level weights
id Optional id to group observations from the same unit (not used currently).
alpha Elastic net mixing parameter, range [0, 1]. 0 = ridge regression and 1 = lasso.
nfolds Number of folds for internal cross-validation to optimize lambda.
nlambda Number of lambda values to check, recommended to be 100 or more.
useMin If TRUE use lambda that minimizes risk, otherwise use 1 standard-error rule
which chooses a higher penalty with performance within one standard error of
the minimum (see Breiman et al. 1984 on CART for background).
loss Loss function, can be "deviance", "mse", or "mae". If family = binomial can
also be "auc" or "class" (misclassification error).
... Any additional arguments are passed through to cv.glmnet.
References
<NAME>., <NAME>., & <NAME>. (2010). Regularization paths for generalized linear models
via coordinate descent. Journal of statistical software, 33(1), 1.
<NAME>., & <NAME>. (1970). Ridge regression: Biased estimation for nonorthogonal
problems. Technometrics, 12(1), 55-67.
<NAME>. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statis-
tical Society. Series B (Methodological), 267-288.
<NAME>., & <NAME>. (2005). Regularization and variable selection via the elastic net. Journal of
the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301-320.
See Also
predict.SL.glmnet cv.glmnet glmnet
Examples
## Not run:
# Load a test dataset.
data(PimaIndiansDiabetes2, package = "mlbench")
data = PimaIndiansDiabetes2
# Omit observations with missing data.
data = na.omit(data)
Y = as.numeric(data$diabetes == "pos")
X = subset(data, select = -diabetes)
set.seed(1, "L'Ecuyer-CMRG")
sl = SuperLearner(Y, X, family = binomial(),
SL.library = c("SL.mean", "SL.glm", "SL.glmnet"))
sl
## End(Not run)
SL.kernelKnn SL wrapper for KernelKNN
Description
Wrapper for a configurable implementation of k-nearest neighbors. Supports both binomial and
gaussian outcome distributions.
Usage
SL.kernelKnn(Y, X, newX, family, k = 10, method = "euclidean",
weights_function = NULL, extrema = F, h = 1, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Gaussian or binomial
k Number of nearest neighbors to use
method Distance method, can be ’euclidean’ (default), ’manhattan’, ’chebyshev’, ’can-
berra’, ’braycurtis’, ’pearson_correlation’, ’simple_matching_coefficient’, ’minkowski’
(by default the order ’p’ of the minkowski parameter equals k), ’hamming’, ’ma-
halanobis’, ’jaccard_coefficient’, ’Rao_coefficient’
weights_function
Weighting method for combining the nearest neighbors. Can be ’uniform’ (de-
fault), ’triangular’, ’epanechnikov’, ’biweight’, ’triweight’, ’tricube’, ’gaussian’,
’cosine’, ’logistic’, ’gaussianSimple’, ’silverman’, ’inverse’, ’exponential’.
extrema if TRUE then the minimum and maximum values from the k-nearest-neighbors
will be removed (can be thought as outlier removal).
h the bandwidth, applicable if the weights_function is not NULL. Defaults to 1.0.
... Any additional parameters, not currently passed through.
Value
List with predictions and the original training data & hyperparameters.
Examples
# Load a test dataset.
data(PimaIndiansDiabetes2, package = "mlbench")
data = PimaIndiansDiabetes2
# Omit observations with missing data.
data = na.omit(data)
Y_bin = as.numeric(data$diabetes == "pos")
X = subset(data, select = -diabetes)
set.seed(1)
sl = SuperLearner(Y_bin, X, family = binomial(),
SL.library = c("SL.mean", "SL.kernelKnn"))
sl
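A sketch of customizing the distance metric and k through create.Learner (continuing
the data above; the wrapper name in knn_custom$names is generated by create.Learner):
## Not run:
knn_custom = create.Learner("SL.kernelKnn",
                            params = list(method = "manhattan", k = 25))
sl2 = SuperLearner(Y_bin, X, family = binomial(),
                   SL.library = c("SL.mean", knn_custom$names))
sl2
## End(Not run)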
SL.ksvm Wrapper for Kernlab’s SVM algorithm
Description
Wrapper for Kernlab’s support vector machine algorithm.
Usage
SL.ksvm(Y, X, newX, family, type = NULL, kernel = "rbfdot",
kpar = "automatic", scaled = T, C = 1, nu = 0.2, epsilon = 0.1,
cross = 0, prob.model = family$family == "binomial",
class.weights = NULL, cache = 40, tol = 0.001, shrinking = T, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Gaussian or binomial
type ksvm can be used for classification, for regression, or for novelty detection.
Depending on whether y is a factor or not, the default setting for type is C-svc
or eps-svr, respectively, but can be overwritten by setting an explicit value. See
?ksvm for more details.
kernel the kernel function used in training and predicting. This parameter can be set to
any function, of class kernel, which computes the inner product in feature space
between two vector arguments. See ?ksvm for more details.
kpar the list of hyper-parameters (kernel parameters). This is a list which contains
the parameters to be used with the kernel function. See ?ksvm for more details.
scaled A logical vector indicating the variables to be scaled. If scaled is of length 1,
the value is recycled as many times as needed and all non-binary variables are
scaled. Per default, data are scaled internally (both x and y variables) to zero
mean and unit variance. The center and scale values are returned and used for
later predictions.
C cost of constraints violation (default: 1) this is the ’C’-constant of the regular-
ization term in the Lagrange formulation.
nu parameter needed for nu-svc, one-svc, and nu-svr. The nu parameter sets the
upper bound on the training error and the lower bound on the fraction of data
points to become Support Vectors (default: 0.2).
epsilon epsilon in the insensitive-loss function used for eps-svr, nu-svr and eps-bsvm
(default: 0.1)
cross if a integer value k>0 is specified, a k-fold cross validation on the training data is
performed to assess the quality of the model: the accuracy rate for classification
and the Mean Squared Error for regression
prob.model if set to TRUE builds a model for calculating class probabilities or in case of
regression, calculates the scaling parameter of the Laplacian distribution fitted
on the residuals. Fitting is done on output data created by performing a 3-fold
cross-validation on the training data. (default: FALSE)
class.weights a named vector of weights for the different classes, used for asymmetric class
sizes. Not all factor levels have to be supplied (default weight: 1). All compo-
nents have to be named.
cache cache memory in MB (default 40)
tol tolerance of termination criterion (default: 0.001)
shrinking option whether to use the shrinking-heuristics (default: TRUE)
... Any additional parameters, not currently passed through.
Value
List with predictions and the original training data & hyperparameters.
References
<NAME>., <NAME>., & <NAME>. (2016). A practical guide to support vector classification.
https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
<NAME>., & <NAME>. (2001). Learning with kernels: support vector machines, regulariza-
tion, optimization, and beyond. MIT press.
<NAME>. (1998). Statistical learning theory (Vol. 1). New York: Wiley.
<NAME>., <NAME>., <NAME>., & <NAME>. (2004). kernlab-an S4 package for kernel
methods in R. Journal of statistical software, 11(9), 1-20.
See Also
predict.SL.ksvm ksvm predict.ksvm
Examples
## Not run:
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
sl = SuperLearner(Y, X, family = gaussian(),
SL.library = c("SL.mean", "SL.ksvm"))
sl
pred = predict(sl, X)
summary(pred$pred)
## End(Not run)
SL.lda SL wrapper for MASS:lda
Description
Linear discriminant analysis, used for classification.
Usage
SL.lda(Y, X, newX, family, obsWeights = rep(1, nrow(X)), id = NULL,
verbose = F, prior = as.vector(prop.table(table(Y))), method = "mle",
tol = 1e-04, CV = F, nu = 5, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Binomial only, cannot be used for regression.
obsWeights Observation-level weights
id Not supported.
verbose If TRUE, display additional output during execution.
prior the prior probabilities of class membership. If unspecified, the class proportions
for the training set are used. If present, the probabilities should be specified in
the order of the factor levels.
method "moment" for standard estimators of the mean and variance, "mle" for MLEs,
"mve" to use cov.mve, or "t" for robust estimates based on a t distribution.
tol tolerance
CV If true, returns results (classes and posterior probabilities) for leave-one-out
cross-validation. Note that if the prior is estimated, the proportions in the whole
dataset are used.
nu degrees of freedom for method = "t".
... Any additional arguments, not currently used.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2013). An Introduction to Statistical Learning
(Vol. 6). New York: Springer. Section 4.4.
See Also
predict.SL.lda lda predict.lda SL.qda
Examples
data(Boston, package = "MASS")
Y = as.numeric(Boston$medv > 23)
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
# Use only 2 CV folds to speed up example.
sl = SuperLearner(Y, X, family = binomial(), cvControl = list(V = 2),
SL.library = c("SL.mean", "SL.lda"))
sl
pred = predict(sl, X)
summary(pred$pred)
SL.lm Wrapper for lm
Description
Wrapper for OLS via lm(), which may be faster than glm().
Usage
SL.lm(Y, X, newX, family, obsWeights, model = TRUE, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Gaussian or binomial
obsWeights Observation-level weights
model Whether to save model.matrix of data in fit object. Set to FALSE to save mem-
ory.
... Any remaining arguments, not used.
References
<NAME>. (2015). Applied regression analysis and generalized linear models. Sage Publications.
See Also
predict.SL.lm lm predict.lm SL.speedlm
Examples
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
sl = SuperLearner(Y, X, family = gaussian(),
SL.library = c("SL.mean", "SL.lm"))
print(sl)
SL.qda SL wrapper for MASS:qda
Description
Quadratic discriminant analysis, used for classification.
Usage
SL.qda(Y, X, newX, family, obsWeights = rep(1, nrow(X)), verbose = F,
id = NULL, prior = as.vector(prop.table(table(Y))), method = "mle",
tol = 1e-04, CV = F, nu = 5, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Binomial only, cannot be used for regression.
obsWeights Observation-level weights
verbose If TRUE, display additional output during execution.
id Not supported.
prior the prior probabilities of class membership. If unspecified, the class proportions
for the training set are used. If present, the probabilities should be specified in
the order of the factor levels.
method "moment" for standard estimators of the mean and variance, "mle" for MLEs,
"mve" to use cov.mve, or "t" for robust estimates based on a t distribution.
tol tolerance
CV If true, returns results (classes and posterior probabilities) for leave-one-out
cross-validation. Note that if the prior is estimated, the proportions in the whole
dataset are used.
nu degrees of freedom for method = "t".
... Any additional arguments, not currently used.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2013). An Introduction to Statistical Learning
(Vol. 6). New York: Springer. Section 4.4.
See Also
predict.SL.qda qda predict.qda SL.lda
Examples
data(Boston, package = "MASS")
Y = as.numeric(Boston$medv > 23)
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
# Use only 2 CV folds to speed up example.
sl = SuperLearner(Y, X, family = binomial(), cvControl = list(V = 2),
SL.library = c("SL.mean", "SL.qda"))
sl
pred = predict(sl, X)
summary(pred$pred)
SL.ranger SL wrapper for ranger
Description
Ranger is a fast implementation of Random Forest (Breiman 2001) or recursive partitioning, partic-
ularly suited for high dimensional data.
Extending code by <NAME> from the SuperLearnerExtra package.
Usage
SL.ranger(Y, X, newX, family, obsWeights, num.trees = 500,
mtry = floor(sqrt(ncol(X))), write.forest = TRUE,
probability = family$family == "binomial",
min.node.size = ifelse(family$family == "gaussian", 5, 1), replace = TRUE,
sample.fraction = ifelse(replace, 1, 0.632), num.threads = 1,
verbose = T, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Gaussian or binomial
obsWeights Observation-level weights
num.trees Number of trees.
mtry Number of variables to possibly split at in each node. Default is the (rounded
down) square root of the number variables.
write.forest Save ranger.forest object, required for prediction. Set to FALSE to reduce mem-
ory usage if no prediction intended.
probability Grow a probability forest as in Malley et al. (2012).
min.node.size Minimal node size. Default 1 for classification, 5 for regression, 3 for survival,
and 10 for probability.
replace Sample with replacement.
sample.fraction
Fraction of observations to sample. Default is 1 for sampling with replacement
and 0.632 for sampling without replacement.
num.threads Number of threads to use.
verbose If TRUE, display additional output during execution.
... Any additional arguments, not currently used.
References
<NAME>. (2001). Random forests. Machine learning 45:5-32.
<NAME>. & <NAME>. (2016). ranger: A Fast Implementation of Random Forests for High Di-
mensional Data in C++ and R. Journal of Statistical Software, in press. http://arxiv.org/abs/1508.04409.
See Also
SL.ranger ranger predict.ranger
Examples
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
# Use only 2 CV folds to speed up example.
sl = SuperLearner(Y, X, family = gaussian(), cvControl = list(V = 2),
SL.library = c("SL.mean", "SL.ranger"))
sl
pred = predict(sl, X)
summary(pred$pred)
SL.speedglm Wrapper for speedglm
Description
Speedglm is a fast version of glm()
Usage
SL.speedglm(Y, X, newX, family, obsWeights, maxit = 25, k = 2, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Gaussian or binomial
obsWeights Observation-level weights
maxit Maximum number of iterations before stopping.
k numeric, the penalty per parameter to be used; the default k = 2 is the classical
AIC.
... Any remaining arguments, not used.
References
Enea, <NAME>. (2013). Fitting linear models and generalized linear models with large data
sets in R. Statistical Methods for the Analysis of Large Datasets: book of short papers, 411-414.
See Also
predict.SL.speedglm speedglm predict.speedglm
SL.speedlm Wrapper for speedlm
Description
Speedlm is a fast version of lm()
Usage
SL.speedlm(Y, X, newX, family, obsWeights, ...)
Arguments
Y Outcome variable
X Training dataframe
newX Test dataframe
family Gaussian or binomial
obsWeights Observation-level weights
... Any remaining arguments, not used.
References
Enea, <NAME>. (2013). Fitting linear models and generalized linear models with large data
sets in R. Statistical Methods for the Analysis of Large Datasets: book of short papers, 411-414.
See Also
predict.SL.speedlm speedlm predict.speedlm SL.speedglm
SL.xgboost XGBoost SuperLearner wrapper
Description
Supports the Extreme Gradient Boosting package for SuperLearnering, which is a variant of gradi-
ent boosted machines (GBM).
Usage
SL.xgboost(Y, X, newX, family, obsWeights, id, ntrees = 1000, max_depth = 4,
shrinkage = 0.1, minobspernode = 10, params = list(), nthread = 1,
verbose = 0, save_period = NULL, ...)
Arguments
Y Outcome variable
X Covariate dataframe
newX Optional dataframe to predict the outcome
family "gaussian" for regression, "binomial" for binary classification, "multinomial" for
multiple classification (not yet supported).
obsWeights Optional observation-level weights (supported but not tested)
id Optional id to group observations from the same unit (not used currently).
ntrees How many trees to fit. Low numbers may underfit but high numbers may overfit,
depending also on the shrinkage.
max_depth How deep each tree can be. 1 means no interactions, aka tree stubs.
shrinkage How much to shrink the predictions, in order to reduce overfitting.
minobspernode Minimum observations allowed per tree node, after which no more splitting will
occur.
params Many other parameters can be customized. See https://xgboost.readthedocs.
io/en/latest/parameter.html
nthread How many threads (cores) should xgboost use. Generally we want to keep this
to 1 so that XGBoost does not compete with SuperLearner parallelization.
verbose Verbosity of XGB fitting.
save_period How often (in tree iterations) to save current model to disk during processing. If
NULL does not save model, and if 0 saves model at the end.
... Any remaining arguments (not supported though).
Details
The performance of XGBoost, like GBM, is sensitive to the configuration settings. Therefore it is
best to create multiple configurations using create.SL.xgboost and allow the SuperLearner to choose
the best weights based on cross-validated performance.
If you run into errors please first try installing the latest version of XGBoost from drat as described
here: https://xgboost.readthedocs.io/en/latest/build.html
summary.CV.SuperLearner
Summary Function for Cross-Validated Super Learner
Description
summary method for the CV.SuperLearner function
Usage
## S3 method for class 'CV.SuperLearner'
summary(object, obsWeights = NULL, ...)
## S3 method for class 'summary.CV.SuperLearner'
print(x, digits, ...)
Arguments
object An object of class "CV.SuperLearner", the result of a call to CV.SuperLearner.
x An object of class "summary.CV.SuperLearner", the result of a call to summary.CV.SuperLearner.
obsWeights Optional vector for observation weights.
digits The number of significant digits to use when printing.
... additional arguments . . .
Details
Summary method for CV.SuperLearner. Calculates the V-fold cross-validated estimate of either
the mean squared error or the -2*log(L) depending on the loss function used.
Value
summary.CV.SuperLearner returns a list with components
call The function call from CV.SuperLearner
method Describes the loss function used. Currently either least squares of negative log
Likelihood.
V Number of folds
Risk.SL Risk estimate for the super learner
Risk.dSL Risk estimate for the discrete super learner (the cross-validation selector)
Risk.library A matrix with the risk estimates for each algorithm in the library
Table A table with the mean risk estimate and standard deviation across the folds for
the super learner and all algorithms in the library
Author(s)
<NAME> <<EMAIL>>
See Also
CV.SuperLearner
SuperLearner Super Learner Prediction Function
Description
A Prediction Function for the Super Learner. The SuperLearner function takes a training set pair
(X,Y) and returns the predicted values based on a validation set.
Usage
SuperLearner(Y, X, newX = NULL, family = gaussian(), SL.library,
method = "method.NNLS", id = NULL, verbose = FALSE,
control = list(), cvControl = list(), obsWeights = NULL, env = parent.frame())
Arguments
Y The outcome in the training data set. Must be a numeric vector.
X The predictor variables in the training data set, usually a data.frame.
newX The predictor variables in the validation data set. The structure should match X.
If missing, uses X for newX.
SL.library Either a character vector of prediction algorithms or a list containing character
vectors. See details below for examples on the structure. A list of functions
included in the SuperLearner package can be found with listWrappers().
verbose logical; TRUE for printing progress during the computation (helpful for debug-
ging).
family Currently allows gaussian or binomial to describe the error distribution. Link
function information will be ignored and should be contained in the method
argument below.
method A list (or a function to create a list) containing details on estimating the coeffi-
cients for the super learner and the model to combine the individual algorithms
in the library. See ?method.template for details. Currently, the built in options
are either "method.NNLS" (the default), "method.NNLS2", "method.NNloglik",
"method.CC_LS", "method.CC_nloglik", or "method.AUC". NNLS and NNLS2
are non-negative least squares based on the Lawson-Hanson algorithm and the
dual method of Goldfarb and Idnani, respectively. NNLS and NNLS2 will work
for both gaussian and binomial outcomes. NNloglik is a non-negative binomial
likelihood maximization using the BFGS quasi-Newton optimization method.
NN* methods are normalized so weights sum to one. CC_LS uses Goldfarb and
Idnani’s quadratic programming algorithm to calculate the best convex combi-
nation of weights to minimize the squared error loss. CC_nloglik calculates the
convex combination of weights that minimize the negative binomial log like-
lihood on the logistic scale using the sequential quadratic programming algo-
rithm. AUC, which only works for binary outcomes, uses the Nelder-Mead
method via the optim function to minimize rank loss (equivalent to maximizing
AUC).
id Optional cluster identification variable. For the cross-validation splits, id forces
observations in the same cluster to be in the same validation fold. id is passed
to the prediction and screening algorithms in SL.library, but be sure to check the
individual wrappers as many of them ignore the information.
obsWeights Optional observation weights variable. As with id above, obsWeights is passed
to the prediction and screening algorithms, but many of the built in wrappers
ignore (or can’t use) the information. If you are using observation weights,
make sure the library you specify uses the information.
control A list of parameters to control the estimation process. Parameters include saveFitLibrary
and trimLogit. See SuperLearner.control for details.
cvControl A list of parameters to control the cross-validation process. Parameters include
V, stratifyCV, shuffle and validRows. See SuperLearner.CV.control for
details.
env Environment containing the learner functions. Defaults to the calling environ-
ment.
Details
SuperLearner fits the super learner prediction algorithm. The weights for each algorithm in
SL.library is estimated, along with the fit of each algorithm.
The prescreen algorithms. These algorithms first rank the variables in X based on either a univariate
regression p-value of the randomForest variable importance. A subset of the variables in X is
selected based on a pre-defined cut-off. With this subset of the X variables, the algorithms in
SL.library are then fit.
The SuperLearner package contains a few prediction and screening algorithm wrappers. The full
list of wrappers can be viewed with listWrappers(). The design of the SuperLearner package is
such that the user can easily add their own wrappers. We also maintain a website with additional
examples of wrapper functions at https://github.com/ecpolley/SuperLearnerExtra.
Value
call The matched call.
libraryNames A character vector with the names of the algorithms in the library. The format is
’predictionAlgorithm_screeningAlgorithm’ with ’_All’ used to denote the pre-
diction algorithm run on all variables in X.
SL.library Returns SL.library in the same format as the argument with the same name
above.
SL.predict The predicted values from the super learner for the rows in newX.
coef Coefficients for the super learner.
library.predict
A matrix with the predicted values from each algorithm in SL.library for the
rows in newX.
Z The Z matrix (the cross-validated predicted values for each algorithm in SL.library).
cvRisk A numeric vector with the V-fold cross-validated risk estimate for each algo-
rithm in SL.library. Note that this does not contain the CV risk estimate for
the SuperLearner, only the individual algorithms in the library.
family Returns the family value from above
fitLibrary A list with the fitted objects for each algorithm in SL.library on the full train-
ing data set.
cvFitLibrary A list with fitted objects for each algorithm in SL.library on each of V different
training data sets.
varNames A character vector with the names of the variables in X.
validRows A list containing the row numbers for the V-fold cross-validation step.
method A list with the method functions.
whichScreen A logical matrix indicating which variables passed each screening algorithm.
control The control list.
cvControl The cvControl list.
errorsInCVLibrary
A logical vector indicating if any algorithms experienced an error within the CV
step.
errorsInLibrary
A logical vector indicating if any algorithms experienced an error on the full
data.
env Environment passed into function which will be searched to find the learner
functions. Defaults to the calling environment.
times A list that contains the execution time of the SuperLearner, plus separate times
for model fitting and prediction.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>, <NAME>., <NAME>. and <NAME>. (2008) Super Learner, Statistical Applications
of Genetics and Molecular Biology, 6, article 25.
Examples
## Not run:
## simulate data
set.seed(23432)
## training set
n <- 500
p <- 50
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
colnames(X) <- paste("X", 1:p, sep="")
X <- data.frame(X)
Y <- X[, 1] + sqrt(abs(X[, 2] * X[, 3])) + X[, 2] - X[, 3] + rnorm(n)
## test set
m <- 1000
newX <- matrix(rnorm(m*p), nrow = m, ncol = p)
colnames(newX) <- paste("X", 1:p, sep="")
newX <- data.frame(newX)
newY <- newX[, 1] + sqrt(abs(newX[, 2] * newX[, 3])) + newX[, 2] -
newX[, 3] + rnorm(m)
# generate Library and run Super Learner
SL.library <- c("SL.glm", "SL.randomForest", "SL.gam",
"SL.polymars", "SL.mean")
test <- SuperLearner(Y = Y, X = X, newX = newX, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS")
test
# library with screening
SL.library <- list(c("SL.glmnet", "All"), c("SL.glm", "screen.randomForest",
"All", "screen.SIS"), "SL.randomForest", c("SL.polymars", "All"), "SL.mean")
test <- SuperLearner(Y = Y, X = X, newX = newX, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS")
test
# binary outcome
set.seed(1)
N <- 200
X <- matrix(rnorm(N*10), N, 10)
X <- as.data.frame(X)
Y <- rbinom(N, 1, plogis(.2*X[, 1] + .1*X[, 2] - .2*X[, 3] +
.1*X[, 3]*X[, 4] - .2*abs(X[, 4])))
SL.library <- c("SL.glmnet", "SL.glm", "SL.knn", "SL.mean")
# least squares loss function
test.NNLS <- SuperLearner(Y = Y, X = X, SL.library = SL.library,
verbose = TRUE, method = "method.NNLS", family = binomial())
test.NNLS
# negative log binomial likelihood loss function
test.NNloglik <- SuperLearner(Y = Y, X = X, SL.library = SL.library,
verbose = TRUE, method = "method.NNloglik", family = binomial())
test.NNloglik
# 1 - AUC loss function
test.AUC <- SuperLearner(Y = Y, X = X, SL.library = SL.library,
verbose = TRUE, method = "method.AUC", family = binomial())
test.AUC
# 2
# adapted from library(SIS)
set.seed(1)
# training
b <- c(2, 2, 2, -3*sqrt(2))
n <- 150
p <- 200
truerho <- 0.5
corrmat <- diag(rep(1-truerho, p)) + matrix(truerho, p, p)
corrmat[, 4] = sqrt(truerho)
corrmat[4, ] = sqrt(truerho)
corrmat[4, 4] = 1
cholmat <- chol(corrmat)
x <- matrix(rnorm(n*p, mean=0, sd=1), n, p)
x <- x
feta <- x[, 1:4]
fprob <- exp(feta) / (1 + exp(feta))
y <- rbinom(n, 1, fprob)
# test
m <- 10000
newx <- matrix(rnorm(m*p, mean=0, sd=1), m, p)
newx <- newx
newfeta <- newx[, 1:4]
newfprob <- exp(newfeta) / (1 + exp(newfeta))
newy <- rbinom(m, 1, newfprob)
DATA2 <- data.frame(Y = y, X = x)
newDATA2 <- data.frame(Y = newy, X=newx)
create.SL.knn <- function(k = c(20, 30)) {
for(mm in seq(length(k))){
eval(parse(text = paste('SL.knn.', k[mm], '<- function(..., k = ', k[mm],
') SL.knn(..., k = k)', sep = '')), envir = .GlobalEnv)
}
invisible(TRUE)
}
create.SL.knn(c(20, 30, 40, 50, 60, 70))
# library with screening
SL.library <- list(c("SL.glmnet", "All"), c("SL.glm", "screen.randomForest"),
"SL.randomForest", "SL.knn", "SL.knn.20", "SL.knn.30", "SL.knn.40",
"SL.knn.50", "SL.knn.60", "SL.knn.70",
c("SL.polymars", "screen.randomForest"))
test <- SuperLearner(Y = DATA2$Y, X = DATA2[, -1], newX = newDATA2[, -1],
SL.library = SL.library, verbose = TRUE, family = binomial())
test
## examples with multicore
set.seed(23432, "L'Ecuyer-CMRG") # use L'Ecuyer for multicore seeds. see ?set.seed for details
## training set
n <- 500
p <- 50
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
colnames(X) <- paste("X", 1:p, sep="")
X <- data.frame(X)
Y <- X[, 1] + sqrt(abs(X[, 2] * X[, 3])) + X[, 2] - X[, 3] + rnorm(n)
## test set
m <- 1000
newX <- matrix(rnorm(m*p), nrow = m, ncol = p)
colnames(newX) <- paste("X", 1:p, sep="")
newX <- data.frame(newX)
newY <- newX[, 1] + sqrt(abs(newX[, 2] * newX[, 3])) + newX[, 2] - newX[, 3] + rnorm(m)
# generate Library and run Super Learner
SL.library <- c("SL.glm", "SL.randomForest",
"SL.polymars", "SL.mean")
testMC <- mcSuperLearner(Y = Y, X = X, newX = newX, SL.library = SL.library,
method = "method.NNLS")
testMC
## examples with snow
library(parallel)
cl <- makeCluster(2, type = "PSOCK") # can use different types here
clusterSetRNGStream(cl, iseed = 2343)
# make SL functions available on the clusters, use assignment to avoid printing
foo <- clusterEvalQ(cl, library(SuperLearner))
testSNOW <- snowSuperLearner(cluster = cl, Y = Y, X = X, newX = newX,
SL.library = SL.library, method = "method.NNLS")
testSNOW
stopCluster(cl)
## snow example with user-generated wrappers
# If you write your own wrappers and are using snowSuperLearner()
# These new wrappers need to be added to the SuperLearner namespace and exported to the clusters
# Using a simple example here, but can define any new SuperLearner wrapper
my.SL.wrapper <- function(...) SL.glm(...)
# assign function into SuperLearner namespace
environment(my.SL.wrapper) <-asNamespace("SuperLearner")
cl <- makeCluster(2, type = "PSOCK") # can use different types here
clusterSetRNGStream(cl, iseed = 2343)
# make SL functions available on the clusters, use assignment to avoid printing
foo <- clusterEvalQ(cl, library(SuperLearner))
clusterExport(cl, c("my.SL.wrapper")) # copy the function to all clusters
testSNOW <- snowSuperLearner(cluster = cl, Y = Y, X = X, newX = newX,
SL.library = c("SL.glm", "SL.mean", "my.SL.wrapper"), method = "method.NNLS")
testSNOW
stopCluster(cl)
## timing
replicate(5, system.time(SuperLearner(Y = Y, X = X, newX = newX,
SL.library = SL.library, method = "method.NNLS")))
replicate(5, system.time(mcSuperLearner(Y = Y, X = X, newX = newX,
SL.library = SL.library, method = "method.NNLS")))
cl <- makeCluster(2, type = 'PSOCK')
# make SL functions available on the clusters, use assignment to avoid printing
foo <- clusterEvalQ(cl, library(SuperLearner))
replicate(5, system.time(snowSuperLearner(cl, Y = Y, X = X, newX = newX,
SL.library = SL.library, method = "method.NNLS")))
stopCluster(cl)
## End(Not run)
SuperLearner.control Control parameters for the SuperLearner
Description
Control parameters for the SuperLearner
Usage
SuperLearner.control(saveFitLibrary = TRUE, saveCVFitLibrary = FALSE, trimLogit = 0.001)
Arguments
saveFitLibrary Logical. Should the fit for each algorithm be saved in the output from SuperLearner.
saveCVFitLibrary
Logical. Should cross-validated fits for each algorithm be saved in the output
from SuperLearner.
trimLogit number between 0.0 and 0.5. What level to truncate the logit transformation to
maintain a bounded loss function when using the NNloglik method.
Value
A list containing the control parameters.
SuperLearner.CV.control
Control parameters for the cross validation steps in SuperLearner
Description
Control parameters for the cross validation steps in SuperLearner
Usage
SuperLearner.CV.control(V = 10L, stratifyCV = FALSE, shuffle = TRUE,
validRows = NULL)
Arguments
V Integer. Number of splits for the V-fold cross-validation step. The default is 10.
In most cases, between 10 and 20 splits works well.
stratifyCV Logical. Should the data splits be stratified by a binary response? Attempts to
maintain the same ratio in each training and validation sample.
shuffle Logical. Should the rows of X be shuffled before creating the splits.
validRows A List. Use this to pass pre-specified rows for the sample splits. The length of
the list should be V and each entry in the list should contain a vector with the
row numbers of the corresponding validation sample.
Value
A list containing the control parameters
SuperLearnerNews Show the NEWS file for the SuperLearner package
Description
Show the NEWS file of the SuperLearner package. The function is simply a wrapper for the
RShowDoc function
Usage
SuperLearnerNews(...)
SuperLearnerDocs(what = 'SuperLearnerR.pdf', ...)
Arguments
... additional arguments passed to RShowDoc
what specify what document to open. Currently supports the NEWS file and the PDF
files ’SuperLearner.pdf’ and ’SuperLearnerR.pdf’.
Value
A invisible character string given the path to the SuperLearner NEWS file
trimLogit truncated-probabilities logit transformation
Description
computes the logit transformation on the truncated probabilities
Usage
trimLogit(x, trim = 1e-05)
Arguments
x vector of probabilities.
trim value to truncate probabilities at. Currently symmetric truncation (trim and 1-
trim).
Value
logit transformed values
Examples
x <- c(0.00000001, 0.0001, 0.001, 0.01, 0.1, 0.3, 0.7, 0.9, 0.99,
0.999, 0.9999, 0.99999999)
trimLogit(x, trim = 0.001)
data.frame(Prob = x, Logit = qlogis(x), trimLogit = trimLogit(x, 0.001))
write.method.template Method to estimate the coefficients for the super learner
Description
These functions contain the information on the loss function and the model to combine algorithms
Usage
write.method.template(file = "", ...)
## a few built in options:
method.NNLS()
method.NNLS2()
method.NNloglik()
method.CC_LS()
method.CC_nloglik()
method.AUC(nlopt_method=NULL, optim_method="L-BFGS-B", bounds=c(0, Inf), normalize=TRUE)
Arguments
file A connection, or a character string naming a file to print to. Passed to cat.
optim_method Passed to the optim call method. See optim for details.
nlopt_method Either optim_method or nlopt_method must be provided, the other must be
NULL
bounds Bounds for parameter estimates
normalize Logical. Should the parameters be normalized to sum up to 1
... Additional arguments passed to cat.
Details
A SuperLearner method must be a list (or a function to create a list) with exactly 3 elements. The
3 elements must be named require, computeCoef and computePred.
Value
A list containing 3 elements:
require A character vector listing any required packages. Use NULL if no additional
packages are required
computeCoef A function. The arguments are: Z, Y, libraryNames, obsWeights, control,
verbose. The value is a list with two items: cvRisk and coef. This function
computes the coefficients of the super learner. As the super learner minimizes
the cross-validated risk, the loss function information is contained in this func-
tion as well as the model to combine the algorithms in SL.library.
computePred A function. The arguments are: predY, coef, control. The value is a numeric
vector with the super learner predicted values.
Author(s)
<NAME> <<EMAIL>>
See Also
SuperLearner
Examples
write.method.template(file = '')
write.screen.template screening algorithms for SuperLearner
Description
Screening algorithms for SuperLearner to be used with SL.library.
Usage
write.screen.template(file = "", ...)
Arguments
file A connection, or a character string naming a file to print to. Passed to cat.
... Additional arguments passed to cat
Details
Explain structure of a screening algorithm here:
Value
whichVariable A logical vector with the length equal to the number of columns in X. TRUE
indicates the variable (column of X) should be included.
Author(s)
<NAME> <<EMAIL>>
See Also
SuperLearner
Examples
write.screen.template(file = '')
write.SL.template Wrapper functions for prediction algorithms in SuperLearner
Description
Template function for SuperLearner prediction wrappers and built in options.
Usage
write.SL.template(file = "", ...)
Arguments
file A connection, or a character string naming a file to print to. Passed to cat.
... Additional arguments passed to cat
Details
Describe SL.* structure here
Value
A list with two elements:
pred The predicted values for the rows in newX.
fit A list. Contains all objects necessary to get predictions for new observations
from specific algorithm.
Author(s)
<NAME> <<EMAIL>>
See Also
SuperLearner
Examples
write.SL.template(file = '') |
featsoftware-keycloakvuejs | npm | JavaScript | vue-keycloak plugin
---
Introduction
---
This plugin uses the official Keycloak JS adapter
<https://github.com/keycloak/keycloak-js-bowerPlease read the documentation:
<http://www.keycloak.org/docs/latest/securing_apps/index.html#_javascript_adapter#### Excerpt from Keycloak JS doc:
> By default to authenticate you need to call the login function. However, there are two options available to make the
> adapter automatically authenticate. You can pass login-required or check-sso to the init function.
> login-required will authenticate the client if the user is logged-in to Keycloak or display the login page if not.
> check-sso will only authenticate the client if the user is already logged-in, if the user is not logged-in the browser
> will be redirected back to the application and remain unauthenticated.
> To enable login-required set onLoad to login-required and pass to the init method:
> `keycloak.init({ onLoad: 'login-required' })`
Installation
---
### Install using yarn
```
yarn add @dsb-norge/vue-keycloak-js
```
### Install using npm
```
npm install @dsb-norge/vue-keycloak-js --save
```
Usage
---
> `Vue.use(VueKeyCloak, [options])`
Tell Vue to install the plugin, and optionally pass in a JavaScript object additional configuration.
```
import VueKeyCloak from '@dsb-norge/vue-keycloak-js' Vue.use(VueKeyCloak) // You can also pass in options. Check options reference below.Vue.use(VueKeyCloak, options)
```
The plugin adds a `$keycloak` property to the global Vue instance.
This is actually a new Vue instance and can be used as such. It holds this data:
```
{
ready: Boolean, // Flag indicating whether Keycloak has initialised and is ready
authenticated: Boolean,
userName: String, // Username from Keycloak. Collected from tokenParsed['preferred_username']
fullName: String, // Full name from Keycloak. Collected from tokenParsed['name']
logoutFn: Function, // App+Keycloak logout function
token: String, // Access token
}
```
Options
---
You can pass in an object as options to the plugin. The following keys are valid options. See below for descpription.
| Key | Type | Default |
| --- | --- | --- |
| `config` | String | Object | `window.__BASEURL__ + '/config'` |
| `init` | Object | `{onLoad: 'login-required'}` |
| `onReady` | Function(keycloak) | |
### config
**String**
If this option is a string, the plugin will treat it as an URL and make an HTTP GET request to it.
If not present, the plugin will look for a global variable `window.__BASEURL__` and prepend it to `'/config'` and use this a default place to make a GET request.
If no `window.__BASEURL__` exists, `/config` is used.
The plugin then expects the return value to be an object with the following keys and values:
```
{
authRealm: String,
authUrl: String,
authClientId: String,
logoutRedirectUri: String
}
```
These values will be used as constructor parameters to the official Keycloak adapter.
**Object**
If this option is an object, the values will be passed as constructor parameters. The keys must have the same naming as above. No HTTP GET request is done in this case.
### init
This option is the parameter object for the `Keycloak.init` method.
### onReady
This option is a callback function that is executed once Keycloak has initialised and is ready. You can be sure that the Vue instance has a property called `$keycloak` in this function. See above for possible values.
The callback function has one parameter, which is the keycloak object returned from the Keycloak adapter on instatiation.
One use case for this callback could be to instatiate and mount the Vue application. Then we are sure that the Keycloak authentication and the `$keycloak` property are properly finished and hydrated with data:
```
Vue.use(VueKeyCloak, { onReady: (keycloak) => { console.log(`I wonder what Keycloak returns: ${keycloak}`) /* eslint-disable no-new */ new Vue({ el: '#app', router, template: '<App/>', render: h => h(App) }) }})
```
In conjuction with the above, you might find it useful to intercept e.g. axios and set the token on each request:
```
function tokenInterceptor () { axios.interceptors.request.use(config => { config.headers.Authorization = `Bearer ${Vue.prototype.$keycloak.token}` return config }, error => { return Promise.reject(error) })} Vue.use(VueKeyCloak, { onReady: (keycloak) => { tokenInterceptor() /* eslint-disable no-new */ new Vue({ el: '#app', router, template: '<App/>', render: h => h(App) }) }})
```
Develop and deploy
---
```
$ git clone https://github.com/dsb-norge/vue-keycloak-js.git# Do some work, add and/or commit to git. $ npm version patch
```
The command `npm version patch` will automatically run the build, push the branch upstream and publish the package to the NPM registry
Readme
---
### Keywords
* vue
* keycloak |
presmTP | cran | R | Package ‘presmTP’
October 14, 2022
Type Package
Title Methods for Transition Probabilities
Version 1.1.0
Date 2019-10-04
Author <NAME>, <NAME> and <NAME>
Maintainer <NAME> <<EMAIL>>
Description Provides a function for estimating the transition probabilities in an illness-death model.
The transition probabilities can be estimated from the unsmoothed landmark estimators developed
by de Una-Alvarez and Meira-Machado (2015) <doi:10.1111/biom.12288>.
Presmoothed estimates can also be obtained through the use of a parametric family of binary
regression curves, such as logit, probit or cauchit. The additive logistic regression model
and nonparametric regression are also alternatives which have been implemented.
The idea behind the presmoothed landmark estimators is to use the presmoothing techniques
developed by Cao et al. (2005) <doi:10.1007/s00180-007-0076-6> in the
landmark estimation of the transition probabilities.
Depends R (>= 3.0.0)
Encoding UTF-8
License GPL-3
LazyData true
Imports survPresmooth, mgcv
RoxygenNote 6.1.1
NeedsCompilation no
Repository CRAN
Date/Publication 2019-11-01 11:20:02 UTC
R topics documented:
colonID... 2
plot.pst... 3
presmT... 4
summary.pst... 6
colonIDM Chemotherapy for Stage B/C colon cancer.
Description
These are data from one of the first successful trials of adjuvant chemotherapy for colon cancer.
Levamisole is a low-toxicity compound previously used to treat worm infestations in animals; 5-FU
is a moderately toxic (as these things go) chemotherapy agent.
Usage
data("colonIDM")
Format
A data frame with 929 observations on the following 15 variables. Below a brief description is
given for some of these variables.
time1 Time to recurrence/censoring/death, whichever occurs first.
event1 Recurrence/censoring indicator (recurrence=1, alive=0).
Stime Time to censoring/death, whichever occurs first.
event Death/censoring indicator (death=1, alive=0).
rx Treatment - Obs(ervation), Lev(amisole), Lev(amisole)+5-FU.
sex Sex indicator (male=1, female=0).
age Age in years.
obstruct Obstruction of colon by tumour.
perfor Perforation of colon.
adhere Adherence to nearby organs.
nodes Number of lymph nodes with detectable cancer.
differ Differentiation of tumour (1=well, 2=moderate, 3=poor).
extent Extent of local spread (1=submucosa, 2=muscle, 3=serosa, 4=contiguous structures).
surg Time from surgery to registration (0=short, 1=long).
node4 More than 4 positive lymph nodes.
Source
The study is originally described in Laurie (1989).The main report is found in Moertel (1990). This
data set is closest to that of the final report in Moertel (1991). A version of the data with less
follow-up time was used in the paper by Lin (1994).
References
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, JB Gerst-
ner, <NAME> and <NAME>. Surgical adjuvant therapy of large-bowel carcinoma: An evaluation
of levamisole and the combination of levamisole and fluorouracil: The North Central Cancer Treat-
ment Group and the Mayo Clinic. Journal of Clinical Oncology, 7:1447-1456, 1989.
DY Lin. Cox regression analysis of multivariate failure time data: the marginal approach. Statistics
in Medicine, 13:2233-2247, 1994.
CG Moertel, TR Fleming, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>,
WA Emerson, DC Tormey, <NAME>, M<NAME> and J<NAME>. Levamisole and fluorouracil
for adjuvant therapy of resected colon carcinoma. New England Journal of Medicine, 332:352-358,
1990.
CG Moertel, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, WA
Emerson, <NAME>, <NAME>, M<NAME> and J<NAME>. Fluorouracil plus Levamisole as
and effective adjuvant therapy after resection of stage II colon carcinoma: a final report. Annals of
Internal Medicine, 122:321-326, 1991.
Examples
data(colonIDM)
head(colonIDM)
plot.pstp Plot for an object of class "pstp"
Description
It draws the estimated probabilities.
Usage
## S3 method for class 'pstp'
plot(x = object, state_ini = 0, ...)
Arguments
x A fitted pstp object as produced by presmTP.
state_ini Initial state of the transition. Defaults to state_ini=0
... For future methods.
Value
No value is returned.
Author(s)
<NAME>, <NAME>, <NAME>.
Examples
res<- presmTP(data = colonIDM, s = 365,method = "uns")
plot(res)
presmTP Methods for estimation of transition probabilities in the illness-death
model
Description
This function is used to obtain unsmoothed and presmoothed estimates of the transition probabilities
in the illness-death model.
Usage
presmTP(data, s, method = "uns", estimand = "S",
bw.selec = "plug-in", fixed.bw = NULL, bound = "none")
Arguments
data A numeric value to be squared
s The first time for obtaining estimates for the transition probabilities.
method The method used to compute the transition probabilities. Possible options are
"uns", "np" "logit", "logit.gam", "probit" and "cauchit". Defaults to
"uns".
estimand An optional character string identifying the function to estimate: "S" for survival
function and "H" for cumulative hazard function. Defaults to "S".
bw.selec An optional (partially matched) character string specifying the method of band-
width selection. "fixed" if no bandwidth selection is done, in which case the
bandwidth(s) given by the fixed.bw argument is (are) used, "plug-in" for plug-in
bandwidth selection and "bootstrap" for bootstrap bandwidth selection. Defaults
to "fixed".
fixed.bw An optional numeric vector with the fixed bandwidth(s) used when the value
of the bw.selec argument is "fixed". It must be of length 1 for estimating sur-
vival and cumulative hazard functions, and of length 2 for density and hazard
functions (in this case, the first element is the presmoothing bandwidth).
bound An optional numeric vector with the fixed bandwidth(s) used when the value
of the bw.selec argument is "fixed". It must be of length 1 for estimating sur-
vival and cumulative hazard functions, and of length 2 for density and hazard
functions (in this case, the first element is the presmoothing bandwidth).
Value
An object of class "pstp" and one of the following classes: "uns", "np", "logit", "logit.gam",
"probit" and "cauchit". Objects are implemented as a list with elements:
est0 data.frame with estimates of the transition probabilities 0->0, 0->1 and 0->2.
est1 data.frame with estimates of the transition probabilities 1->1 and 1->2.
s The first time for obtaining estimates for the transition probabilities.
callp The expression of the estimated probability.
call A call object.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>., <NAME>. (1978) An Empirical Transition Matrix for Nonhomogeneous Markov
Chains Based on Censored Observations. Scandinavian Journal of Statistics 5(3), 141–150.
<NAME>., <NAME>., <NAME>. and <NAME>. (2005). Presmoothed Kaplan-Meier
and Nelson-Aalen estimators, Journal of Nonparametric Statistics, 17, 31-56.
<NAME>., <NAME>. and <NAME>. (2006). Nonparametric estimation
of transition probabilities in a non-Markov illness-death model. Lifetime Data Anal 12(3), 325–344.
Lopez-de-Ullibarri, I and <NAME>. (2013). survPresmooth: An R Package for Presmoothed
Estimation in Survival Analysis, Journal of Statistical Software, 54(11), 1-26. URL: http://www.jstatsoft.org/v54/i11/.
de Una-Alvarez J. and <NAME>. (2015). Nonparametric estimation of transition probabil-
ities in a non-Markov illness-death model: a comparative study. Biometrics 71, 364–375.
<NAME>. (2016). Smoothed landmark estimators of the transition probabilities, SORT-
Statistics and Operations Research Transactions, 40, 375-398.
Examples
#Unsmoothed
res1<- presmTP(data = colonIDM, s = 365,method = "uns" )
res1$est0$t
res1$est0$p02
res1$est1$t
summary(res1, state_ini=1, time=365*1:5)
plot(res1)
res1$call
class(res1)
#Nonparametric
res2<- presmTP(data = colonIDM, s = 365,method = "np" )
res3<- presmTP(data = colonIDM, s = 365,method = "np", estimand="S")
res4<- presmTP(data = colonIDM, s = 365,method = "np", estimand="H")
res5<- presmTP(data = colonIDM, s = 365,method = "np",
bw.selec="fixed", fixed.bw=30)
#Presmoothed - Logit
res6<- presmTP(data = colonIDM, s = 365,method = "logit" )
summary(res6, state_ini=1, time=365*1:5)
#Presmoothed - Logit GAM
res7<- presmTP(data = colonIDM, s = 365,method = "logit.gam" )
summary.pstp Summarizing fits of "pstp" class
Description
Returns a a data.frame or list containing the estimates of the probabilities.
Usage
## S3 method for class 'pstp'
summary(object, state_ini = 0, times = NULL, ...)
Arguments
object A fitted pstp object as produced by presmTP.
state_ini Initial state of the transition. Defaults to state_ini=0.
times Vector of times; the returned data frame will contain 1 row for each time.
... For future methods.
Value
A data frame or a list containing the estimates of the probability.
Author(s)
<NAME>, <NAME>, <NAME>.
Examples
res<- presmTP(data = colonIDM, s = 365, method = "uns")
summary(res, state_ini=1, times=365*1:5) |
DELTD | cran | R | Package ‘DELTD’
October 12, 2022
Type Package
Title Kernel Density Estimation using Lifetime Distributions
Version 2.6.8
Author <NAME>, <NAME>.
Maintainer <NAME> <<EMAIL>>
Description A collection of asymmetrical kernels belong to lifetime distributions for kernel den-
sity estimation is presented.
Mean Squared Errors (MSE) are calculated for estimated curves. For this purpose, R func-
tions allow the distribution to be Gamma, Exponential or Weibull.
For de-
tails see Chen (2000a,b), <NAME> Kawczak (2003) and Salha et al. (2014) <doi:10.12988/pms.2014.4616>.
License GPL-2
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
URL https://CRAN.R-project.org/package=DELTD
Depends R (>= 2.10)
NeedsCompilation no
Repository CRAN
Date/Publication 2022-09-20 14:50:02 UTC
R topics documented:
DELTD-packag... 2
Bet... 3
B... 5
Erlan... 7
Gamm... 8
Log... 10
ms... 12
plot.Bet... 14
plot.B... 15
plot.Erlan... 16
plot.Gamm... 17
plot.Log... 18
TUN... 19
DELTD-package DELTD
Description
A collection of asymmetrical kernels belong to lifetime distributions for kernel density estimation is
presented. i.e. plot.BS, plot.Beta, plot.Erlang, plot.Gamma and plot.LogN. Estimated values
can also observed by using Beta, BS, Gamma, Erlang and LogN. For calculating mean squared error
by using different kernels functions are mse can be used.
A collection of asymmetrical kernels belong to lifetime distributions for kernel density estimation
is presented. i.e. plot.BS, plot.Erlang, plot.Gamma and plot.LogN. Estimated values can also
observed by using BS, Gamma, Erlang and LogN, where data can belong to any distribution. For
calculating mean squared error by using different kernel functions mse .
Details
Kernel Density Estimation using Lifetime Distributions
Kernel Density Estimation using Lifetime Distributions
Author(s)
<NAME>, <NAME>.
<NAME>, <NAME>.
References
• <NAME>.; <NAME>. 2003. Birnbaum-Saunders & Lognormal kernel estimators for modeling
durations in high frequency financial data. Annals of Economics and Finance 4, 103–124.
• <NAME>.; <NAME>.; <NAME>. 2014. Hazard rate function estimation using Erlang
Kernel. Pure Mathematical Sciences 3 (4), 141–152.
• <NAME>. 2000. Probability density function estimation using Gamma kernels. Annals of
the Institute of Statistical Mathematics 52 (3), 471-480.
• <NAME>. 2000. Beta kernel smothers for regression curves. Statistica Sinica 10, 73-91.
• <NAME>.; <NAME>.; <NAME>.; <NAME>. 1993. Density Estimation
using Distance Sampling. Chapman & Hall, London.
• <NAME>.; <NAME>. 2003. Birnbaum-Saunders & Lognormal kernel estimators for modeling
durations in high frequency financial data. Annals of Economics and Finance 4, 103-124.
• <NAME>.; <NAME>.; <NAME>. 2014. Hazard rate function estimation using Erlang
Kernel. Pure Mathematical Sciences 3 (4), 141-152.
• <NAME>. 2000. Probability density function estimation using Gamma kernels. Annals of
the Institute of Statistical Mathematics 52 (3), 471-480.
• <NAME>. 2000.Beta kernel smoothers for regression curves. Statistica Sinica 10, 73-91.
See Also
Useful links:
• https://CRAN.R-project.org/package=DELTD
Useful links:
• https://CRAN.R-project.org/package=DELTD
Beta Estimate Density Values by Beta kernel
Description
This function provide the estimated Kernel density values by using Beta Kernel. The Beta kernel
is developed by Chen (2000) by using Beta distribution of first kind. He was first to introduce
asymetrical kernels to control boundary Bias. Beta Kernel is
KBeta( x +1, 1−x +1) (y) =
h h
Usage
Beta(x = NULL, y, k = NULL, h = NULL)
Arguments
x scheme for generating grid points
y a numeric vector of positive values
k number of gird points
h the bandwidth
Details
In this function, choice of bandwidth, number of grid points and scheme that how these grid points
are generated are user based. If any parameter(s) is missing then function used default parameters.
But at least x or k should be specified otherwise NA will be produced. If x is missing then function
will generate k grid points by using uniform distribution. Similarly, if k is missing then function
consider it same to length of main vector. In case if h is missing then function used normal scale
rule bandwidth for non-normal data and described in Silverman (1986). This function can be only
used if data is between (0, 1). Similarly, x should be also lies between (0, 1).
Value
x grid points
y estimated values of density
Author(s)
<NAME>, <NAME>.
References
<NAME>. 2000. Beta kernel smothers for regression curves. Statistica Sinica 10, 73-91. Silver-
man, B. W. 1986. Density Estimation. Chapman & Hall/ CRC, London.
See Also
For further kernels see Erlang, BS, Gammaand LogN. To plot its density see plot.Beta and to cal-
culate MSE mse.
Examples
## Data: Simulated or real data can be used
## Number of grid points "k" should be at least equal to the data size.
## If user defines the generating scheme of grid points then length
## of grid points should be equal or greater than "k", Otherwise NA will be produced.
y <- runif(50)
xx <- sample(0.00001:900, 500, replace = FALSE)/1000
h <- 0.9
Beta(x = xx, y = y, k = 500, h = h)
## If scheme for generating grid points is unknown
y <- runif(500)
h <- 0.9
Beta(x = xx, y = y, k = 500, h = h)
## Not run:
## If user do not mention the number of grid points
y <- runif(1000)
xx <- seq(0.001, 1000, length = 2000)
## any bandwidth can be used
require(kedd)
h <- h.bcv(y) ## Biased cross validation
Beta(x = xx, y = y, h = h)
## End(Not run)
## Not run:
##if both generating scheme and number of grid points are missing then function generate NA
y <- runif(1000)
band = 0.8
Beta(y = y, h = band)
## End(Not run)
## if bandwidth is missing
y <- runif(100)
xx <- seq(0.001, 100, length = 300)
Beta(x = xx, y = y, k = 200)
BS Estimate Density Values by Birnbaum-Saunders kernel
Description
This function calculates the estimated Values by using Birnbaum-Saunders Kernel. The Birnbaum-
Saunders kernel is developed by Jin and Kawczak (2003). They claimed that performance of their
developed kernel is better near the boundary points in terms of boundary reduction.
r r
1 1 x 1 y x
K 1 (y) = √ + exp − −2+
BS(h 2 ,x) 2 2πh xy y3 2h x y
Usage
BS(x = NULL, y, k = NULL, h = NULL)
Arguments
x scheme for generating grid points
y a numeric vector of positive values.
k gird points
h the bandwidth
Details
In this function, choice of bandwidth, number of grid points and scheme that how these grid points
are generated are user based. If any parameter(s) is missing then function used default parameters.
But at least x or k should be specified otherwise NA will be produced. If x is missing then function
will generate k grid points between minimum and maximum values of vector. Similarly, if k is
missing then function consider it same to length of main vector. In case if h is missing then function
used normal scale rule bandwidth for non-normal data and described in Silverman (1986).
Value
x grid points
y estimated values of density
Author(s)
<NAME>, <NAME>.
References
<NAME>.; <NAME>. 2003. Birnbaum-Saunders & Lognormal kernel estimators for modeling dura-
tions in high frequency financial data. Annals of Economics and Finance 4, 103-124.
See Also
For further kernels see Erlang, Gamma and LogN. To plot the density by using BS kernel plot.BS
and to calculate MSE by mse.
Examples
## Data: Simulated or real data can be used
## Number of grid points "k" should be at least equal to the data size.
## If user defines the generating scheme of grid points then length
## of grid points should be equal or greater than "k", Otherwise NA will be produced.
alpha = 10
theta = 15 / 60
y <- rgamma(n = 1000, shape = alpha, scale = theta)
xx <- seq(min(y) + 0.05, max(y), length =200)
h <- 1.1
den <- BS(x = xx, y = y, k = 200, h = h)
##If scheme for generating grid points is unknown
y <- rgamma(n = 1000, shape = alpha, scale = theta)
h <- 3
BS(y = y, k = 90, h = h)
## Not run:
##If user do not mention the number of grid points
y <- rgamma(n = 1000, shape = alpha, scale = theta)
xx <- seq(0.001, 1000, length = 1000)
#any bandwidth can be used
require(KernSmooth)
h <- dpik(y) #Direct Plug-In Bandwidth
BS(x = xx, y = y, h = h)
## End(Not run)
## Not run:
#if both generating scheme and number of grid points are missing then function generate NA
y <- rgamma(n = 1000, shape = alpha, scale = theta)
band = 3
BS(y = y, h = band)
## End(Not run)
#if bandwidth is missing
y <- rgamma(n = 1000, shape = alpha, scale = theta)
xx <- seq(0.001, 100, length = 1000)
BS(x = xx, y = y, k = 900)
Erlang Estimate Density Values by Erlang kernel
Description
This function provide the estimated values for density by using Erlang Kernel. Erlang kernel is
developed by Salha et al. (2014). They developed this asymmetrical kernal with its hazard function
and also proved its asymtotic normality.
h+1
1 1 1 h
KE(x, h1 ) (y) = (1 + ) y exp − (1 + )
h
Γ(1 + h1 ) x h x h
Usage
Erlang(x = NULL, y, k = NULL, h = NULL)
Arguments
x scheme for generating grid points
y a numeric vector of positive values.
k gird points.
h the bandwidth
Details
see the details in the BS.
Value
x grid points
y estimated values of density
Author(s)
<NAME>, <NAME>.
References
<NAME>.; <NAME>.; <NAME>. 2014. Hazard rate function estimation using Erlang
Kernel. Pure Mathematical Sciences 3 (4), 141-152.
See Also
For further MSE by using other kernels see Beta, BS, Gamma and LogN. For plotting these estimated
values plot.Erlang and for calculating MSE use mse.
Examples
## Data: Simulated or real data can be used
## Number of grid points "k" should be at least equal to the data size.
## If user defines the generating scheme of grid points then length
## of grid points should be equal or greater than "k", Otherwise NA will be produced.
y <- rlnorm(100, meanlog = 0, sdlog = 1)
xx <- seq(min(y) + 0.05, max(y), length = 500)
h <-2
den <- Erlang(x = xx, y = y, k = 200, h = h)
##If scheme for generating grid points is unknown
y <- rlnorm(1000, meanlog = 0, sdlog = 1)
h <- 3
Erlang(y = y, k = 90, h = h)
## Not run:
##If user do not mention the number of grid points
y <- rlnorm(100, meanlog = 0, sdlog = 1)
xx <- seq(0.001, 1000, length = 1000)
#any bandwidth can be used
require(kedd)
h <- h.ucv(y) #Unbaised cross validation bandwidth
Erlang(x = xx, y = y, h = h)
## End(Not run)
## Not run:
#if generating scheme and number of grid points are missing then function generate NA
y <- rlnorm(100, meanlog = 0, sdlog = 1)
band = 3
Erlang(y = y, h = band)
## End(Not run)
#if bandwidth is missing
y <- rlnorm(100, meanlog = 0, sdlog = 1)
xx <- seq(0.001, 100, length = 100)
Erlang(x = xx, y = y, k = 90)
Gamma Estimate Density Values by Gamma kernel
Description
This function provide the estimated Kernel density values by using Gamma Kernel.The Gamma ker-
nel is developed by Chen (2000). He was first to introduce asymetrical kernels to control boundary
Bias. Gamma Kernel is
x
y h exp(− hy )
KGam1( hx +1,h) (y) = x
Usage
Gamma(x = NULL, y, k = NULL, h = NULL)
Arguments
x scheme for generating grid points
y a numeric vector of positive values
k number of gird points
h the bandwidth
Details
see the details in the BS.
Value
x grid points
y estimated values of density
Author(s)
<NAME>, <NAME>.
References
<NAME>. 2000. Probability density function estimation using Gamma kernels. Annals of the
Institute of Statistical Mathematics 52 (3), 471-480. Silverman, <NAME>. 1986. Density Estimation.
Chapman & Hall/ CRC, London.
See Also
For further kernels see Erlang, BS, Beta and LogN. To plot its density see plot.Gamma and to calculate the MSE use mse.
Examples
##Number of grid points "k" should be at least equal to the data size.
###If user defines the generating scheme of grid points then length
####of grid points should be equal or greater than "k". Otherwise NA will be produced.
y <- rexp(100, 1)
xx <- seq(min(y) + 0.05, max(y), length = 500)
h <- 2
den <- Gamma(x = xx, y = y, k = 200, h = h)
##If scheme for generating grid points is unknown
y <- rexp(200, 1)
h <- 3
Gamma(y = y, k = 90, h = h)
## Not run:
data(TUNA)
y <- TUNA
xx <- seq(min(y) + 0.05, max(y), length = 500)
h <- 2
den <- Gamma(x = xx, y = y, k = 200, h = h)
## End(Not run)
## Not run:
##If the user does not mention the number of grid points
y <- rexp(1000, 1)
xx <- seq(0.001, 1000, length = 1000)
#any bandwidth can be used
require(KernSmooth)
h <- dpik(y)
Gamma(x = xx, y = y, h = h)
## End(Not run)
## Not run:
#if the generating scheme and the number of grid points are missing then the function generates NA
y <- rexp(1000, 1)
band = 3
Gamma(y = y, h = band)
## End(Not run)
#if bandwidth is missing
y <- rexp(100,1)
xx <- seq(0.001, max(y), length = 100)
Gamma(x = xx, y = y, k = 90)
LogN Estimate Density Values by Lognormal kernel
Description
The LogN function estimates the values of the density by using the Lognormal kernel. The Lognormal kernel was developed by Jin and Kawczak (2003). Here too, they claimed that the performance of their kernel is better near the boundary points in terms of boundary bias reduction. The Lognormal kernel is
$$K_{\mathrm{LN}(\ln x,\,4\ln(1+h))}(y) = \frac{1}{\sqrt{8\pi\ln(1+h)}\;y}\exp\left(-\frac{(\ln y-\ln x)^{2}}{8\ln(1+h)}\right)$$
Usage
LogN(x = NULL, y, k = NULL, h = NULL)
Arguments
x scheme for generating grid points
y a numeric vector of positive values.
k number of grid points.
h the bandwidth
Details
see the details in the BS.
Value
x grid points
y estimated values of density
Author(s)
<NAME>, <NAME>.
References
<NAME>.; Kawczak, J. 2003. Birnbaum-Saunders & Lognormal kernel estimators for modeling dura-
tions in high frequency financial data. Annals of Economics and Finance 4, 103-124.
See Also
For further kernels see Beta, Erlang, Gamma and BS. To plot its density see plot.LogN and to
calculate MSE use mse.
Examples
## Data: Simulated or real data can be used
## Number of grid points "k" should be at least equal to the data size.
## If user defines the generating scheme of grid points then length
## of grid points should be equal or greater than "k", Otherwise NA will be produced.
y <- rweibull(350, 1)
xx <- seq(0.001, max(y), length = 500)
h <- 2
den <- LogN(x = xx, y = y, k = 200, h = h)
##If scheme for generating grid points is unknown
n <- 1000
y <- abs(rlogis(n, location = 0, scale = 1))
h <- 3
LogN(y = y, k = 90, h = h)
## Not run:
##If the user does not mention the number of grid points
y <- rweibull(350, 1)
xx <- seq(0.00001, max(y), length = 500)
#any bandwidth can be used
require(ks)
h <- hscv(y) #Smooth cross validation bandwidth
LogN(x = xx, y = y, h = h)
## End(Not run)
## Not run:
#if both the scheme and the number of grid points are missing then the function generates NA
n <- 1000
y <- abs(rlogis(n, location = 0, scale = 1))
band = 3
LogN(y = y, h = band)
## End(Not run)
#if bandwidth is missing
y <- rweibull(350, 1)
xx <- seq(0.001, 100, length = 500)
LogN(x = xx, y = y, k = 90)
mse Calculate Mean Squared Error (MSE) by using different Kernels
Description
This function calculates the mean squared error (MSE) by using the user-specified kernel estimate. The distribution of the data vector must be Exponential, Gamma or Weibull; any other choice of distribution will result in NaN.
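Assuming the usual definition over the k grid points (an assumption; this page does not spell the formula out), the returned value is presumably
$$\mathrm{MSE} = \frac{1}{k}\sum_{j=1}^{k}\bigl(\hat{f}(x_j) - f(x_j)\bigr)^2,$$
where $\hat{f}$ is the kernel estimate and $f$ is the true density implied by type.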
Usage
mse(kernel, type)
Arguments
kernel the kernel density estimate object to be used
type the distribution of the data vector: use "Exp" for the Exponential distribution, "Gamma" for the Gamma distribution and "Weibull" for the Weibull distribution.
Value
Mean Squared Error (MSE)
Author(s)
<NAME>, <NAME>.
References
• <NAME>.; <NAME>. 2003. Birnbaum-Saunders & Lognormal kernel estimators for modeling
durations in high frequency financial data. Annals of Economics and Finance 4, 103-124.
• <NAME>.; <NAME>.; <NAME>. 2014. Hazard rate function estimation using Erlang
Kernel. Pure Mathematical Sciences 3 (4), 141-152.
• <NAME>. 2000. Probability density function estimation using Gamma kernels. Annals of
the Institute of Statistical Mathematics 52 (3), 471-480.
• <NAME>. 2000. Beta kernel smoothers for regression curves. Statistica Sinica 10, 73-91.
Examples
y <- rexp(100, 1)
xx <- seq(min(y) + 0.05, max(y), length = 500)
h <- 2
gr <- Gamma(x = xx, y = y, k = 200, h = h)
mse(kernel = gr, type = "Exp")
## if a distribution other than the supported types is used then NaN will be produced.
## Not run:
mse(kernel = gr, type ="Beta")
## End(Not run)
plot.Beta Density Plot by Beta kernel
Description
Plot density by using Beta Kernel.
Usage
## S3 method for class 'Beta'
plot(x, ...)
Arguments
x an object of class "Beta"
... Not presently used in this implementation
Value
nothing
Author(s)
<NAME>, <NAME>.
References
<NAME>. 2000. Beta kernel smoothers for regression curves. Statistica Sinica 10, 73-91.
See Also
For further kernels see plot.Gamma, plot.Erlang, plot.BS and plot.LogN. To calculate its esti-
mated values see Beta and for MSE see mse.
Examples
y <- runif(100)
h <- 0.5
xx <- sample(0.00001:900, 50, replace = FALSE)/1000
den <- Beta(x = xx, y = y, k = 50, h = h)
plot(den, type = "p")
##other details can also be added
y <- runif(100)
h <- 0.7
xx <- sample(0.00001:900, 50, replace = FALSE)/1000
den <- Beta(x = xx, y = y, k = 50, h = h)
plot(den, type = "l", ylab = "Density Function", lty = 1, xlab = "Time")
plot.BS Density Plot by Birnbaum-Saunders kernel
Description
Plot Kernel density by using Birnbaum-Saunders Kernel.
Usage
## S3 method for class 'BS'
plot(x, ...)
Arguments
x An object of class "BS"
... Not presently used in this implementation
Value
Nothing
Author(s)
<NAME>, <NAME>.
References
Jin, X.; Kawczak, J. 2003. Birnbaum-Saunders & Lognormal kernel estimators for modeling dura-
tions in high frequency financial data. Annals of Economics and Finance 4, 103-124.
See Also
For further kernels see plot.Beta, plot.Erlang, plot.Gamma and plot.LogN. For estimated values see BS and for the MSE see mse.
Examples
alpha = 10
theta = 15 / 60
y <- rgamma(n = 10000, shape = alpha, scale = theta)
h <- 1.5
xx <- seq(min(y) + 0.05, max(y), length = 200)
den <- BS(x = xx, y = y, k = 200, h = h)
plot(den, type = "l")
##other details can also be added
y <- rgamma(n = 10000, shape = alpha, scale = theta)
h <- 0.79 * IQR(y) * length(y) ^ (-1/5) #Normal Scale Rule Bandwidth
gr <- BS(x = xx, y = y, k = 200, h = h)
plot(gr, type = "s", ylab = "Density Function", lty = 1, xlab = "Time")
## To add true density along with estimated
d1 <- density(y, bw = h)
lines(d1, type = "p", col = "red")
legend("topright", c("Real Density", "Density by Birnbaum-Saunders Kernel"),
col=c("red", "black"), lty = c(1,2))
plot.Erlang Density Plot by Erlang kernel
Description
Plot Kernel density by using Erlang Kernel.
Usage
## S3 method for class 'Erlang'
plot(x, ...)
Arguments
x An object of class "Erlang"
... Not presently used in this implementation
Value
Nothing
Author(s)
<NAME>, <NAME>.
References
<NAME>.; <NAME>.; <NAME>. 2014. Hazard rate function estimation using Erlang
Kernel. Pure Mathematical Sciences 3 (4), 141-152.
See Also
For density plots using other kernels see plot.Beta, plot.BS, plot.Gamma and plot.LogN. For the estimated values see Erlang and for calculating the MSE see mse.
Examples
y <- rlnorm(100, meanlog = 0, sdlog = 1)
h <- 1.5
xx <- seq(min(y) + 0.05, max(y), length = 200)
den <- Erlang(x = xx, y = y, k = 200, h = h)
plot(den, type = "l")
##other details can also be added
y <- rlnorm(100, meanlog = 0, sdlog = 1)
grid <- seq(min(y) + 0.05, max(y), length = 200)
h <- 0.79 * IQR(y) * length(y) ^ (-1/5)
gr <- Erlang(x = grid, y = y, k = 200, h = h)
plot(gr, type = "s", ylab = "Density Function", lty = 1, xlab = "Time")
## To add true density along with estimated
d1 <- density(y, bw = h)
lines(d1, type = "p", col = "red")
legend("topright", c("Real Density", "Density by Erlang Kernel"),
col=c("red", "black"), lty=c(1,2))
plot.Gamma Density Plot by Gamma kernel
Description
Plot density by using Gamma Kernel.
Usage
## S3 method for class 'Gamma'
plot(x, ...)
Arguments
x an object of class "Gamma"
... Not presently used in this implementation
Value
nothing
Author(s)
<NAME>, <NAME>.
References
<NAME>. 2000. Probability density function estimation using Gamma kernels. Annals of the
Institute of Statistical Mathematics 52 (3), 471-480.
See Also
For further kernels see plot.Beta, plot.Erlang, plot.BS and plot.LogN. To calculate its estimated values see Gamma and for the MSE see mse.
Examples
y <- rexp(100, 1)
h <- 1.5
xx <- seq(min(y) + 0.05, max(y), length =200)
den <- Gamma(x=xx, y=y, k=200, h=h)
plot(den, type = "l")
##other details can also be added
y <- rexp(100, 2)
h <- 0.79 * IQR(y) * length(y) ^ (-1/5)
gr <- Gamma(x=xx, y=y, k=200, h=h)
plot(gr, type = "s", ylab = "Density Function", lty = 1, xlab = "Time")
## To add true density along with estimated
d1 <- density(y, bw=h)
lines(d1, type="p", col="red")
legend("topright", c("Real Density", "Density by Gamma Kernel"),
col=c("red", "black"), lty=c(1,2))
plot.LogN Density Plot by Lognormal kernel
Description
Plot Kernel density by using Lognormal Kernel.
Usage
## S3 method for class 'LogN'
plot(x, ...)
Arguments
x An object of class "LogN"
... Not presently used in this implementation
Value
Nothing
Author(s)
<NAME>, <NAME>.
References
<NAME>.; <NAME>. 2003. Birnbaum-Saunders & Lognormal kernel estimators for modeling dura-
tions in high frequency financial data. Annals of Economics and Finance 4, 103-124.
See Also
For further kernels see plot.Beta, plot.Erlang, plot.Gamma and plot.BS. To calculate the MSE use mse and for the estimated density values see LogN.
Examples
n <- 1000
y <- abs(rlogis(n, location = 0, scale = 1))
xx <- seq(min(y) + 0.05, max(y), length =90)
h <- 0.00003
den <- LogN(x = xx, y = y, k = 90, h = h)
plot(den, type = "l")
##other details can also be added
y <- abs(rlogis(n, location = 0, scale = 1))
h <- 3
gr <- LogN(x = xx, y = y, k = 90, h = h)
plot(gr, type = "s", ylab = "Density Function", lty = 1, xlab = "Time")
## To add true density along with estimated
d1 <- density(y, bw = h)
lines(d1, type = "p", col = "green")
legend("topleft", c("Real Density", "Density by Lognormal Kernel"),
col = c("green", "black"), lty = c(1,2))
TUNA Data of Tuna fish
Description
Data about Tuna, a saltwater fish whose seasonal migration runs between waters off the coast of Australia and the Indian Ocean. The data represent a line transect aerial survey of Southern Bluefin Tuna in the Great Australian Bight in summer, when the tuna tend to stay on the surface. The abundance D is measured by
$$D = \frac{N}{A},$$
where N is the total number of surface schools in the Bight and A is the survey area. To estimate D, an aircraft with two spotters on board is used to fly randomly allocated transect lines to detect tuna schools. Each school sighted from the transect is counted and its perpendicular distance to the transect is measured.
Usage
TUNA
Format
A vector with 64 observations
References
<NAME>.; <NAME>.; <NAME>.; <NAME>. 1993. Density Estimation using
Distance Sampling. Chapman & Hall, London.
Package 'bdlp'
October 12, 2022
Version 0.9-2
Date 2021-01-03
Title Transparent and Reproducible Artificial Data Generation
Depends R (>= 3.0.0), graphics
Imports GenOrd, MultiOrd, stringdist, rgl, RSQLite, MASS, DBI,
methods, grDevices, stats, utils
Description The main function generateDataset() processes a user-supplied .R file that
contains metadata parameters in order to generate actual data. The metadata parameters
have to be structured in the form of metadata objects, the format of which is
outlined in the package vignette. This approach allows to generate artificial data
in a transparent and reproducible manner.
License GPL-2
LazyLoad yes
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
RoxygenNote 7.1.1
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Repository CRAN
Date/Publication 2021-01-10 15:10:05 UTC
R topics documented:
addCluste... 2
checkSetu... 3
createFileskeleto... 3
deleteCluste... 4
generateDat... 5
generateData,metadata.binary-metho... 5
generateData,metadata.functional-metho... 6
generateData,metadata.metric-metho... 6
generateData,metadata.ordinal-metho... 7
generateData,metadata.randomstring-metho... 7
generateDatabas... 8
getRandomstring... 9
initializeObjec... 9
metadata.binary-clas... 10
metadata.functional-clas... 10
metadata.general-clas... 11
metadata.metric-clas... 11
metadata.ordinal-clas... 11
metadata.randomstring-clas... 11
plot3dMetadat... 12
plot3dMetadata,metadata.metric-metho... 12
plotMetadat... 13
plotMetadata,metadata.binary-metho... 13
plotMetadata,metadata.functional-metho... 14
plotMetadata,metadata.metric-metho... 14
plotMetadata,metadata.ordinal-metho... 15
sampleGri... 15
saveSetu... 16
summarizeSetu... 17
addCluster Add an empty cluster to a metadata object
Description
Add an empty cluster to a metadata object
Usage
addCluster(m)
Arguments
m A metadata object
Value
A metadata object with an empty additional cluster
Examples
require(MASS)
m <- new("metadata.metric",
clusters = list(c1 = list(n = 25, mu = c(4,5), Sigma=diag(1,2)),
c2 = list(n = 25, mu = c(-1,-2), Sigma=diag(1,2))),
genfunc = mvrnorm)
m2 <- addCluster(m)
checkSetup Performs various consistency checks on a setup file
Description
Performs various consistency checks on a setup file
Usage
checkSetup(file)
Arguments
file A .R file with a new simulation setup
createFileskeleton Create a new setup file template
Description
Create a new setup file template
Usage
createFileskeleton(
newname,
mail,
inst,
author,
type = c("metric", "functional", "ordinal", "binary", "randomstring", "wordnet"),
infotable = NULL,
ref = "Unpublished",
codefile = F
)
Arguments
newname The name of the new setup (and subsequently the file name)
mail The contact e-mail address of the author
inst The institution of the author
author The full name of the author
type The data type of this setup
infotable The setup summary table
ref The reference to the publication where the setup was used
codefile If functions that are needed for the data generation of the setup are stored in
some other .R file, the path can be supplied
deleteCluster Delete a cluster from a metadata object
Description
Delete a cluster from a metadata object
Usage
deleteCluster(m, clnumber)
Arguments
m A metadata object
clnumber The cluster to delete
Value
A metadata object
Examples
require(MASS)
m <- new("metadata.metric",
clusters = list(c1 = list(n = 25, mu = c(4,5), Sigma=diag(1,2)),
c2 = list(n = 25, mu = c(-1,-2), Sigma=diag(1,2))),
genfunc = mvrnorm)
m2 <- deleteCluster(m, 2)
generateData Generate a dataset from a metadata object
Description
Generate a dataset from a metadata object
Usage
generateData(m)
Arguments
m A metadata object
Value
A dataset as specified by the metadata object
Examples
require(MASS)
m <- new("metadata.metric",
clusters = list(c1 = list(n = 25, mu = c(4,5), Sigma=diag(1,2)),
c2 = list(n = 25, mu = c(-1,-2), Sigma=diag(1,2))),
genfunc = mvrnorm)
generateData(m)
generateData,metadata.binary-method
Generate a dataset from a metadata object
Description
Generate a dataset from a metadata object
Usage
## S4 method for signature 'metadata.binary'
generateData(m)
Arguments
m A metadata object
Value
A dataset as specified by the metadata object
generateData,metadata.functional-method
Generate a dataset from a metadata object
Description
Generate a dataset from a metadata object
Usage
## S4 method for signature 'metadata.functional'
generateData(m)
Arguments
m A metadata object
Value
A dataset as specified by the metadata object
generateData,metadata.metric-method
Generate a dataset from a metadata object
Description
Generate a dataset from a metadata object
Usage
## S4 method for signature 'metadata.metric'
generateData(m)
Arguments
m A metadata object
Value
A dataset as specified by the metadata object
generateData,metadata.ordinal-method
Generate a dataset from a metadata object
Description
Generate a dataset from a metadata object
Usage
## S4 method for signature 'metadata.ordinal'
generateData(m)
Arguments
m A metadata object
Value
A dataset as specified by the metadata object
generateData,metadata.randomstring-method
Generate a dataset from a metadata object
Description
Generate a dataset from a metadata object
Usage
## S4 method for signature 'metadata.randomstring'
generateData(m)
Arguments
m A metadata object
Value
A dataset as specified by the metadata object
generateDatabase Generates a number of datasets from one metadata scenario
Description
Generates a number of datasets from one metadata scenario
Usage
generateDatabase(
name = NULL,
setnr = NULL,
draws = 1,
seedinfo = list(100, paste(R.version$major, R.version$minor, sep = "."), RNGkind()),
metaseedinfo = list(100, paste(R.version$major, R.version$minor, sep = "."),
RNGkind()),
file = NULL,
seedincrement = 1
)
Arguments
name The path to the setup file
setnr The metadata scenario, as taken from the info table
draws The number of datasets that are drawn from the metadata scenario
seedinfo The random number generator seed parameters
metaseedinfo If necessary, a separate set of random number generator parameters for the meta-
data (e.g. cluster centers)
file A custom file name for the output database. Defaults to the pattern setupname_setnr_seed.db
seedincrement The random number seed will by default increase by 1 for each draw from the
base seed given in seedinfo unless specified otherwise here
Value
An SQLite database that contains the desired number of data sets drawn from a certain metadata
scenario
Examples
## Not run:
source(system.file("dangl2014.R", package="bdlp"))
generateDatabase(name="dangl2014.R", setnr=1, draws=10)
unlink("dangl2014_set_1_seed_100.db")
## End(Not run)
getRandomstrings Generates random strings
Description
Generates random strings
Usage
getRandomstrings(
center = NULL,
maxdist = NULL,
length = nchar(center),
n = 1,
method = "lv"
)
Arguments
center Reference string, i.e. the cluster center
maxdist The maximum allowed string distance
length The length of the string
n Number of strings to be generated
method The string distance method used to calculate the string, defaults to Levenshtein distance
Value
A character string
Examples
getRandomstrings(center="hello", maxdist = 2, n = 5)
initializeObject Initialize a new metadata object
Description
Initialize a new metadata object
Usage
initializeObject(
type,
k,
genfunc,
seed = list(100, paste(R.version$major, R.version$minor, sep = "."), RNGkind())
)
Arguments
type The data type for the new object
k Number of clusters
genfunc The distribution function for data generation
seed The random number seed parameters for the data generation
Value
A metadata object
Examples
require(MASS)
initializeObject(type = "metric", k = 3, genfunc = mvrnorm)
metadata.binary-class A class that represents a metadata object for binary data
Description
A class that represents a metadata object for binary data
metadata.functional-class
A class that represents a metadata object for functional data
Description
A class that represents a metadata object for functional data
metadata.general-class
A class to represent a metadata object
Description
A class to represent a metadata object
Fields
clusters A list of cluster information
genfunc A string specifying a distribution for the random numbers
seedinfo A list with the parameters for the random number generator
metadata.metric-class A class that represents a metadata object for metric data
Description
A class that represents a metadata object for metric data
Fields
standardization If standardization is needed, function can be supplied
metadata.ordinal-class
A class that represents a metadata object for ordinal data
Description
A class that represents a metadata object for ordinal data
metadata.randomstring-class
A class that represents a metadata object for string data
Description
A class that represents a metadata object for string data
plot3dMetadata 3d plot of a metric metadata object
Description
3d plot of a metric metadata object
Usage
plot3dMetadata(m)
Arguments
m A metadata object (for metric data)
Value
A 3d plot using function plot3d from package rgl
Examples
require(MASS)
m <- new("metadata.metric",
clusters = list(c1 = list(n = 25, mu = c(4,5,4), Sigma=diag(1,3)),
c2 = list(n = 25, mu = c(-1,-2,-2), Sigma=diag(1,3))),
genfunc = mvrnorm)
plot3dMetadata(m)
plot3dMetadata,metadata.metric-method
3d plot of a metric metadata object
Description
3d plot of a metric metadata object
Usage
## S4 method for signature 'metadata.metric'
plot3dMetadata(m)
Arguments
m A metadata object (for metric data)
Value
A 3d plot using function plot3d from package rgl
plotMetadata Plot a metadata object
Description
Plot a metadata object
Usage
plotMetadata(m)
Arguments
m A metadata object
Value
A plot, created by generating an instance of the dataset from the metadata object
Examples
require(MASS)
m <- new("metadata.metric",
clusters = list(c1 = list(n = 25, mu = c(4,5), Sigma=diag(1,2)),
c2 = list(n = 25, mu = c(-1,-2), Sigma=diag(1,2))),
genfunc = mvrnorm)
plotMetadata(m)
plotMetadata,metadata.binary-method
Plot a metadata object
Description
Plot a metadata object
Usage
## S4 method for signature 'metadata.binary'
plotMetadata(m)
Arguments
m A metadata object
Value
A plot, created by generating an instance of the dataset from the metadata object
plotMetadata,metadata.functional-method
Plot a metadata object
Description
Plot a metadata object
Usage
## S4 method for signature 'metadata.functional'
plotMetadata(m)
Arguments
m A metadata object
Value
A plot, created by generating an instance of the dataset from the metadata object
plotMetadata,metadata.metric-method
Plot a metadata object
Description
Plot a metadata object
Usage
## S4 method for signature 'metadata.metric'
plotMetadata(m)
Arguments
m A metadata object
Value
A plot, created by generating an instance of the dataset from the metadata object
plotMetadata,metadata.ordinal-method
Plot a metadata object
Description
Plot a metadata object
Usage
## S4 method for signature 'metadata.ordinal'
plotMetadata(m)
Arguments
m A metadata object
Value
A plot, created by generating an instance of the dataset from the metadata object
sampleGrid Sample grid points for functional data
Description
Sample grid points for functional data
Usage
sampleGrid(total_n, minT, maxT, granularity, regular = FALSE)
Arguments
total_n Number of Observations
minT Minimum number of time points sampled
maxT Maximum number of time points sampled
granularity Number of possible time points in total
regular If TRUE, maxT time points are sampled at the same time points for each func-
tion
Value
A binary matrix indicating whether the function should be evaluated at a given time point
Examples
sampleGrid(total_n = 10, minT = 4, maxT = 10, granularity = 20)
saveSetup Saves a list of metadata objects to a new setup file
Description
Saves a list of metadata objects to a new setup file
Usage
saveSetup(
name,
author,
mail,
inst,
cit = "Unpublished",
objects,
table,
seedinfo = list(100, paste(R.version$major, R.version$minor, sep = "."), RNGkind()),
metaseedinfo = list(100, paste(R.version$major, R.version$minor, sep = "."),
RNGkind()),
custom_funcs = NULL,
custom_name = NULL
)
Arguments
name The name of the new setup (and thus also the filename)
author Full name of the author
mail Contact e-mail address of the author
inst Institution of the author
cit Reference to the publication where the setup was used, defaults to unpublished
objects List of metadata objects
table Info table for the setup
seedinfo Random number generator parameters for the data sets
metaseedinfo Random number generator parameters for the metadata
custom_funcs Custom functions that are needed to generate the meta(data)
custom_name Custom filename that deviates from the authorYear format
Value
A .R file that can be processed by create.dataset
Examples
require(MASS)
a = new("metadata.metric",
clusters = list(c1 = list(n = 25, mu = c(4,5), Sigma=diag(1,2)),
c2 = list(n = 25, mu = c(-1,-2), Sigma=diag(1,2))),
genfunc = mvrnorm)
b = new("metadata.metric",
clusters = list(c1 = list(n = 44, mu = c(1,2), Sigma=diag(1,2)),
c2 = list(n = 66, mu = c(-5,-6), Sigma=diag(1,2))),
genfunc = mvrnorm)
## Not run:
saveSetup(name="doe2002.R", author="<NAME>", mail="<EMAIL>",
inst="Example University", cit="Simple Data, pp. 23-24", objects=list(a, b),
table=data.frame(n = c(50, 110), k = c(2,2), shape = c("spherical", "spherical")))
unlink("doe2002.R")
## End(Not run)
summarizeSetup Returns the setup summary
Description
Returns the setup summary
Usage
summarizeSetup(name)
Arguments
name The name of the setup
Value
The summary table of the setup name
github.com/juju/ratelimit
README
---
### ratelimit
--
import "github.com/juju/ratelimit"
The ratelimit package provides an efficient token bucket implementation. See
<http://en.wikipedia.org/wiki/Token_bucket>.
#### Usage
###### func Reader
```
func Reader(r io.Reader, bucket *Bucket) io.Reader
```
Reader returns a reader that is rate limited by the given token bucket. Each token in the bucket represents one byte.
###### func Writer
```
func Writer(w io.Writer, bucket *Bucket) io.Writer
```
Writer returns a writer that is rate limited by the given token bucket. Each token in the bucket represents one byte.
###### type Bucket
```
type Bucket struct {
}
```
Bucket represents a token bucket that fills at a predetermined rate. Methods on Bucket may be called concurrently.
###### func NewBucket
```
func NewBucket(fillInterval time.Duration, capacity int64) *Bucket
```
NewBucket returns a new token bucket that fills at the rate of one token every fillInterval, up to the given maximum capacity. Both arguments must be positive.
The bucket is initially full.
###### func NewBucketWithQuantum
```
func NewBucketWithQuantum(fillInterval time.Duration, capacity, quantum int64) *Bucket
```
NewBucketWithQuantum is similar to NewBucket, but allows the specification of the quantum size - quantum tokens are added every fillInterval.
###### func NewBucketWithRate
```
func NewBucketWithRate(rate float64, capacity int64) *Bucket
```
NewBucketWithRate returns a token bucket that fills the bucket at the rate of rate tokens per second up to the given maximum capacity. Because of limited clock resolution, at high rates, the actual rate may be up to 1% different from the specified rate.
###### func (*Bucket) Available
```
func (tb *Bucket) Available() int64
```
Available returns the number of available tokens. It will be negative when there are consumers waiting for tokens. Note that if this returns greater than zero, it does not guarantee that calls that take tokens from the buffer will succeed, as the number of available tokens could have changed in the meantime. This method is intended primarily for metrics reporting and debugging.
###### func (*Bucket) Rate
```
func (tb *Bucket) Rate() float64
```
Rate returns the fill rate of the bucket, in tokens per second.
###### func (*Bucket) Take
```
func (tb *Bucket) Take(count int64) time.Duration
```
Take takes count tokens from the bucket without blocking. It returns the time that the caller should wait until the tokens are actually available.
Note that the request is irrevocable - there is no way to return tokens to the bucket once this method commits us to taking them.
###### func (*Bucket) TakeAvailable
```
func (tb *Bucket) TakeAvailable(count int64) int64
```
TakeAvailable takes up to count immediately available tokens from the bucket. It returns the number of tokens removed, or zero if there are no available tokens.
It does not block.
###### func (*Bucket) TakeMaxDuration
```
func (tb *Bucket) TakeMaxDuration(count int64, maxWait time.Duration) (time.Duration, bool)
```
TakeMaxDuration is like Take, except that it will only take tokens from the bucket if the wait time for the tokens is no greater than maxWait.
If it would take longer than maxWait for the tokens to become available, it does nothing and reports false, otherwise it returns the time that the caller should wait until the tokens are actually available, and reports true.
###### func (*Bucket) Wait
```
func (tb *Bucket) Wait(count int64)
```
Wait takes count tokens from the bucket, waiting until they are available.
###### func (*Bucket) WaitMaxDuration
```
func (tb *Bucket) WaitMaxDuration(count int64, maxWait time.Duration) bool
```
WaitMaxDuration is like Wait except that it will only take tokens from the bucket if it needs to wait for no greater than maxWait. It reports whether any tokens have been removed from the bucket. If no tokens have been removed, it returns immediately.
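The following is a minimal usage sketch (not part of the original README; the fill interval, capacity and token counts are illustrative assumptions):
```go
package main

import (
	"fmt"
	"os"
	"time"

	"github.com/juju/ratelimit"
)

func main() {
	// A bucket that adds one token every 100ms, holding at most 100 tokens.
	// With one token per byte, this limits throughput to about 10 bytes/second.
	bucket := ratelimit.NewBucket(100*time.Millisecond, 100)

	// Block until 10 tokens are available, then proceed.
	bucket.Wait(10)

	// Or take tokens without blocking and sleep for the returned duration.
	time.Sleep(bucket.Take(10))

	// Wrap a writer so each byte written consumes one token.
	w := ratelimit.Writer(os.Stdout, bucket)
	fmt.Fprintln(w, "rate-limited output")
}
```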
Documentation
---
### Overview
Package ratelimit provides an efficient token bucket implementation that can be used to limit the rate of arbitrary things.
See <http://en.wikipedia.org/wiki/Token_bucket>.
### Index
* [func Reader(r io.Reader, bucket *Bucket) io.Reader](#Reader)
* [func Writer(w io.Writer, bucket *Bucket) io.Writer](#Writer)
* [type Bucket](#Bucket)
* + [func NewBucket(fillInterval time.Duration, capacity int64) *Bucket](#NewBucket)
+ [func NewBucketWithClock(fillInterval time.Duration, capacity int64, clock Clock) *Bucket](#NewBucketWithClock)
+ [func NewBucketWithQuantum(fillInterval time.Duration, capacity, quantum int64) *Bucket](#NewBucketWithQuantum)
+ [func NewBucketWithQuantumAndClock(fillInterval time.Duration, capacity, quantum int64, clock Clock) *Bucket](#NewBucketWithQuantumAndClock)
+ [func NewBucketWithRate(rate float64, capacity int64) *Bucket](#NewBucketWithRate)
+ [func NewBucketWithRateAndClock(rate float64, capacity int64, clock Clock) *Bucket](#NewBucketWithRateAndClock)
* + [func (tb *Bucket) Available() int64](#Bucket.Available)
+ [func (tb *Bucket) Capacity() int64](#Bucket.Capacity)
+ [func (tb *Bucket) Rate() float64](#Bucket.Rate)
+ [func (tb *Bucket) Take(count int64) time.Duration](#Bucket.Take)
+ [func (tb *Bucket) TakeAvailable(count int64) int64](#Bucket.TakeAvailable)
+ [func (tb *Bucket) TakeMaxDuration(count int64, maxWait time.Duration) (time.Duration, bool)](#Bucket.TakeMaxDuration)
+ [func (tb *Bucket) Wait(count int64)](#Bucket.Wait)
+ [func (tb *Bucket) WaitMaxDuration(count int64, maxWait time.Duration) bool](#Bucket.WaitMaxDuration)
* [type Clock](#Clock)
### Constants
This section is empty.
### Variables
This section is empty.
### Functions
#### func [Reader](https://github.com/juju/ratelimit/blob/v1.0.2/reader.go#L17)
```
func Reader(r io.Reader, bucket *Bucket) io.Reader
```
Reader returns a reader that is rate limited by the given token bucket. Each token in the bucket represents one byte.
#### func [Writer](https://github.com/juju/ratelimit/blob/v1.0.2/reader.go#L41)
```
func Writer(w io.Writer, bucket *Bucket) io.Writer
```
Writer returns a writer that is rate limited by the given token bucket. Each token in the bucket represents one byte.
### Types
#### type [Bucket](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L44)
```
type Bucket struct {
// contains filtered or unexported fields
}
```
Bucket represents a token bucket that fills at a predetermined rate.
Methods on Bucket may be called concurrently.
#### func [NewBucket](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L79)
```
func NewBucket(fillInterval time.Duration, capacity int64) *Bucket
```
NewBucket returns a new token bucket that fills at the rate of one token every fillInterval, up to the given maximum capacity. Both arguments must be positive. The bucket is initially full.
#### func [NewBucketWithClock](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L85)
```
func NewBucketWithClock(fillInterval time.Duration, capacity int64, clock Clock) *Bucket
```
NewBucketWithClock is identical to NewBucket but injects a testable clock interface.
#### func [NewBucketWithQuantum](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L136)
```
func NewBucketWithQuantum(fillInterval time.Duration, capacity, quantum int64) *Bucket
```
NewBucketWithQuantum is similar to NewBucket, but allows the specification of the quantum size - quantum tokens are added every fillInterval.
#### func [NewBucketWithQuantumAndClock](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L143)
```
func NewBucketWithQuantumAndClock(fillInterval time.Duration, capacity, quantum int64, clock Clock) *Bucket
```
NewBucketWithQuantumAndClock is like NewBucketWithQuantum, but also has a clock argument that allows clients to fake the passing of time. If clock is nil, the system clock will be used.
#### func [NewBucketWithRate](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L98)
```
func NewBucketWithRate(rate float64, capacity int64) *Bucket
```
NewBucketWithRate returns a token bucket that fills the bucket at the rate of rate tokens per second up to the given maximum capacity. Because of limited clock resolution,
at high rates, the actual rate may be up to 1% different from the specified rate.
#### func [NewBucketWithRateAndClock](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L104)
```
func NewBucketWithRateAndClock(rate float64, capacity int64, clock Clock) *Bucket
```
NewBucketWithRateAndClock is identical to NewBucketWithRate but injects a testable clock interface.
#### func (*Bucket) [Available](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L250)
```
func (tb *Bucket) Available() int64
```
Available returns the number of available tokens. It will be negative when there are consumers waiting for tokens. Note that if this returns greater than zero, it does not guarantee that calls that take tokens from the buffer will succeed, as the number of available tokens could have changed in the meantime. This method is intended primarily for metrics reporting and debugging.
#### func (*Bucket) [Capacity](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L264)
```
func (tb *Bucket) Capacity() int64
```
Capacity returns the capacity that the bucket was created with.
#### func (*Bucket) [Rate](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L269)
```
func (tb *Bucket) Rate() float64
```
Rate returns the fill rate of the bucket, in tokens per second.
#### func (*Bucket) [Take](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L196)
```
func (tb *Bucket) Take(count int64) time.Duration
```
Take takes count tokens from the bucket without blocking. It returns the time that the caller should wait until the tokens are actually available.
Note that the request is irrevocable - there is no way to return tokens to the bucket once this method commits us to taking them.
#### func (*Bucket) [TakeAvailable](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L221)
```
func (tb *Bucket) TakeAvailable(count int64) int64
```
TakeAvailable takes up to count immediately available tokens from the bucket. It returns the number of tokens removed, or zero if there are no available tokens. It does not block.
#### func (*Bucket) [TakeMaxDuration](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L212)
```
func (tb *Bucket) TakeMaxDuration(count int64, maxWait time.Duration) (time.Duration, bool)
```
TakeMaxDuration is like Take, except that it will only take tokens from the bucket if the wait time for the tokens is no greater than maxWait.
If it would take longer than maxWait for the tokens to become available, it does nothing and reports false,
otherwise it returns the time that the caller should wait until the tokens are actually available, and reports true.
#### func (*Bucket) [Wait](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L169)
```
func (tb *Bucket) Wait(count int64)
```
Wait takes count tokens from the bucket, waiting until they are available.
#### func (*Bucket) [WaitMaxDuration](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L180)
```
func (tb *Bucket) WaitMaxDuration(count int64, maxWait time.Duration) bool
```
WaitMaxDuration is like Wait except that it will only take tokens from the bucket if it needs to wait for no greater than maxWait. It reports whether any tokens have been removed from the bucket. If no tokens have been removed, it returns immediately.
#### type [Clock](https://github.com/juju/ratelimit/blob/v1.0.2/ratelimit.go#L327)
```
type Clock interface {
	// Now returns the current time.
	Now() time.Time
	// Sleep sleeps for at least the given duration.
	Sleep(d time.Duration)
}
```
Clock represents the passage of time in a way that can be faked out for tests.
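Below is a sketch of how Clock might be faked for deterministic tests; the manualClock type and newTestBucket helper are illustrative assumptions, not part of the package:
```go
package ratelimit_test

import (
	"time"

	"github.com/juju/ratelimit"
)

// manualClock implements ratelimit.Clock with hand-advanced time,
// so bucket behaviour can be exercised without real sleeping.
type manualClock struct {
	now time.Time
}

// Now returns the current fake time.
func (c *manualClock) Now() time.Time { return c.now }

// Sleep advances the fake time instead of blocking.
func (c *manualClock) Sleep(d time.Duration) { c.now = c.now.Add(d) }

// newTestBucket builds a bucket driven by the fake clock:
// one token per second, capacity 10.
func newTestBucket() *ratelimit.Bucket {
	return ratelimit.NewBucketWithClock(time.Second, 10, &manualClock{now: time.Unix(0, 0)})
}
```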
GITHUB_J-AugustoManzano_livro_Logo-Intro-UCBLogo.zip_unzipped_Parte_1.pdf
CHAPTER 1 - Introduction
Declarative programming is characterized as a programming paradigm in which the computer is told what must be done, not how it is to be done, as occurs in the imperative paradigm. In this sense, this chapter presents the Logo programming language, based on the Lisp language, which was the first example of a declarative programming language.
1.1 - The Logo language
The Logo language was developed during the 1960s by a multidisciplinary team directed by the philosopher, mathematician, researcher and professor <NAME>, with the co-authorship of <NAME> at the Massachusetts Institute of Technology (MIT), with the direct participation of Professor <NAME> and the indirect participation of <NAME> and <NAME>.
In 1960 Professor Papert met <NAME> at the Sorbonne (France) and began with him a line of work related to learning theory. From this historic moment came the influence for his studies in the areas of artificial intelligence and educational robotics, giving rise to the Logo language (LOGO FOUNDATION, 2015).
In 1961, at a conference in England, Professor <NAME> met Professor <NAME> of MIT, one of the great exponents of Artificial Intelligence (AI). On that occasion they presented very similar papers on the study of AI and the use of this technology by children. This brought them closer and led Professor Papert, in 1964, to join the MIT Artificial Intelligence Group. At MIT Papert met <NAME>, a former student of Professor Minsky who had gone to work at a research and development company called Bolt, Beranek and Newman (BBN). At BBN <NAME> met <NAME> and put Feurzeig and Papert in contact.
On this occasion Bobrow, Papert and Feurzeig talked about the language that Papert wished to create for children, called Mathland, which later became Logo, a dialect of the Lisp language (PALEOTRONIC, 2021).
Logo is a declarative language with touches of imperative programming. It is founded on the principles of logic and functional programming and was initially aimed at children. When the language appeared, microcomputers did not exist and access to this technology was extremely restricted. However, after 1975, with the emergence of microcomputers and the falling cost of computing technology, the use of the language became more popular. The working environment is based mainly on a flat surface with an icon (a central cursor) called the turtle, whose purpose is to move across the plane drawing images based on geometric figures using four main commands, its operating primitives: FORWARD, BACK, RIGHT and LEFT. Besides these there are other resources that complement the language, such as REPEAT, IF, TO, PENDOWN and PENERASE, among others.
Although the language is widely known for its mode of operation called turtle geometry, it is more complete than that, possessing other resources. The turtle geometry mode is only one part of what the language actually is.
Being known as the first language for children also brought a small inconvenience. Many people fail to take the language seriously because they think it is merely a playful language for teaching children, when people of other ages can also make great use of it.
1.2 - Obtaining and installing the UCBLogo interpreter
This book focuses on the use of the Logo language (exclusively in English) according to the dialect found in the "Berkeley Logo" interpreter, also known as "UCBLogo". The acronym "UCB" comes from "University of California, Berkeley". Of the many existing Logo interpreters, "UCBLogo" is apparently the only one that has no linguistic localization for other languages, being made available only in English.
The Logo language of "UCBLogo" is considered the dialect closest to the original standard developed in the 1960s. This project inspired the development of the famous "MSWLogo" interpreter for the Windows operating system (currently called "FMSLogo"), which gave rise to the Brazilian interpreters "SuperLogo" and "BetaLogo", which unfortunately are outdated and are beginning to have certain problems running on the most recent Windows operating systems. Another curious detail about the "UCBLogo" environment is that it is presented in a minimalist form, with an appearance reminiscent of the fondly remembered Logo environments of the 1970s as established by the microcomputers from Amiga, Apple, Atari, Commodore, Texas, TRS, among many others.
The "UCBLogo" interpreter is provided for the Windows, UNIX (Linux) and Mac OS operating systems. Although the installations for UNIX (Linux) and Mac OS are presented, this work focuses on the installation and use of the interpreter on the Windows operating system.
1.2.1 - Windows installation
To obtain the "UCBLogo" interpreter for Windows, access the address "https://people.eecs.berkeley.edu/~bh/logo" shown in figure 1.1. Open your browser and enter the address to access the site.
Figure 1.1 - Official site of the "UCBLogo" project
In the fifth paragraph of the "UCBLogo" project page, locate the links for accessing the program files, as indicated in figure 1.2. In that paragraph, select the link for the desired operating system.
Figure 1.2 - Indication of the fifth paragraph with the download options
For the Windows operating system, select the Windows link as shown in figure 1.3. The file will be copied to your system, probably to the "Downloads" folder. Once the copy is complete, close the browser, open that folder in "Explorer" and double-click (normally the left mouse button) the file "ucblogosetup.exe" to proceed with its installation.
Figure 1.3 - Selecting the "Windows" link
Before the installation process starts, the "Open File - Security Warning" message box may be displayed, as shown in figure 1.4. At this point, to install, simply press the "Run" button. If the "Cancel" button is pressed, the installation process is cancelled.
After pressing the "Run" button in the "Open File - Security Warning" message box, the "License Agreement" dialog box is displayed under the title "Setup Berkeley Logo 6.2", as presented in figure 1.5. At this point, read the whole usage agreement of the program and, if you agree with its terms, select the "I accept the agreement" option and then press the "Next >" button. If you do not agree with the agreement, keep the "I do not accept the agreement" option selected and press the "Cancel" button to end the installation process.
If the agreement was accepted, the "Select Components" dialog box indicated in figure 1.6 is displayed. At this moment it is possible to select the type of installation. Normally the "Full installation" option is kept, which installs all the modules that accompany the program, but it is possible to select the "Compact installation" and "Custom installation" options. The "Compact installation" option installs only the first three modules, while the "Custom installation" option allows choosing selectively which modules to install or not. It is worth pointing out that the only mandatory module for any option is "Program Files". Keeping "Full installation" as the installation option, press the "Next >" button.
Figure 1.4 - "Security Warning" message box
Figure 1.5 - "License Agreement" dialog box
Figure 1.6 - "Select Components" dialog box
Figure 1.7 - "Select Additional Tasks" dialog box
The next step of the installation process is the selection of the ways of accessing the tool in the system, through the "Select Additional Tasks" dialog box indicated in figure 1.7. At this moment, simply keep the selected options and then press the "Next >" button.
From this point the program is ready to be installed. The "Ready to Install" dialog box presented in figure 1.8 shows a report of the previously selected options. If you wish to change anything, just press the "< Back" button to go back and redefine your options. If everything is in order, just press the "Install" button.
As soon as the "Install" button is pressed, the installation process begins. Its progress is shown in the "Installing" dialog box indicated in figure 1.9. During the installation, if the "Cancel" button is pressed, the installation process can be cancelled (terminated).
After some time, the "Completing the Berkeley Logo Setup Wizard" dialog box is displayed, as shown in figure 1.10. At this moment, uncheck the "View README file" and "Launch Berkeley Logo" options and press the "Finish" button.
Once the installation is complete, the language can already be used. Basically there are two automatic forms of access: one by selecting the "Berkeley Logo" icon created on the system's Desktop, and the other through the "Start" button. If the "Start" button is used, locate the "Berkeley Logo" folder in the menu and, inside it, select the "Berkeley Logo" icon.
Figure 1.8 - "Ready to Install" dialog box
Figure 1.9 - "Installing" dialog box
Figure 1.10 - "Completing the Berkeley Logo Setup Wizard" dialog box
Besides the forms indicated, there is another way to access the language: through the "Command Prompt" window. But for this to work it is necessary to perform a manual configuration so that the whole system knows where to locate the Logo interpreter, beyond calling it through its access icon.
The installation of the "UCBLogo" interpreter occurs automatically in a standard location, namely "C:\Users\Nome\AppData\Local\Programs\UCBLogo", where "Nome" is the login name of the system user account. On your system it will probably be your name. Figure 1.11 shows this location for the user "Augusto".
Figure 1.11 - Indication of the installation location of "UCBLogo"
To make the system "see" this location, it is necessary to put it in the system's environment variables. One of the ways to do this is through the "Search" field to the right of the "Start" button. In this field type "variables" and open the "Edit the system environment variables" entry, as shown in figure 1.12.
Figure 1.12 - Indication of the selection of the "Control Panel" mode
The "System Properties" dialog box will be presented; select the "Advanced" tab as shown in figure 1.13. Now press the "Environment Variables..." button and another dialog box, "Environment Variables", will be presented as indicated in figure 1.14.
Figure 1.13 - "System Properties" dialog box
Figure 1.14 - "Environment Variables" dialog box
Note that the "Environment Variables" dialog box has two access areas: one named "User variables for Name", where "Name" is the active user and where variables are defined for this user, and another named "System variables", which allows defining variables for the whole system and consequently for all users.
The selection of the access area is a mere matter of choice or specific need. Assuming that automatic access is to be defined for the whole system, direct your focus to the "System variables" area. That being so, locate the "Path" variable in the presented list, select it with the mouse pointer as shown in figure 1.15 and press the "Edit..." button.
Figure 1.15 - "Environment Variables" dialog box with the "Path" variable selected
An edit box called "Edit environment variable" is presented, as indicated in figure 1.16. Now press the "New" button and observe that an input field is then opened. Write in this field the access location of the "UCBLogo" interpreter, that is, enter "C:\Users\Nome\AppData\Local\Programs\UCBLogo" as shown in figure 1.17. Then press the "OK" button of "Edit environment variable", press the "OK" button of "Environment Variables" again and, finally, press the "OK" button of "System Properties".
Figure 1.16 - "Edit environment variable" edit box - "New" button
Figure 1.17 - "Edit environment variable" dialog box
From this moment on there is a third way to access the environment. To start the "Command Prompt" application, type "CMD" in the search field to the right of the "Start" button and press the "<Enter>" key, or from the "Start" button select the "Windows System" folder and select "Command Prompt". After the "Command Prompt" application is loaded into memory, just execute the call "ucblogo" at the command prompt for the environment to be presented.
Using the "Command Prompt" application to run the "UCBLogo" interpreter allows the loading of saved code files to be passed as a parameter in the call. To use it without any parameter, just execute the call "ucblogo" at the prompt.
1.2.2 - UNIX (Linux) installation
The installation on the Unix/Linux operating system can be done in two ways: directly from the address "https://people.eecs.berkeley.edu/~bh/logo", or through the package manager of the distribution in use (the safer way). The first way can be used with basically any distribution; the second depends on the existence of a package management program compatible with the distribution in use.
For the first case, select the Unix/Linux link as shown in figure 1.18. The file will be copied to your system, probably to the "Downloads" directory. Once the copy is complete, close the browser and open the "Terminal".
Figure 1.18 - Selecting the "Unix/Linux" link
Then open the Downloads directory and execute the following instructions at the prompt:
tar -xzf ucblogo.tar.gz
cd ucblogo-6.2.2
./configure
make
Occasionally, for some unknown reason or due to configuration characteristics, these instructions may not have the desired effect; in that case there is nothing that can be covered here: fix the problems of your system and repeat the indicated sequence until the installation succeeds. However, the smoothest way to install is to use the package manager of the distribution in use, such as: rpm, yum and dnf (Fedora); apt and apt-get (Ubuntu and Debian); yast and zypper (SUSE); equo (Sabayon); pacman (Arch), among others, though this does not mean that your package manager will have access to the "ucblogo" program in its default repository. For the tests in this book the "Fedora" and "Ubuntu" distributions were used.
On the "Fedora Linux" operating system (tested on version 33) follow these steps:
Open the "Terminal";
Execute the instruction "sudo dnf update";
Restart the system;
Open the "Terminal" again;
Execute the instruction "sudo dnf install ucblogo";
Wait for the installation to finish and then close the "Terminal";
On the desktop, in the "Show applications" menu, locate and launch the "Berkeley Logo" icon;
The installed version is edition "6.2.1".
On the "Fedora Linux" operating system it is also possible to call the interpreter directly in the "Terminal" by executing "ucblogo".
On the "Ubuntu Linux" operating system (tested on version 20) follow these steps:
Open the "Terminal";
Execute the instruction "sudo apt update";
Restart the system;
Open the "Terminal" again;
Execute the instruction "sudo apt install ucblogo";
Wait for the installation to finish and at the end execute the call "ucblogo" in the "Terminal";
The installed version is edition "6.1".
The use of the other package managers is not much different. It is always advisable to consult the documentation of each distribution carefully. For other distributions, a specially prepared package can be located and its installation carried out according to the rules of the distribution in use.
1.2.3 - macOS installation
To obtain the "UCBLogo" interpreter for macOS, access the address "https://people.eecs.berkeley.edu/~bh/logo". Select the MacOS X link as shown in figure 1.19.
Figure 1.19 - Selecting the "MacOS X" link
A dialog box will probably be displayed asking for authorization to receive the copy from the site in use, as shown in figure 1.20. In this case, press the "Allow" button.
Figure 1.20 - Copy authorization request
The file will be copied to the "Downloads" folder. Once the copy is complete, close the browser, open that folder in the "Finder" and double-click (normally the left mouse button) the file "UCBLogo.dmg", minimize the "Finder" window and observe on the desktop the indication of the opened application, as presented in figure 1.21.
Figure 1.21 - Availability for use of the "UCBLogo" program
Now maximize the "Finder" window and open the "Applications" option. Then drag the "UCBLogo" icon from the "UCBLogo" window to the "Applications" window. Done: the program is installed, but it cannot yet be executed. Close the open windows and, on the desktop, eject the "UCBLogo" icon.
Before using it, it will probably be necessary to authorize its execution, since the program was obtained from a non-approved source. So, in the "Finder" open "Applications", press and hold the "Control" key, select the "UCBLogo" icon and choose the "Open" option in the context menu. A dialog box will be presented, as indicated in figure 1.22, informing that the program comes from an unknown source and that authorizing the execution of an application can be dangerous, though not in this case. That being so, press the "Open" button.
Figure 1.22 - Execution warning
Next, another dialog box will be presented indicating that the program wants access, as shown in figure 1.23. At this moment, just press the "OK" button.
Figure 1.23 - Execution authorization
From these procedures on, the program can be used. To do so, from the "Finder" open "Applications" and locate the "UCBLogo" interpreter, selecting it with a double click of the mouse pointer.
[ 22 ]
INTRODUO 1.3 - O ambiente UCBLogo Assim que o interpretador "UCBLogo" carregado ocorre a apresentao de sua rea de trabalho operacional como mostram as figuras 1.24 (Windows), 1.25 (Linux) e 1.26 (macOS). Observe a apresentao da mensagem "Welcome to Berkeley Logo version 6.2" e a indicao do prompt
"?" que estabelece a porta de acesso ao ambiente.
Figura 1.24 - Ambiente do interpretador "UCBLogo" no "Windows 10"
Figura 1.25 - Ambiente do interpretador "UCBLogo" no "Linux (Fedora 33)"
Figura 1.26 - Ambiente do interpretador "UCBLogo" no "macOS Catalina"
Note that the environment has a retro style (regardless of the operating system in use),
configured with a black background and white text fonts. The screen has the title bar "Berkeley Logo" together with the buttons (on the right side):
Minimize, Maximize and Close. Below the title bar there is a menu bar with the options: File, Edit, Logo and Font. When selected, the menu actions present a set of operations under a certain category of action. Each of these operations is described below:
"Load Logo Session" command in the "File" menu, or the shortcut keys "<Ctrl>+<O>"
This command loads into memory a file of Logo scripts with the ".lg" extension.
"Save Logo Session" command in the "File" menu, or the shortcut keys "<Ctrl>+<S>"
This command saves to disk the Logo scripts held in memory, with the ".lg" extension.
"Save As" command in the "File" menu
This command saves to disk the Logo scripts held in memory under another name.
"Page Setup" command in the "File" menu
This command opens the dialog box for configuring the size and presentation mode of the paper to be used by the printer.
"Print Text Window" command in the "File" menu
This command prints on paper the text written in the program's working area.
"Print Preview Text Window" command in the "File" menu
This command displays the text content to be printed on paper, with the option of carrying out the printing.
"Print Turtle Graphics" command in the "File" menu
This command prints on paper the image drawn in the program's working area.
"Turtle Graphics Print Preview" command in the "File" menu
This command displays the drawn image content to be printed on paper, with the option of carrying out the printing.
"Quit" command in the "File" menu, or the shortcut keys "<Ctrl>+<Q>"
This command terminates the execution of the interpreter.
"Copy" command in the "Edit" menu, or the shortcut keys "<Ctrl>+<C>"
This command copies the selected content to the clipboard.
"Paste" command in the "Edit" menu, or the shortcut keys "<Ctrl>+<V>"
This command pastes the content held in the clipboard.
"Pause" command in the "Logo" menu, or the shortcut keys "<Alt>+<P>"
This command pauses the execution of a program operation.
"Stop" command in the "Logo" menu, or the shortcut keys "<Alt>+<S>"
This command cancels the execution of a program operation.
"Select Font..." command in the "Font" menu
This command allows the typeface used in the environment to be changed temporarily. Always try to use monospaced fonts.
"Increase Font Size" command in the "Logo" menu, or the shortcut keys "<Alt>+<+>"
This command increases the size of the text fonts.
"Decrease Font Size" command in the "Logo" menu, or the shortcut keys "<Alt>+<->"
This command decreases the size of the text fonts.
To terminate the execution of the interpreter, use the menu command "File/Quit UCBLogo", the shortcut keys "<Alt>+<F4>" or "<Ctrl>+<Q>", or click the "X" button on the title bar.
github.com/iximiuz/cdebug | go | Go | README
---
### cdebug - a swiss army knife of container debugging
```
! Support development of this project > patreon.com/iximiuz
```
With this tool you can:
* Troubleshoot containers lacking shell and/or debugging tools
* Forward unpublished or even localhost ports to your host system
* Expose endpoints from the host system to containers & Kubernetes networks
* Handily export image's and/or container's filesystem to local folders
* and more :)
The following *commands* x *runtimes* are supported:
| | Docker | Containerd | Kubernetes | Kubernetes CRI | runc |
| --- | --- | --- | --- | --- | --- |
| `exec` | ✅ | ✅ | - | - | - |
| `port-forward` local | ✅ | - | - | - | - |
| `port-forward` remote | 🛠️ | - | 🛠️ | - | - |
| `export` | - | - | - | - | - |
#### Installation
It's a statically linked Go binary, so you know the drill:
```
GOOS=linux GOARCH=amd64
curl -Ls https://github.com/iximiuz/cdebug/releases/latest/download/cdebug_${GOOS}_${GOARCH}.tar.gz | tar xvz
sudo mv cdebug /usr/local/bin
```
##### Homebrew
If you're a [Homebrew](https://brew.sh/) user, you can install the tool via brew on macOS or Linux:
```
$ brew install cdebug
```
At the moment, the following systems are (kinda sorta) supported:
* linux/amd64
* darwin/amd64
* darwin/arm64
#### Commands
##### cdebug exec
Run an interactive shell in a scratch, slim, or distroless container, with ease:
```
cdebug exec -it [docker|containerd://]<container>
```
The `cdebug exec` command is a crossbreeding of `docker exec` and `kubectl debug` commands.
You point the tool at a running container, say what toolkit image to use, and it starts a debugging "sidecar" container that *feels* like a `docker exec` session to the target container:
* The root filesystem of the debugger ***is*** the root filesystem of the target container.
* The target container isn't recreated and/or restarted.
* No extra volumes or copying of debugging tools is needed.
* The debugging tools ***are*** available in the target container.
By default, the `busybox:musl` (statically compiled) image is used for the debugger sidecar, but you can override it with the `--image` flag. Combining this with the superpower of Nix and [Nixery](https://nixery.dev/),
you can get all your favorite tools by simply listing them in the image name:
```
cdebug exec -it --image nixery.dev/shell/ps/vim/tshark <target-container>
```
How it works

The technique is based on the ideas from this [blog post](https://iximiuz.com/en/posts/docker-debug-slim-containers).
![](https://github.com/iximiuz/cdebug/raw/v0.0.14/assets/images/cdebug-exec.png)
Oversimplifying, the debugger container is started like:
```
docker run [-it] \
--network container:<target> \
--pid container:<target> \
--uts container:<target> \
<toolkit-image>
sh -c <<EOF
ln -s /proc/$$/root/bin/ /proc/1/root/.cdebug
export PATH=$PATH:/.cdebug
chroot /proc/1/root sh
EOF
```
The secret sauce is the symlink + PATH modification + chroot-ing.
##### cdebug port-forward
Forward local ports to containers and vice versa. This command is another crossbreeding -
this time it's `kubectl port-forward` and `ssh -L|-R`.
Currently, only local port forwarding (`cdebug port-forward -L`) is supported,
but remote port forwarding is under active development.
Local port forwarding use cases (works for Docker Desktop too!):
* Publish "unpublished" port 80 to a random port on the host: `cdebug port-forward <target> -L 80`
* Expose container's localhost to the host system: `cdebug port-forward <target> -L 127.0.0.1:5432`
* Proxy local traffic to a remote host via the target: `cdebug port-forward <target> -L <LOCAL_HOST>:<LOCAL_PORT>:<REMOTE_HOST>:<REMOTE_PORT>`
* 🛠️ Expose a Kubernetes service to the host system: `cdebug port-forward <target> -L 8888:my.svc.cluster.local:443`
Remote port forwarding use cases:
* Start a container/Pod forwarding traffic destined to its `<IP>:<port>` to a non-cluster endpoint reachable from the host system.
* ...
How it works
**Local port forwarding** is implemented by starting an extra forwarder container in the target's network and publishing its ports to the host using the standard means (e.g.,
`docker run --publish`). The forwarder container itself runs something like:
`socat TCP-LISTEN:<REMOTE_PORT>,fork TCP-CONNECT:<REMOTE_HOST>:<REMOTE_PORT>`
![](https://github.com/iximiuz/cdebug/raw/v0.0.14/assets/images/cdebug-port-forward-local-direct.png)
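For illustration, a hand-rolled equivalent of the direct case could look roughly like this. This is only a sketch: `<target-network>` and `<target-ip>` are placeholders, and the `nixery.dev/socat` image stands in for whatever forwarder image cdebug actually pulls:

```
# publish host port 8080 and relay it to <target-ip>:80 inside the target's network
docker run -d --rm \
    --network <target-network> \
    --publish 8080:80 \
    nixery.dev/socat \
    socat TCP-LISTEN:80,fork TCP-CONNECT:<target-ip>:80
```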
If the *REMOTE_HOST* doesn't belong to the target or it's the target's localhost,
an extra sidecar container is started in the target's network namespace with another socat forwarding traffic from the target public interface to `REMOTE_HOST:REMOTE_PORT`.
![](https://github.com/iximiuz/cdebug/raw/v0.0.14/assets/images/cdebug-port-forward-local-sidecar.png)
**Remote port forwarding** will use similar tricks but combined with more advanced reverse tunneling.
#### Examples
Below are a few popular scenarios formatted as reproducible demos.
##### A simple interactive shell to a distroless container
First, a target container is needed. Let's use a distroless nodejs image for that:
```
$ docker run -d --rm \
--name my-distroless gcr.io/distroless/nodejs \
-e 'setTimeout(() => console.log("Done"), 99999999)'
```
Now, let's start an interactive shell (using busybox) into the above container:
```
$ cdebug exec -it my-distroless
```
Exploring the filesystem shows that it's a rootfs of the nodejs container:
```
/ $# ls -lah
total 60K
drwxr-xr-x    1 root     root        4.0K Oct 17 23:49 .
drwxr-xr-x    1 root     root        4.0K Oct 17 23:49 ..
👉 lrwxrwxrwx 1 root     root          18 Oct 17 23:49 .cdebug-c153d669 -> /proc/55/root/bin/
-rwxr-xr-x    1 root     root           0 Oct 17 19:49 .dockerenv
drwxr-xr-x    2 root     root        4.0K Jan  1  1970 bin
drwxr-xr-x    2 root     root        4.0K Jan  1  1970 boot
drwxr-xr-x    5 root     root         340 Oct 17 19:49 dev
drwxr-xr-x    1 root     root        4.0K Oct 17 19:49 etc
drwxr-xr-x    3 nonroot  nonroot     4.0K Jan  1  1970 home
drwxr-xr-x    1 root     root        4.0K Jan  1  1970 lib
drwxr-xr-x    2 root     root        4.0K Jan  1  1970 lib64
drwxr-xr-x    5 root     root        4.0K Jan  1  1970 nodejs
...
```
Notice 👉 above - that's where the debugging tools live:
```
/ $# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/.cdebug-c153d669
```
The process tree of the debugger container is the process tree of the target:
```
/ $# ps auxf
PID   USER     TIME  COMMAND
1 root 0:00 /nodejs/bin/node -e setTimeout(() => console.log("Done"),
13 root 0:00 sh -c set -euo pipefail sleep 999999999 & SANDBOX_PID=$!
19 root 0:00 sleep 999999999
21 root 0:00 sh
28 root 0:00 [sleep]
39 root 0:00 [sleep]
45 root 0:00 ps auxf
```
##### An interactive shell with code editor (vim)
If the tools provided by busybox aren't enough, you can bring your own tools with a ~~little~~ huge help of the [nixery](https://nixery.dev/) project:
```
$ cdebug exec -it --image nixery.dev/shell/vim my-distroless
```
##### An interactive shell with tshark and other advanced tools
An even more powerful example:
```
$ cdebug exec -it --image nixery.dev/shell/ps/findutils/tshark my-distroless
```
##### Debugging containerd containers (no Docker required)
First, start the target container:
```
$ sudo ctr image pull docker.io/library/nginx:latest
$ sudo ctr run -d docker.io/library/nginx:latest nginx-1
```
Run an interactive shell in the target container using simple `cdebug exec`:
```
$ sudo cdebug exec -it containerd://nginx-1
/ $# wget -O- 127.0.0.1
```
Run VIM in the target container using `cdebug exec --image nixery.dev/shell/vim`:
```
$ sudo cdebug exec -it --rm --image nixery.dev/shell/vim containerd://nginx-1
```
##### Debugging nerdctl containers (no Docker required)
Start a container using nerdctl:
```
$ sudo $(which nerdctl) run -d --name nginx-1 nginx
9f8763d82259a6e3e747df83d0ce8b7ee3d33d94269a72cd04e0e3862a3abc5f
```
Run the debugger using the `nerdctl://` schema and the target's name:
```
$ sudo cdebug exec -it --rm nerdctl://nginx-1
```
Or run a debugging session in the above container using the `containerd://` schema:
```
$ sudo cdebug exec -it --rm containerd://9f876
```
##### Debugging Kubernetes Pods (node access is assumed)
Currently, only containerd CRI is supported. First, you'll need to list the running containers:
```
$ ctr -n k8s.io container ls
CONTAINER        IMAGE                                        RUNTIME
155227c0e9aa8    k8s.gcr.io/pause:3.5                         io.containerd.runc.v2
2220eacd9cb26    registry.k8s.io/kube-apiserver:v1.25.3       io.containerd.runc.v2
22efcb35a651a    registry.k8s.io/etcd:3.5.4-0                 io.containerd.runc.v2
28e06cc63b822    docker.io/calico/cni:v3.24.1                 io.containerd.runc.v2
30754c8492f18    docker.io/calico/node:v3.24.1                io.containerd.runc.v2
61acdb0231516    docker.io/calico/kube-controllers:v3.24.1    io.containerd.runc.v2
...
```
Now you can exec into a Pod's container bringing your own debugging tools:
```
$ cdebug exec -n k8s.io -it --rm containerd://2220ea
```
##### Publish "forgotten" port
Start an nginx container but don't expose its port 80:
```
$ docker run -d --name nginx-1 nginx:1.23
```
Forward local port 8080 to the nginx's 80:
```
$ cdebug port-forward nginx-1 -L 8080:80
$ curl localhost:8080
```
##### Expose localhost's ports
Start a containerized service that listens only on its localhost:
```
$ docker run -d --name svc-1 python:3-alpine python3 -m 'http.server' -b 127.0.0.1 8888
```
Tap into the above service:
```
$ cdebug port-forward svc-1 -L 127.0.0.1:8888
Pulling forwarder image...
latest: Pulling from shell/socat
Digest: sha256:b43b6cf8d22615616b13c744b8ff525f5f6c0ca6c11b37fa3832a951ebb3c20c
Status: Image is up to date for nixery.dev/shell/socat:latest
Forwarding 127.0.0.1:49176 to 127.0.0.1:8888 through 172.17.0.4:34128
$ curl localhost:49176
<!DOCTYPE HTML>
<html lang="en">
<head>
...
```
#### F.A.Q
**Q:** Running `cdebug exec` fails with `rm: cannot remove '/proc/1/root/nix': Permission denied` or
`ln: /proc/1/root/.cdebug-XXXXXXXX: Permission denied`.
Chances are your target container has been started with elevated permissions while you're trying to run a non-privileged debugger sidecar. Try `cdebug exec --privileged` instead.
#### Similar tools
* [`docker-slim debug`](https://github.com/docker-slim/docker-slim) - a PoC `debug` command for DockerSlim (contributed by [D4N](https://github.com/D4N))
* [`debug-ctr`](https://github.com/felipecruz91/debug-ctr) - a debugger that creates a new container out of the original container with the toolkit mounted in a volume (by [<NAME>](https://github.com/felipecruz91))
* [`docker-debug`](https://github.com/zeromake/docker-debug) - much like `cdebug exec` but without the chroot trick.
* [`docker-opener`](https://github.com/artemkaxboy/docker-opener) - a multi-purpose tool that in particular can run a shell session into your container (and if there is no shell inside, it'll bring its own busybox).
* [`cntr`](https://github.com/Mic92/cntr) - is "a replacement for `docker exec` that brings all your developers tools with you" by mounting the file system from one container (or the host) into the target container and creating a nested container with the help of a FUSE filesystem. Supports a huge range of runtimes (docker, podman, LXC/LXD, rkt, systemd-nspawn, containerd) because it operates on the OS level.
* [`kdiag`](https://github.com/solo-io/kdiag) - a kubectl plugin to get shell access to scratch containers, stream logs from multiple pods simultaneously, and do reverse port forwarding to Kubernetes clusters.
#### TODO:
* More `exec` flags (like in `docker run`): `--cap-add`, `--cap-drop`, `--env`, `--volume`, etc.
* Helper command(s) suggesting nix(ery) packages
* Non-docker runtimes (containerd, runc, k8s)
* E2E Tests
#### Contributions
It's a pre-alpha with no sound design yet, so I may not be accepting all PRs. Sorry about that :)
optiSolve | cran | R | Package ‘optiSolve’
October 14, 2022
Type Package
Title Linear, Quadratic, and Rational Optimization
Version 1.0
Date 2021-10-13
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.4)
Description Solver for linear, quadratic, and rational programs with linear, quadratic, and
rational constraints. A unified interface to different R packages is provided. Optimization
problems are transformed into equivalent formulations and solved by the respective package.
For example, quadratic programming problems with linear, quadratic and rational constraints
can be solved by augmented Lagrangian minimization using package 'alabama', or by
sequential quadratic programming using solver 'slsqp'. Alternatively, they can be reformulated
as optimization problems with second order cone constraints and solved with package 'cccp'.
License GPL-2
Imports Matrix, shapes, alabama, cccp, nloptr, MASS, methods, plyr,
stringr, stats, Rcpp (>= 0.12.4)
RoxygenNote 7.1.2
NeedsCompilation no
Repository CRAN
Date/Publication 2021-10-13 12:32:04 UTC
R topics documented:
optiSolve-package
adjust
cop
lbcon
lincon
linfun
myQ
myQ1
myQ2
phenotype
print.copValidation
quadcon
quadfun
ratiocon
ratiofun
solvecop
ubcon
validate
optiSolve-package Linear, Quadratic, and Rational Optimization
Description
Solver for linear, quadratic, and rational programs with linear, quadratic, and rational constraints.
A unified interface to different R packages is provided. Optimization problems are transformed
into equivalent formulations and solved by the respective package. For example, quadratic
programming problems with linear, quadratic and rational constraints can be solved by augmented
Lagrangian minimization using package 'alabama', or by sequential quadratic programming using
solver 'slsqp'. Alternatively, they can be reformulated as optimization problems with second order
cone constraints and solved with package 'cccp'.
Details
The following steps are included in solving a constrained optimization problem (cop):
1) Define the objective with one of the following functions:
linfun defines a linear objective function,
quadfun defines a quadratic objective function,
ratiofun defines a rational objective function.
2) Define the constraints by using the following functions:
lincon defines linear equality and inequality constraints,
quadcon defines quadratic constraints,
ratiocon defines rational constraints,
lbcon defines lower bounds for the variables,
ubcon defines upper bounds for the variables.
3) Put the objective function and the constraints together to define the optimization problem:
cop defines a constrained optimization problem.
4) Solve the optimization problem:
solvecop solves a constrained optimization problem.
5) Check if the solution fulfils all constraints:
validate checks if the solution fulfils all constraints, and calculates the values of the constraints.
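Put together, the five steps look like the following minimal sketch. It is adapted from the
examples further below and uses the data sets phenotype and myQ shipped with the package:

library(optiSolve)
data(phenotype)
data(myQ)
## 1) objective: maximize the mean breeding value
f <- linfun(a=phenotype$BV, id=phenotype$Indiv, name="BV")
## 2) constraints: lower bounds, linear constraints on the sexes, quadratic kinship constraint
A <- t(model.matrix(~Sex-1, data=phenotype))
lb <- lbcon(0, id=phenotype$Indiv)
lc <- lincon(A=A, dir=c("==","=="), val=c(0.5,0.5), id=phenotype$Indiv)
qc <- quadcon(Q=myQ, d=0.001, val=0.045, name="Kinship", id=rownames(myQ))
## 3) define the problem, 4) solve it, and 5) validate the solution
mycop <- cop(f=f, max=TRUE, lb=lb, lc=lc, qc=qc)
res <- solvecop(mycop, solver="cccp2", quiet=TRUE)
validate(mycop, res)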
Author(s)
<NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>. (1988). A software package for sequential quadratic programming, Technical Report
DFVLR-FB 88-28, Institut fuer Dynamik der Flugsysteme, Oberpfaffenhofen, July 1988.
<NAME>, Optimization, 2004, Springer.
<NAME>, <NAME>, <NAME>, Optimization With Constraints, 2004, IMM, Technical University
of Denmark.
adjust Adjust Constraints and Objective Functions
Description
Constraints and objective functions are adjusted so that they refer to a larger or smaller set of
variables.
Usage
adjust(x, ids)
Arguments
x Constraint or objective function of class "linFun", "linCon", "quadFun", "quadCon",
"ratioFun", and "ratioCon".
ids Vector with ids of the variables.
Details
Constraints and objective functions are adjusted so that they refer to a larger or smaller set of
variables. Additional variables do not affect the value of the constraint or objective function.
Value
A data frame (invisible) containing values and bounds of the constraints, the value of the objective
function, and column valid which is TRUE if all constraints are fulfilled.
See Also
The main function for solving constrained programming problems is solvecop.
cop Constrained Optimization Problem
Description
Define a constrained optimization problem with a linear, quadratic, or rational objective function,
and linear, quadratic, rational, and boundary constraints.
Usage
cop(f, max=FALSE, lb=NULL, ub=NULL, lc=NULL, ...)
Arguments
f Objective function, defined with function linfun, quadfun, or ratiofun.
max Logical value. Should the function be maximized? This is possible only for
linear objective functions.
lb Lower bounds for the variables, defined with function lbcon.
ub Upper bounds for the variables, defined with function ubcon.
lc Linear inequality and equality constraints, defined with function lincon.
... Quadratic and rational inequality constraints, defined with functions quadcon
and ratiocon.
Details
Define a constrained optimization problem with a linear, quadratic, or rational objective function,
and linear, quadratic, rational, and boundary constraints. The optimization problem can be solved
with function solvecop.
Value
An object of class COP, which may contain the following components
f List with S3-class "linFun", "quadFun", or "ratioFun", defining the objective
function
max Logical value. Should the objective function be maximized?
lb List with S3-class "lbCon", defining lower bounds.
ub List with S3-class "ubCon", defining upper bounds.
lc List with S3-class "linCon", defining linear constraints
qc List with S3-class "quadCon", defining quadratic constraints
rc List with S3-class "ratioCon", defining rational constraints
x Vector with NAs
id Vector with names of the variables that are to be optimized
madeDefinite Logical variable indicating whether non-positive-semidefinite matrices have
already been approximated by positive-definite matrices.
Author(s)
<NAME>
See Also
The main function for solving constrained programming problems is solvecop.
lbcon Lower Bounds
Description
Define lower bounds for the variables of the form
val <= x.
Usage
lbcon(val=numeric(0), id=seq_along(val))
Arguments
val Numeric vector with lower bounds for the variables. If val is a single value,
then this value will be used for all variables in vector id.
id Vector defining the names of the variables to which the constraint applies. Each
variable name corresponds to one component of x. Variable names must be
consistent across constraints.
Details
Define lower bounds for the variables of the form
val <= x.
Vector x contains only the variables included in argument id.
Value
An object of class lbCon.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Linear programming with linear and quadratic constraints ###
### Example from animal breeding ###
### The mean breeding value BV is maximized whereas the ###
### mean kinship in the offspring x'Qx+d is restricted ###
### Lower and upper bounds for females are identical, so ###
### their contributions are not optimized. ###
### Lower and upper bounds for some males are defined. ###
data(phenotype)
data(myQ)
A <- t(model.matrix(~Sex-1, data=phenotype))
A[,1:5]
val <- c(0.5, 0.5)
dir <- c("==","==")
Nf <- sum(phenotype$Sex=="female")
id <- phenotype$Indiv
lbval <- setNames(rep(0, length(id)), id)
ubval <- setNames(rep(NA, length(id)), id)
lbval[phenotype$Sex=="female"] <- 1/(2*Nf)
ubval[phenotype$Sex=="female"] <- 1/(2*Nf)
lbval["276000102379430"] <- 0.02
ubval["276000121507437"] <- 0.03
mycop <- cop(f = linfun(a=phenotype$BV, id=id, name="BV"),
max= TRUE,
lb = lbcon(lbval, id=id),
ub = ubcon(ubval, id=id),
lc = lincon(A=A, dir=dir, val=val, id=id),
qc = quadcon(Q=myQ, d=0.001, val=0.045,
name="Kinship", id=rownames(myQ)))
res <- solvecop(mycop, solver="cccp2", quiet=FALSE)
Evaluation <- validate(mycop, res)
# valid solver status
# TRUE cccp2 optimal
#
# Variable Value Bound OK?
# -------------------------------------
# BV 0.5502 max :
# -------------------------------------
# lower bounds all x >= lb : TRUE
# upper bounds all x <= ub : TRUE
# Sexfemale 0.5 == 0.5 : TRUE
# Sexmale 0.5 == 0.5 : TRUE
# Kinship 0.045 <= 0.045 : TRUE
# -------------------------------------
res$x["276000102379430"]
res$x["276000121507437"]
lincon Linear Constraints
Description
Define linear equality and inequality constraints of the form
Ax + d <= val,  Ax + d == val,  or  Ax + d >= val
Usage
lincon(A, d=rep(0, nrow(A)), dir=rep("==",nrow(A)), val=rep(0, nrow(A)),
id=1:ncol(A), use=rep(TRUE,nrow(A)), name=rownames(A))
Arguments
A Numeric matrix of the constraint coefficients.
d Numeric vector.
dir Character vector with the directions of the constraints. Each element must be
one of "<=", "==", and ">=".
val Numeric vector with threshold values.
id Vector (if present), defining the names of the variables to which the constraint
applies. Each variable name corresponds to one component of x. Variable names
must be consistent across constraints.
use Logical vector indicating the constraints to be included in the optimization prob-
lem. If use[i]=FALSE, then linear constraint i does not affect the result, but the
value of the linear function A[i,] x + d[i] will be reported by function validate.
name Vector with names of the constraints.
Details
Define linear inequality and equality constraints of the form
Ax + d <= val,  Ax + d == val,  or  Ax + d >= val
(componentwise). If parameter id is specified, then vector x contains only the indicated variables.
Value
An object of class linCon.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Quadratic programming with linear constraints ###
### Example from animal breeding ###
### The mean kinship in the offspring x'Qx+d is minimized ###
### and the mean breeding value is restricted. ###
data(phenotype)
data(myQ)
A <- t(model.matrix(~Sex+BV-1, data=phenotype))
A[,1:5]
val <- c(0.5, 0.5, 0.40)
dir <- c("==","==",">=")
mycop <- cop(f = quadfun(Q=myQ, d=0.001, name="Kinship", id=rownames(myQ)),
lb = lbcon(0, id=phenotype$Indiv),
ub = ubcon(NA, id=phenotype$Indiv),
lc = lincon(A=A, dir=dir, val=val, id=phenotype$Indiv))
res <- solvecop(mycop, solver="cccp", quiet=FALSE)
validate(mycop, res)
# valid solver status
# TRUE cccp optimal
#
# Variable Value Bound OK?
# -------------------------------------
# Kinship 0.0322 min :
# -------------------------------------
# lower bounds all x >= lb : TRUE
# Sexfemale 0.5 == 0.5 : TRUE
# Sexmale 0.5 == 0.5 : TRUE
# BV 0.4 >= 0.4 : TRUE
# -------------------------------------
linfun Linear Objective Function
Description
Define a linear objective function of the form
f(x) = a^T x + d.
Usage
linfun(a, d=0, id=1:length(a), name="lin.fun")
Arguments
a Numeric vector of the coefficients.
d Numeric value.
id Vector defining the names of the variables to which the function applies. Each
variable name corresponds to one component of x. Variable names must be
consistent across constraints.
name Name for the objective function.
Details
Define a linear objective function of the form
f(x) = a^T x + d.
Value
An object of class linFun.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Linear programming with linear and quadratic constraints ###
### Example from animal breeding ###
### The mean breeding value BV is maximized whereas the ###
### mean kinship in the offspring x'Qx+d is restricted ###
data(phenotype)
data(myQ)
A <- t(model.matrix(~Sex-1, data=phenotype))
A[,1:5]
val <- c(0.5, 0.5)
dir <- c("==","==")
mycop <- cop(f = linfun(a=phenotype$BV, id=phenotype$Indiv, name="BV"),
max= TRUE,
lb = lbcon(0, id=phenotype$Indiv),
ub = ubcon(NA, id=phenotype$Indiv),
lc = lincon(A=A, dir=dir, val=val, id=phenotype$Indiv),
qc = quadcon(Q=myQ, d=0.001, val=0.035, name="Kinship", id=rownames(myQ)))
res <- solvecop(mycop, solver="cccp2", quiet=FALSE)
validate(mycop, res)
# valid solver status
# TRUE cccp2 optimal
#
# Variable Value Bound OK?
# -------------------------------------
# BV 0.7667 max :
# -------------------------------------
# lower bounds all x >= lb : TRUE
# Sexfemale 0.5 == 0.5 : TRUE
# Sexmale 0.5 == 0.5 : TRUE
# Kinship 0.035 <= 0.035 : TRUE
# -------------------------------------
myQ Kinship Matrix
Description
Kinship matrix of the cattle listed in data frame phenotype. This is an (almost) positive semidefinite
matrix.
Usage
data(myQ)
Format
Matrix
myQ1 Kinship Matrix
Description
Matrix needed to compute kinship at native alleles for the cattle listed in data frame phenotype.
This is an (almost) positive semidefinite matrix.
Usage
data(myQ1)
Format
Matrix
myQ2 Kinship Matrix
Description
Matrix needed to compute kinship at native alleles for the cattle listed in data frame phenotype.
This is an (almost) positive semidefinite matrix.
Usage
data(myQ2)
Format
Matrix
phenotype Phenotypes of Genotyped Cattle
Description
Phenotypes of cattle.
Usage
data(phenotype)
Format
Data frame containing information on genotyped cattle. The columns contain the IDs of the
individuals (Indiv), simulated breeding values (BV), simulated sexes (Sex), and genetic contributions
from other breeds (MC).
print.copValidation Print Validation of a Solution
Description
Print the validation results for the solution of an optimization problem.
Usage
## S3 method for class 'copValidation'
print(x, ...)
Arguments
x The result of function validate.
... Unused additional arguments.
Details
Print the validation results for the solution of an optimization problem.
Value
A list of class copValidation (invisible) with components:
summary Data frame containing one row for each constraint with the value of the
constraint in column Val, the bound for the constraint in column Bound, and
column OK stating if the constraint is fulfilled. The value of the objective
function is shown in the first row. Additional rows contain the values of
disabled constraints.
info Data frame with component valid indicating if all constraints are fulfilled,
component solver containing the name of the solver used for optimization, and
component status describing the solution as reported by the solver.
var Data frame with the values of the objective function and constraints at the
optimum.
obj.fun Named numeric value with value and name of the objective function at the
optimum.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Quadratic programming with linear constraints ###
### Example from animal breeding ###
### where the mean kinship in the offspring is minimized ###
data(phenotype)
data(myQ)
A <- t(model.matrix(~Sex+BV-1, data=phenotype))
rownames(A) <- c("male.cont","female.cont", "Breeding.Value")
val <- c(0.5, 0.5, 0.40)
dir <- c("==","==",">=")
mycop <- cop(f = quadfun(Q=myQ, d=0.001, name="Kinship", id=rownames(myQ)),
lb = lbcon(0, id=phenotype$Indiv),
ub = ubcon(NA, id=phenotype$Indiv),
lc = lincon(A=A, dir=dir, val=val, id=phenotype$Indiv))
res <- solvecop(mycop, solver="cccp", quiet=FALSE, trace=FALSE)
head(res$x)
Evaluation <- validate(mycop, res, quiet=TRUE)
print(Evaluation)
# valid solver status
# TRUE cccp optimal
#
# Variable Value Bound OK?
# ---------------------------------------
# Kinship 0.0322 min :
# ---------------------------------------
# lower bounds all x >= lb : TRUE
# male.cont 0.5 == 0.5 : TRUE
# female.cont 0.5 == 0.5 : TRUE
# Breeding.Value 0.4 >= 0.4 : TRUE
# ---------------------------------------
quadcon Quadratic Constraint
Description
Define a quadratic constraint of the form
x^T Q x + a^T x + d ≤ val
Usage
quadcon(Q, a=rep(0, nrow(Q)), d=0, dir="<=", val,
id=1:nrow(Q), name="quadratic", use=TRUE)
Arguments
Q Numeric symmetric matrix of the constraint coefficients.
a Numeric vector.
d Numeric value.
dir Character string "<=".
val Numeric threshold value, which is the upper bound for the quadratic function.
id Vector defining the names of the variables to which the constraint applies. Each
variable name corresponds to one component of x. Variable names must be
consistent across constraints.
name Name for the constraint.
use Logical value indicating if the constraint should be included in the optimization
problem. If use=FALSE, then constraint does not affect the result, but the value
of the quadratic function will be reported by function validate.
Details
Define a quadratic inequality constraint of the form
x^T Q x + a^T x + d ≤ val.
Vector x contains only the variables included in argument id.
Value
An object of class quadCon.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Linear programming with linear and quadratic constraints ###
### Example from animal breeding ###
### The mean breeding value BV is maximized whereas the ###
### mean kinship in the offspring x'Qx+d is restricted ###
data(phenotype)
data(myQ)
A <- t(model.matrix(~Sex-1, data=phenotype))
A[,1:5]
val <- c(0.5, 0.5)
dir <- c("==","==")
mycop <- cop(f = linfun(a=phenotype$BV, id=phenotype$Indiv, name="BV"),
max= TRUE,
lb = lbcon(0, id=phenotype$Indiv),
ub = ubcon(NA, id=phenotype$Indiv),
lc = lincon(A=A, dir=dir, val=val, id=phenotype$Indiv),
qc = quadcon(Q=myQ, d=0.001, val=0.035, name="Kinship", id=rownames(myQ)))
res <- solvecop(mycop, solver="cccp2", quiet=FALSE)
validate(mycop, res)
# valid solver status
# TRUE cccp2 optimal
#
# Variable Value Bound OK?
# -------------------------------------
# BV 0.7667 max :
# -------------------------------------
# lower bounds all x >= lb : TRUE
# Sexfemale 0.5 == 0.5 : TRUE
# Sexmale 0.5 == 0.5 : TRUE
# Kinship 0.035 <= 0.035 : TRUE
# -------------------------------------
quadfun Quadratic Objective Function
Description
Define a quadratic objective function of the form
f(x) = x^T Q x + a^T x + d
Usage
quadfun(Q, a=rep(0, nrow(Q)), d=0, id=1:nrow(Q), name="quad.fun")
Arguments
Q Numeric symmetric matrix of the constraint coefficients.
a Numeric vector.
d Numeric value.
id Vector (if present), defining the names of the variables to which the function
applies. Each variable name corresponds to one component of x. Variable names
must be consistent across constraints.
name Name for the objective function.
Details
Define a quadratic objective function of the form
f(x) = x^T Q x + a^T x + d
Value
An object of class quadFun.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Quadratic programming with linear constraints ###
### Example from animal breeding ###
### The mean kinship in the offspring x'Qx+d is minimized ###
### and the mean breeding value is restricted. ###
data(phenotype)
data(myQ)
A <- t(model.matrix(~Sex+BV-1, data=phenotype))
A[,1:5]
val <- c(0.5, 0.5, 0.40)
dir <- c("==","==",">=")
mycop <- cop(f = quadfun(Q=myQ, d=0.001, name="Kinship", id=rownames(myQ)),
lb = lbcon(0, id=phenotype$Indiv),
ub = ubcon(NA, id=phenotype$Indiv),
lc = lincon(A=A, dir=dir, val=val, id=phenotype$Indiv))
res <- solvecop(mycop, solver="cccp", quiet=FALSE)
validate(mycop, res)
# valid solver status
# TRUE cccp optimal
#
# Variable Value Bound OK?
# -------------------------------------
# Kinship 0.0322 min :
# -------------------------------------
# lower bounds all x >= lb : TRUE
# Sexfemale 0.5 == 0.5 : TRUE
# Sexmale 0.5 == 0.5 : TRUE
# BV 0.4 >= 0.4 : TRUE
# -------------------------------------
ratiocon Rational Constraint
Description
Define a rational constraint of the form
(x^T Q1 x + a1^T x + d1) / (x^T Q2 x + a2^T x + d2) ≤ val
Usage
ratiocon(Q1, a1=rep(0, nrow(Q1)), d1=0, Q2, a2=rep(0, nrow(Q2)), d2=0, dir="<=", val,
id=1:nrow(Q1), name="rational", use=TRUE)
Arguments
Q1 Numeric quadratic matrix.
a1 Numeric vector.
d1 Numeric value.
Q2 Numeric quadratic matrix.
a2 Numeric vector.
d2 Numeric value.
dir Character string "<=".
val Numeric threshold value, which is the upper bound for the rational function.
id Vector defining the names of the variables to which the constraint applies. Each
variable name corresponds to one component of x. Variable names must be
consistent across constraints.
name Name for the constraint.
use Logical value indicating if the constraint should be included in the optimization
problem. If use=FALSE, then the constraint does not affect the result, but the
value of the rational function will be reported by function validate.
Details
Define a rational inequality constraint of the form
(x^T Q1 x + a1^T x + d1) / (x^T Q2 x + a2^T x + d2) ≤ val.
Vector x contains only the variables included in argument id.
For rational constraints it is required that there is a linear constraint ensuring that sum(x) is a
constant. Furthermore, the denominator must be non-negative.
Value
An object of class ratioCon.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Constrained optimization with rational objective ###
### function and linear and quadratic constraints ###
### Example from animal breeding ###
### The mean kinship at native alleles in the offspring is minimized ###
### The mean breeding value and the mean kinship are constrained ###
data(phenotype)
data(myQ)
data(myQ1)
data(myQ2)
A <- t(model.matrix(~Sex+BV+MC-1, data=phenotype))
A[,1:5]
val <- c(0.5, 0.5, 0.4, 0.5 )
dir <- c("==", "==", ">=", "<=")
mycop <- cop(f = quadfun(Q=myQ, d=0.001, name="Kinship", id=rownames(myQ)),
lb = lbcon(0, id=phenotype$Indiv),
ub = ubcon(NA, id=phenotype$Indiv),
lc = lincon(A=A, dir=dir, val=val, id=phenotype$Indiv),
rc = ratiocon(Q1=myQ1, Q2=myQ2, d1=0.0004, d2=0.00025, val=0.040,
id=rownames(myQ1), name="nativeKinship")
)
res <- solvecop(mycop, solver="slsqp", quiet=FALSE)
validate(mycop, res)
# valid solver status
# TRUE slsqp successful completion
#
# Variable Value Bound OK?
# --------------------------------------
# Kinship 0.0324 min :
# --------------------------------------
# lower bounds all x >= lb : TRUE
# Sexfemale 0.5 == 0.5 : TRUE
# Sexmale 0.5 == 0.5 : TRUE
# BV 0.4 >= 0.4 : TRUE
# MC 0.4668 <= 0.5 : TRUE
# nativeKinship 0.04 <= 0.04 : TRUE
# --------------------------------------
ratiofun Rational Objective Function
Description
Define a rational objective function of the form
f(x) = (x^T Q1 x + a1^T x + d1) / (x^T Q2 x + a2^T x + d2).
Usage
ratiofun(Q1, a1=rep(0, nrow(Q1)), d1=0, Q2, a2=rep(0, nrow(Q2)), d2=0,
id=1:nrow(Q1), name="ratio.fun")
Arguments
Q1 Numeric quadratic matrix.
a1 Numeric vector.
d1 Numeric value.
Q2 Numeric quadratic matrix.
a2 Numeric vector.
d2 Numeric value.
id Vector defining the names of the variables to which the constraint applies. Each
variable name corresponds to one component of x. Variable names must be
consistent across constraints.
name Name for the constraint.
Details
Define a rational objective function of the form
f(x) = (x^T Q1 x + a1^T x + d1) / (x^T Q2 x + a2^T x + d2).
Reasonable bounds for the variables should be provided because the function can have several local
optima. Solvers 'slsqp' (the default) and 'alabama' are recommended.
Value
An object of class ratioFun.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Constrained optimization with rational objective ###
### function and linear and quadratic constraints ###
### Example from animal breeding ###
### The mean kinship at native alleles in the offspring is minimized ###
### The mean breeding value and the mean kinship are constrained ###
data(phenotype)
data(myQ)
data(myQ1)
data(myQ2)
Ax <- t(model.matrix(~Sex+BV+MC-1, data=phenotype))
Ax[,1:5]
val <- c(0.5, 0.5, 0.4, 0.5 )
dir <- c("==", "==", ">=", "<=")
mycop <- cop(f = ratiofun(Q1=myQ1, Q2=myQ2, d1=0.0004, d2=0.00025,
id=rownames(myQ1), name="nativeKinship"),
lb = lbcon(0, id=phenotype$Indiv),
ub = ubcon(NA, id=phenotype$Indiv),
lc = lincon(A=Ax, dir=dir, val=val, id=phenotype$Indiv),
qc = quadcon(Q=myQ, d=0.001, val=0.035,
name="Kinship", id=rownames(myQ)))
res <- solvecop(mycop, quiet=FALSE)
validate(mycop, res)
# valid solver status
# TRUE slsqp successful completion
#
# Variable Value Bound OK?
# --------------------------------------
# nativeKinship 0.0366 min :
# --------------------------------------
# lower bounds all x >= lb : TRUE
# Sexfemale 0.5 == 0.5 : TRUE
# Sexmale 0.5 == 0.5 : TRUE
# BV 0.4 >= 0.4 : TRUE
# MC 0.4963 <= 0.5 : TRUE
# Kinship 0.035 <= 0.035 : TRUE
# --------------------------------------
solvecop Solve a Constrained Optimization Problem
Description
Solve a constrained optimization problem with a linear, quadratic, or rational objective function,
and linear, quadratic, rational, and boundary constraints.
Usage
solvecop(op, solver="default", make.definite=FALSE, X=NULL, quiet=FALSE, ...)
Arguments
op An optimization problem, usually created with function cop.
solver Character string with the name of the solver. Available solvers are "alabama",
"cccp", "cccp2", and "slsqp". Solver "csdp" is temporarily disabled because
the package Rcsdp has been removed from CRAN. The default means that the
solver is chosen automatically. The solvers are described in the Details section.
make.definite Logical variable indicating whether non-positive-semidefinite matrices should
be approximated by positive-definite matrices. This is always done for solvers
that are known not to converge otherwise.
X Starting vector of parameter values (not needed). Any initial vector, even those
violating linear inequality constraints, may be specified. Ignored by solvers
"cccp" and "csdp". For "slsqp" the lower and upper bounds must not be
violated.
quiet Logical variable indicating whether output to console should be switched off.
... Tuning parameters of the solver. The available parameters depend on the solver
and will be printed when the function is used with quiet=FALSE. In section
Details it is mentioned where descriptions of these parameters can be found.
Details
Solve a constrained optimization problem with a linear, quadratic, or rational objective function,
and linear, quadratic, rational, and boundary constraints.
Solver
"alabama": The augmented lagrangian minimization algorithm auglag from package alabama is
called. The method combines the objective function and a penalty for each constraint into a single
function. This modified objective function is then passed to another optimization algorithm with
no constraints. If the constraints are violated by the solution of this sub-problem, then the size of
the penalties is increased and the process is repeated. The default methods for the uncontrained
optimization in the inner loop is the quasi-Newton method called BFGS. Tuning parameters used
for the outer loop are described in the details section of the help page of function auglag. Tuning
parameters used for the inner loop are described in the details section of the help page of function
optim.
"cccp" and "cccp2": Function cccp from package cccp for solving cone constrained convex pro-
grams is called. For solver "cccp", quadratic constraints are converted into second order cone
constraints, which requires to approximate non-positive-semidefinite matrices by positive-definite
matrices. For solver "cccp2", quadratic constraints are defined by functions. The implemented
algorithms are partially ported from CVXOPT. Tuning parameters are those from function ctrl.
"slsqp": The sequential (least-squares) quadratic programming (SQP) algorithm slsqp for gradient-
based optimization from package nloptr. The algorithm optimizes successive second-order (quadratic/least-
squares) approximations of the objective function, with first-order (affine) approximations of the
constraints. Available parameters are described in nl.opts
Value
A list with the following components:
x Named numeric vector with parameters optimizing the objective function while
satisfying constraints, if convergence is successful.
solver Name of the solver used for optimization.
status Message indicating type of convergence as reported by the solver.
Author(s)
<NAME>
Examples
### Quadratic programming with linear constraints ###
### Example from animal breeding ###
### where the mean kinship in the offspring is minimized ###
data(phenotype)
data(myQ)
A <- t(model.matrix(~Sex+BV-1, data=phenotype))
rownames(A) <- c("male.cont","female.cont", "Breeding.Value")
val <- c(0.5, 0.5, 0.40)
dir <- c("==","==",">=")
mycop <- cop(f = quadfun(Q=myQ, d=0.001, name="Kinship", id=rownames(myQ)),
lb = lbcon(0, id=phenotype$Indiv),
ub = ubcon(NA, id=phenotype$Indiv),
lc = lincon(A=A, dir=dir, val=val, id=phenotype$Indiv))
res <- solvecop(mycop, solver="cccp", quiet=FALSE, trace=FALSE)
head(res$x)
hist(res$x,breaks=50,xlim=c(0,0.5))
Evaluation <- validate(mycop, res)
Evaluation$summary
Evaluation$info
Evaluation$obj.fun
Evaluation$var
Evaluation$var$Breeding.Value
ubcon Upper Bounds
Description
Define upper bounds for the variables of the form
x <= val.
Usage
ubcon(val=numeric(0), id=seq_along(val))
Arguments
val Numeric vector with upper bounds for the variables. If val is a single value,
then this value will be used for all variables in vector id.
id Vector defining the names of the variables to which the constraint applies. Each
variable name corresponds to one component of x. Variable names must be
consistent across constraints.
Details
Define upper bounds for the variables of the form
x <= val.
Vector x contains only the variables included in argument id.
Value
An object of class ubCon.
See Also
The main function for solving constrained programming problems is solvecop.
Examples
### Linear programming with linear and quadratic constraints ###
### Example from animal breeding ###
### The mean breeding value BV is maximized whereas the ###
### mean kinship in the offspring x'Qx+d is restricted ###
### Lower and upper bounds for females are identical, so ###
### their contributions are not optimized. ###
### Lower and upper bounds for some males are defined. ###
data(phenotype)
data(myQ)
A <- t(model.matrix(~Sex-1, data=phenotype))
A[,1:5]
val <- c(0.5, 0.5)
dir <- c("==","==")
Nf <- sum(phenotype$Sex=="female")
id <- phenotype$Indiv
lbval <- setNames(rep(0, length(id)), id)
ubval <- setNames(rep(NA, length(id)), id)
lbval[phenotype$Sex=="female"] <- 1/(2*Nf)
ubval[phenotype$Sex=="female"] <- 1/(2*Nf)
lbval["276000102379430"] <- 0.02
ubval["276000121507437"] <- 0.03
mycop <- cop(f = linfun(a=phenotype$BV, id=id, name="BV"),
max= TRUE,
lb = lbcon(lbval, id=id),
ub = ubcon(ubval, id=id),
lc = lincon(A=A, dir=dir, val=val, id=id),
qc = quadcon(Q=myQ, d=0.001, val=0.045,
name="Kinship", id=rownames(myQ)))
res <- solvecop(mycop, solver="cccp2", quiet=FALSE)
Evaluation <- validate(mycop, res)
# valid solver status
# TRUE cccp2 optimal
#
# Variable Value Bound OK?
# -------------------------------------
# BV 0.5502 max :
# -------------------------------------
# lower bounds all x >= lb : TRUE
# upper bounds all x <= ub : TRUE
# Sexfemale 0.5 == 0.5 : TRUE
# Sexmale 0.5 == 0.5 : TRUE
# Kinship 0.045 <= 0.045 : TRUE
# -------------------------------------
validate Validate a Solution
Description
Validate a solution of an optimization problem.
Usage
validate(op, sol, quiet=FALSE, tol=0.0001)
Arguments
op The constrained optimization problem defined with function cop.
sol The solution of the optimization problem obtained with function solvecop.
quiet Logical variable indicating whether output to console should be switched off.
tol The tolerance. A constraint is considered fulfilled even if the value exceeds (falls
below) the threshold value by tol.
Details
Validate a solution of an optimization problem by checking if the constraints are fulfilled.
Values and bounds of the constraints are printed.
Value
A list of class copValidation with components:
summary Data frame containing one row for each constraint with the value of the
constraint in column Val, the bound for the constraint in column Bound, and
column OK stating if the constraint is fulfilled. The value of the objective
function is shown in the first row. Additional rows contain the values of
disabled constraints.
info Data frame with component valid indicating if all constraints are fulfilled,
component solver containing the name of the solver used for optimization, and
component status describing the solution as reported by the solver.
var Data frame with the values of the objective function and constraints at the
optimum.
obj.fun Named numeric value with value and name of the objective function at the
optimum.
Author(s)
<NAME>
See Also
The main function for solving constrained programming problems is solvecop.
apicache-keyv | npm | JavaScript | A simple API response caching middleware for Express/Node using plain-english durations.
===
#### Supports any Keyv-compatible storage backend
I added the ability to use keyv storage backends.
Why?
---
Because route-caching of simple data/responses should ALSO be simple.
Usage
---
To use, simply inject the middleware (example: `apicache.middleware('5 minutes', [optionalMiddlewareToggle])`) into your routes. Everything else is automagic.
#### Cache a route
```
import express from 'express'
import apicache from 'apicache'

let app = express()
let cache = apicache.middleware

app.get('/api/collection/:id?', cache('5 minutes'), (req, res) => {
  // do some work... this will only occur once per 5 minutes
  res.json({ foo: 'bar' })
})
```
#### Cache all routes
```
let cache = apicache.middleware

app.use(cache('5 minutes'))

app.get('/will-be-cached', (req, res) => {
  res.json({ success: true })
})
```
#### Use with Keyv storage
```
import express from 'express'
import Keyv from 'keyv'
import apicache from 'apicache'

let app = express()

let cacheWithRedis = apicache.options({
  storage: new CacheStorage(new Keyv('redis://user:pass@localhost:6379')),
}).middleware

let cacheWithPostgres = apicache.options({
  storage: new CacheStorage(new Keyv('postgresql://user:pass@localhost:5432/dbname')),
}).middleware

app.get('/will-be-cached-with-redis', cacheWithRedis('5 minutes'), (req, res) => {
  res.json({ success: true })
})

app.get('/will-be-cached-with-pg', cacheWithPostgres('5 minutes'), (req, res) => {
  res.json({ success: true })
})
```
#### Cache grouping and manual controls
```
import apicache from 'apicache'

let cache = apicache.middleware

app.use(cache('5 minutes'))

// routes are automatically added to index, but may be further added
// to groups for quick deleting of collections
app.get('/api/:collection/:item?', (req, res) => {
  req.apicacheGroup = req.params.collection
  res.json({ success: true })
})

// add route to display cache performance (courtesy of @killdash9)
app.get('/api/cache/performance', (req, res) => {
  res.json(apicache.getPerformance())
})

// add route to display cache index
app.get('/api/cache/index', (req, res) => {
  res.json(apicache.getIndex())
})

// add route to manually clear target/group
app.get('/api/cache/clear/:target?', (req, res) => {
  res.json(apicache.clear(req.params.target))
})

/*

GET /api/foo/bar --> caches entry at /api/foo/bar and adds a group called 'foo' to index
GET /api/cache/index --> displays index
GET /api/cache/clear/foo --> clears all cached entries for 'foo' group/collection

*/
```
#### Use with middleware toggle for fine control
```
// higher-order function returns false for responses of other status codes (e.g. 403, 404, 500, etc)
const onlyStatus200 = (req, res) => res.statusCode === 200

const cacheSuccesses = cache('5 minutes', onlyStatus200)

app.get('/api/missing', cacheSuccesses, (req, res) => {
  res.status(404).json({ results: 'will not be cached' })
})

app.get('/api/found', cacheSuccesses, (req, res) => {
  res.json({ results: 'will be cached' })
})
```
#### Prevent cache-control header "max-age" from automatically being set to expiration age
```
let cache = apicache.options({
  headers: {
    'cache-control': 'no-cache',
  },
}).middleware

let cache5min = cache('5 min') // continue to use normally
```
API
---
* `apicache.options([globalOptions])` - getter/setter for global options. If used as a setter, this function is chainable, allowing you to do things such as... say... return the middleware.
* `apicache.middleware([duration], [toggleMiddleware], [localOptions])` - the actual middleware that will be used in your routes. `duration` is in the following format "[length][unit]", as in `"10 minutes"` or `"1 day"`. A second param is a middleware toggle function, accepting request and response params, and must return truthy to enable cache for the request. Third param is the options that will override global ones and affect this middleware only.
* `middleware.options([localOptions])` - getter/setter for middleware-specific options that will override global ones.
* `apicache.getPerformance()` - returns current cache performance (cache hit rate)
* `apicache.getIndex()` - returns current cache index [of keys]
* `apicache.clear([target])` - clears cache target (key or group), or entire cache if no value passed, returns new index.
* `apicache.newInstance([options])` - used to create a new ApiCache instance (by default, simply requiring this library shares a common instance)
* `apicache.clone()` - used to create a new ApiCache instance with the same options as the current one
#### Available Options (first value is default)
```
{
  debug: false|true, // if true, enables console output
  defaultDuration: '1 hour', // should be either a number (in ms) or a string, defaults to 1 hour
  enabled: true|false, // if false, turns off caching globally (useful on dev)
  redisClient: client, // if provided, uses the [node-redis](https://github.com/NodeRedis/node_redis) client instead of [memory-cache](https://github.com/ptarjan/node-cache)
  appendKey: fn(req, res), // appendKey takes the req/res objects and returns a custom value to extend the cache key
  headerBlacklist: [], // list of headers that should never be cached
  statusCodes: {
    exclude: [], // list status codes to specifically exclude (e.g. [404, 403] cache all responses unless they had a 404 or 403 status)
    include: [], // list status codes to require (e.g. [200] caches ONLY responses with a success/200 code)
  },
  trackPerformance: false, // enable/disable performance tracking... WARNING: super cool feature, but may cause memory overhead issues
  headers: {
    // 'cache-control': 'no-cache' // example of header overwrite
  }
}
```
##### *Optional: Typescript Types (courtesy of [@danielsogl](https://github.com/danielsogl))
```
$ npm install -D @types/apicache
```
Custom Cache Keys
---
Sometimes you need custom keys (e.g. save routes per-session, or per method).
We've made it easy!
**Note:** All req/res attributes used in the generation of the key must have been set previously (upstream). The entire route logic block is skipped on future cache hits so it can't rely on those params.
```
apicache.options({
  appendKey: (req, res) => req.method + res.session.id,
})
```
Cache Key Groups
---
Oftentimes it benefits us to group cache entries, for example, by collection (in an API). This would enable us to clear all cached "post" requests if we updated something in the "post" collection for instance. Adding a simple `req.apicacheGroup = [somevalue];` to your route enables this. See example below:
```
var apicache = require('apicache')
var cache = apicache.middleware

// GET collection/id
app.get('/api/:collection/:id?', cache('1 hour'), function(req, res, next) {
  req.apicacheGroup = req.params.collection
  // do some work
  res.send({ foo: 'bar' })
})

// POST collection/id
app.post('/api/:collection/:id?', function(req, res, next) {
  // update model
  apicache.clear(req.params.collection)
  res.send('added a new item, so the cache has been cleared')
})
```
Additionally, you could add manual cache control to the previous project with routes such as these:
```
// GET apicache index (for the curious)
app.get('/api/cache/index', function(req, res, next) {
  res.send(apicache.getIndex())
})

// clear a cache entry or group by key
app.get('/api/cache/clear/:key?', function(req, res, next) {
  res.send(200, apicache.clear(req.params.key || req.query.key))
})
```
Debugging/Console Out
---
#### Using Node environment variables (plays nicely with the hugely popular [debug](https://www.npmjs.com/package/debug) module)
```
$ export DEBUG=apicache
$ export DEBUG=apicache,othermoduleThatDebugModuleWillPickUp,etc
```
#### By setting internal option
```
import apicache from 'apicache'

apicache.options({ debug: true })
```
Client-Side Bypass
---
When sharing `GET` routes between admin and public sites, you'll likely want the routes to be cached from your public client, but NOT cached when coming from the admin client. This is achieved by sending a `"x-apicache-bypass": true` header along with the request from the admin.
The presence of this header flag will bypass the cache, ensuring you aren't looking at stale data.
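For instance, an admin frontend could send the header with the browser's fetch API (a minimal sketch; the route is illustrative):

```
fetch('/api/collection/42', {
  headers: { 'x-apicache-bypass': 'true' },
})
```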
Contributors
---
Special thanks to all those that use this library and report issues, but especially to the following active users that have helped add to the core functionality!
* [@killdash9](https://github.com/killdash9) - restify support, performance/stats system, and too much else at this point to list
* [@svozza](https://github.com/svozza) - added restify tests, test suite refactor, and fixed header issue with restify. Node v7 + Restify v5 conflict resolution, etag/if-none-match support, etcetc, etc. Triple thanks!!!
* [@andredigenova](https://github.com/andredigenova) - Added header blacklist as options, correction to caching checks
* [@peteboere](https://github.com/peteboere) - Node v7 headers update
* [@rutgernation](https://github.com/rutgernation) - JSONP support
* [@enricsangra](https://github.com/enricsangra) - added x-apicache-force-fetch header
* [@tskillian](https://github.com/tskillian) - custom appendKey path support
* [@agolden](https://github.com/agolden) - Content-Encoding preservation (for gzip, etc)
* [@davidyang](https://github.com/davidyang) - express 4+ compatibility
* [@nmors](https://github.com/nmors) - redis support
* [@maytis](https://github.com/maytis), [@ashwinnaidu](https://github.com/ashwinnaidu) - redis expiration
* [@ubergesundheit](https://github.com/ubergesundheit) - Corrected buffer accumulation using res.write with Buffers
* [@danielsogl](https://github.com/danielsogl) - Keeping dev deps up to date, Typescript Types
* [@vectart](https://github.com/vectart) - Added middleware local options support
* [@davebaol](https://github.com/davebaol) - Added string support to defaultDuration option (previously just numeric ms)
* [@Rauttis](https://github.com/rauttis) - Added ioredis support
* [@fernandolguevara](https://github.com/fernandolguevara) - Added opt-out for performance tracking, great emergency fix, thank you!!
### Bugfixes, tweaks, documentation, etc.
* @Amhri, @Webcascade, @conmarap, @cjfurelid, @scambier, @lukechilds, @Red-Lv, @gesposito, @viebel, @RowanMeara, @GoingFast, @luin, @keithws, @daveross, @apascal, @guybrush
### Changelog
* **v1.5.3** - multiple fixes: Redis should be connected before using (thanks @guybrush)
* **v1.5.2** - multiple fixes: Buffer deprecation and _headers deprecation, { trackPerformance: false } by default per discussion (sorry semver...)
* **v1.5.1** - adds { trackPerformance } option to enable/disable performance tracking (thanks @fernandolguevara)
* **v1.5.0** - exposes apicache.getPerformance() for per-route cache metrics (@killdash9 continues to deliver)
* **v1.4.0** - cache-control header now auto-decrements in cached responses (thanks again, @killdash9)
* **v1.3.0** - [securityfix] apicache headers no longer embedded in cached responses when NODE_ENV === 'production' (thanks for feedback @satya-jugran, @smddzcy, @adamelliotfields). Updated deps, now requiring Node v6.00+.
* **v1.2.6** - middlewareToggle() now prevents response block on cache hit + falsy toggle (thanks @apascal)
* **v1.2.5** - uses native Node setHeader() rather than express.js header() (thanks @keithws and @daveross)
* **v1.2.4** - force content type to Buffer, using old and new Buffer creation syntax
* **v1.2.3** - add etag to if-none-match 304 support (thanks for the test/issue @svozza)
* **v1.2.2** - bugfix: ioredis.expire params (thanks @GoingFast and @luin)
* **v1.2.1** - Updated deps
* **v1.2.0** - Supports ioredis (thanks @Rauttis)
* **v1.1.1** - bugfixes in expiration timeout clearing and content header preservation under compression (thanks @RowanMeara and @samimakicc).
* **v1.1.0** - added the much-requested feature of a custom appendKey function (previously only took a path to a single request attribute). Now takes (request, response) objects and returns some value to be appended to the cache key.
* **v1.0.0** - stamping v0.11.2 into official production version, will now begin developing on branch v2.x (redesign)
* **v0.11.2** - dev-deps update, courtesy of @danielsogl
* **v0.11.1** - correction to status code caching, and max-age headers are no longer sent when not cached. middlewareToggle now works as intended with example of statusCode checking (checks during shouldCacheResponse cycle)
* **v0.11.0** - Added string support to defaultDuration option, previously just numeric ms - thanks @davebaol
* **v0.10.0** - added ability to blacklist headers (prevents caching) via options.headersBlacklist (thanks @andredigenova)
* **v0.9.1** - added eslint in prep for v1.x branch, minor ES6 to ES5 in master branch tests
* **v0.9.0** - corrected Node v7.7 & v8 conflicts with restify (huge thanks to @svozza for chasing this down and fixing upstream in restify itself). Added coveralls. Added middleware.localOptions support (thanks @vectart). Added ability to overwrite/embed headers
(e.g. "cache-control": "no-cache") through options.
* **v0.8.8** - corrected to use node v7+ headers (thanks @peteboere)
* **v0.8.6, v0.8.7** - README update
* **v0.8.5** - dev dependencies update (thanks @danielsogl)
* **v0.8.4** - corrected buffer accumulation, with test support (thanks @ubergesundheit)
* **v0.8.3** - added tests for x-apicache-bypass and x-apicache-force-fetch (legacy) and fixed a bug in the latter (thanks @Red-Lv)
* **v0.8.2** - test suite and mock API refactor (thanks @svozza)
* **v0.8.1** - fixed restify support and added appropriate tests (thanks @svozza)
* **v0.8.0** - modifies response accumulation (thanks @killdash9) to support res.write + res.end accumulation, allowing integration with restify. Adds gzip support (Node v4.3.2+ now required) and tests.
* **v0.7.0** - internally sets cache-control/max-age headers of response object
* **v0.6.0** - removed final dependency (debug) and updated README
* **v0.5.0** - updated internals to use res.end instead of res.send/res.json/res.jsonp, allowing for any response type, adds redis tests
* **v0.4.0** - dropped lodash and memory-cache external dependencies, and bumped node version requirements to 4.0.0+ to allow Object.assign native support
django-websocket-redis-plus | readthedoc | Python | django-websocket-redis 0.5.1 documentation
Websockets for Django applications using Redis as message queue[¶](#websockets-for-django-applications-using-redis-as-message-queue)
===
This module implements websockets on top of Django without requiring any additional framework. For messaging it uses the [Redis datastore](http://redis.io/). In a production environment, it is intended to work under
[uWSGI](http://uwsgi-docs.readthedocs.org/en/latest/WebSockets.html) and behind [NGiNX](http://nginx.com/). In a development environment, it can be used with `./manage.py runserver`.
Project’s home[¶](#project-s-home)
---
Check for the latest release of this project on [Github](https://github.com/jrief/django-websocket-redis).
Please report bugs or ask questions using the [Issue Tracker](https://github.com/jrief/django-websocket-redis/issues).
Contents[¶](#contents)
---
### Introduction[¶](#introduction)
Application servers such as Django and Ruby-on-Rails were developed without the intention of supporting long-lived connections. These frameworks therefore are not a good fit for web applications that must react to asynchronous events initiated by the server. One feasible solution for clients wishing to be notified of events is to poll the server continuously using an XMLHttpRequest (Ajax).
This however produces a lot of traffic and, depending on the granularity of the polling interval,
it is not a viable solution for real-time events such as chat applications or browser-based multiplayer games.
Web applications written in Python usually use WSGI as the communication layer between the webserver and themselves. WSGI is a stateless protocol which defines how to handle requests and build responses in a simple way abstracted from the HTTP protocol, but by design it does not support non-blocking requests.
#### The WSGI protocol can not support websockets[¶](#the-wsgi-protocol-can-not-support-websockets)
In Django, the web server accepts an incoming request, sets up a WSGI dictionary which then is passed to the application server. There the HTTP headers and the payload are created, and immediately afterwards the request is finished and flushed to the client. This processing typically requires a few dozen milliseconds. The throughput such a server can handle is the number of concurrent workers divided by the average response time. Each worker requires its own thread/process, and a good rule of thumb is to configure not more than twice as many workers as the number of cores available on that host. Otherwise you will see a decrease in overall performance, caused by too many context switches, for which the scheduler of the operating system is responsible.
Due to this workflow, it is almost impossible to add support for long term connections, such as websockets, on top of the WSGI protocol specification. Therefore most websocket implementations go for another approach. The websocket connection is controlled by a service running side by side with the default application server. Here, a webserver with support for long term connections,
dispatches the requests from the clients.
A webserver able to dispatch websocket requests is the [NGiNX](http://nginx.com/) server. Normal requests are sent to Django using the WSGI protocol, whereas the long living websocket connections are passed over to a special service responsible only for that.
A typical implementation proposal is to use [socket.io](http://socket.io/) running inside a [NodeJS](http://nodejs.org/) loop.
Here, **Django** communicates with **Node.JS** using a RESTful API. This however is hard to maintain, because it pulls in two completely different technologies. In alternative proposals, other Python-based asynchronous event frameworks such as [Tornado](http://www.tornadoweb.org/) or [Twisted](http://twistedmatrix.com/) are used. But these all look like makeshift solutions, since one has to run a second framework side by side with **Django**. This makes the project dependent on additional infrastructure and thus harder to maintain. Moreover, having to run two concurrent frameworks can be quite cumbersome during application development,
especially while debugging code.
#### uWSGI[¶](#uwsgi)
While searching for a simpler solution, I found out that [uWSGI offers websockets](http://uwsgi-docs.readthedocs.org/en/latest/WebSockets.html) right out of the box. With [Redis](http://redis.io/) as a message queue, and a few lines of Python code, one can communicate bidirectionally with any WSGI-based framework, for instance **Django**. Of course, here it is also prohibitive to create a new thread for each open websocket connection. Therefore that part of the code runs in a single thread/process for all open connections, in a cooperative concurrency mode using the excellent [gevent](http://www.gevent.org/) and [greenlet](http://greenlet.readthedocs.org/) libraries.
This approach has some advantages:
* It is simpler to implement.
* The asynchronous I/O loop handling websockets can run
+ inside Django with `./manage.py runserver`, giving full debugging control.
+ as a stand alone HTTP server, using uWSGI.
+ using NGiNX or Apache (>= 2.4) as proxy in two decoupled loops, one for WSGI and one for
websocket HTTP in front of two separate uWSGI workers.
* The whole Django API is available in this loop, provided that no blocking calls are made.
Therefore the websocket code can access the Django configuration, the user and the session cache,
etc.
#### Using Redis as a message queue[¶](#using-redis-as-a-message-queue)
One might argue that all this is not that simple, since an additional service – the Redis data server
– must run side by side with Django. Websockets are bidirectional, but their normal use case is to trigger server-initiated events on the client. Although the other direction is possible, it can be handled much more easily using Ajax – at the cost of an additional TCP/IP handshake.
Here, the only per-client state to keep is the file descriptor attached to the websocket.
And since we are speaking about thousands of open connections, the footprint in terms of memory and CPU resources must be brought down to a minimum. In this implementation, only one open file handle is required for each open websocket connection.
Productive webservers require some kind of session store anyway. This can be a [memcached](http://memcached.org/) or a Redis data server. Therefore, such a service must run anyway and if we can choose between one of them, we shall use one with integrated message queuing support. When using Redis for caching and as a session store, we practically get the message queue for free.
##### Scalability[¶](#scalability)
One of the nice features of Redis is its horizontal scalability. If one Redis server can't handle its workload, interconnect it with another one, and all events and messages are mirrored across this network. Since **django-websocket-redis** can be deployed multiple times as self-contained Django applications, this configuration scales horizontally: just interconnect the Redis servers with each other.
On the main entry point of your site, add a loadbalancer capable of proxying the websocket protocol.
This can be any OSI level 4 loadbalancer such as the [Linux Virtual Server](http://www.linuxvirtualserver.org/) project, or if you prefer OSI level 7, the excellent [HAProxy](http://blog.haproxy.com/2012/11/07/websockets-load-balancing-with-haproxy/).
### Installation and Configuration[¶](#installation-and-configuration)
#### Installation[¶](#installation)
If not already done, install the **Redis server**, using the installation tool offered by the operating system, such as `aptitude`, `yum`, `port` or install [Redis from source](http://redis.io/download).
Start the Redis service on your host
```
$ sudo service redis-server start
```
Check if Redis is up and accepting connections
```
$ redis-cli ping
PONG
```
Install **Django Websocket for Redis**. The latest stable release can be found on PyPI
```
pip install django-websocket-redis
```
or the newest development version from github
```
pip install -e git+https://github.com/jrief/django-websocket-redis#egg=django-websocket-redis
```
**Websocket for Redis** does not define any database models. It can therefore be installed without any database synchronization.
##### Dependencies[¶](#dependencies)
* [Django](http://djangoproject.com/) >=1.5
* redis >=2.10.3 (a [Python client for Redis](https://pypi.python.org/pypi/redis/))
* [uWSGI](http://projects.unbit.it/uwsgi/) >=1.9.20
* [gevent](https://pypi.python.org/pypi/gevent) >=1.0.1
* [greenlet](https://pypi.python.org/pypi/greenlet) >=0.4.5
* optional, but recommended: [wsaccel](https://pypi.python.org/pypi/wsaccel) >=0.6
#### Configuration[¶](#configuration)
Add `"ws4redis"` to your project’s `INSTALLED_APPS` setting
```
INSTALLED_APPS = (
...
'ws4redis',
...
)
```
Specify the URL that distinguishes websocket connections from normal requests
```
WEBSOCKET_URL = '/ws/'
```
If the Redis datastore uses connection settings other than the defaults, use this dictionary to override these values
```
WS4REDIS_CONNECTION = {
'host': 'redis.example.com',
'port': 16379,
'db': 17,
'password': 'verysecret',
}
```
Note
Specify only the values which deviate from the defaults.
If your Redis instance is accessed via a Unix Domain Socket, you can configure that as well:
```
WS4REDIS_CONNECTION = {
'unix_socket_path': '/tmp/redis.sock',
'db': 5
}
```
**Websocket for Redis** can be configured with `WS4REDIS_EXPIRE` to additionally persist messages published on the message queue. This is advantageous in situations where clients should be able to access the published information after reconnecting the websocket, for instance after a page reload.
This directive sets the time in seconds for which each received message is persisted by Redis, in addition to being published on the message queue
```
WS4REDIS_EXPIRE = 7200
```
**Websocket for Redis** can prefix each entry in the datastore with a string. By default, this is empty. If the same Redis connection is used to store other kinds of data, in order to avoid name clashes you’re encouraged to prefix these entries with a unique string, say
```
WS4REDIS_PREFIX = 'ws'
```
Override `ws4redis.store.RedisStore` with a customized class, in case you need an alternative implementation of that class
```
WS4REDIS_SUBSCRIBER = 'myapp.redis_store.RedisSubscriber'
```
This directive is required during development and ignored in production environments. It overrides Django’s internal main loop and adds a URL dispatcher in front of the request handler
```
WSGI_APPLICATION = 'ws4redis.django_runserver.application'
```
Ensure that your template context contains at least these processors:
```
TEMPLATE_CONTEXT_PROCESSORS = (
...
'django.contrib.auth.context_processors.auth',
'django.core.context_processors.static',
'ws4redis.context_processors.default',
...
)
```
**Websocket for Redis** allows each client to subscribe and to publish on every possible channel. To restrict and control access, the `WS4REDIS_ALLOWED_CHANNELS` option should be set to a callback function defined anywhere inside your project. See the example and warnings in
[Safety considerations](index.html#safetyconsiderations).
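A minimal sketch of how this might look in `settings.py`; the dotted path `myapp.channels.get_allowed_channels` is a hypothetical placeholder for your own callback, which is shown in the safety section:
```
WS4REDIS_ALLOWED_CHANNELS = 'myapp.channels.get_allowed_channels'
```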
##### Check your Installation[¶](#check-your-installation)
With **Websockets for Redis** your Django application has immediate access to code written for websockets. Change into the `examples` directory and start a sample chat server
```
./manage.py migrate
... create database tables
... answer the questions
./manage.py runserver
```
Point a browser at <http://localhost:8000/chat/>; you should see a simple chat server. Enter a message and send it to the server. It should be echoed immediately on the billboard.
Point a second browser at the same URL. Now each browser should echo the message entered into the input field.
In the examples directory, there are two chat server implementations, which run out of the box.
One simply broadcasts messages to every client listening on that same websocket URL. The other chat server can be used to send messages to specific users logged into the system. Use these demos as a starting point for your application.
#### Replace memcached with Redis[¶](#replace-memcached-with-redis)
Since Redis has to be added as an additional service to the current infrastructure, at least one other service can be safely removed: *memcached*. It is used by typical Django installations for caching and session storage.
It’s beyond the scope of this documentation to explain how to set up a caching and/or session store using Redis, so please check [django-redis-sessions](https://github.com/martinrusev/django-redis-sessions) and optionally [django-redis-cache](https://github.com/sebleier/django-redis-cache) for details,
but it should be as easy as installing
```
pip install django-redis-sessions
```
and adding
```
SESSION_ENGINE = 'redis_sessions.session'
SESSION_REDIS_PREFIX = 'session'
```
to the file `settings.py`. Here is a full description on how to use
[Redis as Django session store and cache backend](http://michal.karzynski.pl/blog/2013/07/14/using-redis-as-django-session-store-and-cache-backend/).
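If you additionally want Redis-backed caching through the optional **django-redis-cache** package, the configuration might look like this sketch (check that project's documentation for the authoritative backend path; host and port are assumptions):
```
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': 'localhost:6379',
    },
}
```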
Also keep in mind that accessing session data is a blocking I/O call. Hence the connection from the websocket loop to the session store **must use gevent**, otherwise the websockets may block altogether. Therefore, if for some reason you have to remain with your current session store,
make sure it is monkey patched with gevent.
Warning
**Never** store session data in the database in combination with *Websockets for Redis*!
### Running WebSocket for Redis[¶](#running-websocket-for-redis)
**WebSocket for Redis** is a library which runs side by side with Django. It has its own separate main loop, which does nothing else than keeping the WebSocket alive and dispatching requests from **Redis** to the configured WebSockets and vice versa.
#### Django with WebSockets for Redis in development mode[¶](#django-with-websockets-for-redis-in-development-mode)
With **WebSockets for Redis**, a Django application has immediate access to code written for WebSockets. Make sure that Redis is up and accepting connections.
```
$ redis-cli ping
PONG
```
Then start the Django development server.
```
./manage.py runserver
```
As usual, this command shall only be used for development.
The `runserver` command is a monkey patched version of the original Django main loop and works similarly to it. If an incoming request is of type WSGI, everything works as usual. However, if the patched handler detects an incoming request wishing to open a WebSocket, the Django main loop is hijacked by **ws4redis**. This separate loop then waits until `select` notifies it that some data is available for further processing, either by the WebSocket itself or by the Redis message queue.
This hijacked main loop finishes when the WebSocket is closed or when an error occurs.
Note
In development, one thread is created for each open WebSocket.
Opened WebSocket connections exchange so-called Ping/Pong messages. They keep the connections open,
even if there is no payload to be sent. In development mode, the “WebSocket” main loop does not send these keep-alive packets, because normally there is no proxy or firewall between the server and the client which could drop the connection. This could easily be implemented, though.
#### Django with WebSockets for Redis as a stand alone uWSGI server[¶](#django-with-websockets-for-redis-as-a-stand-alone-uwsgi-server)
In this configuration the **uWSGI** server owns the main loop. To distinguish WebSockets from normal requests, modify the Python starter module `wsgi.py` to
```
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')

from django.conf import settings
from django.core.wsgi import get_wsgi_application
from ws4redis.uwsgi_runserver import uWSGIWebsocketServer

_django_app = get_wsgi_application()
_websocket_app = uWSGIWebsocketServer()

def application(environ, start_response):
    if environ.get('PATH_INFO').startswith(settings.WEBSOCKET_URL):
        return _websocket_app(environ, start_response)
    return _django_app(environ, start_response)
```
Run uWSGI as stand alone server with
```
uwsgi --virtualenv /path/to/virtualenv --http :80 --gevent 100 --http-websockets --module wsgi
```
This will answer both Django and WebSocket requests on port 80 using HTTP. Here the modified
`application` dispatches incoming requests, depending on the URL, either to a Django handler or into the WebSocket's main loop.
This configuration works for testing uWSGI and low-traffic sites. Since uWSGI then runs in one thread/process, blocking calls such as accessing the database would also block all other HTTP requests. Adding `--gevent-monkey-patch` to the command line may help here, but Postgres, for instance, requires monkey patching its blocking calls with **gevent** using the [psycogreen](https://bitbucket.org/dvarrazzo/psycogreen/) library.
Moreover, only one CPU core is then used, and static files must be handled by another webserver.
##### Serving static files[¶](#serving-static-files)
In this configuration, you are not able to serve static files, because Django does not run in debug mode and uWSGI does not know how to serve your deployed static files. Therefore in `urls.py` add
`staticfiles_urlpatterns` to your urlpatterns:
```
from django.conf.urls import url, patterns, include
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
urlpatterns = patterns('',
....
) + staticfiles_urlpatterns()
```
Note
Remember to remove `staticfiles_urlpatterns` when upgrading to a more scalable configuration as explained in the next section.
#### Django with WebSockets for Redis behind NGiNX using uWSGI[¶](#django-with-websockets-for-redis-behind-nginx-using-uwsgi)
This is the most scalable solution. Here two instances of a uWSGI server are spawned, one to handle normal HTTP requests for Django and one to handle WebSocket requests.
Make sure that you use NGiNX version 1.3.13 or later, since earlier versions have no support for WebSocket proxying. The web server undertakes the task of dispatching normal requests to one uWSGI instance and WebSocket requests to another one. The responsible configuration section for NGiNX shall look like:
```
location / {
include /etc/nginx/uwsgi_params;
uwsgi_pass unix:/path/to/django.socket;
}
location /ws/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://unix:/path/to/web.socket;
}
```
For details refer to NGiNX’s configuration on [WebSocket proxying](http://nginx.org/en/docs/http/websocket.html).
Since both uWSGI handlers create their own main loop, they also require their own application and different UNIX sockets. Create two adapter files, one for the Django loop, say `wsgi_django.py`
```
import os

os.environ.update(DJANGO_SETTINGS_MODULE='my_app.settings')

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```
and one for the WebSocket loop, say `wsgi_websocket.py`
```
import os
import gevent.socket
import redis.connection

redis.connection.socket = gevent.socket
os.environ.update(DJANGO_SETTINGS_MODULE='my_app.settings')

from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
application = uWSGIWebsocketServer()
```
Start those two applications as separate uWSGI instances
```
uwsgi --virtualenv /path/to/virtualenv --socket /path/to/django.socket --buffer-size=32768 --workers=5 --master --module wsgi_django
uwsgi --virtualenv /path/to/virtualenv --http-socket /path/to/web.socket --gevent 1000 --http-websockets --workers=2 --master --module wsgi_websocket
```
The NGiNX web server is now configured as a scalable application server which can handle a thousand WebSocket connections concurrently.
If you feel uncomfortable with separating WebSocket from normal requests on NGiNX, consider that you already separate static and media requests on the web server. Hence, WebSockets are just another extra routing path.
#### Django with WebSockets for Redis behind Apache-2.4 using uWSGI[¶](#django-with-websockets-for-redis-behind-apache-2-4-using-uwsgi)
<NAME> <[<EMAIL>](mailto:<EMAIL>)> reported this configuration, which allows running
**ws4redis** with Apache-2.4 and later.
Configuration for uWSGI:
```
[uwsgi]
env=DJANGO_SETTINGS_MODULE=<app>.settings
module=<module>:application
master=True
http-socket=127.0.0.1:9090
http-websockets=true
gevent=1000
workers=2
plugin=python
```
Configuration section for Apache:
```
<VirtualHost IPADDR:80>
ProxyPass /ws/ ws://127.0.0.1:9090/
</VirtualHost>
```
#### Django with WebSockets for Redis as a stand alone uWSGI server in emperor mode[¶](#django-with-websockets-for-redis-as-a-stand-alone-uwsgi-server-in-emperor-mode)
In this configuration the **uWSGI** server owns both main loops. To distinguish WebSockets from normal requests, use uWSGI’s [internal routing](https://uwsgi.readthedocs.org/en/latest/InternalRouting.html) capabilities.
Note
The internal routing capabilities of uWSGI depend on the Perl Compatible Regular Expressions
(PCRE) library. Make sure that your uWSGI was built with PCRE support if you plan to run in emperor mode.
Please refer to the [PCRE Support](#pcre-support) section below for more information.
First create the two applications, `wsgi_django.py` and `wsgi_websocket.py` using the same code as in the above example. These are the two entry points for uWSGI. Then create these three ini-files, one for the emperor, say `uwsgi.ini`:
```
[uwsgi]
emperor = vassals
http-socket = :9090
die-on-term = true
offload-threads = 1
route = ^/ws uwsgi:/var/tmp/web.socket,0,0
route = ^/ uwsgi:/var/tmp/django.socket,0,0
```
Create a separate directory named `vassals` and add a configuration file for the Websocket loop, say `vassals/wsserver.ini`:
```
; run the Websocket loop
[uwsgi]
umask = 002
virtualenv = /path/to/your/virtualenv
chdir = ..
master = true
no-orphans = true
die-on-term = true
memory-report = true
env = DJANGO_SETTINGS_MODULE=my_app.settings
socket = /var/tmp/web.socket
module = wsgi_websocket:application
threads = 1
processes = 1
http-websockets = true
gevent = 1000
```
To the directory named `vassals`, add a configuration file for the Django loop, say
`vassals/runserver.ini`:
```
; run the Django loop
[uwsgi]
umask = 002
virtualenv = /path/to/your/virtualenv
chdir = ..
master = true
no-orphans = true
die-on-term = true
memory-report = true
env = DJANGO_SETTINGS_MODULE=my_app.settings
socket = /var/tmp/django.socket
module = wsgi_django:application
buffer-size = 32768
threads = 1
processes = 2
```
Adapt the virtualenv, paths, ports, and number of threads/processes to your operating system and your host's capabilities.
Then start uWSGI:
```
uwsgi --ini uwsgi.ini
```
This configuration scales as well as the sample from the previous section. It shall be used if no NGiNX server is available.
##### Serving static files[¶](#id1)
The alert reader will have noticed that static files are not handled by this configuration. While in theory it is possible to configure **uWSGI** to [deliver static files](https://uwsgi.readthedocs.org/en/latest/InternalRouting.html?highlight=routing#static), please note that
**uWSGI** is not intended to completely [replace a webserver](http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html#can-i-use-uwsgi-s-http-capabilities-in-production). Therefore, before adding
`route = ^/static static:/path/to/static/root` to the emperor's ini-file, consider placing these files on a Content Delivery Network, such as Amazon S3.
##### PCRE Support[¶](#pcre-support)
If you encounter the error message `!!! no internal routing support, rebuild with pcre support !!!`
in the logs/console when running in emperor mode, it means uWSGI was built without the PCRE libraries. You will need to rebuild the uWSGI binaries. To do that, uninstall uWSGI, then install the `libpcre3` and `libpcre3-dev` libraries using your system's package management tool.
Once finished, reinstall uWSGI. Credits to this [post](http://stackoverflow.com/a/22645915/4284628).
### Using Websockets for Redis[¶](#using-websockets-for-redis)
**Websocket for Redis** allows uni- and bidirectional communication between the client and the server. Each websocket is identified by the part of the URL which follows the prefix
`/ws/`. Use different URLs to distinguish between unrelated communication channels.
Note
The prefix `/ws/` is specified using the configuration setting `WEBSOCKET_URL` and can be changed to whatever is appropriate.
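For instance, two unrelated facilities might be addressed with different URLs like these (the facility names `chat` and `notifications` are hypothetical):
```
ws://www.example.com/ws/chat?subscribe-broadcast
ws://www.example.com/ws/notifications?subscribe-user
```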
#### Client side[¶](#client-side)
The idea is to let a client subscribe to different channels, so that it only gets notified when a certain event happens on a channel it is interested in. Currently there are four such events:
*broadcast notification*, *user notification*, *group notification* and *session notification*.
Additionally, a client may declare on initialization on which channels it wishes to publish a message. The latter is not that important for a websocket implementation, because the same can be achieved using the well-known XMLHttpRequest (Ajax) methods.
##### A minimal client in pure JavaScript[¶](#a-minimal-client-in-pure-javascript)
```
var ws = new WebSocket('ws://www.example.com/ws/foobar?subscribe-broadcast&publish-broadcast&echo');
ws.onopen = function() {
console.log("websocket connected");
};
ws.onmessage = function(e) {
console.log("Received: " + e.data);
};
ws.onerror = function(e) {
console.error(e);
};
ws.onclose = function(e) {
console.log("connection closed");
}
function send_message(msg) {
ws.send(msg);
}
```
##### Client JavaScript depending on jQuery (recommended)[¶](#client-javascript-depending-on-jquery-recommended)
When using jQuery, clients can reconnect on broken Websockets. Additionally, the client awaits heartbeat messages and reconnects if too many of them were missed.
Include the client code in your template:
```
<script type="text/javascript" src="{{ STATIC_URL }}js/ws4redis.js"></script>
```
and access the Websocket code:
```
jQuery(document).ready(function($) {
var ws4redis = WS4Redis({
uri: '{{ WEBSOCKET_URI }}foobar?subscribe-broadcast&publish-broadcast&echo',
connecting: on_connecting,
connected: on_connected,
receive_message: receiveMessage,
disconnected: on_disconnected,
heartbeat_msg: {{ WS4REDIS_HEARTBEAT }}
});
// attach this function to an event handler on your site
function sendMessage() {
ws4redis.send_message('A message');
}
function on_connecting() {
alert('Websocket is connecting...');
}
function on_connected() {
ws4redis.send_message('Hello');
}
function on_disconnected(evt) {
alert('Websocket was disconnected: ' + JSON.stringify(evt));
}
// receive a message through the websocket from the server
function receiveMessage(msg) {
alert('Message from Websocket: ' + msg);
}
});
```
If you want to close the connection explicitly, you can call **ws4redis.close()**. In that case, the client will not perform reconnection attempts.
This example shows how to configure a Websocket for bidirectional communication.
Note
A client wishing to trigger events on the server side shall use XMLHttpRequests (Ajax) rather than messages sent via Websockets, as Ajax is much more suitable for that purpose. The main purpose of Websockets is to communicate asynchronously from the server to the client.
#### Server Side[¶](#server-side)
The Django loop is triggered by client HTTP requests, except for special cases such as jobs triggered by, for instance [django-celery](http://www.celeryproject.org/). Intentionally, there is no way to trigger events in the Django loop through a Websocket request. Hence, all of the communication between the Websocket loop and the Django loop must pass through the message queue.
##### RedisSubscriber[¶](#redissubscriber)
In the Websocket loop, the message queue is controlled by the class `RedisSubscriber`, which can be replaced using the configuration directive `WS4REDIS_SUBSCRIBER`.
##### RedisPublisher[¶](#redispublisher)
In the Django loop, this message queue is controlled by the class `RedisPublisher`, which can be accessed by any Django view.
Both, `RedisSubscriber` and `RedisPublisher` share the same base class `RedisStore`.
##### Subscribe to Broadcast Notifications[¶](#subscribe-to-broadcast-notifications)
This is the simplest form of notification. Every Websocket subscribed to a broadcast channel is notified when a message is sent to that named Redis channel. Say, the Websocket URL is
`ws://www.example.com/ws/foobar?subscribe-broadcast` and the Django loop wants to publish a message to all clients listening on the named facility, referred to here as `foobar`.
```
from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage
redis_publisher = RedisPublisher(facility='foobar', broadcast=True)
message = RedisMessage('Hello World')
# and somewhere else
redis_publisher.publish_message(message)
```
Now the message “Hello World” is received by all clients listening for that broadcast notification.
##### Subscribe to User Notification[¶](#subscribe-to-user-notification)
A Websocket initialized with the URL `ws://www.example.com/ws/foobar?subscribe-user` will be notified if that connection belongs to a logged-in user and someone publishes a message for that user, using the `RedisPublisher`.
```
redis_publisher = RedisPublisher(facility='foobar', users=['john', 'mary'])
message = RedisMessage('Hello World')
# and somewhere else
redis_publisher.publish_message(message)
```
Now the message “Hello World” is sent to all clients logged in as `john` or `mary` and listening for that kind of notification.
If the message shall be sent to the currently logged-in user, then you may use the magic item
`SELF`.
```
from ws4redis.redis_store import SELF
redis_publisher = RedisPublisher(facility='foobar', users=[SELF], request=request)
```
##### Subscribe to Group Notification[¶](#subscribe-to-group-notification)
A Websocket initialized with the URL `ws://www.example.com/ws/foobar?subscribe-group` will be notified if that connection belongs to a logged-in user and someone publishes a message for a group this user is a member of.
```
redis_publisher = RedisPublisher(facility='foobar', groups=['chatters'])
message = RedisMessage('Hello World')
# and somewhere else
redis_publisher.publish_message(message)
```
Now the message “Hello World” is sent to all clients logged in as users which are members of the group `chatters` and subscribing to that kind of notification.
In this context, the magic item `SELF` refers to all the groups the currently logged-in user belongs to.
Note
This feature uses a signal handler in the Django loop, which determines the groups a user belongs to. This list of groups is then persisted inside a session variable, to avoid having the Websocket loop access the database.
##### Subscribe to Session Notification[¶](#subscribe-to-session-notification)
A Websocket initialized with the URL `ws://www.example.com/ws/foobar?subscribe-session` will be notified if someone publishes a message for a client owning this session key.
```
redis_publisher = RedisPublisher(facility='foobar', sessions=['wnqd0gbw5obpnj50zwh6yaq2yz4o8g9x'])
message = RedisMessage('Hello World')
# and somewhere else
redis_publisher.publish_message(message)
```
Now the message “Hello World” is sent to all clients using the session key
`<KEY>` and subscribing to that kind of notification.
In this context, the magic item `SELF` refers to all clients owning the same session key.
##### Publish for Broadcast, User, Group and Session[¶](#publish-for-broadcast-user-group-and-session)
A Websocket initialized with the URL `ws://www.example.com/ws/foobar?publish-broadcast`,
`ws://www.example.com/ws/foobar?publish-user` or `ws://www.example.com/ws/foobar?publish-session`
will publish a message sent through the Websocket on the named Redis channel `broadcast:foobar`,
`user:john:foobar` and `session:wnqd0gbw5obpnj50zwh6yaq2yz4o8g9x:foobar` respectively.
Every listener subscribed to any of the named channels, then will be notified.
This configuration only makes sense if the messages sent by the client through the Websocket shall not trigger any server-side event. A practical use would be to store the current GPS coordinates of a moving client inside the Redis datastore. Django can then fetch these coordinates from Redis
whenever it requires them.
```
# if the publisher is required only for fetching messages, use an
# empty constructor, otherwise reuse an existing redis_publisher
redis_publisher = RedisPublisher()

# and somewhere else
facility = 'foobar'
audience = 'any'
redis_publisher.fetch_message(request, facility, audience)
```
The argument `audience` must be one of `broadcast`, `group`, `user`, `session` or
`any`. The method `fetch_message` searches through the Redis datastore to find a persisted message for that channel. The first found message is returned to the caller. If no matching message was found, `None` is returned.
##### Message echoing[¶](#message-echoing)
Some kinds of applications only require holding a state object on the server side, which is a copy of a corresponding JavaScript object on the client. These applications do not require message echoing. Here an incoming message is only dispatched to the subscribed websockets if it contains different content. This is the default setting.
Other applications such as chats or games, must be informed on each message published on the message queue, regardless of its content. These applications require message echoing.
Here an incoming message is always dispatched to the subscribed websockets. To activate message echoing, simply append the parameter `&echo` to the URL used for connecting to the websocket.
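For example, these two connection URLs differ only in their echoing behaviour; the first relies on the default (no echoing), while the second activates it:
```
ws://www.example.com/ws/foobar?subscribe-broadcast&publish-broadcast
ws://www.example.com/ws/foobar?subscribe-broadcast&publish-broadcast&echo
```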
##### Persisting messages[¶](#persisting-messages)
If a client connects to a Redis channel for the first time, or if it reconnects after a page reload,
it might be interested in the current message previously published on that channel. If the configuration setting `WS4REDIS_EXPIRE` is set to a positive value, **Websocket for Redis**
persists the current message in its key-value store. This message is then retrieved and sent to the client immediately after it connects to the server.
Note
By using client code, which automatically reconnects after the Websocket closes, one can create a setup which is immune against server and client reboots.
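Grounded in the `publish_message` reference further below, a message can also be persisted with an explicit expiration, overriding `WS4REDIS_EXPIRE`; a minimal sketch, reusing the `foobar` facility from the earlier examples:
```
from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage

redis_publisher = RedisPublisher(facility='foobar', broadcast=True)
# persist this message for ten minutes, in addition to publishing it;
# without expire=..., the value of WS4REDIS_EXPIRE applies
redis_publisher.publish_message(RedisMessage('last known state'), expire=600)
```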
##### Safety considerations[¶](#safety-considerations)
The default setting of **Websocket for Redis** is to allow each client to subscribe and to publish on every possible channel. This normally is not what you want. Therefore **Websocket for Redis**
allows restricting the channels for subscription and publishing to your application's needs. This is done by a callback function, which is called right after the initialization of the Websocket.
This function shall be used to restrict the subscription/publishing channels for the current client.
Example:
```
def get_allowed_channels(request, channels):
return set(channels).intersection(['subscribe-broadcast', 'subscribe-group'])
```
This function restricts the allowed channels to `subscribe-broadcast` and `subscribe-group`
only. All other attempts to subscribe or to publish on other channels will be silently discarded.
To disallow non-authenticated users to subscribe or to publish on the Websocket:
```
from django.core.exceptions import PermissionDenied
def get_allowed_channels(request, channels):
if not request.user.is_authenticated():
raise PermissionDenied('Not allowed to subscribe nor to publish on the Websocket!')
```
When using this callback function, Websockets opened by non-authenticated users will get a
**403 - Response Forbidden** error.
To enable this function in your application, use the configuration directive
`WS4REDIS_ALLOWED_CHANNELS`.
Note
This function must not perform any blocking requests, such as accessing the database!
### Sending and receiving heartbeat messages[¶](#sending-and-receiving-heartbeat-messages)
The Websocket protocol implements so called PING/PONG messages to keep Websockets alive, even behind proxies, firewalls and load-balancers. The server sends a PING message to the client through the Websocket, which then replies with PONG. If the client does not reply, the server closes the connection.
#### The client part[¶](#the-client-part)
Unfortunately, the Websocket protocol does not provide a similar method for the client to find out if it is still connected to the server. The connection can simply disappear without further notification. In order for the client to recognize this, some JavaScript code has to be added to the client code responsible for the Websocket:
```
var ws = new WebSocket('ws://www.example.com/ws/foobar?subscribe-broadcast');
var heartbeat_msg = '--heartbeat--', heartbeat_interval = null, missed_heartbeats = 0;
function on_open() {
// ...
// other code which has to be executed after the client
// connected successfully through the websocket
// ...
if (heartbeat_interval === null) {
missed_heartbeats = 0;
heartbeat_interval = setInterval(function() {
try {
missed_heartbeats++;
if (missed_heartbeats >= 3)
throw new Error("Too many missed heartbeats.");
ws.send(heartbeat_msg);
} catch(e) {
clearInterval(heartbeat_interval);
heartbeat_interval = null;
console.warn("Closing connection. Reason: " + e.message);
ws.close();
}
}, 5000);
}
}
```
The heartbeat message, here `--heartbeat--`, can be any magic string which does not interfere with your remaining logic. The best way to achieve this is to check for that magic string inside the receive function, just before further processing the message:
```
function on_message(evt) {
if (evt.data === heartbeat_msg) {
// reset the counter for missed heartbeats
missed_heartbeats = 0;
return;
}
// ...
// code to further process the received message
// ...
}
```
#### The server part[¶](#the-server-part)
The main loop of the Websocket server is idle for a maximum of 4 seconds, even if there is nothing to do. After that time interval has elapsed, this loop optionally sends a magic string to the client. This can be configured using the special setting:
```
WS4REDIS_HEARTBEAT = '--heartbeat--'
```
The purpose of this setting is twofold. During processing, the server ignores incoming messages containing this magic string. Additionally, the Websocket server sends a message with that magic string to the client about every four seconds. The above client code expects these messages at least every five seconds, and if too many of them are missed, it closes the connection and tries to reestablish it.
By default the setting `WS4REDIS_HEARTBEAT` is `None`, which means that heartbeat messages are neither expected nor sent.
### Application Programming Interface[¶](#application-programming-interface)
This document describes how to interact with **Websockets for Redis** from the Django loop and how to adapt the Websocket loop for other purposes.
#### Use `RedisPublisher` from inside Django views[¶](#use-redispublisher-from-inside-django-views)
For obvious architectural reasons, the code handling the websocket loop cannot be accessed directly from within Django. Therefore, all communication from Django to the websocket loop must be passed over the Redis message queue, and vice versa. To facilitate this, **ws4redis** offers a class named
`RedisPublisher`. An instance of this class shall be used from inside Django views to push messages via a websocket to the client, or to fetch persisted messages sent through the websocket.
Example view:
```
from django.http import HttpResponse
from django.views.generic.base import View
from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage

class MyTypicalView(View):
    facility = 'unique-named-facility'
    audience = {'broadcast': True}

    def __init__(self, *args, **kwargs):
        super(MyTypicalView, self).__init__(*args, **kwargs)
        self.redis_publisher = RedisPublisher(facility=self.facility, **self.audience)

    def get(self, request):
        message = RedisMessage('A message passed to all browsers listening on the named facility')
        self.redis_publisher.publish_message(message)
        return HttpResponse('message sent')
```
For further options, refer to the reference:
`RedisStore.``publish_message`(*message*, *expire=None*)[¶](#ws4redis.redis_store.RedisStore.publish_message)
Publish a `message` on the subscribed channel on the Redis datastore.
`expire` sets the time in seconds for how long the message shall, in addition to being published, also be persisted in the Redis datastore. If unset, it defaults to the configuration setting `WS4REDIS_EXPIRE`.
##### Replace `RedisSubscriber` for the Websocket loop[¶](#replace-redissubscriber-for-the-websocket-loop)
Sometimes the predefined channels for subscribing and publishing messages might not be enough.
If there is a need to add additional channels to the message queue, it is possible to replace the implemented class `ws4redis.store.RedisSubscriber` by setting the configuration directive
`WS4REDIS_SUBSCRIBER` to a class of your choice.
Use the class `RedisSubscriber` as a starting point and overload the required methods with your own implementation, for instance as sketched below.
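A minimal sketch of such a subclass; the class name, module layout, and the idea of forcing an additional broadcast subscription are illustrative assumptions, not part of **ws4redis** itself:
```
from ws4redis.subscriber import RedisSubscriber

class CustomSubscriber(RedisSubscriber):
    def set_pubsub_channels(self, request, channels):
        # force every client onto the broadcast channel, in addition
        # to whatever it requested in the URL query string
        channels = set(channels)
        channels.add('subscribe-broadcast')
        super(CustomSubscriber, self).set_pubsub_channels(request, list(channels))
```
To activate it, point the configuration directive to the subclass, for instance
`WS4REDIS_SUBSCRIBER = 'myapp.subscriber.CustomSubscriber'` (the module path is hypothetical).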
*class* `ws4redis.subscriber.``RedisSubscriber`(*connection*)[¶](#ws4redis.subscriber.RedisSubscriber)
Subscriber class, used by the websocket code to listen for subscribed channels
`get_file_descriptor`()[¶](#ws4redis.subscriber.RedisSubscriber.get_file_descriptor)
Returns the file descriptor used for passing to the select call when listening on the message queue.
`parse_response`()[¶](#ws4redis.subscriber.RedisSubscriber.parse_response)
Parse a message response sent by the Redis datastore on a subscribed channel.
`release`()[¶](#ws4redis.subscriber.RedisSubscriber.release)
Frees up Redis subscriptions when websockets close. This prevents memory from building up in the Redis output buffers and output lists when websockets are abandoned.
`send_persisted_messages`(*websocket*)[¶](#ws4redis.subscriber.RedisSubscriber.send_persisted_messages)
This method is called immediately after a websocket is opened by the client, so that persisted messages can be sent back to the client upon connection.
`send_persited_messages`(*websocket*)[¶](#ws4redis.subscriber.RedisSubscriber.send_persited_messages)
Deprecated spelling of `send_persisted_messages`, kept for backwards compatibility (see the 0.5.1 release notes).
`set_pubsub_channels`(*request*, *channels*)[¶](#ws4redis.subscriber.RedisSubscriber.set_pubsub_channels)
Initialize the channels used for publishing and subscribing messages through the message queue.
Warning
If the overloaded class calls any blocking functions, such as `sleep`, `read`,
`select` or similar, make sure that these functions are patched by the gevent library,
otherwise *all* connections will block simultaneously.
### Testing Websockets for Redis[¶](#testing-websockets-for-redis)
#### A simple Chat server[¶](#a-simple-chat-server)
In the `examples` directory, there are two demo chat servers. To start them, first initialize the SQLite database
```
# create python2 virtualenv
virtualenv -p /path/to/python2 /path/to/virtualenv

# activate virtualenv
source /path/to/virtualenv/bin/activate

# Make sure you're in the examples/ directory
cd examples/

# install pip requirements
pip install -r requirements.txt
# Django 1.7+
# Load test data
./manage.py migrate
./manage.py loaddata chatserver/fixtures/data.json
```
and then start the server
```
# start Redis Server from a different shell prompt
# (or follow quickstart instructions http://redis.io/topics/quickstart)
redis-server
# start Django
./manage.py runserver
```
Point a browser onto <http://localhost:8000/admin/>, login as the ‘admin’ user using the password
‘secret’ and add additional users. Enable their staff status, so that they can use the admin interface to log into the testing application.
With <http://localhost:8000/chat/> you can send messages to specific users, provided they are logged in. To log in as another user, use Django’s admin interface.
##### Simple Broadcasting[¶](#simple-broadcasting)
On <http://localhost:8000/chat/> there is a chat server, which simply broadcasts messages to all browsers accessing this same URL.
##### Testing uWSGI[¶](#testing-uwsgi)
Before configuring NGiNX to run in front of two instances of uWSGI, it is recommended to run uWSGI as a stand-alone server for testing purposes. The entry point of this server distinguishes between normal HTTP and Websocket requests. In the directory `examples`, start uwsgi as
```
uwsgi --virtualenv /path/to/virtualenvs --http :9090 --gevent 100 --http-websockets --module wsgi
```
Both chat server tests from above should run in this configuration.
#### Running Unit Tests[¶](#running-unit-tests)
```
./manage.py test chatserver --settings=chatserver.tests.settings
```
Currently it is not possible to simulate more than one client at a time. Django's built-in
[LiveServerTestCase](https://docs.djangoproject.com/en/1.6/topics/testing/overview/#liveservertestcase) cannot handle more than one simultaneously open connection, and thus more sophisticated tests with more than one active Websocket are not possible.
#### Running Stress Tests[¶](#running-stress-tests)
To run stress tests, change into the directory `stress-tests`. Since stress tests shall check the performance in a real environment, the server and the testing client must be started independently.
First start the server, as you would in a production environment.
```
# Open a new shell and activate your virtualenv in it
source /path/to/virtualenv/bin/activate

# Install the uwsgi package
pip install uwsgi

# Then start the uwsgi server
uwsgi --http :8000 --gevent 1000 --http-websockets --master --workers 2 --module wsgi_websocket
```
then go back to the other shell (also with the virtualenv activated) and start one of the testing clients, using the [nose](http://nose.readthedocs.org/en/latest/) framework
```
nosetests test_uwsgi_gevent.py
```
(this test, on my MacBook, requires about 1.5 seconds)
or start a similar test using real threads instead of greenlets
```
nosetests test_uwsgi_threads.py
```
(this test, on my MacBook, requires about 2.5 seconds)
Both clients subscribe to 1000 concurrent Websockets. Then a message is published from another Websocket. If all the clients receive that message, the test is considered successful. Both perform the same test, but `test_uwsgi_gevent.py` uses [greenlet](http://greenlet.readthedocs.org/en/latest/)s to simulate each client,
whereas `test_uwsgi_threads.py` uses [Python threads](http://docs.python.org/2/library/threading.html).
If these tests do not work in your environment, check your file descriptor limits. Use the shell command `ulimit -n` and adjust the limit to these requirements. Alternatively, reduce the number of concurrent clients in the tests.
### Debugging[¶](#debugging)
This project adds some extra complexity to Django projects, because there are now two entry points instead of one: the default **Django** one, based on the WSGI protocol,
which handles the typical HTTP request-response cycle, and the new **Websocket for Redis** one,
based on HTTP, which handles the websocket part.
#### Django Loop and Websocket Loop[¶](#django-loop-and-websocket-loop)
In this documentation, I use the terms *Django Loop* and *Websocket Loop* to distinguish these two entry points. You will rarely need to access the Websocket Loop, because intentionally there are no hooks for adding server-side logic; the latter must reside inside the Django loop, using Redis as the communication engine between the two.
A reason one might need to debug inside the Websocket loop is that the subscriber was overridden using the configuration setting `WS4REDIS_SUBSCRIBER`. Therefore, one of the aims of this project is to keep the entry barrier for debugging low. During development, when the server is started with `./manage.py runserver`, this is achieved by hijacking the Django loop. The connection then is kept open until the client closes the Websocket.
If existing workers do not return, Django creates a new thread for each incoming request. This means that during debugging, each Websocket connection owns its own thread. Such an approach is perfectly feasible for development; however, it scales badly and therefore should not be used in production.
#### Query the datastore[¶](#query-the-datastore)
Sometimes you might need to know why some data is bogus or was not sent/received by the client.
The easiest way to investigate this is to access the Redis datastore directly.
```
$ redis-cli
redis 127.0.0.1:6379>
```
In this command line interface, you can inspect all the data managed by
**Websocket for Redis**. Redis offers many [commands](http://redis.io/commands), of which a few are useful here:
##### keys[¶](#keys)
```
redis 127.0.0.1:6379> keys *
```
Gives a list of all keys used in Redis. If a `WS4REDIS_PREFIX` is specified in `settings.py`,
this prefix string can be used to limit the keys to those used by **Websocket for Redis**.
If, for instance, you're interested in all messages available for broadcast, then invoke:
```
redis 127.0.0.1:6379> keys [prefix:]broadcast:*
```
with the *prefix*, if set.
##### get[¶](#get)
```
redis 127.0.0.1:6379> get [prefix:]broadcast:foo
```
This returns the data available for broadcast for the facility named “foo”.
```
redis 127.0.0.1:6379> get [prefix:]user:john:foo
```
This returns the data available for user “john” for the facility named “foo”.
```
redis 127.0.0.1:6379> get [prefix:]session:wnqd0gbw5obpnj50zwh6yaq2yz4o8g9x:foo
```
This returns the data available for the browser owning the session-id
`wnqd0gbw5obpnj50zwh6yaq2yz4o8g9x` for the facility named “foo”.
##### subscribe[¶](#subscribe)
If **Websocket for Redis** is configured not to cache published data, no data buckets are filled.
This is the case when the configuration option `WS4REDIS_EXPIRE` is set to zero or `None`. In such a situation, the Redis commands `keys` and `get` won't give you any information. But you can subscribe to listen on a named channel:
```
redis 127.0.0.1:6379> subscribe [prefix:]broadcast:foo
```
This command blocks until some data is received. It then dumps the received data.
You have to reenter the subscribe command, if you want to listen for further data.
### Release History[¶](#release-history)
#### 0.5.1[¶](#id1)
* Allow WS4REDIS_PROCESS_REQUEST to be a string.
* Renamed spelling error: send_persited_messages -> send_persisted_messages.
* Fix: Handle binary messages in Python 3.
* Fix: Websocket closed status code compatibility with Django v1.11.
* Fix: Support for Unix Domain Sockets.
#### 0.5.0[¶](#id2)
* Support for Django-1.11.
#### 0.4.8[¶](#id3)
* Support Redis connections over Unix Domain Sockets.
#### 0.4.7[¶](#id4)
Improvements to the javascript API:
* Performing reconnection attempts when the first connection (on instantiation) fails.
* Adding the `close()` method to enable closing the connection explicitly. When the connection is closed by calling this method, there will be no reconnection attempts; in order to connect again,
the client must be re-instantiated.
* Adding ‘connecting’ and ‘disconnected’ callback options. The former is fired right before the Websocket is instantiated, while the latter is fired after the connection is closed.
* Adding the following methods to check websocket status: `is_connecting()`, `is_connected()`,
`is_closing()`, `is_closed()`.
* Replaced `STATIC_URL` against `{% static %}` in all templates.
* Keep track on opened websockets.
#### 0.4.6[¶](#id5)
* Added support for the Sec-WebSocket-Protocol header. Thanks to <NAME>.
* Fixed bug in unpacking binary websocket protocol.
#### 0.4.5[¶](#id6)
* created 1 requirements file under `examples/chatserver/requirements.txt`
* renamed chatclient.py to test_chatclient.py - for django-nose testrunner
* migrated example project to django 1.7
* edited `docs/testing.rst` to show new changes for using example project
#### 0.4.4[¶](#id7)
* Added method `release()` to `RedisSubscriber` and calling this method each time a Websocket closes, for whatever reason. This should avoid some reported memory issues.
#### 0.4.3[¶](#id8)
* Fixed: **django-websocket-redis** failed to initialize under some circumstances in combination with Django-1.7. This only happened for logged in users and threw this exception:
`django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.`
* Added setup on how to run **django-websocket-redis** with uWSGI but without NGiNX.
#### 0.4.2[¶](#id9)
* Message echoing can be switched “on” and “off” according to the user’s needs. Before, it was “on”
by default.
* Many changes to get this app compatible with Python3. This is still not finished, since the pilfered module `websocket.py` is not PY3 compatible yet.
* Added a class `RedisMessage` to pass and store the message to and from the websocket.
Before this was just a string with serialized data.
#### 0.4.1[¶](#id10)
* Fixed: `request.user.username` has been replaced by `get_username()`.
#### 0.4.0[¶](#id11)
* Messages can be sent to users being member of one or more Django groups.
* `RedisPublisher` and `RedisSubscriber` now only accept lists for `users`, `groups` and
`sessions`. This makes the API simpler and more consistent.
* A new magic item `ws4redis.redis_store.SELF` has been added to reflect self-referencing in these lists, which before was done through `users=True` or `sessions=True`.
* Added the possibility to receive heartbeats. This lets the client disconnect and attempt to reconnect after a number of heartbeats were missed. It prevents silent disconnections.
* Refactored the examples.
* Added reusable JavaScript code for the client.
* Added a context processor to inject some settings from `ws4redis` into templates.
#### 0.3.1[¶](#id12)
* Keys for entries in Redis datastore can be prefixed by an optional string. This may be required to avoid namespace clashes.
#### 0.3.0[¶](#id13)
* Added the possibility to publish and subscribe for Django Groups, in addition to Users and Sessions.
* To ease the communication between Redis and Django, a new class `RedisPublisher` has been added as the programming interface for the Django Loop. Before, one had to connect to the Redis datastore directly to send messages to the Websocket Loop.
* Renamed configuration setting `WS4REDIS_STORE` to `WS4REDIS_SUBSCRIBER`.
#### 0.2.3[¶](#id14)
* Fixed: Use flush to discard received PONG message.
#### 0.2.2[¶](#id15)
* Moved monkey patching for the Redis socket into the runner. This sometimes caused errors when running in development mode.
* Added a timeout to the select call; its absence caused IOErrors when running under uWSGI while the websocket was idle.
#### 0.2.1[¶](#id16)
* Reverted issue #1 and dropped compatibility with Django-1.4 since the response status must use force_str.
#### 0.2.0[¶](#id17)
* Major API changes.
* Use `WS4REDIS_...` in Django settings.
* Persist messages, allowing server reboots and reconnecting the client.
* Share the file descriptor for Redis for all open connections.
* Allow to override the subscribe/publish engine.
#### 0.1.2[¶](#id18)
* Fixed: Can use publish to websocket without subscribing.
#### 0.1.1[¶](#id19)
* Instead of CLI monkey patching, explicitly patch the redis.connection.socket using
`gevent.socket`.
#### 0.1.0[¶](#id20)
* Initial revision.
### Credits to Others[¶](#credits-to-others)
When <NAME> gave his [keynote talk](http://www.youtube.com/watch?v=UKAkKXFMQP8#t=1174) at PyCon Canada 2013, he mentioned the [MeteorJS](https://www.meteor.com/)
framework as the next big step in web development.
Personally, I share his opinion about this forecast. The point for both of us is that we don’t see JavaScript as *the* server-side language – yet. Probably I am wrong on this, but for the moment I prefer server-side frameworks in a language with real classes and numeric types suitable for business applications. All of this is missing in JavaScript. Moreover, if content has to be optimized for [E-book readers](http://en.wikipedia.org/wiki/E-book_reader), static rendering on the server side becomes mandatory.
Apart from these technical issues, I love a clear separation of concerns, where I can deliberately exchange software components specialized for the running platform. After all, a web server is very different from a browser, so why should I be forced to run components from the same framework on both of them? If that were the case, frameworks such as [GWT](http://www.gwtproject.org/) would be more successful.
Therefore my way to go is a pure server-side and a pure client-side framework. As the latter,
I prefer [AngularJS](http://angularjs.org/), which in my humble opinion is by far the best JavaScript framework ever written.
#### AngularJS[¶](#id1)
is an MVC framework for the client with two-way data-binding. Two-way data-binding is an automatic way of updating the view whenever the model changes, as well as updating the model whenever the view changes. Django users will immediately feel comfortable with AngularJS, since the concept of templates, controllers and data models is quite similar.
The problem, however, with two distinct frameworks is that it becomes difficult to use the server-side model on the client while keeping track of every model alteration on the server. This, by the way, is a typical violation of the DRY principle and should be avoided. I therefore wrote a library, [django-angular](https://github.com/jrief/django-angular), which “translates” Django models into Angular models and vice versa.
With this library, for instance, it is possible to use a Django form and bind it with an AngularJS controller without having to keep track of each of the model fields. It is even possible to “export”
Django’s server-side form validation to client-side validation functions, without having to duplicate this code.
#### Current solutions[¶](#current-solutions)
For rendering server-side data using HTML, and for receiving client data through POST or XMLHttpRequests, **django-angular** works fine. But in order to update data on the client upon events triggered by the server, communication using a technology such as websockets must be provided by the application server.
I tried out all of the current implementations that add websocket functionality to Django, but they all looked like makeshift solutions. Something I found especially disturbing was the need for another framework running side by side with Django during development.
#### uWSGI[¶](#uwsgi)
Then I stumbled across a [talk](http://www.youtube.com/watch?v=qmdk5mVLsHM#t=580) by <NAME> at EuroPython 2013.
There he pointed out that the WSGI protocol will never be able to support a technology such as websockets. But since websockets override HTTP, the solution is to let them override WSGI too.
Now with a web application runner supporting thousands of concurrent websocket connections, the implementation for Django was quite easy. Adding a compatible solution for the development environment of Django was somewhat trickier, but fortunately <NAME> had already written a pure Python implementation, which can do the complicated [websocket handshake](https://bitbucket.org/Jeffrey/gevent-websocket) for us.
Since these technologies can now be glued together, adding three-way data-binding for AngularJS will be the next step. Three-way data-binding is an extension which synchronizes changes on the Angular model back to a datastore on the server side. This is awesome, because then Django can manipulate the client-side DOM using the AngularJS template system, but without having to implement a single line of JavaScript code. With three-way data-binding, Django will come a step nearer to one of the coolest features MeteorJS can offer right now.
Package ‘PlaneGeometry’
August 9, 2023
Type Package
Title Plane Geometry
Version 1.6.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description An extensive set of plane geometry routines. Provides R6
classes representing triangles, circles, circular arcs, ellipses,
elliptical arcs, lines, hyperbolae, and their plot methods. Also
provides R6 classes representing transformations: rotations,
reflections, homotheties, scalings, general affine transformations,
inversions, Möbius transformations.
License GPL-3
URL https://github.com/stla/PlaneGeometry
BugReports https://github.com/stla/PlaneGeometry/issues
Imports Carlson, CVXR, fitConic, graphics, methods, R6, rcdd, sdpt3r,
stringr, uniformly
Suggests ellipse, elliptic, freegroup, knitr, rgl, rmarkdown, sets,
testthat, viridisLite
VignetteBuilder knitr
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation no
Repository CRAN
Date/Publication 2023-08-09 21:40:02 UTC
R topics documented:
Affine
AffineMappingEllipse2Ellipse
AffineMappingThreePoints
Arc
Circle
CircleAB
CircleOA
crossRatio
draw
Ellipse
EllipseEquationFromFivePoints
EllipseFromCenterAndMatrix
EllipseFromEquation
EllipseFromFivePoints
EllipseFromFociAndOnePoint
EllipseFromThreeBoundaryPoints
EllipticalArc
fitEllipse
GaussianEllipse
Homothety
Hyperbola
HyperbolaFromEquation
intersectionCircleCircle
intersectionCircleLine
intersectionEllipseLine
intersectionLineLine
Inversion
inversionFixingThreeCircles
inversionFixingTwoCircles
inversionFromCircle
inversionKeepingCircle
inversionSwappingTwoCircles
Line
LineFromEquation
LineFromInterceptAndSlope
LownerJohnEllipse
maxAreaInscribedCircle
maxAreaInscribedEllipse
midCircles
Mobius
MobiusMappingCircle
MobiusMappingThreePoints
MobiusSwappingTwoPoints
Projection
radicalCenter
Reflection
Rotation
Scaling
ScalingXY
Shear
soddyCircles
SteinerChain
Translation
Triangle
TriangleThreeLines
unitCircle
Affine R6 class representing an affine map.
Description
An affine map is given by a 2x2 matrix (a linear transformation) and a vector (the "intercept").
Active bindings
A get or set the matrix A
b get or set the vector b
Methods
Public methods:
• Affine$new()
• Affine$print()
• Affine$get3x3matrix()
• Affine$inverse()
• Affine$compose()
• Affine$transform()
• Affine$transformLine()
• Affine$transformEllipse()
• Affine$clone()
Method new(): Create a new Affine object.
Usage:
Affine$new(A, b)
Arguments:
A the 2x2 matrix of the affine map
b the shift vector of the affine map
Returns: A new Affine object.
Method print(): Show instance of an Affine object.
Usage:
Affine$print(...)
Arguments:
... ignored
Examples:
Affine$new(rbind(c(3.5,2),c(0,4)), c(-1, 1.25))
Method get3x3matrix(): The 3x3 matrix representing the affine map.
Usage:
Affine$get3x3matrix()
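A minimal sketch, assuming the 3x3 matrix acts on homogeneous coordinates in the same way as in the Homothety$getMatrix example later in this manual:
f <- Affine$new(rbind(c(3.5, 2), c(0, 4)), c(-1, 1.25))
P <- c(1, 5)
f$transform(P)
f$get3x3matrix() %*% c(P, 1) # should give the same point, in homogeneous coordinates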
Method inverse(): The inverse affine transformation, if it exists.
Usage:
Affine$inverse()
Method compose(): Compose the reference affine map with another affine map.
Usage:
Affine$compose(transfo, left = TRUE)
Arguments:
transfo an Affine object
left logical, whether to compose at left or at right (i.e. returns f1 o f0 or f0 o f1)
Returns: An Affine object.
Method transform(): Transform a point or several points by the reference affine map.
Usage:
Affine$transform(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Method transformLine(): Transform a line by the reference affine transformation (only for
invertible affine maps).
Usage:
Affine$transformLine(line)
Arguments:
line a Line object
Returns: A Line object.
Method transformEllipse(): Transform an ellipse by the reference affine transformation
(only for an invertible affine map). The result is an ellipse.
Usage:
Affine$transformEllipse(ell)
Arguments:
ell an Ellipse object or a Circle object
Returns: An Ellipse object.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Affine$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `Affine$print`
## ------------------------------------------------
Affine$new(rbind(c(3.5,2),c(0,4)), c(-1, 1.25))
AffineMappingEllipse2Ellipse
Affine transformation mapping a given ellipse to a given ellipse
Description
Return the affine transformation which transforms ell1 to ell2.
Usage
AffineMappingEllipse2Ellipse(ell1, ell2)
Arguments
ell1, ell2 Ellipse or Circle objects
Value
An Affine object.
Examples
ell1 <- Ellipse$new(c(1,1), 5, 1, 30)
( ell2 <- Ellipse$new(c(4,-1), 3, 2, 50) )
f <- AffineMappingEllipse2Ellipse(ell1, ell2)
f$transformEllipse(ell1) # should be ell2
AffineMappingThreePoints
Affine transformation mapping three given points to three given points
Description
Return the affine transformation which sends P1 to Q1, P2 to Q2 and P3 to Q3.
Usage
AffineMappingThreePoints(P1, P2, P3, Q1, Q2, Q3)
Arguments
P1, P2, P3 three non-collinear points
Q1, Q2, Q3 three non-collinear points
Value
An Affine object.
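A minimal sketch, assuming only the Affine API documented above:
P1 <- c(0, 0); P2 <- c(1, 0); P3 <- c(0, 1)
Q1 <- c(1, 1); Q2 <- c(3, 1); Q3 <- c(1, 4)
f <- AffineMappingThreePoints(P1, P2, P3, Q1, Q2, Q3)
f$transform(P2) # should be Q2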
Arc R6 class representing a circular arc
Description
An arc is given by a center, a radius, a starting angle and an ending angle. They are respectively
named center, radius, alpha1 and alpha2.
Active bindings
center get or set the center
radius get or set the radius
alpha1 get or set the starting angle
alpha2 get or set the ending angle
degrees get or set the degrees field
Methods
Public methods:
• Arc$new()
• Arc$print()
• Arc$startingPoint()
• Arc$endingPoint()
• Arc$isEqual()
• Arc$complementaryArc()
• Arc$path()
• Arc$clone()
Method new(): Create a new Arc object.
Usage:
Arc$new(center, radius, alpha1, alpha2, degrees = TRUE)
Arguments:
center the center
radius the radius
alpha1 the starting angle
alpha2 the ending angle
degrees logical, whether alpha1 and alpha2 are given in degrees
Returns: A new Arc object.
Examples:
arc <- Arc$new(c(1,1), 1, 45, 90)
arc
arc$center
arc$center <- c(0,0)
arc
Method print(): Show instance of an Arc object.
Usage:
Arc$print(...)
Arguments:
... ignored
Examples:
Arc$new(c(0,0), 2, pi/4, pi/2, FALSE)
Method startingPoint(): Starting point of the reference arc.
Usage:
Arc$startingPoint()
Method endingPoint(): Ending point of the reference arc.
Usage:
Arc$endingPoint()
Method isEqual(): Check whether the reference arc equals another arc.
Usage:
Arc$isEqual(arc)
Arguments:
arc an Arc object
Method complementaryArc(): Complementary arc of the reference arc.
Usage:
Arc$complementaryArc()
Examples:
arc <- Arc$new(c(0,0), 1, 30, 60)
plot(NULL, type = "n", asp = 1, xlim = c(-1,1), ylim = c(-1,1),
xlab = NA, ylab = NA)
draw(arc, lwd = 3, col = "red")
draw(arc$complementaryArc(), lwd = 3, col = "green")
Method path(): The reference arc as a path.
Usage:
Arc$path(npoints = 100L)
Arguments:
npoints number of points of the path
Returns: A matrix with two columns x and y of length npoints. See "Filling the lapping area
of two circles" in the vignette for an example.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Arc$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `Arc$new`
## ------------------------------------------------
arc <- Arc$new(c(1,1), 1, 45, 90)
arc
arc$center
arc$center <- c(0,0)
arc
## ------------------------------------------------
## Method `Arc$print`
## ------------------------------------------------
Arc$new(c(0,0), 2, pi/4, pi/2, FALSE)
## ------------------------------------------------
## Method `Arc$complementaryArc`
## ------------------------------------------------
arc <- Arc$new(c(0,0), 1, 30, 60)
plot(NULL, type = "n", asp = 1, xlim = c(-1,1), ylim = c(-1,1),
xlab = NA, ylab = NA)
draw(arc, lwd = 3, col = "red")
draw(arc$complementaryArc(), lwd = 3, col = "green")
Circle R6 class representing a circle
Description
A circle is given by a center and a radius, named center and radius.
Active bindings
center get or set the center
radius get or set the radius
Methods
Public methods:
• Circle$new()
• Circle$print()
• Circle$pointFromAngle()
• Circle$diameter()
• Circle$tangent()
• Circle$tangentsThroughExternalPoint()
• Circle$isEqual()
• Circle$isDifferent()
• Circle$isOrthogonal()
• Circle$angle()
• Circle$includes()
• Circle$orthogonalThroughTwoPointsOnCircle()
• Circle$orthogonalThroughTwoPointsWithinCircle()
• Circle$power()
• Circle$radicalCenter()
• Circle$radicalAxis()
• Circle$rotate()
• Circle$translate()
• Circle$invert()
• Circle$asEllipse()
• Circle$randomPoints()
• Circle$clone()
Method new(): Create a new Circle object.
Usage:
Circle$new(center, radius)
Arguments:
center the center
radius the radius
Returns: A new Circle object.
Examples:
circ <- Circle$new(c(1,1), 1)
circ
circ$center
circ$center <- c(0,0)
circ
Method print(): Show instance of a circle object.
Usage:
Circle$print(...)
Arguments:
... ignored
Examples:
Circle$new(c(0,0), 2)
Method pointFromAngle(): Get a point on the reference circle from its polar angle.
Usage:
Circle$pointFromAngle(alpha, degrees = TRUE)
Arguments:
alpha a number, the angle
degrees logical, whether alpha is given in degrees
Returns: The point on the circle with polar angle alpha.
Method diameter(): Diameter of the reference circle for a given polar angle.
Usage:
Circle$diameter(alpha)
Arguments:
alpha an angle in radians, there is one diameter for each value of alpha modulo pi
Returns: A segment (Line object).
Examples:
circ <- Circle$new(c(1,1), 5)
diams <- lapply(c(0, pi/3, 2*pi/3), circ$diameter)
plot(NULL, type="n", asp=1, xlim = c(-4,6), ylim = c(-5,7),
xlab = NA, ylab = NA)
draw(circ, lwd = 2, col = "yellow")
invisible(lapply(diams, draw, col = "blue"))
Method tangent(): Tangent of the reference circle at a given polar angle.
Usage:
Circle$tangent(alpha)
Arguments:
alpha an angle in radians, there is one tangent for each value of alpha modulo 2*pi
Examples:
circ <- Circle$new(c(1,1), 5)
tangents <- lapply(c(0, pi/3, 2*pi/3, pi, 4*pi/3, 5*pi/3), circ$tangent)
plot(NULL, type="n", asp=1, xlim = c(-4,6), ylim = c(-5,7),
xlab = NA, ylab = NA)
draw(circ, lwd = 2, col = "yellow")
invisible(lapply(tangents, draw, col = "blue"))
Method tangentsThroughExternalPoint(): Return the two tangents of the reference circle
passing through an external point.
Usage:
Circle$tangentsThroughExternalPoint(P)
Arguments:
P a point external to the reference circle
Returns: A list of two Line objects, the two tangents; the tangency points are in the B field of
the lines.
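A minimal sketch, assuming the documented return value:
circ <- Circle$new(c(0, 0), 1)
tgs <- circ$tangentsThroughExternalPoint(c(2, 2))
tgs[[1]]$B; tgs[[2]]$B # the two tangency points, stored in the B field of the lines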
Method isEqual(): Check whether the reference circle equals another circle.
Usage:
Circle$isEqual(circ)
Arguments:
circ a Circle object
Method isDifferent(): Check whether the reference circle differs from another circle.
Usage:
Circle$isDifferent(circ)
Arguments:
circ a Circle object
Method isOrthogonal(): Check whether the reference circle is orthogonal to a given circle.
Usage:
Circle$isOrthogonal(circ)
Arguments:
circ a Circle object
Method angle(): Angle between the reference circle and a given circle, if they intersect.
Usage:
Circle$angle(circ)
Arguments:
circ a Circle object
Method includes(): Check whether a point belongs to the reference circle.
Usage:
Circle$includes(M)
Arguments:
M a point
Method orthogonalThroughTwoPointsOnCircle(): Orthogonal circle passing through two
points on the reference circle.
Usage:
Circle$orthogonalThroughTwoPointsOnCircle(alpha1, alpha2, arc = FALSE)
Arguments:
alpha1, alpha2 two angles defining two points on the reference circle
arc logical, whether to return only the arc at the interior of the reference circle
Returns: A Circle object if arc=FALSE, an Arc object if arc=TRUE, or a Line object (the
diameter of the reference circle defined by the two points) in the case when the two angles differ
by pi.
Examples:
# hyperbolic triangle
circ <- Circle$new(c(5,5), 3)
arc1 <- circ$orthogonalThroughTwoPointsOnCircle(0, 2*pi/3, arc = TRUE)
arc2 <- circ$orthogonalThroughTwoPointsOnCircle(2*pi/3, 4*pi/3, arc = TRUE)
arc3 <- circ$orthogonalThroughTwoPointsOnCircle(4*pi/3, 0, arc = TRUE)
opar <- par(mar = c(0,0,0,0))
plot(0, 0, type = "n", asp = 1, xlim = c(2,8), ylim = c(2,8))
draw(circ)
draw(arc1, col = "red", lwd = 2)
draw(arc2, col = "green", lwd = 2)
draw(arc3, col = "blue", lwd = 2)
par(opar)
Method orthogonalThroughTwoPointsWithinCircle(): Orthogonal circle passing through
two points within the reference circle.
Usage:
Circle$orthogonalThroughTwoPointsWithinCircle(P1, P2, arc = FALSE)
Arguments:
P1, P2 two distinct points in the interior of the reference circle
arc logical, whether to return the arc joining the two points instead of the circle
Returns: A Circle object or an Arc object, or a Line object if the two points are on a diameter.
Examples:
circ <- Circle$new(c(0,0),3)
P1 <- c(1,1); P2 <- c(1, 2)
ocirc <- circ$orthogonalThroughTwoPointsWithinCircle(P1, P2)
arc <- circ$orthogonalThroughTwoPointsWithinCircle(P1, P2, arc = TRUE)
plot(0, 0, type = "n", asp = 1, xlab = NA, ylab = NA,
xlim = c(-3, 4), ylim = c(-3, 4))
draw(circ, lwd = 2)
draw(ocirc, lty = "dashed", lwd = 2)
draw(arc, lwd = 3, col = "blue")
Method power(): Power of a point with respect to the reference circle.
Usage:
Circle$power(M)
Arguments:
M point
Returns: A number.
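A minimal numeric check, assuming the classical definition of the power of a point M with respect to a circle with center O and radius r, namely |MO|² − r²:
circ <- Circle$new(c(0, 0), 2)
circ$power(c(3, 0)) # should be 3^2 - 2^2 = 5
circ$power(c(2, 0)) # should be 0, since the point lies on the circle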
Method radicalCenter(): Radical center of two circles.
Usage:
Circle$radicalCenter(circ2)
Arguments:
circ2 a Circle object
Method radicalAxis(): Radical axis of two circles.
Usage:
Circle$radicalAxis(circ2)
Arguments:
circ2 a Circle object
Returns: A Line object.
Method rotate(): Rotate the reference circle.
Usage:
Circle$rotate(alpha, O, degrees = TRUE)
Arguments:
alpha angle of rotation
O center of rotation
degrees logical, whether alpha is given in degrees
Returns: A Circle object.
Method translate(): Translate the reference circle.
Usage:
Circle$translate(v)
Arguments:
v the vector of translation
Returns: A Circle object.
Method invert(): Invert the reference circle.
Usage:
Circle$invert(inversion)
Arguments:
inversion an Inversion object
Returns: A Circle object or a Line object.
Method asEllipse(): Convert the reference circle to an Ellipse object.
Usage:
Circle$asEllipse()
Method randomPoints(): Random points on or in the reference circle.
Usage:
Circle$randomPoints(n, where = "in")
Arguments:
n an integer, the desired number of points
where "in" to generate inside the circle, "on" to generate on the circle
Returns: The generated points in a two-column matrix with n rows.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Circle$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
See Also
radicalCenter for the radical center of three circles.
Examples
## ------------------------------------------------
## Method `Circle$new`
## ------------------------------------------------
circ <- Circle$new(c(1,1), 1)
circ
circ$center
circ$center <- c(0,0)
circ
## ------------------------------------------------
## Method `Circle$print`
## ------------------------------------------------
Circle$new(c(0,0), 2)
## ------------------------------------------------
## Method `Circle$diameter`
## ------------------------------------------------
circ <- Circle$new(c(1,1), 5)
diams <- lapply(c(0, pi/3, 2*pi/3), circ$diameter)
plot(NULL, type="n", asp=1, xlim = c(-4,6), ylim = c(-5,7),
xlab = NA, ylab = NA)
draw(circ, lwd = 2, col = "yellow")
invisible(lapply(diams, draw, col = "blue"))
## ------------------------------------------------
## Method `Circle$tangent`
## ------------------------------------------------
circ <- Circle$new(c(1,1), 5)
tangents <- lapply(c(0, pi/3, 2*pi/3, pi, 4*pi/3, 5*pi/3), circ$tangent)
plot(NULL, type="n", asp=1, xlim = c(-4,6), ylim = c(-5,7),
xlab = NA, ylab = NA)
draw(circ, lwd = 2, col = "yellow")
invisible(lapply(tangents, draw, col = "blue"))
## ------------------------------------------------
## Method `Circle$orthogonalThroughTwoPointsOnCircle`
## ------------------------------------------------
# hyperbolic triangle
circ <- Circle$new(c(5,5), 3)
arc1 <- circ$orthogonalThroughTwoPointsOnCircle(0, 2*pi/3, arc = TRUE)
arc2 <- circ$orthogonalThroughTwoPointsOnCircle(2*pi/3, 4*pi/3, arc = TRUE)
arc3 <- circ$orthogonalThroughTwoPointsOnCircle(4*pi/3, 0, arc = TRUE)
opar <- par(mar = c(0,0,0,0))
plot(0, 0, type = "n", asp = 1, xlim = c(2,8), ylim = c(2,8))
draw(circ)
draw(arc1, col = "red", lwd = 2)
draw(arc2, col = "green", lwd = 2)
draw(arc3, col = "blue", lwd = 2)
par(opar)
## ------------------------------------------------
## Method `Circle$orthogonalThroughTwoPointsWithinCircle`
## ------------------------------------------------
circ <- Circle$new(c(0,0),3)
P1 <- c(1,1); P2 <- c(1, 2)
ocirc <- circ$orthogonalThroughTwoPointsWithinCircle(P1, P2)
arc <- circ$orthogonalThroughTwoPointsWithinCircle(P1, P2, arc = TRUE)
plot(0, 0, type = "n", asp = 1, xlab = NA, ylab = NA,
xlim = c(-3, 4), ylim = c(-3, 4))
draw(circ, lwd = 2)
draw(ocirc, lty = "dashed", lwd = 2)
draw(arc, lwd = 3, col = "blue")
CircleAB Circle given by a diameter
Description
Return the circle given by a diameter
Usage
CircleAB(A, B)
Arguments
A, B the endpoints of the diameter
Value
A Circle object.
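A minimal sketch; the expected values follow from the definition of a circle given by a diameter:
circ <- CircleAB(c(-1, 0), c(3, 0))
circ$center # should be the midpoint c(1, 0)
circ$radius # should be 2, half the length of the diameter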
CircleOA Circle given by its center and a point
Description
Return the circle given by its center and a point it passes through.
Usage
CircleOA(O, A)
Arguments
O the center of the circle
A a point of the circle
Value
A Circle object.
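A minimal sketch; the expected radius is the distance from the center to the given point:
circ <- CircleOA(c(0, 0), c(3, 4))
circ$radius # should be 5
circ$includes(c(3, 4)) # should be TRUE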
crossRatio Cross ratio
Description
The cross ratio of four points.
Usage
crossRatio(A, B, C, D)
Arguments
A, B, C, D four distinct points
Value
A complex number. It is real if and only if the four points lie on a generalized circle (that is a circle
or a line).
Examples
c <- Circle$new(c(0, 0), 1)
A <- c$pointFromAngle(0)
B <- c$pointFromAngle(90)
C <- c$pointFromAngle(180)
D <- c$pointFromAngle(270)
crossRatio(A, B, C, D) # should be real
Mob <- Mobius$new(rbind(c(1+1i,2),c(0,3-2i)))
MA <- Mob$transform(A)
MB <- Mob$transform(B)
MC <- Mob$transform(C)
MD <- Mob$transform(D)
crossRatio(MA, MB, MC, MD) # should be identical to `crossRatio(A, B, C, D)`
draw Draw a geometric object
Description
Draw a geometric object on the current plot.
Usage
draw(x, ...)
## S3 method for class 'Triangle'
draw(x, ...)
## S3 method for class 'Circle'
draw(x, npoints = 100L, ...)
## S3 method for class 'Arc'
draw(x, npoints = 100L, ...)
## S3 method for class 'Ellipse'
draw(x, npoints = 100L, ...)
## S3 method for class 'EllipticalArc'
draw(x, npoints = 100L, ...)
## S3 method for class 'Line'
draw(x, ...)
Arguments
x geometric object (Triangle, Circle, Line, Ellipse, Arc, EllipticalArc)
... arguments passed to lines for a Triangle, Arc or EllipticalArc object, or to polypath
for a Circle or Ellipse object; for a Line object, general graphical parameters passed to
lines, curve, or abline.
npoints integer, the number of points of the path
Examples
# open new plot window
plot(0, 0, type="n", asp = 1, xlim = c(0,2.5), ylim = c(0,2.5),
xlab = NA, ylab = NA)
grid()
# draw a triangle
t <- Triangle$new(c(0,0), c(1,0), c(0.5,sqrt(3)/2))
draw(t, col = "blue", lwd = 2)
draw(t$rotate(90, t$C), col = "green", lwd = 2)
# draw a circle
circ <- t$incircle()
draw(circ, col = "orange", border = "brown", lwd = 2)
# draw an ellipse
S <- Scaling$new(circ$center, direction = c(2,1), scale = 2)
draw(S$scaleCircle(circ), border = "grey", lwd = 2)
# draw a line
l <- Line$new(c(1,1), c(1.5,1.5), FALSE, TRUE)
draw(l, col = "red", lwd = 2)
perp <- l$perpendicular(c(2,1))
draw(perp, col = "yellow", lwd = 2)
Ellipse R6 class representing an ellipse
Description
An ellipse is given by a center, two radii (rmajor and rminor), and the angle (alpha) between the
major axis and the horizontal direction.
Active bindings
center get or set the center
rmajor get or set the major radius of the ellipse
rminor get or set the minor radius of the ellipse
alpha get or set the angle of the ellipse
degrees get or set the degrees field
Methods
Public methods:
• Ellipse$new()
• Ellipse$print()
• Ellipse$isEqual()
• Ellipse$equation()
• Ellipse$includes()
• Ellipse$contains()
• Ellipse$matrix()
• Ellipse$path()
• Ellipse$diameter()
• Ellipse$perimeter()
• Ellipse$pointFromAngle()
• Ellipse$pointFromEccentricAngle()
• Ellipse$semiMajorAxis()
• Ellipse$semiMinorAxis()
• Ellipse$foci()
• Ellipse$tangent()
• Ellipse$normal()
• Ellipse$theta2t()
• Ellipse$regressionLines()
• Ellipse$boundingbox()
• Ellipse$randomPoints()
• Ellipse$clone()
Method new(): Create a new Ellipse object.
Usage:
Ellipse$new(center, rmajor, rminor, alpha, degrees = TRUE)
Arguments:
center a point, the center of the rotation
rmajor positive number, the major radius
rminor positive number, the minor radius
alpha a number, the angle between the major axis and the horizontal direction
degrees logical, whether alpha is given in degrees
Returns: A new Ellipse object.
Examples:
Ellipse$new(c(1,1), 3, 2, 30)
Method print(): Show instance of an Ellipse object.
Usage:
Ellipse$print(...)
Arguments:
... ignored
Method isEqual(): Check whether the reference ellipse equals an ellipse.
Usage:
Ellipse$isEqual(ell)
Arguments:
ell An Ellipse object.
Method equation(): The coefficients of the implicit equation of the ellipse.
Usage:
Ellipse$equation()
Details: The implicit equation of the ellipse is Ax² + Bxy + Cy² + Dx + Ey + F = 0. This method
returns A, B, C, D, E and F.
Returns: A named numeric vector.
Method includes(): Check whether a point lies on the reference ellipse.
Usage:
Ellipse$includes(M)
Arguments:
M a point
Method contains(): Check whether a point is contained in the reference ellipse.
Usage:
Ellipse$contains(M)
Arguments:
M a point
Method matrix(): Returns the 2x2 matrix S associated to the reference ellipse. The equation of
the ellipse is t(M-O) %*% S %*% (M-O) = 1.
Usage:
Ellipse$matrix()
Examples:
ell <- Ellipse$new(c(1,1), 5, 1, 30)
S <- ell$matrix()
O <- ell$center
pts <- ell$path(4L) # four points on the ellipse
apply(pts, 1L, function(M) t(M-O) %*% S %*% (M-O))
Method path(): Path that forms the reference ellipse.
Usage:
Ellipse$path(npoints = 100L, closed = FALSE, outer = FALSE)
Arguments:
npoints number of points of the path
closed Boolean, whether to return a closed path; you don’t need a closed path if you want to
plot it with polygon
outer Boolean; if TRUE, the ellipse will be contained inside the path, otherwise it will contain
the path
Returns: A matrix with two columns x and y of length npoints.
Examples:
library(PlaneGeometry)
ell <- Ellipse$new(c(1, -1), rmajor = 3, rminor = 2, alpha = 30)
innerPath <- ell$path(npoints = 10)
outerPath <- ell$path(npoints = 10, outer = TRUE)
bbox <- ell$boundingbox()
plot(NULL, asp = 1, xlim = bbox$x, ylim = bbox$y, xlab = NA, ylab = NA)
draw(ell, border = "red", lty = "dashed")
polygon(innerPath, border = "blue", lwd = 2)
polygon(outerPath, border = "green", lwd = 2)
Method diameter(): Diameter and conjugate diameter of the reference ellipse.
Usage:
Ellipse$diameter(t, conjugate = FALSE)
Arguments:
t a number, the diameter only depends on t modulo pi; the axes correspond to t=0 and t=pi/2
conjugate logical, whether to return the conjugate diameter as well
Returns: A Line object or a list of two Line objects if conjugate = TRUE.
Examples:
ell <- Ellipse$new(c(1,1), 5, 2, 30)
diameters <- lapply(c(0, pi/3, 2*pi/3), ell$diameter)
plot(NULL, asp = 1, xlim = c(-4,6), ylim = c(-2,4),
xlab = NA, ylab = NA)
draw(ell)
invisible(lapply(diameters, draw))
Method perimeter(): Perimeter of the reference ellipse.
Usage:
Ellipse$perimeter()
Method pointFromAngle(): Intersection point of the ellipse with the half-line starting at the
ellipse center and forming angle theta with the major axis.
Usage:
Ellipse$pointFromAngle(theta, degrees = TRUE)
Arguments:
theta a number, the angle, or a numeric vector
degrees logical, whether theta is given in degrees
Returns: A point of the ellipse if length(theta)==1 or a two-column matrix of points of the
ellipse if length(theta) > 1 (one point per row).
Method pointFromEccentricAngle(): Point of the ellipse with given eccentric angle.
Usage:
Ellipse$pointFromEccentricAngle(t)
Arguments:
t a number, the eccentric angle in radians, or a numeric vector
Returns: A point of the ellipse if length(t)==1 or a two-column matrix of points of the ellipse
if length(t) > 1 (one point per row).
Method semiMajorAxis(): Semi-major axis of the ellipse.
Usage:
Ellipse$semiMajorAxis()
Returns: A segment (Line object).
Method semiMinorAxis(): Semi-minor axis of the ellipse.
Usage:
Ellipse$semiMinorAxis()
Returns: A segment (Line object).
Method foci(): Foci of the reference ellipse.
Usage:
Ellipse$foci()
Returns: A list with the two foci.
Method tangent(): Tangents of the reference ellipse at a point given by its eccentric angle.
Usage:
Ellipse$tangent(t)
Arguments:
t eccentric angle, there is one tangent for each value of t modulo 2*pi; for t = 0, pi/2, pi,
-pi/2, these are the tangents at the vertices of the ellipse
Examples:
ell <- Ellipse$new(c(1,1), 5, 2, 30)
tangents <- lapply(c(0, pi/3, 2*pi/3, pi, 4*pi/3, 5*pi/3), ell$tangent)
plot(NULL, asp = 1, xlim = c(-4,6), ylim = c(-2,4),
xlab = NA, ylab = NA)
draw(ell, col = "yellow")
invisible(lapply(tangents, draw, col = "blue"))
Method normal(): Normal unit vector to the ellipse.
Usage:
Ellipse$normal(t)
Arguments:
t a number, the eccentric angle in radians of the point of the ellipse at which we want the normal
unit vector
Returns: The normal unit vector to the ellipse at the point given by eccentric angle t.
Examples:
ell <- Ellipse$new(c(1,1), 5, 2, 30)
t_ <- seq(0, 2*pi, length.out = 13)[-1]
plot(NULL, asp = 1, xlim = c(-5,7), ylim = c(-3,5),
xlab = NA, ylab = NA)
draw(ell, col = "magenta")
for(i in 1:length(t_)){
t <- t_[i]
P <- ell$pointFromEccentricAngle(t)
v <- ell$normal(t)
draw(Line$new(P, P+v, FALSE, FALSE))
}
Method theta2t(): Convert angle to eccentric angle.
Usage:
Ellipse$theta2t(theta, degrees = TRUE)
Arguments:
theta angle between the major axis and the half-line starting at the center of the ellipse and
passing through the point of interest on the ellipse
degrees logical, whether theta is given in degrees
Returns: The eccentric angle of the point of interest on the ellipse, in radians.
Examples:
O <- c(1, 1)
ell <- Ellipse$new(O, 5, 2, 30)
theta <- 20
P <- ell$pointFromAngle(theta)
t <- ell$theta2t(theta)
tg <- ell$tangent(t)
OP <- Line$new(O, P, FALSE, FALSE)
plot(NULL, asp = 1, xlim = c(-4,6), ylim = c(-2,5),
xlab = NA, ylab = NA)
draw(ell, col = "antiquewhite")
points(P[1], P[2], pch = 19)
draw(tg, col = "red")
draw(OP)
draw(ell$semiMajorAxis())
text(t(O+c(1,0.9)), expression(theta))
Method regressionLines(): Regression lines. The regression line of y on x intersects the
ellipse at its rightmost point and its leftmost point. The tangents at these points are vertical. The
regression line of x on y intersects the ellipse at its topmost point and its bottommost point. The
tangents at these points are horizontal.
Usage:
Ellipse$regressionLines()
Returns: A list with two Line objects: the regression line of y on x and the regression line of x
on y.
Examples:
ell <- Ellipse$new(c(1,1), 5, 2, 30)
reglines <- ell$regressionLines()
plot(NULL, asp = 1, xlim = c(-4,6), ylim = c(-2,4),
xlab = NA, ylab = NA)
draw(ell, lwd = 2)
draw(reglines$YonX, lwd = 2, col = "blue")
draw(reglines$XonY, lwd = 2, col = "green")
Method boundingbox(): Return the smallest rectangle parallel to the axes which contains the
reference ellipse.
Usage:
Ellipse$boundingbox()
Returns: A list with two components: the x-limits in x and the y-limits in y.
Examples:
ell <- Ellipse$new(c(2,2), 5, 3, 40)
box <- ell$boundingbox()
plot(NULL, asp = 1, xlim = box$x, ylim = box$y, xlab = NA, ylab = NA)
draw(ell, col = "seaShell", border = "blue")
abline(v = box$x, lty = 2); abline(h = box$y, lty = 2)
Method randomPoints(): Random points on or in the reference ellipse.
Usage:
Ellipse$randomPoints(n, where = "in")
Arguments:
n an integer, the desired number of points
where "in" to generate inside the ellipse, "on" to generate on the ellipse
Returns: The generated points in a two-column matrix with n rows.
Examples:
ell <- Ellipse$new(c(1,1), 5, 2, 30)
pts <- ell$randomPoints(100)
plot(NULL, type="n", asp=1, xlim = c(-4,6), ylim = c(-2,4),
xlab = NA, ylab = NA)
draw(ell, lwd = 2)
points(pts, pch = 19, col = "blue")
Method clone(): The objects of this class are cloneable with this method.
Usage:
Ellipse$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `Ellipse$new`
## ------------------------------------------------
Ellipse$new(c(1,1), 3, 2, 30)
## ------------------------------------------------
## Method `Ellipse$matrix`
## ------------------------------------------------
ell <- Ellipse$new(c(1,1), 5, 1, 30)
S <- ell$matrix()
O <- ell$center
pts <- ell$path(4L) # four points on the ellipse
apply(pts, 1L, function(M) t(M-O) %*% S %*% (M-O))
## ------------------------------------------------
## Method `Ellipse$path`
## ------------------------------------------------
library(PlaneGeometry)
ell <- Ellipse$new(c(1, -1), rmajor = 3, rminor = 2, alpha = 30)
innerPath <- ell$path(npoints = 10)
outerPath <- ell$path(npoints = 10, outer = TRUE)
bbox <- ell$boundingbox()
plot(NULL, asp = 1, xlim = bbox$x, ylim = bbox$y, xlab = NA, ylab = NA)
draw(ell, border = "red", lty = "dashed")
polygon(innerPath, border = "blue", lwd = 2)
polygon(outerPath, border = "green", lwd = 2)
## ------------------------------------------------
## Method `Ellipse$diameter`
## ------------------------------------------------
ell <- Ellipse$new(c(1,1), 5, 2, 30)
diameters <- lapply(c(0, pi/3, 2*pi/3), ell$diameter)
plot(NULL, asp = 1, xlim = c(-4,6), ylim = c(-2,4),
xlab = NA, ylab = NA)
draw(ell)
invisible(lapply(diameters, draw))
## ------------------------------------------------
## Method `Ellipse$tangent`
## ------------------------------------------------
ell <- Ellipse$new(c(1,1), 5, 2, 30)
tangents <- lapply(c(0, pi/3, 2*pi/3, pi, 4*pi/3, 5*pi/3), ell$tangent)
plot(NULL, asp = 1, xlim = c(-4,6), ylim = c(-2,4),
xlab = NA, ylab = NA)
draw(ell, col = "yellow")
invisible(lapply(tangents, draw, col = "blue"))
## ------------------------------------------------
## Method `Ellipse$normal`
## ------------------------------------------------
ell <- Ellipse$new(c(1,1), 5, 2, 30)
t_ <- seq(0, 2*pi, length.out = 13)[-1]
plot(NULL, asp = 1, xlim = c(-5,7), ylim = c(-3,5),
xlab = NA, ylab = NA)
draw(ell, col = "magenta")
for(i in 1:length(t_)){
t <- t_[i]
P <- ell$pointFromEccentricAngle(t)
v <- ell$normal(t)
draw(Line$new(P, P+v, FALSE, FALSE))
}
## ------------------------------------------------
## Method `Ellipse$theta2t`
## ------------------------------------------------
O <- c(1, 1)
ell <- Ellipse$new(O, 5, 2, 30)
theta <- 20
P <- ell$pointFromAngle(theta)
t <- ell$theta2t(theta)
tg <- ell$tangent(t)
OP <- Line$new(O, P, FALSE, FALSE)
plot(NULL, asp = 1, xlim = c(-4,6), ylim = c(-2,5),
xlab = NA, ylab = NA)
draw(ell, col = "antiquewhite")
points(P[1], P[2], pch = 19)
draw(tg, col = "red")
draw(OP)
draw(ell$semiMajorAxis())
text(t(O+c(1,0.9)), expression(theta))
## ------------------------------------------------
## Method `Ellipse$regressionLines`
## ------------------------------------------------
ell <- Ellipse$new(c(1,1), 5, 2, 30)
reglines <- ell$regressionLines()
plot(NULL, asp = 1, xlim = c(-4,6), ylim = c(-2,4),
xlab = NA, ylab = NA)
draw(ell, lwd = 2)
draw(reglines$YonX, lwd = 2, col = "blue")
draw(reglines$XonY, lwd = 2, col = "green")
## ------------------------------------------------
## Method `Ellipse$boundingbox`
## ------------------------------------------------
ell <- Ellipse$new(c(2,2), 5, 3, 40)
box <- ell$boundingbox()
plot(NULL, asp = 1, xlim = box$x, ylim = box$y, xlab = NA, ylab = NA)
draw(ell, col = "seaShell", border = "blue")
abline(v = box$x, lty = 2); abline(h = box$y, lty = 2)
## ------------------------------------------------
## Method `Ellipse$randomPoints`
## ------------------------------------------------
ell <- Ellipse$new(c(1,1), 5, 2, 30)
pts <- ell$randomPoints(100)
plot(NULL, type="n", asp=1, xlim = c(-4,6), ylim = c(-2,4),
xlab = NA, ylab = NA)
draw(ell, lwd = 2)
points(pts, pch = 19, col = "blue")
EllipseEquationFromFivePoints
Ellipse equation from five points
Description
The coefficients of the implicit equation of an ellipse from five points on this ellipse.
Usage
EllipseEquationFromFivePoints(P1, P2, P3, P4, P5)
Arguments
P1, P2, P3, P4, P5
the five points
Details
The implicit equation of the ellipse is Ax² + Bxy + Cy² + Dx + Ey + F = 0. This function returns A,
B, C, D, E and F.
Value
A named numeric vector.
Examples
ell <- Ellipse$new(c(2,3), 5, 4, 30)
set.seed(666)
pts <- ell$randomPoints(5, "on")
cf1 <- EllipseEquationFromFivePoints(pts[1,],pts[2,],pts[3,],pts[4,],pts[5,])
cf2 <- ell$equation() # should be the same up to a multiplicative factor
all.equal(cf1/cf1["F"], cf2/cf2["F"])
EllipseFromCenterAndMatrix
Ellipse from center and matrix
Description
Returns the ellipse of equation t(X-center) %*% S %*% (X-center) = 1.
Usage
EllipseFromCenterAndMatrix(center, S)
Arguments
center a point, the center of the ellipse
S a positive symmetric matrix
Value
An Ellipse object.
Examples
ell <- Ellipse$new(c(2,3), 4, 2, 20)
S <- ell$matrix()
EllipseFromCenterAndMatrix(ell$center, S)
EllipseFromEquation Ellipse from its implicit equation
Description
Return an ellipse from the coefficients of its implicit equation.
Usage
EllipseFromEquation(A, B, C, D, E, F)
Arguments
A, B, C, D, E, F the coefficients of the equation
Details
The implicit equation of the ellipse is Ax² + Bxy + Cy² + Dx + Ey + F = 0. This function returns the
ellipse given A, B, C, D, E and F.
Value
An Ellipse object.
Examples
ell <- Ellipse$new(c(2,3), 5, 4, 30)
cf <- ell$equation()
ell2 <- EllipseFromEquation(cf[1], cf[2], cf[3], cf[4], cf[5], cf[6])
ell$isEqual(ell2)
EllipseFromFivePoints Ellipse from five points
Description
Return an ellipse from five given points on this ellipse.
Usage
EllipseFromFivePoints(P1, P2, P3, P4, P5)
Arguments
P1, P2, P3, P4, P5
the five points
Value
An Ellipse object.
Examples
ell <- Ellipse$new(c(2,3), 5, 4, 30)
set.seed(666)
pts <- ell$randomPoints(5, "on")
ell2 <- EllipseFromFivePoints(pts[1,],pts[2,],pts[3,],pts[4,],pts[5,])
ell$isEqual(ell2)
EllipseFromFociAndOnePoint
Ellipse from foci and one point
Description
Derive the ellipse with given foci and one point on the boundary.
Usage
EllipseFromFociAndOnePoint(F1, F2, P)
Arguments
F1, F2 points, the foci
P a point on the boundary of the ellipse
Value
An Ellipse object.
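A minimal sketch, assuming the documented Ellipse API:
F1 <- c(-1, 0); F2 <- c(1, 0); P <- c(0, 2)
ell <- EllipseFromFociAndOnePoint(F1, F2, P)
ell$includes(P) # should be TRUE
ell$foci() # should recover F1 and F2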
EllipseFromThreeBoundaryPoints
Smallest ellipse that passes through three boundary points
Description
Returns the smallest area ellipse which passes through three given boundary points.
Usage
EllipseFromThreeBoundaryPoints(P1, P2, P3)
Arguments
P1, P2, P3 three non-collinear points
Value
An Ellipse object.
Examples
P1 <- c(-1,0); P2 <- c(0, 2); P3 <- c(3,0)
ell <- EllipseFromThreeBoundaryPoints(P1, P2, P3)
ell$includes(P1); ell$includes(P2); ell$includes(P3)
EllipticalArc R6 class representing an elliptical arc
Description
An arc is given by an ellipse (Ellipse object), a starting angle and an ending angle. They are
respectively named ell, alpha1 and alpha2.
Active bindings
ell get or set the ellipse
alpha1 get or set the starting angle
alpha2 get or set the ending angle
degrees get or set the degrees field
Methods
Public methods:
• EllipticalArc$new()
• EllipticalArc$print()
• EllipticalArc$startingPoint()
• EllipticalArc$endingPoint()
• EllipticalArc$isEqual()
• EllipticalArc$complementaryArc()
• EllipticalArc$path()
• EllipticalArc$length()
• EllipticalArc$clone()
Method new(): Create a new EllipticalArc object.
Usage:
EllipticalArc$new(ell, alpha1, alpha2, degrees = TRUE)
Arguments:
ell the ellipse
alpha1 the starting angle
alpha2 the ending angle
degrees logical, whether alpha1 and alpha2 are given in degrees
Returns: A new EllipticalArc object.
Examples:
ell <- Ellipse$new(c(-4,0), 4, 2.5, 140)
EllipticalArc$new(ell, 45, 90)
Method print(): Show instance of an EllipticalArc object.
Usage:
EllipticalArc$print(...)
Arguments:
... ignored
Method startingPoint(): Starting point of the reference elliptical arc.
Usage:
EllipticalArc$startingPoint()
Method endingPoint(): Ending point of the reference elliptical arc.
Usage:
EllipticalArc$endingPoint()
Method isEqual(): Check whether the reference elliptical arc equals another elliptical arc.
Usage:
EllipticalArc$isEqual(arc)
Arguments:
arc an EllipticalArc object
Method complementaryArc(): Complementary elliptical arc of the reference elliptical arc.
Usage:
EllipticalArc$complementaryArc()
Examples:
ell <- Ellipse$new(c(-4,0), 4, 2.5, 140)
arc <- EllipticalArc$new(ell, 30, 60)
plot(NULL, type = "n", asp = 1, xlim = c(-8,0), ylim = c(-3.2,3.2),
xlab = NA, ylab = NA)
draw(arc, lwd = 3, col = "red")
draw(arc$complementaryArc(), lwd = 3, col = "green")
Method path(): The reference elliptical arc as a path.
Usage:
EllipticalArc$path(npoints = 100L)
Arguments:
npoints number of points of the path
Returns: A matrix with two columns x and y of length npoints.
Method length(): The length of the elliptical arc.
Usage:
EllipticalArc$length()
Returns: A number, the arc length.
Method clone(): The objects of this class are cloneable with this method.
Usage:
EllipticalArc$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `EllipticalArc$new`
## ------------------------------------------------
ell <- Ellipse$new(c(-4,0), 4, 2.5, 140)
EllipticalArc$new(ell, 45, 90)
## ------------------------------------------------
## Method `EllipticalArc$complementaryArc`
## ------------------------------------------------
ell <- Ellipse$new(c(-4,0), 4, 2.5, 140)
arc <- EllipticalArc$new(ell, 30, 60)
plot(NULL, type = "n", asp = 1, xlim = c(-8,0), ylim = c(-3.2,3.2),
xlab = NA, ylab = NA)
draw(arc, lwd = 3, col = "red")
draw(arc$complementaryArc(), lwd = 3, col = "green")
fitEllipse Fit an ellipse
Description
Fit an ellipse to a set of points.
Usage
fitEllipse(points)
Arguments
points numeric matrix with two columns, one point per row
Value
An Ellipse object representing the fitted ellipse. The residual sum of squares is given in the RSS
attribute.
Examples
library(PlaneGeometry)
# We add some noise to 30 points on an ellipse:
ell <- Ellipse$new(c(1, 1), 3, 2, 30)
set.seed(666L)
points <- ell$randomPoints(30, "on") + matrix(rnorm(30*2, sd = 0.2), ncol = 2)
# Now we fit an ellipse to these points:
ellFitted <- fitEllipse(points)
# let's draw all this stuff:
box <- ell$boundingbox()
plot(NULL, asp = 1, xlim = box$x, ylim = box$y, xlab = NA, ylab = NA)
draw(ell, border = "blue", lwd = 2)
points(points, pch = 19)
draw(ellFitted, border = "green", lwd = 2)
GaussianEllipse Gaussian ellipse
Description
Return the ellipse that delimits the highest probability density region of a bivariate Gaussian
distribution for a given probability level.
Usage
GaussianEllipse(mean, Sigma, p)
Arguments
mean numeric vector of length 2, the mean of the bivariate Gaussian distribution; this
is the center of the ellipse
Sigma covariance matrix of the bivariate Gaussian distribution
p desired probability level, a number between 0 and 1 (strictly)
Value
An Ellipse object.
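A minimal sketch; Sigma below is an arbitrary valid covariance matrix chosen for illustration:
mu <- c(1, 1)
Sigma <- rbind(c(2, 0.5), c(0.5, 1))
ell <- GaussianEllipse(mu, Sigma, p = 0.95)
ell$center # should be the mean vector mu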
Homothety R6 class representing a homothety
Description
A homothety is given by a center and a scale factor.
Active bindings
center get or set the center
scale get or set the scale factor of the homothety
Methods
Public methods:
• Homothety$new()
• Homothety$print()
• Homothety$transform()
• Homothety$transformCircle()
• Homothety$getMatrix()
• Homothety$asAffine()
• Homothety$clone()
Method new(): Create a new Homothety object.
Usage:
Homothety$new(center, scale)
Arguments:
center a point, the center of the homothety
scale a number, the scale factor of the homothety
Returns: A new Homothety object.
Examples:
Homothety$new(c(1,1), 2)
Method print(): Show instance of a Homothety object.
Usage:
Homothety$print(...)
Arguments:
... ignored
Method transform(): Transform a point or several points by the reference homothety.
Usage:
Homothety$transform(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Method transformCircle(): Transform a circle by the reference homothety.
Usage:
Homothety$transformCircle(circ)
Arguments:
circ a Circle object
Returns: A Circle object.
Method getMatrix(): Augmented matrix of the homothety.
Usage:
Homothety$getMatrix()
Returns: A 3x3 matrix.
Examples:
H <- Homothety$new(c(1,1), 2)
P <- c(1,5)
H$transform(P)
H$getMatrix() %*% c(P,1)
Method asAffine(): Convert the reference homothety to an Affine object.
Usage:
Homothety$asAffine()
Method clone(): The objects of this class are cloneable with this method.
Usage:
Homothety$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `Homothety$new`
## ------------------------------------------------
Homothety$new(c(1,1), 2)
## ------------------------------------------------
## Method `Homothety$getMatrix`
## ------------------------------------------------
H <- Homothety$new(c(1,1), 2)
P <- c(1,5)
H$transform(P)
H$getMatrix() %*% c(P,1)
Hyperbola R6 class representing a hyperbola
Description
A hyperbola is given by two intersecting asymptotes, named L1 and L2, and a point on this hyper-
bola, named M.
Active bindings
L1 get or set the asymptote L1
L2 get or set the asymptote L2
M get or set the point M
Methods
Public methods:
• Hyperbola$new()
• Hyperbola$center()
• Hyperbola$OAB()
• Hyperbola$vertices()
• Hyperbola$abce()
• Hyperbola$foci()
• Hyperbola$plot()
• Hyperbola$includes()
• Hyperbola$equation()
• Hyperbola$clone()
Method new(): Create a new Hyperbola object.
Usage:
Hyperbola$new(L1, L2, M)
Arguments:
L1, L2 two intersecting lines given as Line objects, the asymptotes
M a point on the hyperbola
Returns: A new Hyperbola object.
Method center(): Center of the hyperbola.
Usage:
Hyperbola$center()
Returns: The center of the hyperbola, i.e. the point where the two asymptotes meet each other.
Method OAB(): Parametric equation O ± cosh(t)A + sinh(t)B representing the hyperbola.
Usage:
Hyperbola$OAB()
Returns: The point O and the two vectors A and B in a list.
Examples:
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
hyperbola$OAB()
Method vertices(): Vertices of the hyperbola.
Usage:
Hyperbola$vertices()
Returns: The two vertices V1 and V2 in a list.
Method abce(): The numbers a (semi-major axis, i.e. distance from center to vertex), b (semi-minor
axis), c (linear eccentricity) and e (eccentricity) associated to the hyperbola.
Usage:
Hyperbola$abce()
Returns: The four numbers a, b, c and e in a list.
Method foci(): Foci of the hyperbola.
Usage:
Hyperbola$foci()
Returns: The two foci F1 and F2 in a list.
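A minimal sketch reusing the setup of the other Hyperbola examples:
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
hyperbola <- Hyperbola$new(L1, L2, c(4, 3))
hyperbola$abce() # a, b, c and e in a list
hyperbola$foci() # the two foci F1 and F2 in a list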
Method plot(): Plot hyperbola.
Usage:
Hyperbola$plot(add = FALSE, ...)
Arguments:
add Boolean, whether to add this plot to the current plot
... named arguments passed to lines
Returns: Nothing, called for plotting.
Examples:
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
plot(hyperbola, lwd = 2)
points(t(M), pch = 19, col = "blue")
O <- hyperbola$center()
points(t(O), pch = 19)
draw(L1, col = "red")
draw(L2, col = "red")
vertices <- hyperbola$vertices()
points(rbind(vertices$V1, vertices$V2), pch = 19)
majorAxis <- Line$new(vertices$V1, vertices$V2)
draw(majorAxis, lty = "dashed")
foci <- hyperbola$foci()
points(rbind(foci$F1, foci$F2), pch = 19, col = "green")
Method includes(): Whether a point belongs to the hyperbola.
Usage:
Hyperbola$includes(P)
Arguments:
P a point
Returns: A Boolean value.
Examples:
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
hyperbola$includes(M)
Method equation(): Implicit quadratic equation of the hyperbola: Axx x² + 2 Axy xy + Ayy y² +
2 Bx x + 2 By y + C = 0.
Usage:
Hyperbola$equation()
Returns: The coefficients of the equation in a named list.
Examples:
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
eq <- hyperbola$equation()
x <- M[1]; y <- M[2]
with(eq, Axx*x^2 + 2*Axy*x*y + Ayy*y^2 + 2*Bx*x + 2*By*y + C)
V1 <- hyperbola$vertices()$V1
x <- V1[1]; y <- V1[2]
with(eq, Axx*x^2 + 2*Axy*x*y + Ayy*y^2 + 2*Bx*x + 2*By*y + C)
Method clone(): The objects of this class are cloneable with this method.
Usage:
Hyperbola$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `Hyperbola$OAB`
## ------------------------------------------------
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
hyperbola$OAB()
## ------------------------------------------------
## Method `Hyperbola$plot`
## ------------------------------------------------
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
plot(hyperbola, lwd = 2)
points(t(M), pch = 19, col = "blue")
O <- hyperbola$center()
points(t(O), pch = 19)
draw(L1, col = "red")
draw(L2, col = "red")
vertices <- hyperbola$vertices()
points(rbind(vertices$V1, vertices$V2), pch = 19)
majorAxis <- Line$new(vertices$V1, vertices$V2)
draw(majorAxis, lty = "dashed")
foci <- hyperbola$foci()
points(rbind(foci$F1, foci$F2), pch = 19, col = "green")
## ------------------------------------------------
## Method `Hyperbola$includes`
## ------------------------------------------------
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
hyperbola$includes(M)
## ------------------------------------------------
## Method `Hyperbola$equation`
## ------------------------------------------------
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
eq <- hyperbola$equation()
x <- M[1]; y <- M[2]
with(eq, Axx*x^2 + 2*Axy*x*y + Ayy*y^2 + 2*Bx*x + 2*By*y + C)
V1 <- hyperbola$vertices()$V1
x <- V1[1]; y <- V1[2]
with(eq, Axx*x^2 + 2*Axy*x*y + Ayy*y^2 + 2*Bx*x + 2*By*y + C)
HyperbolaFromEquation Hyperbola object from the hyperbola equation.
Description
Create the Hyperbola object representing the hyperbola with the given implicit equation.
Usage
HyperbolaFromEquation(eq)
Arguments
eq named vector or list of the six parameters Axx, Axy, Ayy, Bx, By, C
Value
A Hyperbola object.
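A minimal round-trip sketch, assuming equation() returns the six coefficients expected by this function:
L1 <- LineFromInterceptAndSlope(0, 2)
L2 <- LineFromInterceptAndSlope(-2, -0.5)
M <- c(4, 3)
hyperbola <- Hyperbola$new(L1, L2, M)
hyp2 <- HyperbolaFromEquation(hyperbola$equation())
hyp2$includes(M) # should be TRUE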
intersectionCircleCircle
Intersection of two circles
Description
Return the intersection of two circles.
Usage
intersectionCircleCircle(circ1, circ2, epsilon = sqrt(.Machine$double.eps))
Arguments
circ1, circ2 two Circle objects
epsilon a small positive number used for the numerical accuracy
Value
NULL if there is no intersection, a point if the circles touch, a list of two points if the circles meet at
two points, a circle if the two circles are identical.
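A minimal sketch covering two of the documented cases:
circ1 <- Circle$new(c(0, 0), 2)
circ2 <- Circle$new(c(3, 0), 2)
intersectionCircleCircle(circ1, circ2) # should be a list of two points
circ3 <- Circle$new(c(4, 0), 2)
intersectionCircleCircle(circ1, circ3) # should be a single point, c(2, 0): the circles touch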
intersectionCircleLine
Intersection of a circle and a line
Description
Return the intersection of a circle and a line.
Usage
intersectionCircleLine(circ, line, strict = FALSE)
Arguments
circ a Circle object
line a Line object
strict logical, whether to take into account line$extendA and line$extendB if they
are not both TRUE
Value
NULL if there is no intersection; a point if the infinite line is tangent to the circle, or NULL if
strict=TRUE and the point is not on the line (segment or half-line); a list of two points if the
circle and the infinite line meet at two points, when strict=FALSE; if strict=TRUE and the line is
a segment or a half-line, this can return NULL or a single point.
Examples
circ <- Circle$new(c(1,1), 2)
line <- Line$new(c(2,-2), c(1,2), FALSE, FALSE)
intersectionCircleLine(circ, line)
intersectionCircleLine(circ, line, strict = TRUE)
intersectionEllipseLine
Intersection of an ellipse and a line
Description
Return the intersection of an ellipse and a line.
Usage
intersectionEllipseLine(ell, line, strict = FALSE)
Arguments
ell an Ellipse object or a Circle object
line a Line object
strict logical, whether to take into account line$extendA and line$extendB if they
are not both TRUE
Value
NULL if there is no intersection; a point if the infinite line is tangent to the ellipse, or NULL if
strict=TRUE and the point is not on the line (segment or half-line); a list of two points if the
ellipse and the infinite line meet at two points, when strict=FALSE; if strict=TRUE and the line
is a segment or a half-line, this can return NULL or a single point.
Examples
ell <- Ellipse$new(c(1,1), 5, 1, 30)
line <- Line$new(c(2,-2), c(0,4))
( Is <- intersectionEllipseLine(ell, line) )
ell$includes(Is$I1); ell$includes(Is$I2)
intersectionLineLine Intersection of two lines
Description
Return the intersection of two lines.
Usage
intersectionLineLine(line1, line2, strict = FALSE)
Arguments
line1, line2 two Line objects
strict logical, whether to take into account the extensions of the lines (extendA and
extendB)
Value
If strict = FALSE this returns either a point, or NULL if the lines are parallel, or a bi-infinite line if
the two lines coincide. If strict = TRUE, this can also return a half-infinite line or a segment.
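A minimal sketch of the generic case:
line1 <- Line$new(c(0, 0), c(2, 2))
line2 <- Line$new(c(0, 2), c(2, 0))
intersectionLineLine(line1, line2) # the point c(1, 1)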
Inversion R6 class representing an inversion
Description
An inversion is given by a pole (a point) and a power (a number, possibly negative, but not zero).
Active bindings
pole get or set the pole
power get or set the power
Methods
Public methods:
• Inversion$new()
• Inversion$print()
• Inversion$invert()
• Inversion$transform()
• Inversion$invertCircle()
• Inversion$transformCircle()
• Inversion$invertLine()
• Inversion$transformLine()
• Inversion$invertGcircle()
• Inversion$compose()
• Inversion$clone()
Method new(): Create a new Inversion object.
Usage:
Inversion$new(pole, power)
Arguments:
pole the pole
power the power
Returns: A new Inversion object.
Method print(): Show instance of an inversion object.
Usage:
Inversion$print(...)
Arguments:
... ignored
Examples:
Inversion$new(c(0,0), 2)
Method invert(): Inversion of a point.
Usage:
Inversion$invert(M)
Arguments:
M a point or Inf
Returns: A point or Inf, the image of M.
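A sketch of the defining property, assuming the standard inversion formula (under which the dot product of M - pole and M' - pole equals the power):
iota <- Inversion$new(c(0, 0), 2)
M <- c(3, 4)
Mprime <- iota$invert(M)
sum(M * Mprime) # should be 2, the power (the pole is the origin here)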
Method transform(): An alias of invert.
Usage:
Inversion$transform(M)
Arguments:
M a point or Inf
Returns: A point or Inf, the image of M.
Method invertCircle(): Inversion of a circle.
Usage:
Inversion$invertCircle(circ)
Arguments:
circ a Circle object
Returns: A Circle object or a Line object.
Examples:
# A Pappus chain
# https://www.cut-the-knot.org/Curriculum/Geometry/InversionInArbelos.shtml
opar <- par(mar = c(0,0,0,0))
plot(0, 0, type = "n", asp = 1, xlim = c(0,6), ylim = c(-4,4),
xlab = NA, ylab = NA, axes = FALSE)
A <- c(0,0); B <- c(6,0)
ABsqr <- c(crossprod(A-B))
iota <- Inversion$new(A, ABsqr)
C <- iota$invert(c(8,0))
Sigma1 <- Circle$new((A+B)/2, sqrt(ABsqr)/2)
Sigma2 <- Circle$new((A+C)/2, sqrt(c(crossprod(A-C)))/2)
draw(Sigma1); draw(Sigma2)
circ0 <- Circle$new(c(7,0), 1)
iotacirc0 <- iota$invertCircle(circ0)
draw(iotacirc0)
for(i in 1:6){
circ <- circ0$translate(c(0,2*i))
iotacirc <- iota$invertCircle(circ)
draw(iotacirc)
circ <- circ0$translate(c(0,-2*i))
iotacirc <- iota$invertCircle(circ)
draw(iotacirc)
}
par(opar)
Method transformCircle(): An alias of invertCircle.
Usage:
Inversion$transformCircle(circ)
Arguments:
circ a Circle object
Returns: A Circle object or a Line object.
Method invertLine(): Inversion of a line.
Usage:
Inversion$invertLine(line)
Arguments:
line a Line object
Returns: A Circle object or a Line object.
Method transformLine(): An alias of invertLine.
Usage:
Inversion$transformLine(line)
Arguments:
line a Line object
Returns: A Circle object or a Line object.
Method invertGcircle(): Inversion of a generalized circle (i.e. a circle or a line).
Usage:
Inversion$invertGcircle(gcircle)
Arguments:
gcircle a Circle object or a Line object
Returns: A Circle object or a Line object.
Method compose(): Compose the reference inversion with another inversion. The result is a
Möbius transformation.
Usage:
Inversion$compose(iota1, left = TRUE)
Arguments:
iota1 an Inversion object
left logical, whether to compose at left or at right (i.e. returns iota1 o iota0 or iota0 o
iota1)
Returns: A Mobius object.
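A minimal sketch checking the composition order (per the description of left above, iota0$compose(iota1) applies iota0 first):
iota0 <- Inversion$new(c(0, 0), 1)
iota1 <- Inversion$new(c(2, 0), 4)
Mob <- iota0$compose(iota1) # iota1 o iota0
P <- c(1, 1)
Mob$transform(P)
iota1$invert(iota0$invert(P)) # should be the same point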
Method clone(): The objects of this class are cloneable with this method.
Usage:
Inversion$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
See Also
inversionSwappingTwoCircles, inversionFixingTwoCircles, inversionFixingThreeCircles
to create some inversions.
Examples
## ------------------------------------------------
## Method `Inversion$print`
## ------------------------------------------------
Inversion$new(c(0,0), 2)
## ------------------------------------------------
## Method `Inversion$invertCircle`
## ------------------------------------------------
# A Pappus chain
# https://www.cut-the-knot.org/Curriculum/Geometry/InversionInArbelos.shtml
opar <- par(mar = c(0,0,0,0))
plot(0, 0, type = "n", asp = 1, xlim = c(0,6), ylim = c(-4,4),
xlab = NA, ylab = NA, axes = FALSE)
A <- c(0,0); B <- c(6,0)
ABsqr <- c(crossprod(A-B))
iota <- Inversion$new(A, ABsqr)
C <- iota$invert(c(8,0))
Sigma1 <- Circle$new((A+B)/2, sqrt(ABsqr)/2)
Sigma2 <- Circle$new((A+C)/2, sqrt(c(crossprod(A-C)))/2)
draw(Sigma1); draw(Sigma2)
circ0 <- Circle$new(c(7,0), 1)
iotacirc0 <- iota$invertCircle(circ0)
draw(iotacirc0)
for(i in 1:6){
circ <- circ0$translate(c(0,2*i))
iotacirc <- iota$invertCircle(circ)
draw(iotacirc)
circ <- circ0$translate(c(0,-2*i))
iotacirc <- iota$invertCircle(circ)
draw(iotacirc)
}
par(opar)
inversionFixingThreeCircles
Inversion fixing three circles
Description
Return the inversion which lets invariant three given circles.
Usage
inversionFixingThreeCircles(circ1, circ2, circ3)
Arguments
circ1, circ2, circ3
Circle objects
Value
An Inversion object, which lets each of circ1, circ2 and circ3 invariant.
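A minimal sketch; the three circles below are chosen so that such an inversion exists (in general this requires the radical center to lie outside the circles):
circ1 <- Circle$new(c(0, 0), 3)
circ2 <- Circle$new(c(5, 0), 2)
circ3 <- Circle$new(c(2, 4), 1)
iota <- inversionFixingThreeCircles(circ1, circ2, circ3)
iota$invertCircle(circ1)$isEqual(circ1) # should be TRUE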
inversionFixingTwoCircles
Inversion fixing two circles
Description
Return the inversion which lets invariant two given circles.
Usage
inversionFixingTwoCircles(circ1, circ2)
Arguments
circ1, circ2 Circle objects
Value
An Inversion object, which maps circ1 to circ1 and circ2 to circ2.
inversionFromCircle Inversion on a circle
Description
Return the inversion on a given circle.
Usage
inversionFromCircle(circ)
Arguments
circ a Circle object
Value
An Inversion object.
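A minimal sketch: the inversion on a circle fixes every point of that circle.
circ <- Circle$new(c(2, 1), 3)
iota <- inversionFromCircle(circ)
M <- c(2, 1) + 3 * c(cos(1), sin(1)) # a point on the circle
iota$invert(M) # should be M again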
inversionKeepingCircle
Inversion keeping a circle unchanged
Description
Return an inversion with a given pole which keeps a given circle unchanged.
Usage
inversionKeepingCircle(pole, circ)
Arguments
pole inversion pole, a point
circ a Circle object
Value
An Inversion object.
Examples
circ <- Circle$new(c(4,3), 2)
iota <- inversionKeepingCircle(c(1,2), circ)
iota$transformCircle(circ)
inversionSwappingTwoCircles
Inversion swapping two circles
Description
Return the inversion which swaps two given circles.
Usage
inversionSwappingTwoCircles(circ1, circ2, positive = TRUE)
Arguments
circ1, circ2 Circle objects
positive logical, whether the sign of the desired inversion power must be positive or
negative
Value
An Inversion object, which maps circ1 to circ2 and circ2 to circ1, except in the case when circ1 and circ2 are congruent and tangent: in this case a Reflection object is returned (a reflection is an inversion on a line).
Line R6 class representing a line
Description
A line is given by two distinct points, named A and B, and two logical values extendA and extendB,
indicating whether the line must be extended beyond A and B respectively. Depending on extendA
and extendB, the line is an infinite line, a half-line, or a segment.
Active bindings
A get or set the point A
B get or set the point B
extendA get or set extendA
extendB get or set extendB
Methods
Public methods:
• Line$new()
• Line$print()
• Line$length()
• Line$directionAndOffset()
• Line$isEqual()
• Line$isParallel()
• Line$isPerpendicular()
• Line$includes()
• Line$perpendicular()
• Line$parallel()
• Line$projection()
• Line$distance()
• Line$reflection()
• Line$rotate()
• Line$translate()
• Line$invert()
• Line$clone()
Method new(): Create a new Line object.
Usage:
Line$new(A, B, extendA = TRUE, extendB = TRUE)
Arguments:
A, B points
extendA, extendB logical values
Returns: A new Line object.
Examples:
l <- Line$new(c(1,1), c(1.5,1.5), FALSE, TRUE)
l
l$A
l$A <- c(0,0)
l
Method print(): Show instance of a line object.
Usage:
Line$print(...)
Arguments:
... ignored
Examples:
Line$new(c(0,0), c(1,0), FALSE, TRUE)
Method length(): Segment length, returns the length of the segment joining the two points
defining the line.
Usage:
Line$length()
Method directionAndOffset(): Direction (angle between 0 and 2pi) and offset (positive
number) of the reference line.
Usage:
Line$directionAndOffset()
Details: The equation of the line is cos(θ)x + sin(θ)y = d where θ is the direction and d is the
offset.
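A minimal sketch, assuming the method returns a named list with entries direction and offset:
l <- Line$new(c(1, 0), c(3, 4))
do <- l$directionAndOffset()
cos(do$direction) * 1 + sin(do$direction) * 0 # should equal do$offset
cos(do$direction) * 3 + sin(do$direction) * 4 # should equal do$offset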
Method isEqual(): Check whether the reference line equals a given line, without taking into
account extendA and extendB.
Usage:
Line$isEqual(line)
Arguments:
line a Line object
Returns: TRUE or FALSE.
Method isParallel(): Check whether the reference line is parallel to a given line.
Usage:
Line$isParallel(line)
Arguments:
line a Line object
Returns: TRUE or FALSE.
Method isPerpendicular(): Check whether the reference line is perpendicular to a given line.
Usage:
Line$isPerpendicular(line)
Arguments:
line a Line object
Returns: TRUE or FALSE.
Method includes(): Whether a point belongs to the reference line.
Usage:
Line$includes(M, strict = FALSE, checkCollinear = TRUE)
Arguments:
M the point for which we want to test whether it belongs to the line
strict logical, whether to take into account extendA and extendB
checkCollinear logical, whether to check the collinearity of A, B, M; set this to FALSE only if you are sure that M lies on the line (AB), in case you use strict=TRUE
Returns: TRUE or FALSE.
Examples:
A <- c(0,0); B <- c(1,2); M <- c(3,6)
l <- Line$new(A, B, FALSE, FALSE)
l$includes(M, strict = TRUE)
Method perpendicular(): Perpendicular line passing through a given point.
Usage:
Line$perpendicular(M, extendH = FALSE, extendM = TRUE)
Arguments:
M the point through which the perpendicular passes.
extendH logical, whether to extend the perpendicular line beyond the meeting point
extendM logical, whether to extend the perpendicular line beyond the point M
Returns: A Line object; its two points are the meeting point and the point M.
Method parallel(): Parallel to the reference line passing through a given point.
Usage:
Line$parallel(M)
Arguments:
M a point
Returns: A Line object.
Method projection(): Orthogonal projection of a point to the reference line.
Usage:
Line$projection(M)
Arguments:
M a point
Returns: A point.
Method distance(): Distance from a point to the reference line.
Usage:
Line$distance(M)
Arguments:
M a point
Returns: A positive number.
Method reflection(): Reflection of a point with respect to the reference line.
Usage:
Line$reflection(M)
Arguments:
M a point
Returns: A point.
Method rotate(): Rotate the reference line.
Usage:
Line$rotate(alpha, O, degrees = TRUE)
Arguments:
alpha angle of rotation
O center of rotation
degrees logical, whether alpha is given in degrees
Returns: A Line object.
Method translate(): Translate the reference line.
Usage:
Line$translate(v)
Arguments:
v the vector of translation
Returns: A Line object.
Method invert(): Invert the reference line.
Usage:
Line$invert(inversion)
Arguments:
inversion an Inversion object
Returns: A Circle object or a Line object.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Line$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `Line$new`
## ------------------------------------------------
l <- Line$new(c(1,1), c(1.5,1.5), FALSE, TRUE)
l
l$A
l$A <- c(0,0)
l
## ------------------------------------------------
## Method `Line$print`
## ------------------------------------------------
Line$new(c(0,0), c(1,0), FALSE, TRUE)
## ------------------------------------------------
## Method `Line$includes`
## ------------------------------------------------
A <- c(0,0); B <- c(1,2); M <- c(3,6)
l <- Line$new(A, B, FALSE, FALSE)
l$includes(M, strict = TRUE)
LineFromEquation Line from general equation
Description
Create a Line object representing the infinite line with given equation ax + by + c = 0.
Usage
LineFromEquation(a, b, c)
Arguments
a, b, c the parameters of the equation; a and b cannot be both zero
Value
A Line object.
LineFromInterceptAndSlope
Line from intercept and slope
Description
Create a Line object representing the infinite line with given intercept and given slope.
Usage
LineFromInterceptAndSlope(a, b)
Arguments
a intercept
b slope
Value
A Line object.
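A minimal sketch relating this constructor to LineFromEquation:
l1 <- LineFromEquation(2, -1, 0) # 2x - y = 0, i.e. y = 2x
l2 <- LineFromInterceptAndSlope(0, 2) # intercept 0, slope 2
l1$isEqual(l2) # should be TRUE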
LownerJohnEllipse Löwner-John ellipse (ellipse hull)
Description
Minimum area ellipse containing a set of points.
Usage
LownerJohnEllipse(pts)
Arguments
pts the points in a two-columns matrix (one point per row); at least three distinct
points
Value
An Ellipse object.
Examples
pts <- cbind(rnorm(30, sd=2), rnorm(30))
ell <- LownerJohnEllipse(pts)
box <- ell$boundingbox()
plot(NULL, asp = 1, xlim = box$x, ylim = box$y, xlab = NA, ylab = NA)
draw(ell, col = "seaShell")
points(pts, pch = 19)
all(apply(pts, 1, ell$contains)) # should be TRUE
maxAreaInscribedCircle
Maximum area circle inscribed in a convex polygon
Description
Computes the circle inscribed in a convex polygon with maximum area. This is the so-called Chebyshev circle.
Usage
maxAreaInscribedCircle(points, verbose = FALSE)
Arguments
points the vertices of the polygon in a two-columns matrix; their order has no importance, since the procedure takes the convex hull of these points (and does not check the convexity)
verbose argument passed to psolve
Value
A Circle object. The status of the optimization problem is given as an attribute of this circle. A
warning is thrown if it is not optimal.
See Also
maxAreaInscribedEllipse
Examples
library(PlaneGeometry)
hexagon <- rbind(
c(-1.7, -1),
c(-1.4, 0.4),
c(0.3, 1.3),
c(1.7, 0.6),
c(1.3, -0.3),
c(-0.4, -1.8)
)
opar <- par(mar = c(2, 2, 1, 1))
plot(NULL, xlim=c(-2, 2), ylim=c(-2, 2), xlab = NA, ylab = NA, asp = 1)
points(hexagon, pch = 19)
polygon(hexagon)
circ <- maxAreaInscribedCircle(hexagon)
draw(circ, col = "yellow2", border = "blue", lwd = 2)
par(opar)
# check optimization status:
attr(circ, "status")
maxAreaInscribedEllipse
Maximum area ellipse inscribed in a convex polygon
Description
Computes the ellipse inscribed in a convex polygon with maximum area.
Usage
maxAreaInscribedEllipse(points, verbose = FALSE)
Arguments
points the vertices of the polygon in a two-columns matrix; their order has no importance, since the procedure takes the convex hull of these points (and does not check the convexity)
verbose argument passed to psolve
Value
An Ellipse object. The status of the optimization problem is given as an attribute of this ellipse.
A warning is thrown if it is not optimal.
See Also
maxAreaInscribedCircle
Examples
hexagon <- rbind(
c(-1.7, -1),
c(-1.4, 0.4),
c(0.3, 1.3),
c(1.7, 0.6),
c(1.3, -0.3),
c(-0.4, -1.8)
)
opar <- par(mar = c(2, 2, 1, 1))
plot(NULL, xlim=c(-2, 2), ylim=c(-2, 2), xlab = NA, ylab = NA, asp = 1)
points(hexagon, pch = 19)
polygon(hexagon)
ell <- maxAreaInscribedEllipse(hexagon)
draw(ell, col = "yellow2", border = "blue", lwd = 2)
par(opar)
# check optimization status:
attr(ell, "status")
midCircles Mid-circle(s)
Description
Return the mid-circle(s) of two circles.
Usage
midCircles(circ1, circ2)
Arguments
circ1, circ2 Circle objects
Details
A mid-circle of two circles is a generalized circle (i.e. a circle or a line) such that the inversion on
this circle swaps the two circles. The case of a line appears only when the two circles have equal
radii.
Value
A Circle object, or a Line object, or a list of two such objects.
See Also
inversionSwappingTwoCircles
Examples
circ1 <- Circle$new(c(5,4),2)
circ2 <- Circle$new(c(6,4),1)
midcircle <- midCircles(circ1, circ2)
inversionFromCircle(midcircle)
inversionSwappingTwoCircles(circ1, circ2)
Mobius R6 class representing a Möbius transformation.
Description
A Möbius transformation is given by a 2x2 matrix of complex numbers with nonzero determinant.
Active bindings
a get or set a
b get or set b
c get or set c
d get or set d
Methods
Public methods:
• Mobius$new()
• Mobius$print()
• Mobius$getM()
• Mobius$compose()
• Mobius$inverse()
• Mobius$power()
• Mobius$gpower()
• Mobius$transform()
• Mobius$fixedPoints()
• Mobius$transformCircle()
• Mobius$transformLine()
• Mobius$transformGcircle()
• Mobius$clone()
Method new(): Create a new Mobius object.
Usage:
Mobius$new(M)
Arguments:
M the matrix corresponding to the Möbius transformation
Returns: A new Mobius object.
Method print(): Show instance of a Mobius object.
Usage:
Mobius$print(...)
Arguments:
... ignored
Examples:
Mobius$new(rbind(c(1+1i,2),c(0,3-2i)))
Method getM(): Get the matrix corresponding to the Möbius transformation.
Usage:
Mobius$getM()
Method compose(): Compose the reference Möbius transformation with another Möbius transformation.
Usage:
Mobius$compose(M1, left = TRUE)
Arguments:
M1 a Mobius object
left logical, whether to compose at left or at right (i.e. returns M1 o M0 or M0 o M1)
Returns: A Mobius object.
Method inverse(): Inverse of the reference Möbius transformation.
Usage:
Mobius$inverse()
Returns: A Mobius object.
Method power(): Power of the reference Möbius transformation.
Usage:
Mobius$power(k)
Arguments:
k an integer, possibly negative
Returns: The Möbius transformation M^k, where M is the reference Möbius transformation.
Method gpower(): Generalized power of the reference Möbius transformation.
Usage:
Mobius$gpower(k)
Arguments:
k a real number, possibly negative
Returns: A Mobius object, the generalized k-th power of the reference Möbius transformation.
Examples:
M <- Mobius$new(rbind(c(1+1i,2),c(0,3-2i)))
Mroot <- M$gpower(1/2)
Mroot$compose(Mroot) # should be M
Method transform(): Transformation of a point by the reference Möbius transformation.
Usage:
Mobius$transform(M)
Arguments:
M a point or Inf
Returns: A point or Inf, the image of M.
Examples:
Mob <- Mobius$new(rbind(c(1+1i,2),c(0,3-2i)))
Mob$transform(c(1,1))
Mob$transform(Inf)
Method fixedPoints(): Returns the fixed points of the reference Möbius transformation.
Usage:
Mobius$fixedPoints()
Returns: One point, or a list of two points, or a message in the case when the transformation is
the identity map.
Method transformCircle(): Transformation of a circle by the reference Möbius transformation.
Usage:
Mobius$transformCircle(circ)
Arguments:
circ a Circle object
Returns: A Circle object or a Line object.
Method transformLine(): Transformation of a line by the reference Möbius transformation.
Usage:
Mobius$transformLine(line)
Arguments:
line a Line object
Returns: A Circle object or a Line object.
Method transformGcircle(): Transformation of a generalized circle (i.e. a circle or a line) by
the reference Möbius transformation.
Usage:
Mobius$transformGcircle(gcirc)
Arguments:
gcirc a Circle object or a Line object
Returns: A Circle object or a Line object.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Mobius$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
See Also
MobiusMappingThreePoints to create a Möbius transformation, and also the compose method of
the Inversion R6 class.
Examples
## ------------------------------------------------
## Method `Mobius$print`
## ------------------------------------------------
Mobius$new(rbind(c(1+1i,2),c(0,3-2i)))
## ------------------------------------------------
## Method `Mobius$gpower`
## ------------------------------------------------
M <- Mobius$new(rbind(c(1+1i,2),c(0,3-2i)))
Mroot <- M$gpower(1/2)
Mroot$compose(Mroot) # should be M
## ------------------------------------------------
## Method `Mobius$transform`
## ------------------------------------------------
Mob <- Mobius$new(rbind(c(1+1i,2),c(0,3-2i)))
Mob$transform(c(1,1))
Mob$transform(Inf)
MobiusMappingCircle Möbius transformation mapping a given circle to a given circle
Description
Returns a Möbius transformation mapping a given circle to another given circle.
Usage
MobiusMappingCircle(circ1, circ2)
Arguments
circ1, circ2 Circle objects
Value
A Möbius transformation which maps circ1 to circ2.
Examples
library(PlaneGeometry)
C1 <- Circle$new(c(0, 0), 1)
C2 <- Circle$new(c(1, 2), 3)
M <- MobiusMappingCircle(C1, C2)
C3 <- M$transformCircle(C1)
C3$isEqual(C2)
MobiusMappingThreePoints
Möbius transformation mapping three given points to three given
points
Description
Return a Möbius transformation which sends P1 to Q1, P2 to Q2 and P3 to Q3.
Usage
MobiusMappingThreePoints(P1, P2, P3, Q1, Q2, Q3)
Arguments
P1, P2, P3 three distinct points, Inf allowed
Q1, Q2, Q3 three distinct points, Inf allowed
Value
A Mobius object.
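A minimal sketch: the transformation fixing 0 and Inf and sending 1 to i, that is z -> i*z:
Mob <- MobiusMappingThreePoints(c(0, 0), c(1, 0), Inf, c(0, 0), c(0, 1), Inf)
Mob$transform(c(1, 0)) # should be c(0, 1)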
MobiusSwappingTwoPoints
Möbius transformation swapping two given points
Description
Return a Möbius transformation which sends A to B and B to A.
Usage
MobiusSwappingTwoPoints(A, B)
Arguments
A, B two distinct points, Inf not allowed
Value
A Mobius object.
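A minimal sketch:
Mob <- MobiusSwappingTwoPoints(c(0, 0), c(2, 1))
Mob$transform(c(0, 0)) # should be c(2, 1)
Mob$transform(c(2, 1)) # should be c(0, 0)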
Projection R6 class representing a projection
Description
A projection on a line D parallel to another line Delta is given by the line of projection (D) and the
directrix line (Delta).
Active bindings
D get or set the projection line
Delta get or set the directrix line
Methods
Public methods:
• Projection$new()
• Projection$print()
• Projection$project()
• Projection$transform()
• Projection$getMatrix()
• Projection$asAffine()
• Projection$clone()
Method new(): Create a new Projection object.
Usage:
Projection$new(D, Delta)
Arguments:
D, Delta two Line objects such that the two lines meet (not parallel); or Delta = NULL for
orthogonal projection onto D
Returns: A new Projection object.
Examples:
D <- Line$new(c(1,1), c(5,5))
Delta <- Line$new(c(0,0), c(3,4))
Projection$new(D, Delta)
Method print(): Show instance of a projection object.
Usage:
Projection$print(...)
Arguments:
... ignored
Method project(): Project a point.
Usage:
Projection$project(M)
Arguments:
M a point
Examples:
D <- Line$new(c(1,1), c(5,5))
Delta <- Line$new(c(0,0), c(3,4))
P <- Projection$new(D, Delta)
M <- c(1,3)
Mprime <- P$project(M)
D$includes(Mprime) # should be TRUE
Delta$isParallel(Line$new(M, Mprime)) # should be TRUE
Method transform(): An alias of project.
Usage:
Projection$transform(M)
Arguments:
M a point
Method getMatrix(): Augmented matrix of the projection.
Usage:
Projection$getMatrix()
Returns: A 3x3 matrix.
Examples:
P <- Projection$new(Line$new(c(2,2), c(4,5)), Line$new(c(0,0), c(1,1)))
M <- c(1,5)
P$project(M)
P$getMatrix() %*% c(M,1)
Method asAffine(): Convert the reference projection to an Affine object.
Usage:
Projection$asAffine()
Method clone(): The objects of this class are cloneable with this method.
Usage:
Projection$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Note
For an orthogonal projection, you can use the projection method of the Line R6 class.
Examples
## ------------------------------------------------
## Method `Projection$new`
## ------------------------------------------------
D <- Line$new(c(1,1), c(5,5))
Delta <- Line$new(c(0,0), c(3,4))
Projection$new(D, Delta)
## ------------------------------------------------
## Method `Projection$project`
## ------------------------------------------------
D <- Line$new(c(1,1), c(5,5))
Delta <- Line$new(c(0,0), c(3,4))
P <- Projection$new(D, Delta)
M <- c(1,3)
Mprime <- P$project(M)
D$includes(Mprime) # should be TRUE
Delta$isParallel(Line$new(M, Mprime)) # should be TRUE
## ------------------------------------------------
## Method `Projection$getMatrix`
## ------------------------------------------------
P <- Projection$new(Line$new(c(2,2), c(4,5)), Line$new(c(0,0), c(1,1)))
M <- c(1,5)
P$project(M)
P$getMatrix() %*% c(M,1)
radicalCenter Radical center
Description
Returns the radical center of three circles.
Usage
radicalCenter(circ1, circ2, circ3)
Arguments
circ1, circ2, circ3
Circle objects
Value
A point.
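A minimal sketch: the radical center has the same power with respect to the three circles (the powers are computed here directly from the centers and radii):
circ1 <- Circle$new(c(0, 0), 3)
circ2 <- Circle$new(c(5, 0), 2)
circ3 <- Circle$new(c(2, 4), 1)
O <- radicalCenter(circ1, circ2, circ3)
sum((O - c(0, 0))^2) - 3^2 # the three powers should be equal
sum((O - c(5, 0))^2) - 2^2
sum((O - c(2, 4))^2) - 1^2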
Reflection R6 class representing a reflection
Description
A reflection is given by a line.
Active bindings
line get or set the line of the reflection
Methods
Public methods:
• Reflection$new()
• Reflection$print()
• Reflection$reflect()
• Reflection$transform()
• Reflection$reflectCircle()
• Reflection$transformCircle()
• Reflection$reflectLine()
• Reflection$transformLine()
• Reflection$getMatrix()
• Reflection$asAffine()
• Reflection$clone()
Method new(): Create a new Reflection object.
Usage:
Reflection$new(line)
Arguments:
line a Line object
Returns: A new Reflection object.
Examples:
l <- Line$new(c(1,1), c(1.5,1.5), FALSE, TRUE)
Reflection$new(l)
Method print(): Show instance of a reflection object.
Usage:
Reflection$print(...)
Arguments:
... ignored
Method reflect(): Reflect a point.
Usage:
Reflection$reflect(M)
Arguments:
M a point, Inf allowed
Method transform(): An alias of reflect.
Usage:
Reflection$transform(M)
Arguments:
M a point, Inf allowed
Method reflectCircle(): Reflect a circle.
Usage:
Reflection$reflectCircle(circ)
Arguments:
circ a Circle object
Returns: A Circle object.
Method transformCircle(): An alias of reflectCircle.
Usage:
Reflection$transformCircle(circ)
Arguments:
circ a Circle object
Returns: A Circle object.
Method reflectLine(): Reflect a line.
Usage:
Reflection$reflectLine(line)
Arguments:
line a Line object
Returns: A Line object.
Method transformLine(): An alias of reflectLine.
Usage:
Reflection$transformLine(line)
Arguments:
line a Line object
Returns: A Line object.
Method getMatrix(): Augmented matrix of the reflection.
Usage:
Reflection$getMatrix()
Returns: A 3x3 matrix.
Examples:
R <- Reflection$new(Line$new(c(2,2), c(4,5)))
P <- c(1,5)
R$reflect(P)
R$getMatrix() %*% c(P,1)
Method asAffine(): Convert the reference reflection to an Affine object.
Usage:
Reflection$asAffine()
Method clone(): The objects of this class are cloneable with this method.
Usage:
Reflection$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `Reflection$new`
## ------------------------------------------------
l <- Line$new(c(1,1), c(1.5,1.5), FALSE, TRUE)
Reflection$new(l)
## ------------------------------------------------
## Method `Reflection$getMatrix`
## ------------------------------------------------
R <- Reflection$new(Line$new(c(2,2), c(4,5)))
P <- c(1,5)
R$reflect(P)
R$getMatrix() %*% c(P,1)
Rotation R6 class representing a rotation
Description
A rotation is given by an angle (theta) and a center.
Active bindings
theta get or set the angle of the rotation
center get or set the center
degrees get or set the degrees field
Methods
Public methods:
• Rotation$new()
• Rotation$print()
• Rotation$rotate()
• Rotation$transform()
• Rotation$rotateCircle()
• Rotation$transformCircle()
• Rotation$rotateEllipse()
• Rotation$transformEllipse()
• Rotation$rotateLine()
• Rotation$transformLine()
• Rotation$getMatrix()
• Rotation$asAffine()
• Rotation$clone()
Method new(): Create a new Rotation object.
Usage:
Rotation$new(theta, center, degrees = TRUE)
Arguments:
theta a number, the angle of the rotation
center a point, the center of the rotation
degrees logical, whether theta is given in degrees
Returns: A new Rotation object.
Examples:
Rotation$new(60, c(1,1))
Method print(): Show instance of a Rotation object.
Usage:
Rotation$print(...)
Arguments:
... ignored
Method rotate(): Rotate a point or several points.
Usage:
Rotation$rotate(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Method transform(): An alias of rotate.
Usage:
Rotation$transform(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Method rotateCircle(): Rotate a circle.
Usage:
Rotation$rotateCircle(circ)
Arguments:
circ a Circle object
Returns: A Circle object.
Method transformCircle(): An alias of rotateCircle.
Usage:
Rotation$transformCircle(circ)
Arguments:
circ a Circle object
Returns: A Circle object.
Method rotateEllipse(): Rotate an ellipse.
Usage:
Rotation$rotateEllipse(ell)
Arguments:
ell an Ellipse object
Returns: An Ellipse object.
Method transformEllipse(): An alias of rotateEllipse.
Usage:
Rotation$transformEllipse(ell)
Arguments:
ell an Ellipse object
Returns: An Ellipse object.
Method rotateLine(): Rotate a line.
Usage:
Rotation$rotateLine(line)
Arguments:
line a Line object
Returns: A Line object.
Method transformLine(): An alias of rotateLine.
Usage:
Rotation$transformLine(line)
Arguments:
line a Line object
Returns: A Line object.
Method getMatrix(): Augmented matrix of the rotation.
Usage:
Rotation$getMatrix()
Returns: A 3x3 matrix.
Examples:
R <- Rotation$new(60, c(1,1))
P <- c(1,5)
R$rotate(P)
R$getMatrix() %*% c(P,1)
Method asAffine(): Convert the reference rotation to an Affine object.
Usage:
Rotation$asAffine()
Method clone(): The objects of this class are cloneable with this method.
Usage:
Rotation$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `Rotation$new`
## ------------------------------------------------
Rotation$new(60, c(1,1))
## ------------------------------------------------
## Method `Rotation$getMatrix`
## ------------------------------------------------
R <- Rotation$new(60, c(1,1))
P <- c(1,5)
R$rotate(P)
R$getMatrix() %*% c(P,1)
Scaling R6 class representing a (non-uniform) scaling
Description
A (non-uniform) scaling is given by a center, a direction vector, and a scale factor.
Active bindings
center get or set the center
direction get or set the direction
scale get or set the scale factor
Methods
Public methods:
• Scaling$new()
• Scaling$print()
• Scaling$transform()
• Scaling$getMatrix()
• Scaling$asAffine()
• Scaling$scaleCircle()
• Scaling$clone()
Method new(): Create a new Scaling object.
Usage:
Scaling$new(center, direction, scale)
Arguments:
center a point, the center of the scaling
direction a vector, the direction of the scaling
scale a number, the scale factor
Returns: A new Scaling object.
Examples:
Scaling$new(c(1,1), c(1,3), 2)
Method print(): Show instance of a Scaling object.
Usage:
Scaling$print(...)
Arguments:
... ignored
Method transform(): Transform a point or several points by the reference scaling.
Usage:
Scaling$transform(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Method getMatrix(): Augmented matrix of the scaling.
Usage:
Scaling$getMatrix()
Returns: A 3x3 matrix.
Examples:
S <- Scaling$new(c(1,1), c(2,3), 2)
P <- c(1,5)
S$transform(P)
S$getMatrix() %*% c(P,1)
Method asAffine(): Convert the reference scaling to an Affine object.
Usage:
Scaling$asAffine()
Method scaleCircle(): Scale a circle. The result is an ellipse.
Usage:
Scaling$scaleCircle(circ)
Arguments:
circ a Circle object
Returns: An Ellipse object.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Scaling$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
References
<NAME>, An Integrated Introduction to Computer Graphics and Geometric Modeling. CRC
Press, 2009.
Examples
Q <- c(1,1); w <- c(1,3); s <- 2
S <- Scaling$new(Q, w, s)
# the center is mapped to itself:
S$transform(Q)
# any vector u parallel to the direction vector is mapped to s*u:
u <- 3*w
all.equal(s*u, S$transform(u) - S$transform(c(0,0)))
# any vector perpendicular to the direction vector is mapped to itself
wt <- 3*c(-w[2], w[1])
all.equal(wt, S$transform(wt) - S$transform(c(0,0)))
## ------------------------------------------------
## Method `Scaling$new`
## ------------------------------------------------
Scaling$new(c(1,1), c(1,3), 2)
## ------------------------------------------------
## Method `Scaling$getMatrix`
## ------------------------------------------------
S <- Scaling$new(c(1,1), c(2,3), 2)
P <- c(1,5)
S$transform(P)
S$getMatrix() %*% c(P,1)
ScalingXY R6 class representing an axis-scaling
Description
An axis-scaling is given by a center, and two scale factors sx and sy, one for the x-axis and one for
the y-axis.
Active bindings
center get or set the center
sx get or set the scale factor of the x-axis
sy get or set the scale factor of the y-axis
Methods
Public methods:
• ScalingXY$new()
• ScalingXY$print()
• ScalingXY$transform()
• ScalingXY$getMatrix()
• ScalingXY$asAffine()
• ScalingXY$clone()
Method new(): Create a new ScalingXY object.
Usage:
ScalingXY$new(center, sx, sy)
Arguments:
center a point, the center of the scaling
sx a number, the scale factor of the x-axis
sy a number, the scale factor of the y-axis
Returns: A new ScalingXY object.
Examples:
ScalingXY$new(c(1,1), 4, 2)
Method print(): Show instance of a ScalingXY object.
Usage:
ScalingXY$print(...)
Arguments:
... ignored
Method transform(): Transform a point or several points by the reference axis-scaling.
Usage:
ScalingXY$transform(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Returns: A point or a two-column matrix of points.
Method getMatrix(): Augmented matrix of the axis-scaling.
Usage:
ScalingXY$getMatrix()
Returns: A 3x3 matrix.
Examples:
S <- ScalingXY$new(c(1,1), 4, 2)
P <- c(1,5)
S$transform(P)
S$getMatrix() %*% c(P,1)
Method asAffine(): Convert the reference axis-scaling to an Affine object.
Usage:
ScalingXY$asAffine()
Method clone(): The objects of this class are cloneable with this method.
Usage:
ScalingXY$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `ScalingXY$new`
## ------------------------------------------------
ScalingXY$new(c(1,1), 4, 2)
## ------------------------------------------------
## Method `ScalingXY$getMatrix`
## ------------------------------------------------
S <- ScalingXY$new(c(1,1), 4, 2)
P <- c(1,5)
S$transform(P)
S$getMatrix() %*% c(P,1)
Shear R6 class representing a shear transformation
Description
A shear is given by a vertex, two perpendicular vectors, and an angle.
Active bindings
vertex get or set the vertex
vector get or set the first vector
ratio get or set the ratio between the length of vector and the length of the second vector, perpendicular to the first one
angle get or set the angle
degrees get or set the degrees field
Methods
Public methods:
• Shear$new()
• Shear$print()
• Shear$transform()
• Shear$getMatrix()
• Shear$asAffine()
• Shear$clone()
Method new(): Create a new Shear object.
Usage:
Shear$new(vertex, vector, ratio, angle, degrees = TRUE)
Arguments:
vertex a point
vector a vector
ratio a positive number, the ratio between the length of vector and the length of the second
vector, perpendicular to the first one
angle an angle strictly between -90 degrees and 90 degrees
degrees logical, whether angle is given in degrees
Returns: A new Shear object.
Examples:
Shear$new(c(1,1), c(1,3), 0.5, 30)
Method print(): Show instance of a Shear object.
Usage:
Shear$print(...)
Arguments:
... ignored
Method transform(): Transform a point or several points by the reference shear.
Usage:
Shear$transform(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Method getMatrix(): Augmented matrix of the shear.
Usage:
Shear$getMatrix()
Returns: A 3x3 matrix.
Examples:
S <- Shear$new(c(1,1), c(1,3), 0.5, 30)
S$getMatrix()
Method asAffine(): Convert the reference shear to an Affine object.
Usage:
Shear$asAffine()
Examples:
Shear$new(c(0,0), c(1,0), 1, atan(30), FALSE)$asAffine()
Method clone(): The objects of this class are cloneable with this method.
Usage:
Shear$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
References
<NAME>, An Integrated Introduction to Computer Graphics and Geometric Modeling. CRC
Press, 2009.
Examples
P <- c(0,0); w <- c(1,0); ratio <- 1; angle <- 45
shear <- Shear$new(P, w, ratio, angle)
wt <- ratio * c(-w[2], w[1])
Q <- P + w; R <- Q + wt; S <- P + wt
A <- shear$transform(P)
B <- shear$transform(Q)
C <- shear$transform(R)
D <- shear$transform(S)
plot(0, 0, type = "n", asp = 1, xlim = c(0,1), ylim = c(0,2))
lines(rbind(P,Q,R,S,P), lwd = 2) # unit square
lines(rbind(A,B,C,D,A), lwd = 2, col = "blue") # image by the shear
## ------------------------------------------------
## Method `Shear$new`
## ------------------------------------------------
Shear$new(c(1,1), c(1,3), 0.5, 30)
## ------------------------------------------------
## Method `Shear$getMatrix`
## ------------------------------------------------
S <- Shear$new(c(1,1), c(1,3), 0.5, 30)
S$getMatrix()
## ------------------------------------------------
## Method `Shear$asAffine`
## ------------------------------------------------
Shear$new(c(0,0), c(1,0), 1, atan(30), FALSE)$asAffine()
soddyCircle Inner Soddy circle
Description
Inner Soddy circle associated to three circles.
Usage
soddyCircle(circ1, circ2, circ3)
Arguments
circ1, circ2, circ3
distinct circles
Value
A Circle object.
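A minimal sketch with three mutually tangent unit circles (this tangent configuration is assumed here; the result is the small circle tangent to all three):
c1 <- Circle$new(c(0, 0), 1)
c2 <- Circle$new(c(2, 0), 1)
c3 <- Circle$new(c(1, sqrt(3)), 1)
soddyCircle(c1, c2, c3)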
SteinerChain Steiner chain
Description
Return a Steiner chain of circles.
Usage
SteinerChain(c0, n, phi, shift, ellipse = FALSE)
Arguments
c0 exterior circle, a Circle object
n number of circles, not including the inner circle; at least 3
phi -1 < phi < 1 controls the radii of the circles
shift any number; it produces a kind of rotation around the inner circle; values between 0 and n cover all possibilities
ellipse logical; the centers of the circles of the Steiner chain lie on an ellipse, and this
ellipse is returned as an attribute if you set this argument to TRUE
Value
A list of n+1 Circle objects. The inner circle is stored at the last position.
Examples
c0 <- Circle$new(c(1,1), 3)
chain <- SteinerChain(c0, 5, 0.3, 0.5, ellipse = TRUE)
plot(0, 0, type = "n", asp = 1, xlim = c(-4,4), ylim = c(-4,4))
invisible(lapply(chain, draw, lwd = 2, border = "blue"))
draw(c0, lwd = 2)
draw(attr(chain, "ellipse"), lwd = 2, border = "red")
Translation R6 class representing a translation
Description
A translation is given by a vector v.
Active bindings
v get or set the vector of translation
Methods
Public methods:
• Translation$new()
• Translation$print()
• Translation$project()
• Translation$transform()
• Translation$translateLine()
• Translation$transformLine()
• Translation$translateEllipse()
• Translation$transformEllipse()
• Translation$getMatrix()
• Translation$asAffine()
• Translation$clone()
Method new(): Create a new Translation object.
Usage:
Translation$new(v)
Arguments:
v a numeric vector of length two, the vector of translation
Returns: A new Translation object.
Method print(): Show instance of a translation object.
Usage:
Translation$print(...)
Arguments:
... ignored
Method project(): Transform a point or several points by the reference translation.
Usage:
Translation$project(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Method transform(): An alias of project.
Usage:
Translation$transform(M)
Arguments:
M a point or a two-column matrix of points, one point per row
Method translateLine(): Translate a line.
Usage:
Translation$translateLine(line)
Arguments:
line a Line object
Returns: A Line object.
Method transformLine(): An alias of translateLine.
Usage:
Translation$transformLine(line)
Arguments:
line a Line object
Returns: A Line object.
Method translateEllipse(): Translate a circle or an ellipse.
Usage:
Translation$translateEllipse(ell)
Arguments:
ell an Ellipse object or a Circle object
Returns: An Ellipse object or a Circle object.
Method transformEllipse(): An alias of translateEllipse.
Usage:
Translation$transformEllipse(ell)
Arguments:
ell an Ellipse object or a Circle object
Returns: An Ellipse object or a Circle object.
Method getMatrix(): Augmented matrix of the translation.
Usage:
Translation$getMatrix()
Returns: A 3x3 matrix.
Method asAffine(): Convert the reference translation to an Affine object.
Usage:
Translation$asAffine()
Method clone(): The objects of this class are cloneable with this method.
Usage:
Translation$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
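A minimal usage sketch:
v <- c(2, -1)
transl <- Translation$new(v)
transl$transform(c(0, 0)) # should be c(2, -1)
l <- Line$new(c(0, 0), c(1, 1))
transl$translateLine(l) # the same line shifted by v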
Triangle R6 class representing a triangle
Description
A triangle has three vertices. They are named A, B, C.
Active bindings
A get or set the vertex A
B get or set the vertex B
C get or set the vertex C
Methods
Public methods:
• Triangle$new()
• Triangle$print()
• Triangle$flatness()
• Triangle$a()
• Triangle$b()
• Triangle$c()
• Triangle$edges()
• Triangle$perimeter()
• Triangle$orientation()
• Triangle$contains()
• Triangle$isAcute()
• Triangle$angleA()
• Triangle$angleB()
• Triangle$angleC()
• Triangle$angles()
• Triangle$X175()
• Triangle$VeldkampIsoperimetricPoint()
• Triangle$centroid()
• Triangle$orthocenter()
• Triangle$area()
• Triangle$incircle()
• Triangle$inradius()
• Triangle$incenter()
• Triangle$excircles()
• Triangle$excentralTriangle()
• Triangle$BevanPoint()
• Triangle$medialTriangle()
• Triangle$orthicTriangle()
• Triangle$incentralTriangle()
• Triangle$NagelTriangle()
• Triangle$NagelPoint()
• Triangle$GergonneTriangle()
• Triangle$GergonnePoint()
• Triangle$tangentialTriangle()
• Triangle$symmedialTriangle()
• Triangle$symmedianPoint()
• Triangle$circumcircle()
• Triangle$circumcenter()
• Triangle$circumradius()
• Triangle$BrocardCircle()
• Triangle$BrocardPoints()
• Triangle$LemoineCircleI()
• Triangle$LemoineCircleII()
• Triangle$LemoineTriangle()
• Triangle$LemoineCircleIII()
• Triangle$ParryCircle()
• Triangle$outerSoddyCircle()
• Triangle$pedalTriangle()
• Triangle$CevianTriangle()
• Triangle$MalfattiCircles()
• Triangle$AjimaMalfatti1()
• Triangle$AjimaMalfatti2()
• Triangle$equalDetourPoint()
• Triangle$trilinearToPoint()
• Triangle$pointToTrilinear()
• Triangle$isogonalConjugate()
• Triangle$rotate()
• Triangle$translate()
• Triangle$SteinerEllipse()
• Triangle$SteinerInellipse()
• Triangle$MandartInellipse()
• Triangle$randomPoints()
• Triangle$hexylTriangle()
• Triangle$plot()
• Triangle$clone()
Method new(): Create a new Triangle object.
Usage:
Triangle$new(A, B, C)
Arguments:
A, B, C vertices
Returns: A new Triangle object.
Examples:
t <- Triangle$new(c(0,0), c(1,0), c(1,1))
t
t$C
t$C <- c(2,2)
t
Method print(): Show instance of a triangle object
Usage:
Triangle$print(...)
Arguments:
... ignored
Examples:
Triangle$new(c(0,0), c(1,0), c(1,1))
Method flatness(): Flatness of the triangle.
Usage:
Triangle$flatness()
Returns: A number between 0 and 1. A triangle is flat when its flatness is 1.
Method a(): Length of the side BC.
Usage:
Triangle$a()
Method b(): Length of the side AC.
Usage:
Triangle$b()
Method c(): Length of the side AB.
Usage:
Triangle$c()
Method edges(): The lengths of the sides of the triangle.
Usage:
Triangle$edges()
Returns: A named numeric vector.
Method perimeter(): Perimeter of the triangle.
Usage:
Triangle$perimeter()
Returns: The perimeter of the triangle.
Method orientation(): Determine the orientation of the triangle.
Usage:
Triangle$orientation()
Returns: An integer: 1 for counterclockwise, -1 for clockwise, 0 for collinear.
Method contains(): Determine whether a point lies inside the reference triangle.
Usage:
Triangle$contains(M)
Arguments:
M a point
Method isAcute(): Determines whether the reference triangle is acute.
Usage:
Triangle$isAcute()
Returns: TRUE if the triangle is acute (or right), FALSE otherwise.
Method angleA(): Angle at the vertex A.
Usage:
Triangle$angleA()
Returns: The angle at the vertex A in radians.
Method angleB(): Angle at the vertex B.
Usage:
Triangle$angleB()
Returns: The angle at the vertex B in radians.
Method angleC(): Angle at the vertex C.
Usage:
Triangle$angleC()
Returns: The angle at the vertex C in radians.
Method angles(): The three angles of the triangle.
Usage:
Triangle$angles()
Returns: A named vector containing the values of the angles in radians.
Method X175(): Isoperimetric point, also known as the X(175) triangle center; this is the center
of the outer Soddy circle.
Usage:
Triangle$X175()
Method VeldkampIsoperimetricPoint(): Isoperimetric point in the sense of Veldkamp.
Usage:
Triangle$VeldkampIsoperimetricPoint()
Returns: The isoperimetric point in the sense of Veldkamp, if it exists. Otherwise, returns NULL.
Method centroid(): Centroid.
Usage:
Triangle$centroid()
Method orthocenter(): Orthocenter.
Usage:
Triangle$orthocenter()
Method area(): Area of the triangle.
Usage:
Triangle$area()
Method incircle(): Incircle of the triangle.
Usage:
Triangle$incircle()
Returns: A Circle object.
Method inradius(): Inradius of the reference triangle.
Usage:
Triangle$inradius()
Method incenter(): Incenter of the reference triangle.
Usage:
Triangle$incenter()
Method excircles(): Excircles of the triangle.
Usage:
Triangle$excircles()
Returns: A list with the three excircles, Circle objects.
Method excentralTriangle(): Excentral triangle of the reference triangle.
Usage:
Triangle$excentralTriangle()
Returns: A Triangle object.
Method BevanPoint(): Bevan point. This is the circumcenter of the excentral triangle.
Usage:
Triangle$BevanPoint()
Method medialTriangle(): Medial triangle. Its vertices are the mid-points of the sides of the
reference triangle.
Usage:
Triangle$medialTriangle()
Method orthicTriangle(): Orthic triangle. Its vertices are the feet of the altitudes of the
reference triangle.
Usage:
Triangle$orthicTriangle()
Method incentralTriangle(): Incentral triangle.
Usage:
Triangle$incentralTriangle()
Details: It is the triangle whose vertices are the intersections of the reference triangle’s angle
bisectors with the respective opposite sides.
Returns: A Triangle object.
Method NagelTriangle(): Nagel triangle (or extouch triangle) of the reference triangle.
Usage:
Triangle$NagelTriangle(NagelPoint = FALSE)
Arguments:
NagelPoint logical, whether to return the Nagel point as attribute
Returns: A Triangle object.
Examples:
t <- Triangle$new(c(0,-2), c(0.5,1), c(3,0.6))
lineAB <- Line$new(t$A, t$B)
lineAC <- Line$new(t$A, t$C)
lineBC <- Line$new(t$B, t$C)
NagelTriangle <- t$NagelTriangle(NagelPoint = TRUE)
NagelPoint <- attr(NagelTriangle, "Nagel point")
excircles <- t$excircles()
opar <- par(mar = c(0,0,0,0))
plot(0, 0, type="n", asp = 1, xlim = c(-1,5), ylim = c(-3,3),
xlab = NA, ylab = NA, axes = FALSE)
draw(t, lwd = 2)
draw(lineAB); draw(lineAC); draw(lineBC)
draw(excircles$A, border = "orange")
draw(excircles$B, border = "orange")
draw(excircles$C, border = "orange")
draw(NagelTriangle, lwd = 2, col = "red")
draw(Line$new(t$A, NagelTriangle$A, FALSE, FALSE), col = "blue")
draw(Line$new(t$B, NagelTriangle$B, FALSE, FALSE), col = "blue")
draw(Line$new(t$C, NagelTriangle$C, FALSE, FALSE), col = "blue")
points(rbind(NagelPoint), pch = 19)
par(opar)
Method NagelPoint(): Nagel point of the triangle.
Usage:
Triangle$NagelPoint()
Method GergonneTriangle(): Gergonne triangle of the reference triangle.
Usage:
Triangle$GergonneTriangle(GergonnePoint = FALSE)
Arguments:
GergonnePoint logical, whether to return the Gergonne point as an attribute
Details: The Gergonne triangle is also known as the intouch triangle or the contact triangle.
This is the triangle made of the three tangency points of the incircle.
Returns: A Triangle object.
Method GergonnePoint(): Gergonne point of the reference triangle.
Usage:
Triangle$GergonnePoint()
Method tangentialTriangle(): Tangential triangle of the reference triangle. This is the
triangle formed by the lines tangent to the circumcircle of the reference triangle at its vertices. It
does not exist for a right triangle.
Usage:
Triangle$tangentialTriangle()
Returns: A Triangle object.
Method symmedialTriangle(): Symmedial triangle of the reference triangle.
Usage:
Triangle$symmedialTriangle()
Returns: A Triangle object.
Examples:
t <- Triangle$new(c(0,-2), c(0.5,1), c(3,0.6))
symt <- t$symmedialTriangle()
symmedianA <- Line$new(t$A, symt$A, FALSE, FALSE)
symmedianB <- Line$new(t$B, symt$B, FALSE, FALSE)
symmedianC <- Line$new(t$C, symt$C, FALSE, FALSE)
K <- t$symmedianPoint()
opar <- par(mar = c(0,0,0,0))
plot(NULL, asp = 1, xlim = c(-1,5), ylim = c(-3,3),
xlab = NA, ylab = NA, axes = FALSE)
draw(t, lwd = 2)
draw(symmedianA, lwd = 2, col = "blue")
draw(symmedianB, lwd = 2, col = "blue")
draw(symmedianC, lwd = 2, col = "blue")
points(rbind(K), pch = 19, col = "red")
par(opar)
Method symmedianPoint(): Symmedian point of the reference triangle.
Usage:
Triangle$symmedianPoint()
Returns: A point.
Method circumcircle(): Circumcircle of the reference triangle.
Usage:
Triangle$circumcircle()
Returns: A Circle object.
Method circumcenter(): Circumcenter of the reference triangle.
Usage:
Triangle$circumcenter()
Method circumradius(): Circumradius of the reference triangle.
Usage:
Triangle$circumradius()
Method BrocardCircle(): The Brocard circle of the reference triangle (also known as the
seven-point circle).
Usage:
Triangle$BrocardCircle()
Returns: A Circle object.
Method BrocardPoints(): Brocard points of the reference triangle.
Usage:
Triangle$BrocardPoints()
Returns: A list of two points, the first Brocard point and the second Brocard point.
Method LemoineCircleI(): The first Lemoine circle of the reference triangle.
Usage:
Triangle$LemoineCircleI()
Returns: A Circle object.
Method LemoineCircleII(): The second Lemoine circle of the reference triangle (also known
as the cosine circle)
Usage:
Triangle$LemoineCircleII()
Returns: A Circle object.
Method LemoineTriangle(): The Lemoine triangle of the reference triangle.
Usage:
Triangle$LemoineTriangle()
Returns: A Triangle object.
Method LemoineCircleIII(): The third Lemoine circle of the reference triangle.
Usage:
Triangle$LemoineCircleIII()
Returns: A Circle object.
Method ParryCircle(): Parry circle of the reference triangle.
Usage:
Triangle$ParryCircle()
Returns: A Circle object.
Method outerSoddyCircle(): Soddy outer circle of the reference triangle.
Usage:
Triangle$outerSoddyCircle()
Returns: A Circle object.
Method pedalTriangle(): Pedal triangle of a point with respect to the reference triangle. The
pedal triangle of a point P is the triangle whose vertices are the feet of the perpendiculars from P
to the sides of the reference triangle.
Usage:
Triangle$pedalTriangle(P)
Arguments:
P a point
Returns: A Triangle object.
Method CevianTriangle(): Cevian triangle of a point with respect to the reference triangle.
Usage:
Triangle$CevianTriangle(P)
Arguments:
P a point
Returns: A Triangle object.
Method MalfattiCircles(): Malfatti circles of the triangle.
Usage:
Triangle$MalfattiCircles(tangencyPoints = FALSE)
Arguments:
tangencyPoints logical, whether to return the tangency points of the Malfatti circles as an attribute.
Returns: A list with the three Malfatti circles, Circle objects.
Examples:
t <- Triangle$new(c(0,0), c(2,0.5), c(1.5,2))
Mcircles <- t$MalfattiCircles(TRUE)
plot(NULL, asp = 1, xlim = c(0,2.5), ylim = c(0,2.5),
xlab = NA, ylab = NA)
grid()
draw(t, col = "blue", lwd = 2)
invisible(lapply(Mcircles, draw, col = "green", border = "red"))
invisible(lapply(attr(Mcircles, "tangencyPoints"), function(P){
points(P[1], P[2], pch = 19)
}))
Method AjimaMalfatti1(): First Ajima-Malfatti point of the triangle.
Usage:
Triangle$AjimaMalfatti1()
Method AjimaMalfatti2(): Second Ajima-Malfatti point of the triangle.
Usage:
Triangle$AjimaMalfatti2()
Method equalDetourPoint(): Equal detour point of the triangle.
Usage:
Triangle$equalDetourPoint(detour = FALSE)
Arguments:
detour logical, whether to return the detour as an attribute
Details: Also known as the X(176) triangle center.
Method trilinearToPoint(): Point given by trilinear coordinates.
Usage:
Triangle$trilinearToPoint(x, y, z)
Arguments:
x, y, z trilinear coordinates
Returns: The point with trilinear coordinates x:y:z with respect to the reference triangle.
Examples:
t <- Triangle$new(c(0,0), c(2,1), c(5,7))
incircle <- t$incircle()
t$trilinearToPoint(1, 1, 1)
incircle$center
Method pointToTrilinear(): Give the trilinear coordinates of a point with respect to the
reference triangle.
Usage:
Triangle$pointToTrilinear(P)
Arguments:
P a point
Returns: The trilinear coordinates, a numeric vector of length 3.
Method isogonalConjugate(): Isogonal conjugate of a point with respect to the reference
triangle.
Usage:
Triangle$isogonalConjugate(P)
Arguments:
P a point
Returns: A point, the isogonal conjugate of P.
Method rotate(): Rotate the triangle.
Usage:
Triangle$rotate(alpha, O, degrees = TRUE)
Arguments:
alpha angle of rotation
O center of rotation
degrees logical, whether alpha is given in degrees
Returns: A Triangle object.
Method translate(): Translate the triangle.
Usage:
Triangle$translate(v)
Arguments:
v the vector of translation
Returns: A Triangle object.
Method SteinerEllipse(): The Steiner ellipse (or circumellipse) of the reference triangle.
This is the ellipse passing through the three vertices of the triangle and centered at the centroid of
the triangle.
Usage:
Triangle$SteinerEllipse()
Returns: An Ellipse object.
Examples:
t <- Triangle$new(c(0,0), c(2,0.5), c(1.5,2))
ell <- t$SteinerEllipse()
plot(NULL, asp = 1, xlim = c(0,2.5), ylim = c(-0.7,2.4),
xlab = NA, ylab = NA)
draw(t, col = "blue", lwd = 2)
draw(ell, border = "red", lwd =2)
Method SteinerInellipse(): The Steiner inellipse (or midpoint ellipse) of the reference triangle. This is the ellipse tangent to the sides of the triangle at their midpoints, and centered at the centroid of the triangle.
Usage:
Triangle$SteinerInellipse()
Returns: An Ellipse object.
Examples:
t <- Triangle$new(c(0,0), c(2,0.5), c(1.5,2))
ell <- t$SteinerInellipse()
plot(NULL, asp = 1, xlim = c(0,2.5), ylim = c(-0.1,2.4),
xlab = NA, ylab = NA)
draw(t, col = "blue", lwd = 2)
draw(ell, border = "red", lwd =2)
Method MandartInellipse(): The Mandart inellipse of the reference triangle. This is the unique ellipse tangent to the triangle's sides at the contact points of its excircles.
Usage:
Triangle$MandartInellipse()
Returns: An Ellipse object.
Method randomPoints(): Random points on or in the reference triangle.
Usage:
Triangle$randomPoints(n, where = "in")
Arguments:
n an integer, the desired number of points
where "in" to generate inside the triangle, "on" to generate on the sides of the triangle
Returns: The generated points in a two columns matrix with n rows.
Method hexylTriangle(): Hexyl triangle.
Usage:
Triangle$hexylTriangle()
Method plot(): Plot a Triangle object.
Usage:
Triangle$plot(add = FALSE, ...)
Arguments:
add Boolean, whether to add the plot to the current plot
... named arguments passed to polygon
Returns: Nothing, called for plotting only.
Examples:
trgl <- Triangle$new(c(0, 0), c(1, 0), c(0.5, sqrt(3)/2))
trgl$plot(col = "yellow", border = "red")
Method clone(): The objects of this class are cloneable with this method.
Usage:
Triangle$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Note
The Steiner ellipse is also the smallest area ellipse which passes through the vertices of the triangle,
and thus can be obtained with the function EllipseFromThreeBoundaryPoints. We can also note
that the major axis of the Steiner ellipse is the Deming least squares line of the three triangle vertices.
See Also
TriangleThreeLines to define a triangle by three lines.
Examples
# incircle and excircles
A <- c(0,0); B <- c(1,2); C <- c(3.5,1)
t <- Triangle$new(A, B, C)
incircle <- t$incircle()
excircles <- t$excircles()
JA <- excircles$A$center
JB <- excircles$B$center
JC <- excircles$C$center
JAJBJC <- Triangle$new(JA, JB, JC)
A_JA <- Line$new(A, JA, FALSE, FALSE)
B_JB <- Line$new(B, JB, FALSE, FALSE)
C_JC <- Line$new(C, JC, FALSE, FALSE)
opar <- par(mar = c(0,0,0,0))
plot(NULL, asp = 1, xlim = c(0,6), ylim = c(-4,4),
xlab = NA, ylab = NA, axes = FALSE)
draw(t, lwd = 2)
draw(incircle, border = "orange")
draw(excircles$A); draw(excircles$B); draw(excircles$C)
draw(JAJBJC, col = "blue")
draw(A_JA, col = "green")
draw(B_JB, col = "green")
draw(C_JC, col = "green")
par(opar)
## ------------------------------------------------
## Method `Triangle$new`
## ------------------------------------------------
t <- Triangle$new(c(0,0), c(1,0), c(1,1))
t
t$C
t$C <- c(2,2)
t
## ------------------------------------------------
## Method `Triangle$print`
## ------------------------------------------------
Triangle$new(c(0,0), c(1,0), c(1,1))
## ------------------------------------------------
## Method `Triangle$NagelTriangle`
## ------------------------------------------------
t <- Triangle$new(c(0,-2), c(0.5,1), c(3,0.6))
lineAB <- Line$new(t$A, t$B)
lineAC <- Line$new(t$A, t$C)
lineBC <- Line$new(t$B, t$C)
NagelTriangle <- t$NagelTriangle(NagelPoint = TRUE)
NagelPoint <- attr(NagelTriangle, "Nagel point")
excircles <- t$excircles()
opar <- par(mar = c(0,0,0,0))
plot(0, 0, type="n", asp = 1, xlim = c(-1,5), ylim = c(-3,3),
xlab = NA, ylab = NA, axes = FALSE)
draw(t, lwd = 2)
draw(lineAB); draw(lineAC); draw(lineBC)
draw(excircles$A, border = "orange")
draw(excircles$B, border = "orange")
draw(excircles$C, border = "orange")
draw(NagelTriangle, lwd = 2, col = "red")
draw(Line$new(t$A, NagelTriangle$A, FALSE, FALSE), col = "blue")
draw(Line$new(t$B, NagelTriangle$B, FALSE, FALSE), col = "blue")
draw(Line$new(t$C, NagelTriangle$C, FALSE, FALSE), col = "blue")
points(rbind(NagelPoint), pch = 19)
par(opar)
## ------------------------------------------------
## Method `Triangle$symmedialTriangle`
## ------------------------------------------------
t <- Triangle$new(c(0,-2), c(0.5,1), c(3,0.6))
symt <- t$symmedialTriangle()
symmedianA <- Line$new(t$A, symt$A, FALSE, FALSE)
symmedianB <- Line$new(t$B, symt$B, FALSE, FALSE)
symmedianC <- Line$new(t$C, symt$C, FALSE, FALSE)
K <- t$symmedianPoint()
opar <- par(mar = c(0,0,0,0))
plot(NULL, asp = 1, xlim = c(-1,5), ylim = c(-3,3),
xlab = NA, ylab = NA, axes = FALSE)
draw(t, lwd = 2)
draw(symmedianA, lwd = 2, col = "blue")
draw(symmedianB, lwd = 2, col = "blue")
draw(symmedianC, lwd = 2, col = "blue")
points(rbind(K), pch = 19, col = "red")
par(opar)
## ------------------------------------------------
## Method `Triangle$MalfattiCircles`
## ------------------------------------------------
t <- Triangle$new(c(0,0), c(2,0.5), c(1.5,2))
Mcircles <- t$MalfattiCircles(TRUE)
plot(NULL, asp = 1, xlim = c(0,2.5), ylim = c(0,2.5),
xlab = NA, ylab = NA)
grid()
draw(t, col = "blue", lwd = 2)
invisible(lapply(Mcircles, draw, col = "green", border = "red"))
invisible(lapply(attr(Mcircles, "tangencyPoints"), function(P){
points(P[1], P[2], pch = 19)
}))
## ------------------------------------------------
## Method `Triangle$trilinearToPoint`
## ------------------------------------------------
t <- Triangle$new(c(0,0), c(2,1), c(5,7))
incircle <- t$incircle()
t$trilinearToPoint(1, 1, 1)
incircle$center
## ------------------------------------------------
## Method `Triangle$SteinerEllipse`
## ------------------------------------------------
t <- Triangle$new(c(0,0), c(2,0.5), c(1.5,2))
ell <- t$SteinerEllipse()
plot(NULL, asp = 1, xlim = c(0,2.5), ylim = c(-0.7,2.4),
xlab = NA, ylab = NA)
draw(t, col = "blue", lwd = 2)
draw(ell, border = "red", lwd = 2)
## ------------------------------------------------
## Method `Triangle$SteinerInellipse`
## ------------------------------------------------
t <- Triangle$new(c(0,0), c(2,0.5), c(1.5,2))
ell <- t$SteinerInellipse()
plot(NULL, asp = 1, xlim = c(0,2.5), ylim = c(-0.1,2.4),
xlab = NA, ylab = NA)
draw(t, col = "blue", lwd = 2)
draw(ell, border = "red", lwd = 2)
## ------------------------------------------------
## Method `Triangle$plot`
## ------------------------------------------------
trgl <- Triangle$new(c(0, 0), c(1, 0), c(0.5, sqrt(3)/2))
trgl$plot(col = "yellow", border = "red")
TriangleThreeLines Triangle defined by three lines
Description
Return the triangle formed by three lines.
Usage
TriangleThreeLines(line1, line2, line3)
Arguments
line1, line2, line3
Line objects
Value
A Triangle object.
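A minimal sketch (assumed, using the Line class shown in the examples above; not taken from the manual):
l1 <- Line$new(c(0,0), c(1,0))
l2 <- Line$new(c(0,0), c(0,1))
l3 <- Line$new(c(1,0), c(0,1))
TriangleThreeLines(l1, l2, l3) #should recover the triangle with vertices (0,0), (1,0), (0,1)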
unitCircle Unit circle
Description
Circle centered at the origin with radius 1.
Usage
unitCircle
Format
An object of class Circle (inherits from R6) of length 25. |
dineq | cran | R | Package ‘dineq’
October 13, 2022
Type Package
Title Decomposition of (Income) Inequality
Version 0.1.0
Date 2018-06-19
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Decomposition of (income) inequality by population sub groups.
For a decomposition on a single variable the mean log deviation can be used
(see Mookherjee Shorrocks (1982) <DOI:10.2307/2232673>).
For a decomposition on multiple variables a regression based technique can be
used (see Fields (2003) <DOI:10.1016/s0147-9121(03)22001-x>).
Recentered influence function regression for marginal effects of the (income
or wealth) distribution (see Firpo et al. (2009) <DOI:10.3982/ECTA6822>).
Some extensions to inequality functions to handle weights and/or missings.
Depends R (>= 2.10)
Imports boot (>= 1.3-20), Hmisc (>= 4.0-3)
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 6.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2018-06-27 15:30:30 UTC
R topics documented:
dineq_change_rb
dineq_rb
gini.wtd
gini_decomp
mex_inc_2008
mex_inc_2016
mld.wtd
mld_change
mld_decomp
ntiles.wtd
polar.wtd
rif
rifr
rifrSE
theil.wtd
dineq_change_rb Decomposition of the change in inequality
Description
Decomposition of the change in (income) inequality into multiple characteristics, divided by a price
and a quantity effect.
Usage
dineq_change_rb(formula1, weights1 = NULL, data1, formula2, weights2 = NULL,
data2)
Arguments
formula1 an object of class "formula" (or one that can be coerced to that class) for the first
year/dataset: a symbolic description of the model to be fitted in the ordinary
least squares regression.
weights1 an optional vector of weights to be used in the fitting process. Should be NULL
or a numeric vector. Should be inside selected data frame in the function and
between quotation marks.
data1 a data frame containing the variables for the first year/dataset in the model.
formula2 an object of class "formula" (or one that can be coerced to that class) for the second
year/dataset: a symbolic description of the model to be fitted in the ordinary
least squares regression.
weights2 an optional vector of weights to be used in the fitting process. Should be NULL
or a numeric vector. Should be inside selected data frame in the function and
between quotation marks.
data2 a data frame containing the variables for the second year/dataset in the model.
Details
This function uses a multivariate regression-based decomposition method. Multiple characteristics
can be added to the function in order to calculate the contribution of each individual variable (in-
cluding the residual) to the change of the inequality. For instance socio-economic, demographic
and geographic characteristics (such as age, household composition, gender, region, education) of
the household or the individual can be added.
The change decomposition is divided into a price and a quantity effect for each characteristic. The
quantity effect is caused by changes in the relative size of subgroups (for instance: a higher per-
centage of elderly households). The price effect is caused by a change in the influence of the
characteristic on the dependent variable (for instance a higher income for the elderly households).
It uses a logarithmic transformation of the values of the dependent variable. Therefore it cannot
handle negative or zero values. Those are excluded from the computation in this function.
The decomposition can only be used on the variance of log income.
The main difference with the decomposition of the change of the mean log deviation is that multiple
characteristics can be analyzed at the same time, whereas that decomposition function handles only
one characteristic at a time.
The function uses two datasets, one for each year to compare. Note that the characteristics should
be the same in both formulas (although they can be named differently) and appear in the same order.
Value
a list with the results of the decomposition and the parts used for the decomposition, containing the
following components:
attention an optional note on differences between the two inputs.
variance_logincome
the values of the variance of log income of both years/datasets and difference
between both.
decomposition_inequality
the (relative) decomposition of the inequality of both years/datasets into the dif-
ferent variables. See function ’rb_decomp’.
decomposition_change_absolute
decomposition of the change in the variance of log income into the different vari-
ables and residual split into price and quantity effects. Adds up to the absolute
change in variance of log income.
decomposition_change_relative
decomposition of the change in the variance of log income into the different
variables and residual split into price and quantity effects. Adds up to 100 per-
cent.
notes number of zero or negative observations in both data sets/years. The function
uses a logarithmic transformation of x as input for the regression. Therefore
these observations are deleted from the analysis
References
<NAME>. (2006) Earnings Inequality in USA, 1969–99: Comparing Inequality Using Earnings
Equations, Review of Income and Wealth, 52 (1): p. 127–144.
Fields, G. (2003) Accounting for income inequality and its change: a new method, with application
to the distribution of earnings in the United States, Research in Labor Economics, 22, p. 1–38.
<NAME>., and <NAME> (2016) Accounting for Changes in Income Inequality: Decompo-
sition Analyses for the UK, 1978–2008. Oxford Bulletin of economics and statistics, 78 (3), p.
289-322,
See Also
dineq_rb
Examples
#Decomposition of the change in income inequality into 4 variables using the Mexican Income
#data set
data(mex_inc_2008)
inequality_change <- dineq_change_rb(formula1=income~hh_structure+education+domicile_size+age_cat,
weights1="factor",data1=mex_inc_2008, formula2=income~hh_structure+education+
domicile_size+age_cat, weights2="factor",data2=mex_inc_2016)
#selection of output: change in variance of log income decomposed in variables split into price
#and quantity effect and residual.
inequality_change["decomposition_change_absolute"]
#selection of output: relative change in variance of log income decomposed in variables split
#into price and quantity effect and residual. Because of negative change in variance of log
#income, the negative contribution of education (quantity) becomes a positive number.
inequality_change["decomposition_change_relative"]
dineq_rb Regression-based decomposition of inequality
Description
Decomposition of (income) inequality into multiple characteristics. A regression-based decompo-
sition method is used.
Usage
dineq_rb(formula, weights = NULL, data)
Arguments
formula an object of class "formula" (or one that can be coerced to that class): a symbolic
description of the model to be fitted in the ordinary least squares regression.
weights an optional vector of weights to be used in the fitting process. Should be NULL
or a numeric vector. Should be inside selected data frame in the function and
between quotation marks.
data a data frame containing the variables in the model.
Details
This function uses a multivariate regression-based decomposition method. Multiple variables can
be added to the function in order to calculate the contribution of each individual variable (including
the residual) to the inequality. For instance socio-economic, demographic and geographic charac-
teristics (such as age, household composition, gender, region, education) of the household or the
individual can be added.
This decomposition can be used on a broad range of inequality measure, like Gini, Theil, mean log
deviation, Atkinson index and variance of log income.
It uses a logarithmic transformation of the values of the dependent variable. Therefore it cannot
handle negative or zero values. Those are excluded from the computation in this function.
The main difference with the decomposition of the mean log deviation or Gini coefficient is that
multiple characteristics can be analyzed at the same time, whereas the other decomposition functions
handle only one characteristic at a time.
Value
a list with the results of the decomposition, containing the following components:
inequality_measures
the values of 4 inequality measures: gini, mean log deviation, theil and variance
of log income
decomposition_inequality
the (relative) decomposition of the inequality into the different variables
regression_results
results of the ols regression which is used to make the decomposition of inequal-
ity
note number of zero or negative observations. The function uses a logarithmic trans-
formation of x as input for the regression. Therefore these observations are
deleted from the analysis
References
Fields, <NAME>. (2003). ‘Accounting for income inequality and its change: a new method, with ap-
plication to the distribution of earnings in the United States’, Research in Labor Economics, 22, p.
1–38.
<NAME>., and <NAME> (2016) Accounting for Changes in Income Inequality: Decompo-
sition Analyses for the UK, 1978–2008. Oxford Bulletin of economics and statistics, 78 (3), p.
289-322,
See Also
dineq_change_rb
Examples
#Decomposition of the income inequality into 4 variables using Mexican Income data set:
data(mex_inc_2008)
inequality_decomp <- dineq_rb(income~hh_structure+education+domicile_size+age_cat,
weights="factor", data=mex_inc_2008)
#selection of the output: decomposition of the inequality into the contribution of the
#different variables and residual (adds up to 100 percent)
inequality_decomp["decomposition_inequality"]
gini.wtd Gini coefficient
Description
Returns the (optional weighted) Gini coefficient for a vector.
Usage
gini.wtd(x, weights = NULL)
Arguments
x a numeric vector containing at least non-negative elements.
weights an optional vector of weights of x to be used in the computation of the Gini
coefficient. Should be NULL or a numeric vector.
Details
The Gini coefficient is a measure of inequality among values of a distribution and the most widely
used single measure of income inequality. The coefficient theoretically ranges between 0 and 1, with
1 being the highest possible inequality (for instance: one person in a society has all the income; the
others none). Coefficients that are negative or greater than 1 are nevertheless possible when the
distribution contains negative values. Compared to other measures of inequality, the Gini coefficient
is especially sensitive to changes in the middle of the distribution.
Extension of the gini function in the reldist package in order to handle missings.
Value
The value of the Gini coefficient.
Source
<NAME>. (2016), Relative Distribution Methods. Version 1.6-6. Project home page at http://www.stat.ucla.edu/~handcoc
References
<NAME>. and <NAME>. (2009) Handbook on poverty and inequality, Washington, DC:
World Bank.
Cowell F. (2000) Measurement of Inequality. In Atkinson A. and Bourguignon F. (eds.) Handbook
of Income Distribution. Amsterdam: Elsevier, p. 87-166.
Examples
#calculate Gini coefficient using Mexican Income data set
data(mex_inc_2008)
#unweighted Gini coefficient:
gini.wtd(mex_inc_2008$income)
#weighted Gini coefficient:
gini.wtd(x=mex_inc_2008$income, weights=mex_inc_2008$factor)
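As a hedged sanity check (not from the package manual): in a distribution where one of n observations
holds everything, the Gini coefficient equals (n-1)/n.
#one person holds all income: Gini = (n-1)/n = 0.75 for n = 4
gini.wtd(c(0, 0, 0, 1))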
gini_decomp Decomposition of the Gini coefficient
Description
Decomposes the Gini coefficient into population subgroups. Distinction is made by between and
within group inequality and an overlap (interaction) term.
Usage
gini_decomp(x, z, weights = NULL)
Arguments
x a numeric vector containing at least non-negative elements.
z a factor containing the population sub groups.
weights an optional vector of weights of x to be used in the computation of the decom-
position. Should be NULL or a numeric vector.
Details
The Gini coefficient is decomposed into between-group and within-group inequality. In most cases
the distributions of the subgroups overlap; the consequence is that between-group and within-group
inequality do not add up to the total Gini coefficient. In those cases there is an overlap term, also
referred to as the interaction effect.
Within-group inequality is calculated by using the Gini coefficient for each subgroup; between-group
inequality by using the Gini coefficient of the subgroup averages.
Value
a list with the results of the decomposition and the parts used for the decomposition, containing the
following components:
gini_decomp a list containing the decomposition: gini_total (value of the gini coefficient of x),
gini_within (value of within-group inequality), gini_between (value of between-
group inequality) and gini_overlap (value of overlap in inequality)
gini_group a list containing gini_group (the gini coefficients of the different subgroups) and
gini_group_contribution (the contribution of the subgroups to the total within-
group inequality: adds up to gini_within)
mean a list containing the means of x: mean_total (value of the mean of x of all
subgroups combined) and mean_group (value of the mean of x of the individual
subgroups)
share_groups the distribution of the subgroups z
share_income_groups
the distribution of vector x by subgroups z
number_cases a list containing the number of cases in total, by subgroup (weighted and un-
weighted): n_unweighted (total number of unweighted x), n_weighted (total
number of weighted x), n_group_unweighted (number of unweighted x by sub-
group z), n_group_weighted (number of weighted x by subgroup z)
References
<NAME>. and <NAME> (1982) A decomposition analysis of the trend in UK income
inequality, Economic Journal, 92 (368), p. 886-902.
<NAME>. (2000) Measurement of Inequality. In Atkinson A. and Bourguignon F. (eds.) Handbook
of Income Distribution. Amsterdam: Elsevier, p. 87-166.
See Also
mld_decomp
Examples
#Decomposition of the gini coefficient by level of education using Mexican Income data set
data(mex_inc_2008)
education_decomp <- gini_decomp(x=mex_inc_2008$income,z=mex_inc_2008$education,
weights=mex_inc_2008$factor)
#complete output
education_decomp
#Selected output: decomposition into between- and within-group inequality and overlap (interaction)
education_decomp["gini_decomp"]
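A hedged check (assuming the component names listed under Value) that the parts add up to the total:
comp <- education_decomp$gini_decomp
comp$gini_within + comp$gini_between + comp$gini_overlap #should equal comp$gini_total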
mex_inc_2008 Mexican income data 2008
Description
Selection of Mexican income (survey) data and household characteristic for 2008. Extracted from
ENIGH (Household Income and Expenditure Survey).
Usage
data(mex_inc_2008)
Format
A data frame containing 5000 observations and 8 variables (a selection from the original).
hh_number Household ID.
factor Population inflating weights.
income Household income.
hh_structure Household structure, factor with levels unipersonal, nuclear, ampliado, compuesto
and coresidente.
education Highest achieved education of the head of the household, factor with levels Sin in-
struccion, Preescolar, Primaria incompleta, Primaria completa, Secundaria incompleta, Se-
cundaria completa, Preparatoria incompleta, Preparatoria completa, Profesional incompleta,
Profesional completa, Posgrado.
domicile_size Population of domicile, factor with levels <2500, 2500-15000, 15000-100000, >100000.
age age (integer) of the head of the household.
age_cat age (categorical) of the head of the household, factor with levels <25, 25-34, 35-44, 45-54,
55-64, 65-74, >=75.
Details
This data set is a selection of the original dataset of the National Institute of Statistics and Geography
in Mexico (INEGI). The original contains 29468 observations and 129 variables with information
on the income and household characteristics in Mexico. This selection is only meant to be used
as a calculation example for the functions in this package. Results will not represent the correct
information on the Mexican situation.
Source
http://en.www.inegi.org.mx/proyectos/enchogares/regulares/enigh/nc/2008/default.html, the whole data set can be obtained here.
References
INEGI (2009), ENIGH 2008 Nueva construcción. Ingresos y gastos de los hogares, Aguascalientes:
INEGI.
mex_inc_2016 Mexican income data 2016
Description
Selection of Mexican income (survey) data and household characteristic for 2016. Extracted from
ENIGH (Household Income and Expenditure Survey).
Usage
data(mex_inc_2016)
Format
A data frame containing 5000 observations and 8 variables (a selection from the original).
hh_number Household ID.
factor Population inflating weights.
income Household income.
hh_structure Household structure, factor with levels unipersonal, nuclear, ampliado, compuesto
and coresidente.
education Highest achieved education of the head of the household, factor with levels Sin in-
struccion, Preescolar, Primaria incompleta, Primaria completa, Secundaria incompleta, Se-
cundaria completa, Preparatoria incompleta, Preparatoria completa, Profesional incompleta,
Profesional completa, Posgrado.
domicile_size Population of domicile, factor with levels <2500, 2500-15000, 15000-100000, >100000.
age age (integer) of the head of the household.
age_cat age (categorical) of the head of the household, factor with levels <25, 25-34, 35-44, 45-54,
55-64, 65-74, >=75.
Details
This data set is a selection of the original dataset of the National Institute of Statistics and Geography
in Mexico (INEGI). The original contains 70311 observations and 127 variables with information
on the income and household characteristics in Mexico. This selection is only meant to be used
as a calculation example for the functions in this package. Results will not represent the correct
information on the Mexican situation.
Source
http://en.www.inegi.org.mx/proyectos/enchogares/regulares/enigh/nc/2016/default.html, the whole data set can be obtained here.
References
INEGI (2017), Encuesta Nacional de Ingresos y Gastos de los Hogares 2016. ENIGH. Nueva serie.
Temas, categorías y variables, Aguascalientes: INEGI.
mld.wtd Mean log deviation
Description
Returns the (optional weighted) mean log deviation for a vector.
Usage
mld.wtd(x, weights = NULL)
Arguments
x a numeric vector containing at least non-negative elements.
weights an optional vector of weights of x to be used in the computation of the mean log
deviation. Should be NULL or a numeric vector.
Details
The mean log deviation is a measure of inequality among values of a distribution. It is a member
of the Generalized Entropy Measures, also referred to as GE(0). A value of zero is the lowest
possible inequality; the measure has no upper bound for the highest inequality. It uses a logarithmic
transformation of the values of the distribution and therefore cannot handle negative or zero values;
those are excluded from the computation in this function. The mean log deviation is more sensitive
to changes in the lower tail of the distribution.
Extension of the calcGEI function in the IC2 package in order to handle missings.
Value
the value of the mean log deviation index.
Source
<NAME>. (2012). IC2: Inequality and Concentration Indices and Curves. R package version 1.0-1.
https://CRAN.R-project.org/package=IC2
References
<NAME>. and <NAME>. (2009) Handbook on poverty and inequality, Washington, DC:
World Bank.
Cowell F. (2000) Measurement of Inequality. In Atkinson A. and <NAME>. (eds.) Handbook
of Income Distribution. Amsterdam: Elsevier, p. 87-166.
Examples
#calculate mean log deviation using Mexican Income data set
data(mex_inc_2008)
#unweighted mean log deviation:
mld.wtd(mex_inc_2008$income)
#weighted mean log deviation:
mld.wtd(x=mex_inc_2008$income, weights=mex_inc_2008$factor)
mld_change Decomposition of the change of the mean log deviation
Description
Decomposes the change of the mean log deviation between two years/data sets into population
subgroups.
Usage
mld_change(x1, z1, weights1 = NULL, x2, z2, weights2 = NULL)
Arguments
x1 a numeric vector for the first year/dataset containing at least non-negative ele-
ments.
z1 a factor for the first year/dataset containing the population subgroups.
weights1 an optional vector of weights of x for the first year/dataset to be used in the
computation of the decomposition. Should be NULL or a numeric vector.
x2 a numeric vector for the second year/dataset containing at least non-negative
elements.
z2 a factor for the second year/dataset containing the population subgroups.
weights2 an optional vector of weights of x for the second year/dataset to be used in the
computation of the decomposition. Should be NULL or a numeric vector.
Details
The change of the mean log deviation can be decomposed into three components: inequality changes
between groups, inequality changes within groups, and changes in the relative sizes of the groups.
The change of between-group inequality is measured by a change in the relative income of the
subgroups; the change of within-group inequality by adding up all changes in mean log deviation
within the subgroups. The contribution of changes in relative population size affects the change of
both the within-group and between-group components; for the relative contributions those two are
added together.
This method was introduced by Mookherjee and Shorrocks. It is an accurate approximation of the
exact decomposition. It uses a logarithmic transformation of the values of the distribution and
therefore cannot handle negative or zero values; those are excluded from the computation in this
function.
Value
a list with the results of the decomposition and the parts used for the decomposition, containing the
following components:
mld_data1 the value of the mean log deviation index of x for the first year/dataset, and the
decomposition into within-group and between-group inequality
mld_data2 the value of the mean log deviation index of x for the second year/dataset, and
the decomposition into within-group and between-group inequality
mld_difference the difference between the mean log deviation and the decomposition between
the second and first year/dataset
absolute_contributions_difference
decomposition of the absolute change in inequality into: within group changes,
group size changes (split into the effect of within and between group compo-
nents) and between group changes.
relative_contributions_difference
decomposition of the change in inequality into relative contributions of: within
group changes, group size changes and between group changes. Adds up to 100
percent (or -100 percent for negative change)
note number of zero or negative observations in both datasets. The mean log devi-
ation uses a logarithmic transformation of x. Therefore these observations are
deleted from the analysis
References
<NAME>. and <NAME> (1982) A decomposition analysis of the trend in UK income
inequality, Economic Journal, 92 (368), p. 886-902.
<NAME>., and <NAME> (2016) Accounting for Changes in Income Inequality: Decompo-
sition Analyses for the UK, 1978–2008. Oxford Bulletin of economics and statistics, 78 (3), p.
289-322,
See Also
mld_decomp
Examples
#Decomposition of the change in mean log deviation by level of education using
#Mexican Income data set
data(mex_inc_2008)
change_education <- mld_change(x1=mex_inc_2008$income, z1=mex_inc_2008$education,
weights1=mex_inc_2008$factor, x2=mex_inc_2016$income, z2=mex_inc_2016$education,
weights2=mex_inc_2016$factor)
#selection of the output: decomposition of the change into within- and between-group
#contribution and change in de size of groups (adds up to 100 percent)
change_education["relative_contributions_difference"]
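A hedged check (assuming the structure listed under Value) that the relative contributions add up to
100 percent (or -100 percent for a negative change):
sum(unlist(change_education$relative_contributions_difference))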
mld_decomp Decomposition of the mean log deviation
Description
Decomposes the mean log deviation into non overlapping population subgroups. Distinction is
made by between and within group inequality.
Usage
mld_decomp(x, z, weights = NULL)
Arguments
x a numeric vector containing at least non-negative elements.
z a factor containing the population subgroups.
weights an optional vector of weights of x to be used in the computation of the decom-
position. Should be NULL or a numeric vector.
Details
The mean log deviation is decomposed into between-group and within-group inequality. Within-group
inequality is calculated by using the mean log deviation for each subgroup; between-group inequality
by the mean log deviation of the subgroup averages.
It uses a logarithmic transformation of the values of the distribution and therefore cannot handle
negative or zero values; those are excluded from the computation in this function.
Based on the calcGEI function in the IC2 package, extended to handle missings.
Value
a list with the results of the decomposition and the parts used for the decomposition, containing the
following components:
mld_decomp a list containing the decomposition: mld_total (value of the mean log devia-
tion index of x) mld_within (value of within-group inequality) and mld_between
(value of between-group inequality)
mld_group a list containing mld_group (the mean log deviations of the different subgroups)
and mld_group_contribution (the contribution of the subgroups to the total within-
group inequality: adds up to mld_within)
mean a list containing the means of x: mean_total (value of the mean of x of all
subgroups combined) and mean_group (value of the mean of x of the individual
subgroups)
share_groups the distribution of the subgroups z
share_income_groups
the distribution of vector x by subgroups z
number_cases a list containing the number of cases in total, by subgroup (weighted and un-
weighted): n_unweighted (total number of unweighted x), n_weighted (total
number of weighted x), n_group_unweighted (number of unweighted x by sub-
group z), n_group_weighted (number of weighted x by subgroup z)
note number of zero or negative observations. The mean log deviation uses a loga-
rithmic transformation of x. Therefore these observations are deleted from the
analysis
Source
<NAME>. (2012). IC2: Inequality and Concentration Indices and Curves. R package version 1.0-1.
https://CRAN.R-project.org/package=IC2
References
<NAME>. and <NAME> (1982) A decomposition analysis of the trend in UK income
inequality, Economic Journal, 92 (368), p. 886-902.
<NAME>., and <NAME> (2016) Accounting for Changes in Income Inequality: Decompo-
sition Analyses for the UK, 1978–2008. Oxford Bulletin of economics and statistics, 78 (3), p.
289-322,
<NAME>. and <NAME>. (2009) Handbook on poverty and inequality, Washington, DC:
World Bank.
See Also
mld_change gini_decomp
Examples
#Decomposition of mean log deviation by level of education using Mexican Income data set
data(mex_inc_2008)
education_decomp <- mld_decomp(x=mex_inc_2008$income,z=mex_inc_2008$education,
weights=mex_inc_2008$factor)
#complete output
education_decomp
#Selected output: decomposition into between- and within-group inequality
education_decomp["mld_decomp"]
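A hedged check (assuming the component names listed under Value) that within- plus between-group
inequality equals the total:
comp <- education_decomp$mld_decomp
comp$mld_within + comp$mld_between #should equal comp$mld_total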
ntiles.wtd Weighted tiles
Description
Breaks input vector into n groups. Returns the (optional weighted) tile of an individual observation
in vector x.
Usage
ntiles.wtd(x, n, weights = NULL)
Arguments
x a numeric vector for which the quantiles are computed. Missing values are left
as missing.
n the number of desired sub groups to break vector x into.
weights an optional vector of weights of x to be used in the computation of the tiles.
Should be NULL or a numeric vector.
Details
Breaks vector x into n subgroups. The main difference with other tile functions (for instance ntile
from dplyr) is that those functions break up vector x into subgroups of exactly equal size, so
observations with the same value can end up in different tiles. In this function, observations with
the same value always end up in the same tile; subgroups may therefore have different sizes,
especially when the weights argument is used. For a weighted tile function with equal group sizes,
see for instance weighted_ntile from the grattan package.
When using a short vector (compared to the number of tiles) or weights with high variance, the
output may differ from what is anticipated.
Value
A vector of integers corresponding to the quantiles of vector x.
Examples
#Break up the income variable in the Mexican Income data set into 10 groups (tiles)
data(mex_inc_2008)
#unweighted tiles:
q <- ntiles.wtd(x=mex_inc_2008$income, n=10)
#weighted tiles:
qw <- ntiles.wtd(x=mex_inc_2008$income, n=10, weights=mex_inc_2008$factor)
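A hedged illustration (not from the manual) of the tie behavior described in Details:
#observations with equal values share a tile, so group sizes may differ
ntiles.wtd(c(1, 1, 1, 2, 3, 4), n = 3)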
polar.wtd Polarization index
Description
Returns the (optionally weighted) polarization index for a vector. The Wolfson index of bipolarization
is used.
A bipolarized (income) distribution has fewer observations in the middle and more in the lower and/or
higher part of the distribution. The regular measures of inequality (like the Gini coefficient) do not
give information about the polarization of the distribution. This polarization index computes the
level of bipolarization of the distribution. The concept is closely related to the Lorenz curve, and
the scalar measure is therefore also related to the Gini coefficient. A lower number means a lower
level of polarization.
Extension of the polar.aff function in the affluenceIndex package. An option to weight the index is
included.
Usage
polar.wtd(x, weights = NULL)
Arguments
x a numeric vector.
weights an optional vector of weights of x to be used in the computation of the Polariza-
tion index. Should be NULL or a numeric vector.
Value
The value of the Wolfson polarization index.
Source
Wolny-Dominiak, A. and <NAME>-Piotrowska (2017). affluenceIndex: Affluence Indices. R
package version 1.0. https://CRAN.R-project.org/package=affluenceIndex
References
<NAME>. (1994) When inequalities diverge, The American Economic Review, 84, p. 353-358.
<NAME>. (2002) Statistical Measurement of Income Polarization. A Cross-National, Berlin 10th
International conference on panel data.
Examples
#calculate Polarization Index using Mexican Income data set
data(mex_inc_2008)
#unweighted Polarization Index:
polar.wtd(mex_inc_2008$income)
#weighted Polarization Index:
polar.wtd(x=mex_inc_2008$income, weights=mex_inc_2008$factor)
rif Recentered influence function (RIF)
Description
Returns the (optional weighted) recentered influence function of a distributional statistic.
Usage
rif(x, weights = NULL, method = "quantile", quantile = 0.5,
kernel = "gaussian")
Arguments
x a numeric vector for which the recentered influence function is computed.
weights an optional vector of weights of x to be used in the computation of the recentered
influence function. Should be NULL or a numeric vector.
method the distribution statistic for which the recentered influence function is estimated.
Options are "quantile", "gini" and "variance". Default is "quantile".
quantile quantile to be used when method "quantile" is selected. Must be a numeric
between 0 and 1. Default is 0.5 (median). Only a single quantile can be selected.
kernel a character giving the smoothing kernel to be used in method "quantile". Op-
tions are "gaussian", "rectangular", "triangular", "epanechnikov", "biweight",
"cosine" or "optcosine". Default is "gaussian".
Details
The RIF can be used as input for a RIF regression approach. RIF regressions are mostly used to
estimate the marginal effect of covariates on distributional statistics of income or wealth.
The RIF is calculated by adding the distributional statistic (quantile, gini or variance) to the influ-
ence function. RIF is a numeric vector where each element corresponds to a particular individual’s
influence on the distributional statistic.
Value
A numeric vector of the recentered influence function of the selected distributional statistic.
References
<NAME>., <NAME> and <NAME> (2009) Unconditional quantile regressions. Econometrica,
77(3), p. 953-973.
<NAME>, <NAME> and <NAME> (2016) A general method for decomposing the
causes of socioeconomic inequality in health. Journal of Health Economics,48, p. 89–106.
<NAME>. and <NAME> (2016) The drivers of wage inequality across Europe, a recentered in-
fluence function regression approach, 10th Annual Meeting of the Portuguese Economic Journal,
University of Evora.
See Also
rifr
Examples
data(mex_inc_2008)
#Recentered influence function of the 20th quantile
rif_q20 <- rif(x=mex_inc_2008$income, weights=mex_inc_2008$factor, method="quantile",
quantile=0.2)
#Recentered influence function of the gini coefficient
rif_gini <- rif(x=mex_inc_2008$income, weights=mex_inc_2008$factor, method="gini")
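For intuition, a hedged manual computation of the unweighted quantile RIF, using the textbook
formula RIF(x; q_t) = q_t + (t - 1{x <= q_t}) / f(q_t) from Firpo et al. (2009); this is a sketch,
not package code:
x <- mex_inc_2008$income
t0 <- 0.2
q <- unname(quantile(x, t0, na.rm = TRUE))
f <- approx(density(x, na.rm = TRUE), xout = q)$y #kernel density estimate at q
rif_manual <- q + (t0 - (x <= q)) / f
mean(rif_manual, na.rm = TRUE) #the mean of the RIF recovers the statistic, here close to q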
rifr Recentered influence function regression (RIF Regression)
Description
Recentered influence function regression of a distributional statistic.
Usage
rifr(formula, data, weights = NULL, method = "quantile", quantile = 0.5,
kernel = "gaussian")
Arguments
formula an object of class "formula" (or one that can be coerced to that class): a symbolic
description of the model to be fitted in the RIF regression.
data a data frame containing the variables and weights of the model.
weights an optional vector of weights of x to be used in the computation of the recentered
influence function. Should be NULL or a numeric vector. Should be inside
selected data frame in the function and between quotation marks.
method the distribution statistic for which the recentered influence function is estimated.
Options are "quantile", "gini" and "variance". Default is "quantile".
quantile quantile to be used when method "quantile" is selected. Must be a numeric
between 0 and 1. Default is 0.5 (median). Multiple quantiles can be used.
kernel a character giving the smoothing kernel to be used in method "quantile". Op-
tions are "gaussian", "rectangular", "triangular", "epanechnikov", "biweight",
"cosine" or "optcosine". Default is "gaussian".
Details
RIF Regressions can be used to estimate the marginal effects of covariates on distributional statistics
(such as quantiles, gini and variance). It is based on the recentered influence function of a statistic.
The transformed RIF is used as the dependent variable in an ordinary least squares regression. RIF
regressions are mostly used to estimate the marginal effect of covariates on distributional statistics
of income or wealth.
Value
A list containing the results of the RIF regression.
coefficients the coefficient estimates.
SE the coefficient standard error.
t the coefficient t-value.
p the coefficient p-value.
adjusted_r2 the adjusted R-squared.
References
<NAME>., <NAME> and <NAME> (2009) Unconditional quantile regressions. Econometrica,
77(3), p. 953-973.
<NAME>, <NAME> and <NAME> (2016) A general method for decomposing the
causes of socioeconomic inequality in health. Journal of Health Economics,48, p. 89–106.
<NAME>. and <NAME> (2016) The drivers of wage inequality across Europe, a recentered in-
fluence function regression approach, 10th Annual Meeting of the Portuguese Economic Journal,
University of Evora.
See Also
rif rifrSE
Examples
data(mex_inc_2008)
#Recentered influence function of each decile
rifr_q <- rifr(income~hh_structure+education, data=mex_inc_2008, weights="factor",
method="quantile", quantile=seq(0.1,0.9,0.1), kernel="gaussian")
#Recentered influence function of the gini coefficient
rifr_gini <- rifr(income~hh_structure+education, data=mex_inc_2008, weights="factor",
method="gini")
rifrSE Inference of recentered influence function regression (RIF regression)
Description
Inference of a RIF Regression using a bootstrap method.
Usage
rifrSE(formula, data, weights = NULL, method = "quantile", quantile = 0.5,
kernel = "gaussian", Nboot = 100, confidence = 0.95)
Arguments
formula an object of class "formula" (or one that can be coerced to that class): a symbolic
description of the model to be fitted in the RIF regression.
data a data frame containing the variables and weights of the model.
weights an optional vector of weights of x to be used in the computation of the recentered
influence function. Should be NULL or a numeric vector. Should be inside
selected data frame in the function and between quotation marks.
method the distribution statistic for which the recentered influence function is estimated.
Options are "quantile", "gini" and "variance". Default is "quantile".
quantile quantile to be used when method "quantile" is selected. Must be a numeric
between 0 and 1. Default is 0.5 (median). Only a single quantile can be used.
kernel a character giving the smoothing kernel to be used in method "quantile". Op-
tions are "gaussian", "rectangular", "triangular", "epanechnikov", "biweight",
"cosine" or "optcosine". Default is "gaussian".
Nboot the number of bootstrap replicates. Default is 100.
confidence significance level for estimation of the confidence interval of the fitted model.
Default is 0.95.
Details
RIF Regressions can be used to estimate the marginal effects of covariates on distributional statistics
(such as quantiles, gini and variance). It is based on the recentered influence function of a statistic.
The transformed RIF is used as the dependent variable in an ordinary least squares regression. RIF
regressions are mostly used to estimate the marginal effect of covariates on distributional statistics
of income or wealth.
The standard errors, confidence intervals and Z- and P-values are calculated by using a standard
bootstrap method (from boot package).
Value
A data frame containing the results of the RIF regression.
Coef estimated coefficients of the original (non bootstrapped) RIF regression
lower lower bound of confidence interval of estimated coefficient
upper upper bound of confidence interval of estimated coefficient
SE standard error
Z Value Z value
P Value P value
Signif Significance codes of P: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
References
<NAME>., <NAME> and <NAME> (2009) Unconditional quantile regressions. Econometrica,
77(3), p. 953-973.
<NAME>, <NAME> and <NAME> (2016) A general method for decomposing the
causes of socioeconomic inequality in health. Journal of Health Economics,48, p. 89–106.
<NAME>. and <NAME> (2016) The drivers of wage inequality across Europe, a recentered in-
fluence function regression approach, 10th Annual Meeting of the Portuguese Economic Journal,
University of Evora.
See Also
rif rifr
Examples
data(mex_inc_2008)
#Recentered influence function of the 20th quantile
rifr_q <- rifrSE(income~hh_structure+education, data=mex_inc_2008, weights="factor",
method="quantile", quantile=0.2, kernel="gaussian", Nboot=100, confidence=0.95)
#Recentered influence function of the gini coefficient
rifr_gini <- rifrSE(income~hh_structure+education, data=mex_inc_2008, weights="factor",
method="gini", Nboot=100, confidence=0.95)
theil.wtd Theil index
Description
Returns the (optional weighted) Theil index for a vector.
Usage
theil.wtd(x, weights = NULL)
Arguments
x a numeric vector containing at least non-negative elements.
weights an optional vector of weights of x to be used in the computation of the Theil
index. Should be NULL or a numeric vector.
Details
The Theil index is a measure of inequality among values of a distribution. It is a member of the
Generalized Entropy Measures, also referred to as GE(1). The index can have a value between 0
and ln N (the natural logarithm of the number of values), with 0 being the lowest possible inequality.
It uses a logarithmic transformation of the values of the distribution and therefore cannot handle
negative or zero values; those are excluded from the computation in this function. The Theil index
is more sensitive to changes in the upper tail of the distribution.
Extension of the calcGEI function in the IC2 package in order to handle missings.
Value
The value of the Theil index.
Source
<NAME>. (2012). IC2: Inequality and Concentration Indices and Curves. R package version 1.0-1.
https://CRAN.R-project.org/package=IC2
References
<NAME>. and <NAME>. (2009) Handbook on poverty and inequality, Washington, DC:
World Bank.
<NAME>. (2000) Measurement of Inequality. In <NAME>. and <NAME>. (eds.) Handbook
of Income Distribution. Amsterdam: Elsevier, p. 87-166.
Examples
#calculate Theil Index using Mexican Income data set
data(mex_inc_2008)
#unweighted Theil Index:
theil.wtd(mex_inc_2008$income)
#weighted Theil Index:
theil.wtd(x=mex_inc_2008$income, weights=mex_inc_2008$factor) |
@paperless/react | npm | JavaScript | A collection of Web, React & Angular components that conform to the Employes design system.
<https://paperless.employes.nl>

[📦 Install](#-install)
---
#### [React](#react)
```
npm install @paperless/core @paperless/react
```
```
yarn add @paperless/core @paperless/react
```
#### [Angular](#angular)
```
npm install @paperless/core @paperless/angular
```
```
yarn add @paperless/core @paperless/angular
```
#### [Web Components](#web-components)
```
npm install @paperless/core
```
```
yarn add @paperless/core
```
[🚀 Usage](#-usage)
---
#### [React](#react-1)
```
// setup
import { applyPolyfills, defineCustomElements } from '@paperless/core/loader';

applyPolyfills().then(() => defineCustomElements());

// usage
import { Button } from '@employes/paperless';

const App = () => <Button>Click me!</Button>;
```
#### [Angular](#angular-1)
```
// main.ts
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { applyPolyfills, defineCustomElements } from '@paperless/core/loader';

applyPolyfills()
  .then(() => defineCustomElements())
  .then(() => platformBrowserDynamic().bootstrapModule(AppModule))
  .catch((err) => console.error(err));

// App Module
import { PaperlessModule } from '@employes/paperless-ngx';
@NgModule({
declarations: [AppComponent],
imports: [
BrowserModule,
// add this in your app module
PaperlessModule.forRoot(),
// add this in any module using paperless components
PaperlessModule,
],
providers: [],
bootstrap: [AppComponent],
})
export class AppModule {}
// Any component
@Component({
selector: 'app-root',
template: `
<p-button>Click me!</p-button>
`,
})
export class AppComponent {}
```
#### [Web Components](#web-components-1)
Add the following code snippet in your project to start using the components
```
import { defineCustomElements } from '@paperless/core/loader';
defineCustomElements();
```
And in your html:
```
<p-button>Click me!</p-button>
```
[⌨️ Typescript](#️-typescript)
---
The library is javascript based but types are supported with `d.ts` files.
You should get the types automatically when installing `@paperless/core`.
[🤝 Contributing](#-contributing-)
---
We welcome contributions to @paperless!
Read our [contributing guide](https://github.com/Employes/paperless/blob/main/CONTRIBUTING.md) and help us build or improve our components.
[📝 License](#-license)
---
This project is offered under [Apache License 2.0](https://github.com/employes/paperless/blob/main/LICENSE).
|
aws-sdk-dynamodb | rust | Rust | Crate aws_sdk_dynamodb
===
**Please Note: The SDK is currently in Developer Preview and is intended strictly for feedback purposes only. Do not use this SDK for production workloads.**
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables’ throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.
### Getting Started
> Examples are available for many services and operations, check out the
> examples folder in GitHub.
The SDK provides one crate per AWS service. You must add Tokio as a dependency within your Rust project to execute asynchronous code. To add `aws-sdk-dynamodb` to your project, add the following to your **Cargo.toml** file:
```
[dependencies]
aws-config = "0.56.1"
aws-sdk-dynamodb = "0.33.0"
tokio = { version = "1", features = ["full"] }
```
Then in code, a client can be created with the following:
```
use aws_sdk_dynamodb as dynamodb;
#[::tokio::main]
async fn main() -> Result<(), dynamodb::Error> {
let config = aws_config::load_from_env().await;
let client = aws_sdk_dynamodb::Client::new(&config);
// ... make some calls with the client
Ok(())
}
```
See the client documentation for information on what calls can be made, and the inputs and outputs for each of those calls.
### Using the SDK
Until the SDK is released, we will be adding information about using the SDK to the Developer Guide. Feel free to suggest additional sections for the guide by opening an issue and describing what you are trying to do.
### Getting Help
* GitHub discussions - For ideas, RFCs & general questions
* GitHub issues - For bug reports & feature requests
* Generated Docs (latest version)
* Usage examples
Crate Organization
---
The entry point for most customers will be `Client`, which exposes one method for each API offered by Amazon DynamoDB. The return value of each of these methods is a “fluent builder”,
where the different inputs for that API are added by builder-style function call chaining,
followed by calling `send()` to get a `Future` that will result in either a successful output or a `SdkError`.
Some of these API inputs may be structs or enums to provide more complex structured information.
These structs and enums live in `types`. There are some simpler types for representing data such as date times or binary blobs that live in `primitives`.
All types required to configure a client via the `Config` struct live in `config`.
The `operation` module has a submodule for every API, and in each submodule is the input, output, and error type for that API, as well as builders to construct each of those.
There is a top-level `Error` type that encompasses all the errors that the client can return. Any other error type can be converted to this `Error` type via the
`From` trait.
The other modules within this crate are not required for normal usage.
Modules
---
* client: Client for calling Amazon DynamoDB.
* config: Configuration for Amazon DynamoDB.
* error: Common errors and error handling utilities.
* meta: Information about this crate.
* operation: All operations that this crate can perform.
* primitives: Primitives such as `Blob` or `DateTime` used by other types.
* types: Data structures used by operation inputs/outputs.
Structs
---
* Client: Client for Amazon DynamoDB
* Config: Configuration for a aws_sdk_dynamodb service client.
Enums
---
* Error: All possible error types for this service.
Struct aws_sdk_dynamodb::Client
===
```
pub struct Client { /* private fields */ }
```
Client for Amazon DynamoDB
Client for invoking operations on Amazon DynamoDB. Each operation on Amazon DynamoDB is a method on this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_dynamodb::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` and that are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_dynamodb::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `BatchExecuteStatement` operation has a `Client::batch_execute_statement` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.batch_execute_statement()
.return_consumed_capacity("example")
.send()
.await;
```
The underlying HTTP requests made by this call can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Implementations
---
### impl Client
#### pub fn batch_execute_statement(&self) -> BatchExecuteStatementFluentBuilder
Constructs a fluent builder for the `BatchExecuteStatement` operation.
* The fluent builder is configurable:
+ `statements(BatchStatementRequest)` / `set_statements(Option<Vec<BatchStatementRequest>>)`: The list of PartiQL statements representing the batch to run.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
* On success, responds with `BatchExecuteStatementOutput` with field(s):
+ `responses(Option<Vec<BatchStatementResponse>>)`: The response to each PartiQL statement in the batch. The values of the list are ordered according to the ordering of the request statements.
+ `consumed_capacity(Option<Vec<ConsumedCapacity>>)`: The capacity units consumed by the entire operation. The values of the list are ordered according to the ordering of the statements.
* On failure, responds with `SdkError<BatchExecuteStatementError>`
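As a rough illustration of the shape of this call (not taken from the crate docs: the table name and statements below are hypothetical, and the sketch assumes an SDK version where builders for types with required fields return a `Result` from `build()`), a batch of PartiQL statements might be sent like this:

```
use aws_sdk_dynamodb::{types::BatchStatementRequest, Client};

// Hypothetical helper: runs two PartiQL reads in a single batch.
async fn run_batch(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let stmt = |s: &str| BatchStatementRequest::builder().statement(s).build();
    let resp = client
        .batch_execute_statement()
        .statements(stmt(r#"SELECT * FROM "Movies" WHERE title = 'A'"#)?)
        .statements(stmt(r#"SELECT * FROM "Movies" WHERE title = 'B'"#)?)
        .send()
        .await?;
    // Responses come back in the same order as the request statements.
    println!("{} response(s)", resp.responses.unwrap_or_default().len());
    Ok(())
}
```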
### impl Client
#### pub fn batch_get_item(&self) -> BatchGetItemFluentBuilder
Constructs a fluent builder for the `BatchGetItem` operation.
* The fluent builder is configurable:
+ `request_items(impl Into<String>, KeysAndAttributes)` / `set_request_items(Option<HashMap<String, KeysAndAttributes>>)`: A map of one or more table names and, for each table, a map that describes one or more items to retrieve from that table. Each table name can be used only once per `BatchGetItem` request.
Each element in the map of items to retrieve consists of the following:
- `ConsistentRead` - If `true`, a strongly consistent read is used; if `false` (the default), an eventually consistent read is used.
- `ExpressionAttributeNames` - One or more substitution tokens for attribute names in the `ProjectionExpression` parameter. The following are some use cases for using `ExpressionAttributeNames`:
* To access an attribute whose name conflicts with a DynamoDB reserved word.
* To create a placeholder for repeating occurrences of an attribute name in an expression.
* To prevent special characters in an attribute name from being misinterpreted in an expression. Use the **#** character in an expression to dereference an attribute name. For example, consider the following attribute name:
* `Percentile` - The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the *Amazon DynamoDB Developer Guide*). To work around this, you could specify the following for `ExpressionAttributeNames`:
* `{“#P”:“Percentile”}` - You could then use this substitution in an expression, as in this example:
* `#P = :val` Tokens that begin with the **:** character are *expression attribute values*, which are placeholders for the actual value at runtime.
For more information about expression attribute names, see Accessing Item Attributes in the *Amazon DynamoDB Developer Guide*.
- `Keys` - An array of primary key attribute values that define specific items in the table. For each primary key, you must provide *all* of the key attributes. For example, with a simple primary key, you only need to provide the partition key value. For a composite key, you must provide *both* the partition key value and the sort key value.
- `ProjectionExpression` - A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.
If no attribute names are specified, then all attributes are returned. If any of the requested attributes are not found, they do not appear in the result.
For more information, see Accessing Item Attributes in the *Amazon DynamoDB Developer Guide*.
- `AttributesToGet` - This is a legacy parameter. Use `ProjectionExpression` instead. For more information, see AttributesToGet in the *Amazon DynamoDB Developer Guide*.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
* On success, responds with `BatchGetItemOutput` with field(s):
+ `responses(Option<HashMap<String, Vec<HashMap<String, AttributeValue>>>>)`: A map of table name to a list of items. Each object in `Responses` consists of a table name, along with a map of attribute data consisting of the data type and attribute value.
+ `unprocessed_keys(Option<HashMap<String, KeysAndAttributes>>)`: A map of tables and their respective keys that were not processed with the current response. The `UnprocessedKeys` value is in the same form as `RequestItems`, so the value can be provided directly to a subsequent `BatchGetItem` operation. For more information, see `RequestItems` in the Request Parameters section.
Each element consists of:
- `Keys` - An array of primary key attribute values that define specific items in the table.
- `ProjectionExpression` - One or more attributes to be retrieved from the table or index. By default, all attributes are returned. If a requested attribute is not found, it does not appear in the result.
- `ConsistentRead` - The consistency of a read operation. If set to `true`, then a strongly consistent read is used; otherwise, an eventually consistent read is used.
If there are no unprocessed keys remaining, the response contains an empty `UnprocessedKeys` map.
+ `consumed_capacity(Option<Vec<ConsumedCapacity>>)`: The read capacity units consumed by the entire `BatchGetItem` operation.
Each element consists of:
- `TableName` - The table that consumed the provisioned throughput.
- `CapacityUnits` - The total number of capacity units consumed.
* On failure, responds with `SdkError<BatchGetItemError>`
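A hedged sketch of the above, with a hypothetical table `Music` and a simple partition key `pk` (and the same `build()`-returns-`Result` assumption as the other sketches on this page):

```
use std::collections::HashMap;

use aws_sdk_dynamodb::{
    types::{AttributeValue, KeysAndAttributes},
    Client,
};

// Hypothetical helper: fetches two items in one BatchGetItem call.
async fn fetch_two(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let key = |v: &str| HashMap::from([("pk".to_string(), AttributeValue::S(v.to_string()))]);
    let keys_and_attrs = KeysAndAttributes::builder()
        .keys(key("song-1"))
        .keys(key("song-2"))
        .build()?; // `Keys` is required, so `build` can fail
    let resp = client
        .batch_get_item()
        .request_items("Music", keys_and_attrs)
        .send()
        .await?;
    for (table, items) in resp.responses.unwrap_or_default() {
        println!("{table}: {} item(s)", items.len());
    }
    // Anything reported in `unprocessed_keys` should be fed back into a
    // follow-up BatchGetItem request.
    Ok(())
}
```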
### impl Client
#### pub fn batch_write_item(&self) -> BatchWriteItemFluentBuilder
Constructs a fluent builder for the `BatchWriteItem` operation.
* The fluent builder is configurable:
+ `request_items(impl Into<String>, Vec<WriteRequest>)` / `set_request_items(Option<HashMap<String, Vec<WriteRequest>>>)`: A map of one or more table names and, for each table, a list of operations to be performed (`DeleteRequest` or `PutRequest`). Each element in the map consists of the following:
- `DeleteRequest` - Perform a `DeleteItem` operation on the specified item. The item to be deleted is identified by a `Key` subelement:
* `Key` - A map of primary key attribute values that uniquely identify the item. Each entry in this map consists of an attribute name and an attribute value. For each primary key, you must provide *all* of the key attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for *both* the partition key and the sort key.
- `PutRequest` - Perform a `PutItem` operation on the specified item. The item to be put is identified by an `Item` subelement:
* `Item` - A map of attributes and their values. Each entry in this map consists of an attribute name and an attribute value. Attribute values must not be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests that contain empty values are rejected with a `ValidationException` exception.
If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table’s attribute definition.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `return_item_collection_metrics(ReturnItemCollectionMetrics)` / `set_return_item_collection_metrics(Option<ReturnItemCollectionMetrics>)`: Determines whether item collection metrics are returned. If set to `SIZE`, the response includes statistics about any item collections that were modified during the operation. If set to `NONE` (the default), no statistics are returned.
* On success, responds with `BatchWriteItemOutput` with field(s):
+ `unprocessed_items(Option<HashMap<String, Vec<WriteRequest>>>)`: A map of tables and requests against those tables that were not processed. The `UnprocessedItems` value is in the same form as `RequestItems`, so you can provide this value directly to a subsequent `BatchWriteItem` operation. For more information, see `RequestItems` in the Request Parameters section.
Each `UnprocessedItems` entry consists of a table name and, for that table, a list of operations to perform (`DeleteRequest` or `PutRequest`).
- `DeleteRequest` - Perform a `DeleteItem` operation on the specified item. The item to be deleted is identified by a `Key` subelement:
* `Key` - A map of primary key attribute values that uniquely identify the item. Each entry in this map consists of an attribute name and an attribute value.
- `PutRequest` - Perform a `PutItem` operation on the specified item. The item to be put is identified by an `Item` subelement:
* `Item` - A map of attributes and their values. Each entry in this map consists of an attribute name and an attribute value. Attribute values must not be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests that contain empty values will be rejected with a `ValidationException` exception.
If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table’s attribute definition.
If there are no unprocessed items remaining, the response contains an empty `UnprocessedItems` map.
+ `item_collection_metrics(Option<HashMap<String, Vec<ItemCollectionMetrics>>>)`: A list of tables that were processed by `BatchWriteItem` and, for each table, information about any item collections that were affected by individual `DeleteItem` or `PutItem` operations.
Each entry consists of the following subelements:
- `ItemCollectionKey` - The partition key value of the item collection. This is the same as the partition key value of the item.
- `SizeEstimateRangeGB` - An estimate of item collection size, expressed in GB. This is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on the table. Use this estimate to measure whether a local secondary index is approaching its size limit.
The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.
+ `consumed_capacity(Option<Vec<ConsumedCapacity>>)`: The capacity units consumed by the entire `BatchWriteItem` operation.
Each element consists of:
- `TableName` - The table that consumed the provisioned throughput.
- `CapacityUnits` - The total number of capacity units consumed.
* On failure, responds with `SdkError<BatchWriteItemError>`
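A minimal sketch of a single `PutRequest` through `BatchWriteItem`, under the same assumptions (hypothetical table and attribute names; `build()` returning `Result` for types with required fields):

```
use aws_sdk_dynamodb::{
    types::{AttributeValue, PutRequest, WriteRequest},
    Client,
};

// Hypothetical helper: writes one item via BatchWriteItem.
async fn put_one(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let put = PutRequest::builder()
        .item("pk", AttributeValue::S("song-1".into()))
        .item("title", AttributeValue::S("Example".into()))
        .build()?; // `Item` is required, so `build` can fail
    let write = WriteRequest::builder().put_request(put).build();
    let resp = client
        .batch_write_item()
        .request_items("Music", vec![write])
        .send()
        .await?;
    // Anything left in `unprocessed_items` should be retried in a
    // follow-up BatchWriteItem request.
    let leftover = resp.unprocessed_items.unwrap_or_default();
    println!("{} table(s) with unprocessed items", leftover.len());
    Ok(())
}
```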
### impl Client
#### pub fn create_backup(&self) -> CreateBackupFluentBuilder
Constructs a fluent builder for the `CreateBackup` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table.
+ `backup_name(impl Into<String>)` / `set_backup_name(Option<String>)`: Specified name for the backup.
* On success, responds with `CreateBackupOutput` with field(s):
+ `backup_details(Option<BackupDetails>)`: Contains the details of the backup created for the table.
* On failure, responds with `SdkError<CreateBackupError>`
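Since both inputs are plain strings, a sketch of this call is short; the table and backup names below are placeholders. Note that `SdkError<CreateBackupError>` converts into the top-level `Error` via `From`, which is what makes the `?` operator work here:

```
use aws_sdk_dynamodb::Client;

// Hypothetical helper: creates an on-demand backup of a table.
async fn back_up(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .create_backup()
        .table_name("Music")
        .backup_name("music-nightly")
        .send()
        .await?;
    if let Some(details) = resp.backup_details {
        println!("backup ARN: {:?}", details.backup_arn);
    }
    Ok(())
}
```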
### impl Client
#### pub fn create_global_table(&self) -> CreateGlobalTableFluentBuilder
Constructs a fluent builder for the `CreateGlobalTable` operation.
* The fluent builder is configurable:
+ `global_table_name(impl Into<String>)` / `set_global_table_name(Option<String>)`: The global table name.
+ `replication_group(Replica)` / `set_replication_group(Option<Vec<Replica>>)`: The Regions where the global table needs to be created.
* On success, responds with `CreateGlobalTableOutput` with field(s):
+ `global_table_description(Option<GlobalTableDescription>)`: Contains the details of the global table.
* On failure, responds with `SdkError<CreateGlobalTableError>`
### impl Client
#### pub fn create_table(&self) -> CreateTableFluentBuilder
Constructs a fluent builder for the `CreateTable` operation.
* The fluent builder is configurable:
+ `attribute_definitions(AttributeDefinition)` / `set_attribute_definitions(Option<Vec<AttributeDefinition>>)`: An array of attributes that describe the key schema for the table and indexes.
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table to create.
+ `key_schema(KeySchemaElement)` / `set_key_schema(Option<Vec<KeySchemaElement>>)`: Specifies the attributes that make up the primary key for a table or an index. The attributes in `KeySchema` must also be defined in the `AttributeDefinitions` array. For more information, see Data Model in the *Amazon DynamoDB Developer Guide*.
Each `KeySchemaElement` in the array is composed of:
- `AttributeName` - The name of this key attribute.
- `KeyType` - The role that the key attribute will assume:
* `HASH` - partition key
* `RANGE` - sort key
The partition key of an item is also known as its *hash attribute*. The term “hash attribute” derives from the DynamoDB usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.
The sort key of an item is also known as its *range attribute*. The term “range attribute” derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
For a simple primary key (partition key), you must provide exactly one element with a `KeyType` of `HASH`.
For a composite primary key (partition key and sort key), you must provide exactly two elements, in this order: The first element must have a `KeyType` of `HASH`, and the second element must have a `KeyType` of `RANGE`.
For more information, see Working with Tables in the *Amazon DynamoDB Developer Guide*.
+ `local_secondary_indexes(LocalSecondaryIndex)` / `set_local_secondary_indexes(Option<Vec<LocalSecondaryIndex>>)`: One or more local secondary indexes (the maximum is 5) to be created on the table. Each index is scoped to a given partition key value. There is a 10 GB size limit per partition key value; otherwise, the size of a local secondary index is unconstrained.
Each local secondary index in the array includes the following:
- `IndexName` - The name of the local secondary index. Must be unique only for this table.
- `KeySchema` - Specifies the key schema for the local secondary index. The key schema must begin with the same partition key as the table.
- `Projection` - Specifies attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Each attribute specification is composed of:
* `ProjectionType` - One of the following:
+ `KEYS_ONLY` - Only the index and primary keys are projected into the index.
+ `INCLUDE` - Only the specified table attributes are projected into the index. The list of projected attributes is in `NonKeyAttributes`.
+ `ALL` - All of the table attributes are projected into the index.
* `NonKeyAttributes` - A list of one or more non-key attribute names that are projected into the secondary index. The total count of attributes provided in `NonKeyAttributes`, summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.
+ `global_secondary_indexes(GlobalSecondaryIndex)` / `set_global_secondary_indexes(Option<Vec<GlobalSecondaryIndex>>)`: One or more global secondary indexes (the maximum is 20) to be created on the table. Each global secondary index in the array includes the following:
- `IndexName` - The name of the global secondary index. Must be unique only for this table.
- `KeySchema` - Specifies the key schema for the global secondary index.
- `Projection` - Specifies attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Each attribute specification is composed of:
* `ProjectionType` - One of the following:
+ `KEYS_ONLY` - Only the index and primary keys are projected into the index.
+ `INCLUDE` - Only the specified table attributes are projected into the index. The list of projected attributes is in `NonKeyAttributes`.
+ `ALL` - All of the table attributes are projected into the index.
* `NonKeyAttributes` - A list of one or more non-key attribute names that are projected into the secondary index. The total count of attributes provided in `NonKeyAttributes`, summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.
- `ProvisionedThroughput` - The provisioned throughput settings for the global secondary index, consisting of read and write capacity units.
+ `billing_mode(BillingMode)` / `set_billing_mode(Option<BillingMode>)`: Controls how you are charged for read and write throughput and how you manage capacity. This setting can be changed later.
- `PROVISIONED` - We recommend using `PROVISIONED` for predictable workloads. `PROVISIONED` sets the billing mode to Provisioned Mode.
- `PAY_PER_REQUEST` - We recommend using `PAY_PER_REQUEST` for unpredictable workloads. `PAY_PER_REQUEST` sets the billing mode to On-Demand Mode.
+ `provisioned_throughput(ProvisionedThroughput)` / `set_provisioned_throughput(Option<ProvisionedThroughput>)`: Represents the provisioned throughput settings for a specified table or index. The settings can be modified using the `UpdateTable` operation.
If you set BillingMode as `PROVISIONED`, you must specify this property. If you set BillingMode as `PAY_PER_REQUEST`, you cannot specify this property.
For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the *Amazon DynamoDB Developer Guide*.
+ `stream_specification(StreamSpecification)` / `set_stream_specification(Option<StreamSpecification>)`: The settings for DynamoDB Streams on the table. These settings consist of:
- `StreamEnabled` - Indicates whether DynamoDB Streams is to be enabled (true) or disabled (false).
- `StreamViewType` - When an item in the table is modified, `StreamViewType` determines what information is written to the table’s stream. Valid values for `StreamViewType` are:
* `KEYS_ONLY` - Only the key attributes of the modified item are written to the stream.
* `NEW_IMAGE` - The entire item, as it appears after it was modified, is written to the stream.
* `OLD_IMAGE` - The entire item, as it appeared before it was modified, is written to the stream.
* `NEW_AND_OLD_IMAGES` - Both the new and the old item images of the item are written to the stream.
+ `sse_specification(SseSpecification)` / `set_sse_specification(Option<SseSpecification>)`: Represents the settings used to enable server-side encryption.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: A list of key-value pairs to label the table. For more information, see Tagging for DynamoDB.
+ `table_class(TableClass)` / `set_table_class(Option<TableClass>)`: The table class of the new table. Valid values are `STANDARD` and `STANDARD_INFREQUENT_ACCESS`.
+ `deletion_protection_enabled(bool)` / `set_deletion_protection_enabled(Option<bool>)`: Indicates whether deletion protection is to be enabled (true) or disabled (false) on the table.
* On success, responds with `CreateTableOutput` with field(s):
+ `table_description(Option<TableDescription>)`: Represents the properties of the table.
* On failure, responds with `SdkError<CreateTableError>`
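A hedged sketch of creating a small on-demand table with a partition-key-only schema (names are placeholders; `build()` is assumed to return `Result` for `AttributeDefinition` and `KeySchemaElement`, which have required fields):

```
use aws_sdk_dynamodb::{
    types::{AttributeDefinition, BillingMode, KeySchemaElement, KeyType, ScalarAttributeType},
    Client,
};

// Hypothetical helper: creates an on-demand (PAY_PER_REQUEST) table
// with a simple primary key, so no ProvisionedThroughput is needed.
async fn make_table(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let resp = client
        .create_table()
        .table_name("Music")
        .attribute_definitions(
            AttributeDefinition::builder()
                .attribute_name("pk")
                .attribute_type(ScalarAttributeType::S)
                .build()?,
        )
        .key_schema(
            KeySchemaElement::builder()
                .attribute_name("pk")
                .key_type(KeyType::Hash)
                .build()?,
        )
        .billing_mode(BillingMode::PayPerRequest)
        .send()
        .await?;
    println!("status: {:?}", resp.table_description.and_then(|t| t.table_status));
    Ok(())
}
```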
### impl Client
#### pub fn delete_backup(&self) -> DeleteBackupFluentBuilder
Constructs a fluent builder for the `DeleteBackup` operation.
* The fluent builder is configurable:
+ `backup_arn(impl Into<String>)` / `set_backup_arn(Option<String>)`: The ARN associated with the backup.
* On success, responds with `DeleteBackupOutput` with field(s):
+ `backup_description(Option<BackupDescription>)`: Contains the description of the backup created for the table.
* On failure, responds with `SdkError<DeleteBackupError>`
### impl Client
#### pub fn delete_item(&self) -> DeleteItemFluentBuilder
Constructs a fluent builder for the `DeleteItem` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table from which to delete the item.
+ `key(impl Into<String>, AttributeValue)` / `set_key(Option<HashMap<String, AttributeValue>>)`: A map of attribute names to `AttributeValue` objects, representing the primary key of the item to delete.
For the primary key, you must provide all of the key attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.
+ `expected(impl Into<String>, ExpectedAttributeValue)` / `set_expected(Option<HashMap<String, ExpectedAttributeValue>>)`: This is a legacy parameter. Use `ConditionExpression` instead. For more information, see Expected in the *Amazon DynamoDB Developer Guide*.
+ `conditional_operator(ConditionalOperator)` / `set_conditional_operator(Option<ConditionalOperator>)`: This is a legacy parameter. Use `ConditionExpression` instead. For more information, see ConditionalOperator in the *Amazon DynamoDB Developer Guide*.
+ `return_values(ReturnValue)` / `set_return_values(Option<ReturnValue>)`: Use `ReturnValues` if you want to get the item attributes as they appeared before they were deleted. For `DeleteItem`, the valid values are:
- `NONE` - If `ReturnValues` is not specified, or if its value is `NONE`, then nothing is returned. (This setting is the default for `ReturnValues`.)
- `ALL_OLD` - The content of the old item is returned.
There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.
The `ReturnValues` parameter is used by several DynamoDB operations; however, `DeleteItem` does not recognize any values other than `NONE` or `ALL_OLD`.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `return_item_collection_metrics(ReturnItemCollectionMetrics)` / `set_return_item_collection_metrics(Option<ReturnItemCollectionMetrics>)`: Determines whether item collection metrics are returned. If set to `SIZE`, the response includes statistics about any item collections that were modified during the operation. If set to `NONE` (the default), no statistics are returned.
+ `condition_expression(impl Into<String>)` / `set_condition_expression(Option<String>)`: A condition that must be satisfied in order for a conditional `DeleteItem` to succeed.
An expression can contain any of the following:
- Functions: `attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size`
These function names are case-sensitive.
- Comparison operators: `= | <> | < | > | <= | >= | BETWEEN | IN`
- Logical operators: `AND | OR | NOT`
For more information about condition expressions, see Condition Expressions in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_names(impl Into<String>, impl Into<String>)` / `set_expression_attribute_names(Option<HashMap<String, String>>)`: One or more substitution tokens for attribute names in an expression. The following are some use cases for using `ExpressionAttributeNames`:
- To access an attribute whose name conflicts with a DynamoDB reserved word.
- To create a placeholder for repeating occurrences of an attribute name in an expression.
- To prevent special characters in an attribute name from being misinterpreted in an expression. Use the **#** character in an expression to dereference an attribute name. For example, consider the following attribute name:
- `Percentile` - The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the *Amazon DynamoDB Developer Guide*). To work around this, you could specify the following for `ExpressionAttributeNames`:
- `{“#P”:“Percentile”}` - You could then use this substitution in an expression, as in this example:
- `#P = :val` Tokens that begin with the **:** character are *expression attribute values*, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Specifying Item Attributes in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_values(impl Into<String>, AttributeValue)` / `set_expression_attribute_values(Option<HashMap<String, AttributeValue>>)`: One or more values that can be substituted in an expression.
Use the **:** (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the *ProductStatus* attribute was one of the following:
`Available | Backordered | Discontinued`
You would first need to specify `ExpressionAttributeValues` as follows:
`{ “:avail”:{“S”:“Available”}, “:back”:{“S”:“Backordered”}, “:disc”:{“S”:“Discontinued”} }`
You could then use these values in an expression, such as this:
`ProductStatus IN (:avail, :back, :disc)`
For more information on expression attribute values, see Condition Expressions in the *Amazon DynamoDB Developer Guide*.
+ `return_values_on_condition_check_failure(ReturnValuesOnConditionCheckFailure)` / `set_return_values_on_condition_check_failure(Option<ReturnValuesOnConditionCheckFailure>)`: An optional parameter that returns the item attributes for a `DeleteItem` operation that failed a condition check.
There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.
* On success, responds with `DeleteItemOutput` with field(s):
+ `attributes(Option<HashMap<String, AttributeValue>>)`: A map of attribute names to `AttributeValue` objects, representing the item as it appeared before the `DeleteItem` operation. This map appears in the response only if `ReturnValues` was specified as `ALL_OLD` in the request.
+ `consumed_capacity(Option<ConsumedCapacity>)`: The capacity units consumed by the `DeleteItem` operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. `ConsumedCapacity` is only returned if the `ReturnConsumedCapacity` parameter was specified. For more information, see Provisioned Throughput in the *Amazon DynamoDB Developer Guide*.
+ `item_collection_metrics(Option<ItemCollectionMetrics>)`: Information about item collections, if any, that were affected by the `DeleteItem` operation. `ItemCollectionMetrics` is only returned if the `ReturnItemCollectionMetrics` parameter was specified. If the table does not have any local secondary indexes, this information is not returned in the response.
Each `ItemCollectionMetrics` element consists of:
- `ItemCollectionKey` - The partition key value of the item collection. This is the same as the partition key value of the item itself.
- `SizeEstimateRangeGB` - An estimate of item collection size, in gigabytes. This value is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on that table. Use this estimate to measure whether a local secondary index is approaching its size limit.
The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.
* On failure, responds with `SdkError<DeleteItemError>`
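A sketch of a conditional delete that also requests the old item image; the table, key, and attribute names are hypothetical:

```
use aws_sdk_dynamodb::{
    types::{AttributeValue, ReturnValue},
    Client,
};

// Hypothetical helper: deletes an item only if it is discontinued,
// and prints the item as it appeared before deletion.
async fn delete_if_discontinued(
    client: &Client,
    id: &str,
) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .delete_item()
        .table_name("Products")
        .key("pk", AttributeValue::S(id.to_string()))
        .condition_expression("ProductStatus = :disc")
        .expression_attribute_values(":disc", AttributeValue::S("Discontinued".into()))
        .return_values(ReturnValue::AllOld) // return the pre-deletion item
        .send()
        .await?;
    if let Some(old) = resp.attributes {
        println!("deleted item had {} attribute(s)", old.len());
    }
    Ok(())
}
```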
### impl Client
#### pub fn delete_table(&self) -> DeleteTableFluentBuilder
Constructs a fluent builder for the `DeleteTable` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table to delete.
* On success, responds with `DeleteTableOutput` with field(s):
+ `table_description(Option<TableDescription>)`: Represents the properties of a table.
* On failure, responds with `SdkError<DeleteTableError>`
### impl Client
#### pub fn describe_backup(&self) -> DescribeBackupFluentBuilder
Constructs a fluent builder for the `DescribeBackup` operation.
* The fluent builder is configurable:
+ `backup_arn(impl Into<String>)` / `set_backup_arn(Option<String>)`: The Amazon Resource Name (ARN) associated with the backup.
* On success, responds with `DescribeBackupOutput` with field(s):
+ `backup_description(Option<BackupDescription>)`: Contains the description of the backup created for the table.
* On failure, responds with `SdkError<DescribeBackupError>`
### impl Client
#### pub fn describe_continuous_backups(
&self
) -> DescribeContinuousBackupsFluentBuilder
Constructs a fluent builder for the `DescribeContinuousBackups` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: Name of the table for which the customer wants to check the continuous backups and point in time recovery settings.
* On success, responds with `DescribeContinuousBackupsOutput` with field(s):
+ `continuous_backups_description(Option<ContinuousBackupsDescription>)`: Represents the continuous backups and point in time recovery settings on the table.
* On failure, responds with `SdkError<DescribeContinuousBackupsError>`
### impl Client
#### pub fn describe_contributor_insights(
&self
) -> DescribeContributorInsightsFluentBuilder
Constructs a fluent builder for the `DescribeContributorInsights` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table to describe.
+ `index_name(impl Into<String>)` / `set_index_name(Option<String>)`: The name of the global secondary index to describe, if applicable.
* On success, responds with `DescribeContributorInsightsOutput` with field(s):
+ `table_name(Option<String>)`: The name of the table being described.
+ `index_name(Option<String>)`: The name of the global secondary index being described.
+ `contributor_insights_rule_list(Option<Vec<String>>)`: List of names of the associated contributor insights rules.
+ `contributor_insights_status(Option<ContributorInsightsStatus>)`: Current status of contributor insights.
+ `last_update_date_time(Option<DateTime>)`: Timestamp of the last time the status was changed.
+ `failure_exception(Option<FailureException>)`: Returns information about the last failure that was encountered.
The most common exceptions for a FAILED status are:
- LimitExceededException - Per-account Amazon CloudWatch Contributor Insights rule limit reached. Please disable Contributor Insights for other tables/indexes OR disable Contributor Insights rules before retrying.
- AccessDeniedException - Amazon CloudWatch Contributor Insights rules cannot be modified due to insufficient permissions.
- AccessDeniedException - Failed to create service-linked role for Contributor Insights due to insufficient permissions.
- InternalServerError - Failed to create Amazon CloudWatch Contributor Insights rules. Please retry request.
* On failure, responds with `SdkError<DescribeContributorInsightsError>`
### impl Client
#### pub fn describe_endpoints(&self) -> DescribeEndpointsFluentBuilder
Constructs a fluent builder for the `DescribeEndpoints` operation.
* The fluent builder takes no input, just `send` it.
* On success, responds with `DescribeEndpointsOutput` with field(s):
+ `endpoints(Option<Vec<Endpoint>>)`: List of endpoints.
* On failure, responds with `SdkError<DescribeEndpointsError>`
### impl Client
#### pub fn describe_export(&self) -> DescribeExportFluentBuilder
Constructs a fluent builder for the `DescribeExport` operation.
* The fluent builder is configurable:
+ `export_arn(impl Into<String>)` / `set_export_arn(Option<String>)`: The Amazon Resource Name (ARN) associated with the export.
* On success, responds with `DescribeExportOutput` with field(s):
+ `export_description(Option<ExportDescription>)`: Represents the properties of the export.
* On failure, responds with `SdkError<DescribeExportError>`
### impl Client
#### pub fn describe_global_table(&self) -> DescribeGlobalTableFluentBuilder
Constructs a fluent builder for the `DescribeGlobalTable` operation.
* The fluent builder is configurable:
+ `global_table_name(impl Into<String>)` / `set_global_table_name(Option<String>)`: The name of the global table.
* On success, responds with `DescribeGlobalTableOutput` with field(s):
+ `global_table_description(Option<GlobalTableDescription>)`: Contains the details of the global table.
* On failure, responds with `SdkError<DescribeGlobalTableError>`
### impl Client
#### pub fn describe_global_table_settings(
&self
) -> DescribeGlobalTableSettingsFluentBuilder
Constructs a fluent builder for the `DescribeGlobalTableSettings` operation.
* The fluent builder is configurable:
+ `global_table_name(impl Into<String>)` / `set_global_table_name(Option<String>)`: The name of the global table to describe.
* On success, responds with `DescribeGlobalTableSettingsOutput` with field(s):
+ `global_table_name(Option<String>)`: The name of the global table.
+ `replica_settings(Option<Vec<ReplicaSettingsDescription>>)`: The Region-specific settings for the global table.
* On failure, responds with `SdkError<DescribeGlobalTableSettingsError>`
### impl Client
#### pub fn describe_import(&self) -> DescribeImportFluentBuilder
Constructs a fluent builder for the `DescribeImport` operation.
* The fluent builder is configurable:
+ `import_arn(impl Into<String>)` / `set_import_arn(Option<String>)`: The Amazon Resource Name (ARN) associated with the table you’re importing to.
* On success, responds with `DescribeImportOutput` with field(s):
+ `import_table_description(Option<ImportTableDescription>)`: Represents the properties of the table created for the import, and parameters of the import. The import parameters include import status, how many items were processed, and how many errors were encountered.
* On failure, responds with `SdkError<DescribeImportError>`
### impl Client
#### pub fn describe_kinesis_streaming_destination(
&self
) -> DescribeKinesisStreamingDestinationFluentBuilder
Constructs a fluent builder for the `DescribeKinesisStreamingDestination` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table being described.
* On success, responds with `DescribeKinesisStreamingDestinationOutput` with field(s):
+ `table_name(Option<String>)`: The name of the table being described.
+ `kinesis_data_stream_destinations(Option<Vec<KinesisDataStreamDestination>>)`: The list of replica structures for the table being described.
* On failure, responds with `SdkError<DescribeKinesisStreamingDestinationError>`
### impl Client
#### pub fn describe_limits(&self) -> DescribeLimitsFluentBuilder
Constructs a fluent builder for the `DescribeLimits` operation.
* The fluent builder takes no input, just `send` it.
* On success, responds with `DescribeLimitsOutput` with field(s):
+ `account_max_read_capacity_units(Option<i64>)`: The maximum total read capacity units that your account allows you to provision across all of your tables in this Region.
+ `account_max_write_capacity_units(Option<i64>)`: The maximum total write capacity units that your account allows you to provision across all of your tables in this Region.
+ `table_max_read_capacity_units(Option<i64>)`: The maximum read capacity units that your account allows you to provision for a new table that you are creating in this Region, including the read capacity units provisioned for its global secondary indexes (GSIs).
+ `table_max_write_capacity_units(Option<i64>)`: The maximum write capacity units that your account allows you to provision for a new table that you are creating in this Region, including the write capacity units provisioned for its global secondary indexes (GSIs).
* On failure, responds with `SdkError<DescribeLimitsError>`
### impl Client
#### pub fn describe_table(&self) -> DescribeTableFluentBuilder
Constructs a fluent builder for the `DescribeTable` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table to describe.
* On success, responds with `DescribeTableOutput` with field(s):
+ `table(Option<TableDescription>)`: The properties of the table.
* On failure, responds with `SdkError<DescribeTableError>`
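A one-call sketch (table name hypothetical):

```
use aws_sdk_dynamodb::Client;

// Hypothetical helper: looks up a table's current status and item count.
async fn table_status(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client.describe_table().table_name("Music").send().await?;
    if let Some(table) = resp.table {
        println!("status: {:?}, items: {:?}", table.table_status, table.item_count);
    }
    Ok(())
}
```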
### impl Client
#### pub fn describe_table_replica_auto_scaling(
&self
) -> DescribeTableReplicaAutoScalingFluentBuilder
Constructs a fluent builder for the `DescribeTableReplicaAutoScaling` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table.
* On success, responds with `DescribeTableReplicaAutoScalingOutput` with field(s):
+ `table_auto_scaling_description(Option<TableAutoScalingDescription>)`: Represents the auto scaling properties of the table.
* On failure, responds with `SdkError<DescribeTableReplicaAutoScalingError>`
### impl Client
#### pub fn describe_time_to_live(&self) -> DescribeTimeToLiveFluentBuilder
Constructs a fluent builder for the `DescribeTimeToLive` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table to be described.
* On success, responds with `DescribeTimeToLiveOutput` with field(s):
+ `time_to_live_description(Option<TimeToLiveDescription>)`:
* On failure, responds with `SdkError<DescribeTimeToLiveError>`
### impl Client
#### pub fn disable_kinesis_streaming_destination(
&self
) -> DisableKinesisStreamingDestinationFluentBuilder
Constructs a fluent builder for the `DisableKinesisStreamingDestination` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the DynamoDB table.
+ `stream_arn(impl Into<String>)` / `set_stream_arn(Option<String>)`: The ARN for a Kinesis data stream.
* On success, responds with `DisableKinesisStreamingDestinationOutput` with field(s):
+ `table_name(Option<String>)`: The name of the table being modified.
+ `stream_arn(Option<String>)`: The ARN for the specific Kinesis data stream.
+ `destination_status(Option<DestinationStatus>)`: The current status of the replication.
* On failure, responds with `SdkError<DisableKinesisStreamingDestinationError>`
### impl Client
#### pub fn enable_kinesis_streaming_destination(
&self
) -> EnableKinesisStreamingDestinationFluentBuilder
Constructs a fluent builder for the `EnableKinesisStreamingDestination` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the DynamoDB table.
+ `stream_arn(impl Into<String>)` / `set_stream_arn(Option<String>)`: The ARN for a Kinesis data stream.
* On success, responds with `EnableKinesisStreamingDestinationOutput` with field(s):
+ `table_name(Option<String>)`: The name of the table being modified.
+ `stream_arn(Option<String>)`: The ARN for the specific Kinesis data stream.
+ `destination_status(Option<DestinationStatus>)`: The current status of the replication.
* On failure, responds with `SdkError<EnableKinesisStreamingDestinationError>`
### impl Client
#### pub fn execute_statement(&self) -> ExecuteStatementFluentBuilder
Constructs a fluent builder for the `ExecuteStatement` operation.
* The fluent builder is configurable:
+ `statement(impl Into<String>)` / `set_statement(Option<String>)`: The PartiQL statement representing the operation to run.
+ `parameters(AttributeValue)` / `set_parameters(Option<Vec<AttributeValue>>)`: The parameters for the PartiQL statement, if any.
+ `consistent_read(bool)` / `set_consistent_read(Option<bool>)`: The consistency of a read operation. If set to `true`, then a strongly consistent read is used; otherwise, an eventually consistent read is used.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: Set this value to get remaining results, if `NextToken` was returned in the statement response.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `limit(i32)` / `set_limit(Option<i32>)`: The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, along with a key in `LastEvaluatedKey` to apply in a subsequent operation so you can pick up where you left off. Also, if the processed dataset size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a key in `LastEvaluatedKey` to apply in a subsequent operation to continue the operation.
+ `return_values_on_condition_check_failure(ReturnValuesOnConditionCheckFailure)` / `set_return_values_on_condition_check_failure(Option<ReturnValuesOnConditionCheckFailure>)`: An optional parameter that returns the item attributes for an `ExecuteStatement` operation that failed a condition check.
There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.
* On success, responds with `ExecuteStatementOutput` with field(s):
+ `items(Option<Vec<HashMap<String, AttributeValue>>>)`: If a read operation was used, this property will contain the result of the read operation: a map of attribute names and their values. For write operations, this value will be empty.
+ `next_token(Option<String>)`: If the response of a read request exceeds the response payload limit, DynamoDB will set this value in the response. If set, you can use this value in a subsequent request to get the remaining results.
+ `consumed_capacity(Option<ConsumedCapacity>)`: The capacity units consumed by an operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. `ConsumedCapacity` is only returned if the request asked for it. For more information, see Provisioned Throughput in the *Amazon DynamoDB Developer Guide*.
+ `last_evaluated_key(Option<HashMap<String, AttributeValue>>)`: The primary key of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request. If `LastEvaluatedKey` is empty, then the “last page” of results has been processed and there is no more data to be retrieved. If `LastEvaluatedKey` is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when `LastEvaluatedKey` is empty.
* On failure, responds with `SdkError<ExecuteStatementError>`
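A hedged sketch of a parameterized PartiQL read (table name and statement are hypothetical):

```
use aws_sdk_dynamodb::{types::AttributeValue, Client};

// Hypothetical helper: runs a parameterized PartiQL SELECT and prints
// each returned item.
async fn find_songs(client: &Client, artist: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .execute_statement()
        .statement(r#"SELECT * FROM "Music" WHERE artist = ?"#)
        .parameters(AttributeValue::S(artist.to_string()))
        .send()
        .await?;
    for item in resp.items.unwrap_or_default() {
        println!("{item:?}");
    }
    // A non-empty `next_token` means more pages remain; pass it back via
    // `.next_token(...)` on a follow-up request.
    Ok(())
}
```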
### impl Client
#### pub fn execute_transaction(&self) -> ExecuteTransactionFluentBuilder
Constructs a fluent builder for the `ExecuteTransaction` operation.
* The fluent builder is configurable:
+ `transact_statements(ParameterizedStatement)` / `set_transact_statements(Option<Vec<ParameterizedStatement>>)`: The list of PartiQL statements representing the transaction to run.
+ `client_request_token(impl Into<String>)` / `set_client_request_token(Option<String>)`: Set this value to get remaining results, if `NextToken` was returned in the statement response.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response. For more information, see TransactGetItems and TransactWriteItems.
* On success, responds with `ExecuteTransactionOutput` with field(s):
+ `responses(Option<Vec<ItemResponse>>)`: The response to a PartiQL transaction.
+ `consumed_capacity(Option<Vec<ConsumedCapacity>>)`: The capacity units consumed by the entire operation. The values of the list are ordered according to the ordering of the statements.
* On failure, responds with `SdkError<ExecuteTransactionError>`
### impl Client
#### pub fn export_table_to_point_in_time(
&self
) -> ExportTableToPointInTimeFluentBuilder
Constructs a fluent builder for the `ExportTableToPointInTime` operation.
* The fluent builder is configurable:
+ `table_arn(impl Into<String>)` / `set_table_arn(Option<String>)`: The Amazon Resource Name (ARN) associated with the table to export.
+ `export_time(DateTime)` / `set_export_time(Option<DateTime>)`: Time in the past from which to export table data, counted in seconds from the start of the Unix epoch. The table export will be a snapshot of the table’s state at this point in time.
+ `client_token(impl Into<String>)` / `set_client_token(Option<String>)`: Providing a `ClientToken` makes the call to `ExportTableToPointInTimeInput` idempotent, meaning that multiple identical calls have the same effect as one single call.
A client token is valid for 8 hours after the first request that uses it is completed. After 8 hours, any request with the same client token is treated as a new request. Do not resubmit the same request with the same client token for more than 8 hours, or the result might not be idempotent.
If you submit a request with the same client token but a change in other parameters within the 8-hour idempotency window, DynamoDB returns an `ImportConflictException`.
+ `s3_bucket(impl Into<String>)` / `set_s3_bucket(Option<String>)`: The name of the Amazon S3 bucket to export the snapshot to.
+ `s3_bucket_owner(impl Into<String>)` / `set_s3_bucket_owner(Option<String>)`: The ID of the Amazon Web Services account that owns the bucket the export will be stored in.
+ `s3_prefix(impl Into<String>)` / `set_s3_prefix(Option<String>)`: The Amazon S3 bucket prefix to use as the file name and path of the exported snapshot.
+ `s3_sse_algorithm(S3SseAlgorithm)` / `set_s3_sse_algorithm(Option<S3SseAlgorithm>)`: Type of encryption used on the bucket where export data will be stored. Valid values for `S3SseAlgorithm` are:
- `AES256` - server-side encryption with Amazon S3 managed keys
- `KMS` - server-side encryption with KMS managed keys
+ `s3_sse_kms_key_id(impl Into<String>)` / `set_s3_sse_kms_key_id(Option<String>)`: The ID of the KMS managed key used to encrypt the S3 bucket where export data will be stored (if applicable).
+ `export_format(ExportFormat)` / `set_export_format(Option<ExportFormat>)`: The format for the exported data. Valid values for `ExportFormat` are `DYNAMODB_JSON` or `ION`.
+ `export_type(ExportType)` / `set_export_type(Option<ExportType>)`: Choice of whether to execute as a full export or incremental export. Valid values are `FULL_EXPORT` or `INCREMENTAL_EXPORT`. If `INCREMENTAL_EXPORT` is provided, the `IncrementalExportSpecification` must also be used.
+ `incremental_export_specification(IncrementalExportSpecification)` / `set_incremental_export_specification(Option<IncrementalExportSpecification>)`: Optional object containing the parameters specific to an incremental export.
* On success, responds with `ExportTableToPointInTimeOutput` with field(s):
+ `export_description(Option<ExportDescription>)`: Contains a description of the table export.
* On failure, responds with `SdkError<ExportTableToPointInTimeError>`
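A sketch of kicking off a full export (the table ARN and bucket name are placeholders for your own resources):

```
use aws_sdk_dynamodb::{types::ExportFormat, Client};

// Hypothetical helper: starts a full table export to S3 in the
// DynamoDB JSON format.
async fn export_table(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .export_table_to_point_in_time()
        .table_arn("arn:aws:dynamodb:us-east-1:123456789012:table/Music")
        .s3_bucket("my-export-bucket")
        .export_format(ExportFormat::DynamodbJson)
        .send()
        .await?;
    if let Some(desc) = resp.export_description {
        println!("export ARN: {:?}", desc.export_arn);
    }
    Ok(())
}
```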
### impl Client
#### pub fn get_item(&self) -> GetItemFluentBuilder
Constructs a fluent builder for the `GetItem` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table containing the requested item.
+ `key(impl Into<String>, AttributeValue)` / `set_key(Option<HashMap<String, AttributeValue>>)`: A map of attribute names to `AttributeValue` objects, representing the primary key of the item to retrieve.
For the primary key, you must provide all of the attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.
+ `attributes_to_get(impl Into<String>)` / `set_attributes_to_get(Option<Vec<String>>)`: This is a legacy parameter. Use `ProjectionExpression` instead. For more information, see AttributesToGet in the *Amazon DynamoDB Developer Guide*.
+ `consistent_read(bool)` / `set_consistent_read(Option<bool>)`: Determines the read consistency model: If set to `true`, then the operation uses strongly consistent reads; otherwise, the operation uses eventually consistent reads.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `projection_expression(impl Into<String>)` / `set_projection_expression(Option<String>)`: A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.
If no attribute names are specified, then all attributes are returned. If any of the requested attributes are not found, they do not appear in the result.
For more information, see Specifying Item Attributes in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_names(impl Into<String>, impl Into<String>)` / `set_expression_attribute_names(Option<HashMap<String, String>>)`: One or more substitution tokens for attribute names in an expression. The following are some use cases for using `ExpressionAttributeNames`:
- To access an attribute whose name conflicts with a DynamoDB reserved word.
- To create a placeholder for repeating occurrences of an attribute name in an expression.
- To prevent special characters in an attribute name from being misinterpreted in an expression. Use the **#** character in an expression to dereference an attribute name. For example, consider the following attribute name:
- `Percentile` - The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the *Amazon DynamoDB Developer Guide*). To work around this, you could specify the following for `ExpressionAttributeNames`:
- `{“#P”:“Percentile”}` - You could then use this substitution in an expression, as in this example:
- `#P = :val` Tokens that begin with the **:** character are *expression attribute values*, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Specifying Item Attributes in the *Amazon DynamoDB Developer Guide*.
* On success, responds with `GetItemOutput` with field(s):
+ `item(Option<HashMap<String, AttributeValue>>)`: A map of attribute names to `AttributeValue` objects, as specified by `ProjectionExpression`.
+ `consumed_capacity(Option<ConsumedCapacity>)`: The capacity units consumed by the `GetItem` operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. `ConsumedCapacity` is only returned if the `ReturnConsumedCapacity` parameter was specified. For more information, see Provisioned Throughput in the *Amazon DynamoDB Developer Guide*.
* On failure, responds with `SdkError<GetItemError>`
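A sketch that combines a strongly consistent read with a projection and an expression attribute name (the table and attribute names are hypothetical; `Name` stands in for an attribute that collides with a reserved word):

```
use aws_sdk_dynamodb::{types::AttributeValue, Client};

// Hypothetical helper: reads one item, projecting two attributes and
// aliasing a reserved-word attribute name with `#n`.
async fn get_song(client: &Client, id: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .get_item()
        .table_name("Music")
        .key("pk", AttributeValue::S(id.to_string()))
        .consistent_read(true) // strongly consistent read
        .projection_expression("#n, artist")
        .expression_attribute_names("#n", "Name")
        .send()
        .await?;
    match resp.item {
        Some(item) => println!("found {} attribute(s)", item.len()),
        None => println!("no item with pk = {id}"),
    }
    Ok(())
}
```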
### impl Client
#### pub fn import_table(&self) -> ImportTableFluentBuilder
Constructs a fluent builder for the `ImportTable` operation.
* The fluent builder is configurable:
+ `client_token(impl Into<String>)` / `set_client_token(Option<String>)`: Providing a `ClientToken` makes the call to `ImportTableInput` idempotent, meaning that multiple identical calls have the same effect as one single call.
A client token is valid for 8 hours after the first request that uses it is completed. After 8 hours, any request with the same client token is treated as a new request. Do not resubmit the same request with the same client token for more than 8 hours, or the result might not be idempotent.
If you submit a request with the same client token but a change in other parameters within the 8-hour idempotency window, DynamoDB returns an `IdempotentParameterMismatch` exception.
+ `s3_bucket_source(S3BucketSource)` / `set_s3_bucket_source(Option<S3BucketSource>)`: The S3 bucket that provides the source for the import.
+ `input_format(InputFormat)` / `set_input_format(Option<InputFormat>)`: The format of the source data. Valid values for `ImportFormat` are `CSV`, `DYNAMODB_JSON` or `ION`.
+ `input_format_options(InputFormatOptions)` / `set_input_format_options(Option<InputFormatOptions>)`: Additional properties that specify how the input is formatted.
+ `input_compression_type(InputCompressionType)` / `set_input_compression_type(Option<InputCompressionType>)`: Type of compression to be used on the input coming from the imported table.
+ `table_creation_parameters(TableCreationParameters)` / `set_table_creation_parameters(Option<TableCreationParameters>)`: Parameters for the table to import the data into.
* On success, responds with `ImportTableOutput` with field(s):
+ `import_table_description(Option<ImportTableDescription>)`: Represents the properties of the table created for the import, and parameters of the import. The import parameters include import status, how many items were processed, and how many errors were encountered.
* On failure, responds with `SdkError<ImportTableError>`
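##### Example

A hedged sketch of wiring these parameters together, assuming aws-sdk-dynamodb 1.x (where typed builders' `build()` returns a `Result` for required fields); the bucket, key prefix, and table/attribute names are all hypothetical.

```rust
use aws_sdk_dynamodb::types::{
    AttributeDefinition, BillingMode, InputFormat, KeySchemaElement, KeyType,
    S3BucketSource, ScalarAttributeType, TableCreationParameters,
};
use aws_sdk_dynamodb::Client;

// Import DynamoDB-JSON data from S3 into a new on-demand table.
async fn start_import(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let source = S3BucketSource::builder()
        .s3_bucket("my-import-bucket")
        .s3_key_prefix("exports/music/")
        .build()?;
    let table = TableCreationParameters::builder()
        .table_name("MusicImported")
        .attribute_definitions(
            AttributeDefinition::builder()
                .attribute_name("Artist")
                .attribute_type(ScalarAttributeType::S)
                .build()?,
        )
        .key_schema(
            KeySchemaElement::builder()
                .attribute_name("Artist")
                .key_type(KeyType::Hash)
                .build()?,
        )
        .billing_mode(BillingMode::PayPerRequest)
        .build()?;
    let resp = client
        .import_table()
        .s3_bucket_source(source)
        .input_format(InputFormat::DynamodbJson)
        .table_creation_parameters(table)
        .send()
        .await?;
    println!("import: {:?}", resp.import_table_description());
    Ok(())
}
```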
### impl Client
#### pub fn list_backups(&self) -> ListBackupsFluentBuilder
Constructs a fluent builder for the `ListBackups` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The backups from the table specified by `TableName` are listed.
+ `limit(i32)` / `set_limit(Option<i32>)`: Maximum number of backups to return at once.
+ `time_range_lower_bound(DateTime)` / `set_time_range_lower_bound(Option<DateTime>)`: Only backups created after this time are listed. `TimeRangeLowerBound` is inclusive.
+ `time_range_upper_bound(DateTime)` / `set_time_range_upper_bound(Option<DateTime>)`: Only backups created before this time are listed. `TimeRangeUpperBound` is exclusive.
+ `exclusive_start_backup_arn(impl Into<String>)` / `set_exclusive_start_backup_arn(Option<String>)`: `LastEvaluatedBackupArn` is the Amazon Resource Name (ARN) of the backup last evaluated when the current page of results was returned, inclusive of the current page of results. This value may be specified as the `ExclusiveStartBackupArn` of a new `ListBackups` operation in order to fetch the next page of results.
+ `backup_type(BackupTypeFilter)` / `set_backup_type(Option<BackupTypeFilter>)`: Only backups of the specified `BackupType` are listed.
Where `BackupType` can be:
- `USER` - On-demand backup created by you. (The default setting if no other backup types are specified.)
- `SYSTEM` - On-demand backup automatically created by DynamoDB.
- `ALL` - All types of on-demand backups (USER and SYSTEM).
* On success, responds with `ListBackupsOutput` with field(s):
+ `backup_summaries(Option<Vec<BackupSummary>>)`: List of `BackupSummary` objects.
+ `last_evaluated_backup_arn(Option<String>)`: The ARN of the backup last evaluated when the current page of results was returned, inclusive of the current page of results. This value may be specified as the `ExclusiveStartBackupArn` of a new `ListBackups` operation in order to fetch the next page of results.
If `LastEvaluatedBackupArn` is empty, then the last page of results has been processed and there are no more results to be retrieved.
If `LastEvaluatedBackupArn` is not empty, this may or may not indicate that there is more data to be returned. All results are guaranteed to have been returned if and only if no value for `LastEvaluatedBackupArn` is returned.
* On failure, responds with `SdkError<ListBackupsError>`
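##### Example

As a small illustration (the table name is hypothetical; 1.x-style slice accessors assumed), listing every on-demand backup for one table could look like this sketch.

```rust
use aws_sdk_dynamodb::{types::BackupTypeFilter, Client};

// List both user- and system-created backups for one table.
async fn list_music_backups(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .list_backups()
        .table_name("Music")
        .backup_type(BackupTypeFilter::All)
        .send()
        .await?;
    for backup in resp.backup_summaries() {
        println!("{:?} -> {:?}", backup.backup_name(), backup.backup_arn());
    }
    Ok(())
}
```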
### impl Client
#### pub fn list_contributor_insights(&self) -> ListContributorInsightsFluentBuilder
Constructs a fluent builder for the `ListContributorInsights` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A token for the desired page, if there is one.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: Maximum number of results to return per page.
* On success, responds with `ListContributorInsightsOutput` with field(s):
+ `contributor_insights_summaries(Option<Vec<ContributorInsightsSummary>>)`: A list of ContributorInsightsSummary.
+ `next_token(Option<String>)`: A token to go to the next page if there is one.
* On failure, responds with `SdkError<ListContributorInsightsError>`
### impl Client
#### pub fn list_exports(&self) -> ListExportsFluentBuilder
Constructs a fluent builder for the `ListExports` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `table_arn(impl Into<String>)` / `set_table_arn(Option<String>)`: The Amazon Resource Name (ARN) associated with the exported table.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: Maximum number of results to return per page.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: An optional string that, if supplied, must be copied from the output of a previous call to `ListExports`. When provided in this manner, the API fetches the next page of results.
* On success, responds with `ListExportsOutput` with field(s):
+ `export_summaries(Option<Vec<ExportSummary>>)`: A list of `ExportSummary` objects.
+ `next_token(Option<String>)`: If this value is returned, there are additional results to be displayed. To retrieve them, call `ListExports` again, with `NextToken` set to this value.
* On failure, responds with `SdkError<ListExportsError>`
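##### Example

Because this operation paginates, a sketch using `into_paginator()` might look like the following; the table ARN is hypothetical, and a recent SDK is assumed (the pagination stream exposes an inherent async `next()`).

```rust
use aws_sdk_dynamodb::Client;

// Walk every export for one table; the paginator follows NextToken.
async fn print_exports(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let mut pages = client
        .list_exports()
        .table_arn("arn:aws:dynamodb:us-east-1:111122223333:table/Music")
        .into_paginator()
        .send();
    while let Some(page) = pages.next().await {
        let page = page?;
        for export in page.export_summaries() {
            println!("{:?}: {:?}", export.export_arn(), export.export_status());
        }
    }
    Ok(())
}
```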
### impl Client
#### pub fn list_global_tables(&self) -> ListGlobalTablesFluentBuilder
Constructs a fluent builder for the `ListGlobalTables` operation.
* The fluent builder is configurable:
+ `exclusive_start_global_table_name(impl Into<String>)` / `set_exclusive_start_global_table_name(Option<String>)`: The first global table name that this operation will evaluate.
+ `limit(i32)` / `set_limit(Option<i32>)`: The maximum number of table names to return. If this parameter is not specified, DynamoDB defaults to 100.
If the number of global tables DynamoDB finds reaches this limit, it stops the operation and returns the table names collected up to that point, with a table name in the `LastEvaluatedGlobalTableName` to apply in a subsequent operation to the `ExclusiveStartGlobalTableName` parameter.
+ `region_name(impl Into<String>)` / `set_region_name(Option<String>)`: Lists the global tables in a specific Region.
* On success, responds with `ListGlobalTablesOutput` with field(s):
+ `global_tables(Option<Vec<GlobalTable>>)`: List of global tables.
+ `last_evaluated_global_table_name(Option<String>)`: Last evaluated global table name.
* On failure, responds with `SdkError<ListGlobalTablesError>`
### impl Client
#### pub fn list_imports(&self) -> ListImportsFluentBuilder
Constructs a fluent builder for the `ListImports` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `table_arn(impl Into<String>)` / `set_table_arn(Option<String>)`: The Amazon Resource Name (ARN) associated with the table that was imported to.
+ `page_size(i32)` / `set_page_size(Option<i32>)`: The number of `ImportSummary` objects returned in a single page.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: An optional string that, if supplied, must be copied from the output of a previous call to `ListImports`. When provided in this manner, the API fetches the next page of results.
* On success, responds with `ListImportsOutput` with field(s):
+ `import_summary_list(Option<Vec<ImportSummary>>)`: A list of `ImportSummary` objects.
+ `next_token(Option<String>)`: If this value is returned, there are additional results to be displayed. To retrieve them, call `ListImports` again, with `NextToken` set to this value.
* On failure, responds with `SdkError<ListImportsError>`
### impl Client
#### pub fn list_tables(&self) -> ListTablesFluentBuilder
Constructs a fluent builder for the `ListTables` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `exclusive_start_table_name(impl Into<String>)` / `set_exclusive_start_table_name(Option<String>)`: The first table name that this operation will evaluate. Use the value that was returned for `LastEvaluatedTableName` in a previous operation, so that you can obtain the next page of results.
+ `limit(i32)` / `set_limit(Option<i32>)`: A maximum number of table names to return. If this parameter is not specified, the limit is 100.
* On success, responds with `ListTablesOutput` with field(s):
+ `table_names(Option<Vec<String>>)`: The names of the tables associated with the current account at the current endpoint. The maximum size of this array is 100.
If `LastEvaluatedTableName` also appears in the output, you can use this value as the `ExclusiveStartTableName` parameter in a subsequent `ListTables` request and obtain the next page of results.
+ `last_evaluated_table_name(Option<String>)`: The name of the last table in the current page of results. Use this value as the `ExclusiveStartTableName` in a new request to obtain the next page of results, until all the table names are returned.
If you do not receive a `LastEvaluatedTableName` value in the response, this means that there are no more table names to be retrieved.
* On failure, responds with `SdkError<ListTablesError>`
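##### Example

A paginated sketch under the same assumptions as above (recent SDK, hypothetical page size): the paginator re-issues the request with `ExclusiveStartTableName` taken from each `LastEvaluatedTableName` for you.

```rust
use aws_sdk_dynamodb::Client;

// Print every table name at the current endpoint, 25 per page.
async fn print_all_tables(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let mut pages = client.list_tables().limit(25).into_paginator().send();
    while let Some(page) = pages.next().await {
        let page = page?;
        for name in page.table_names() {
            println!("{name}");
        }
    }
    Ok(())
}
```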
### impl Client
#### pub fn list_tags_of_resource(&self) -> ListTagsOfResourceFluentBuilder
Constructs a fluent builder for the `ListTagsOfResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: The Amazon DynamoDB resource with tags to be listed. This value is an Amazon Resource Name (ARN).
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: An optional string that, if supplied, must be copied from the output of a previous call to `ListTagsOfResource`. When provided in this manner, this API fetches the next page of results.
* On success, responds with `ListTagsOfResourceOutput` with field(s):
+ `tags(Option<Vec<Tag>>)`: The tags currently associated with the Amazon DynamoDB resource.
+ `next_token(Option<String>)`: If this value is returned, there are additional results to be displayed. To retrieve them, call `ListTagsOfResource` again, with `NextToken` set to this value.
* On failure, responds with `SdkError<ListTagsOfResourceError>`
### impl Client
#### pub fn put_item(&self) -> PutItemFluentBuilder
Constructs a fluent builder for the `PutItem` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table to contain the item.
+ `item(impl Into<String>, AttributeValue)` / `set_item(Option<HashMap<String, AttributeValue>>)`: A map of attribute name/value pairs, one for each attribute. Only the primary key attributes are required; you can optionally provide other attribute name-value pairs for the item.
You must provide all of the attributes for the primary key. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.
If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table’s attribute definition.
Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index.
For more information about primary keys, see Primary Key in the *Amazon DynamoDB Developer Guide*.
Each element in the `Item` map is an `AttributeValue` object.
+ `expected(impl Into<String>, ExpectedAttributeValue)` / `set_expected(Option<HashMap<String, ExpectedAttributeValue>>)`: This is a legacy parameter. Use `ConditionExpression` instead. For more information, see Expected in the *Amazon DynamoDB Developer Guide*.
+ `return_values(ReturnValue)` / `set_return_values(Option<ReturnValue>)`: Use `ReturnValues` if you want to get the item attributes as they appeared before they were updated with the `PutItem` request. For `PutItem`, the valid values are:
- `NONE` - If `ReturnValues` is not specified, or if its value is `NONE`, then nothing is returned. (This setting is the default for `ReturnValues`.)
- `ALL_OLD` - If `PutItem` overwrote an attribute name-value pair, then the content of the old item is returned.
The values returned are strongly consistent.
There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.
The `ReturnValues` parameter is used by several DynamoDB operations; however, `PutItem` does not recognize any values other than `NONE` or `ALL_OLD`.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `return_item_collection_metrics(ReturnItemCollectionMetrics)` / `set_return_item_collection_metrics(Option<ReturnItemCollectionMetrics>)`: Determines whether item collection metrics are returned. If set to `SIZE`, the response includes statistics about any item collections that were modified during the operation. If set to `NONE` (the default), no statistics are returned.
+ `conditional_operator(ConditionalOperator)` / `set_conditional_operator(Option<ConditionalOperator>)`: This is a legacy parameter. Use `ConditionExpression` instead. For more information, see ConditionalOperator in the *Amazon DynamoDB Developer Guide*.
+ `condition_expression(impl Into<String>)` / `set_condition_expression(Option<String>)`: A condition that must be satisfied in order for a conditional `PutItem` operation to succeed.
An expression can contain any of the following:
- Functions: `attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size`
These function names are case-sensitive.
- Comparison operators: `= | <> | < | > | <= | >= | BETWEEN | IN`
- Logical operators: `AND | OR | NOT`
For more information on condition expressions, see Condition Expressions in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_names(impl Into<String>, impl Into<String>)` / `set_expression_attribute_names(Option<HashMap<String, String>>)`: One or more substitution tokens for attribute names in an expression. The following are some use cases for using `ExpressionAttributeNames`:
- To access an attribute whose name conflicts with a DynamoDB reserved word.
- To create a placeholder for repeating occurrences of an attribute name in an expression.
- To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the **#** character in an expression to dereference an attribute name. For example, consider the following attribute name:
- `Percentile`
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the *Amazon DynamoDB Developer Guide*). To work around this, you could specify the following for `ExpressionAttributeNames`:
- `{"#P":"Percentile"}`
You could then use this substitution in an expression, as in this example:
- `#P = :val`
Tokens that begin with the **:** character are *expression attribute values*, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Specifying Item Attributes in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_values(impl Into<String>, AttributeValue)` / `set_expression_attribute_values(Option<HashMap<String, AttributeValue>>)`: One or more values that can be substituted in an expression.
Use the **:** (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the *ProductStatus* attribute was one of the following:
`Available | Backordered | Discontinued`
You would first need to specify `ExpressionAttributeValues` as follows:
`{ ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }`
You could then use these values in an expression, such as this:
`ProductStatus IN (:avail, :back, :disc)`
For more information on expression attribute values, see Condition Expressions in the *Amazon DynamoDB Developer Guide*.
+ `return_values_on_condition_check_failure(ReturnValuesOnConditionCheckFailure)` / `set_return_values_on_condition_check_failure(Option<ReturnValuesOnConditionCheckFailure>)`: An optional parameter that returns the item attributes for a `PutItem` operation that failed a condition check.
There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.
* On success, responds with `PutItemOutput` with field(s):
+ `attributes(Option<HashMap<String, AttributeValue>>)`: The attribute values as they appeared before the `PutItem` operation, but only if `ReturnValues` is specified as `ALL_OLD` in the request. Each element consists of an attribute name and an attribute value.
+ `consumed_capacity(Option<ConsumedCapacity>)`: The capacity units consumed by the `PutItem` operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. `ConsumedCapacity` is only returned if the `ReturnConsumedCapacity` parameter was specified. For more information, see Provisioned Throughput in the *Amazon DynamoDB Developer Guide*.
+ `item_collection_metrics(Option<ItemCollectionMetrics>)`: Information about item collections, if any, that were affected by the `PutItem` operation. `ItemCollectionMetrics` is only returned if the `ReturnItemCollectionMetrics` parameter was specified. If the table does not have any local secondary indexes, this information is not returned in the response.
Each `ItemCollectionMetrics` element consists of:
- `ItemCollectionKey` - The partition key value of the item collection. This is the same as the partition key value of the item itself.
- `SizeEstimateRangeGB` - An estimate of item collection size, in gigabytes. This value is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on that table. Use this estimate to measure whether a local secondary index is approaching its size limit.
The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.
* On failure, responds with `SdkError<PutItemError>`
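##### Example

Putting several of these parameters together, a conditional write might look like the sketch below (hypothetical table and attribute names). If the condition fails, the error surfaces through `SdkError<PutItemError>` as a conditional check failure.

```rust
use aws_sdk_dynamodb::{types::AttributeValue, Client};

// Insert a song, but only if this primary key does not exist yet.
async fn put_song(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    client
        .put_item()
        .table_name("Music")
        .item("Artist", AttributeValue::S("Acme Band".to_string()))
        .item("SongTitle", AttributeValue::S("Happy Day".to_string()))
        .item("Year", AttributeValue::N("2015".to_string()))
        // Rejects the write instead of silently overwriting.
        .condition_expression("attribute_not_exists(Artist)")
        .send()
        .await?;
    Ok(())
}
```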
### impl Client
#### pub fn query(&self) -> QueryFluentBuilder
Constructs a fluent builder for the `Query` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table containing the requested items.
+ `index_name(impl Into<String>)` / `set_index_name(Option<String>)`: The name of an index to query. This index can be any local secondary index or global secondary index on the table. Note that if you use the `IndexName` parameter, you must also provide `TableName.`
+ `select(Select)` / `set_select(Option<Select>)`: The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index.
- `ALL_ATTRIBUTES` - Returns all of the item attributes from the specified table or index. If you query a local secondary index, then for each matching item in the index, DynamoDB fetches the entire item from the parent table. If the index is configured to project all item attributes, then all of the data can be obtained from the local secondary index, and no fetching is required.
- `ALL_PROJECTED_ATTRIBUTES` - Allowed only when querying an index. Retrieves all attributes that have been projected into the index. If the index is configured to project all attributes, this return value is equivalent to specifying `ALL_ATTRIBUTES`.
- `COUNT` - Returns the number of matching items, rather than the matching items themselves. Note that this uses the same quantity of read capacity units as getting the items, and is subject to the same item size calculations.
- `SPECIFIC_ATTRIBUTES` - Returns only the attributes listed in `ProjectionExpression`. This return value is equivalent to specifying `ProjectionExpression` without specifying any value for `Select`.
If you query or scan a local secondary index and request only attributes that are projected into that index, the operation will read only the index and not the table. If any of the requested attributes are not projected into the local secondary index, DynamoDB fetches each of these attributes from the parent table. This extra fetching incurs additional throughput cost and latency.
If you query or scan a global secondary index, you can only request attributes that are projected into the index. Global secondary index queries cannot fetch attributes from the parent table.
If neither `Select` nor `ProjectionExpression` is specified, DynamoDB defaults to `ALL_ATTRIBUTES` when accessing a table, and `ALL_PROJECTED_ATTRIBUTES` when accessing an index. You cannot use both `Select` and `ProjectionExpression` together in a single request, unless the value for `Select` is `SPECIFIC_ATTRIBUTES`. (This usage is equivalent to specifying `ProjectionExpression` without any value for `Select`.)
If you use the `ProjectionExpression` parameter, then the value for `Select` can only be `SPECIFIC_ATTRIBUTES`. Any other value for `Select` will return an error.
+ `attributes_to_get(impl Into<String>)` / `set_attributes_to_get(Option<Vec<String>>)`: This is a legacy parameter. Use `ProjectionExpression` instead. For more information, see AttributesToGet in the *Amazon DynamoDB Developer Guide*.
+ `limit(i32)` / `set_limit(Option<i32>)`: The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, and a key in `LastEvaluatedKey` to apply in a subsequent operation, so that you can pick up where you left off. Also, if the processed dataset size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a key in `LastEvaluatedKey` to apply in a subsequent operation to continue the operation. For more information, see Query and Scan in the *Amazon DynamoDB Developer Guide*.
+ `consistent_read(bool)` / `set_consistent_read(Option<bool>)`: Determines the read consistency model: If set to `true`, then the operation uses strongly consistent reads; otherwise, the operation uses eventually consistent reads.
Strongly consistent reads are not supported on global secondary indexes. If you query a global secondary index with `ConsistentRead` set to `true`, you will receive a `ValidationException`.
+ `key_conditions(impl Into<String>, Condition)` / `set_key_conditions(Option<HashMap<String, Condition>>)`: This is a legacy parameter. Use `KeyConditionExpression` instead. For more information, see KeyConditions in the *Amazon DynamoDB Developer Guide*.
+ `query_filter(impl Into<String>, Condition)` / `set_query_filter(Option<HashMap<String, Condition>>)`: This is a legacy parameter. Use `FilterExpression` instead. For more information, see QueryFilter in the *Amazon DynamoDB Developer Guide*.
+ `conditional_operator(ConditionalOperator)` / `set_conditional_operator(Option<ConditionalOperator>)`: This is a legacy parameter. Use `FilterExpression` instead. For more information, see ConditionalOperator in the *Amazon DynamoDB Developer Guide*.
+ `scan_index_forward(bool)` / `set_scan_index_forward(Option<bool>)`: Specifies the order for index traversal: If `true` (default), the traversal is performed in ascending order; if `false`, the traversal is performed in descending order.
Items with the same partition key value are stored in sorted order by sort key. If the sort key data type is Number, the results are stored in numeric order. For type String, the results are stored in order of UTF-8 bytes. For type Binary, DynamoDB treats each byte of the binary data as unsigned.
If `ScanIndexForward` is `true`, DynamoDB returns the results in the order in which they are stored (by sort key value). This is the default behavior. If `ScanIndexForward` is `false`, DynamoDB reads the results in reverse order by sort key value, and then returns the results to the client.
+ `exclusive_start_key(impl Into<String>, AttributeValue)` / `set_exclusive_start_key(Option<HashMap<String, AttributeValue>>)`: The primary key of the first item that this operation will evaluate. Use the value that was returned for `LastEvaluatedKey` in the previous operation.
The data type for `ExclusiveStartKey` must be String, Number, or Binary. No set data types are allowed.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `projection_expression(impl Into<String>)` / `set_projection_expression(Option<String>)`: A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.
If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.
For more information, see Accessing Item Attributes in the *Amazon DynamoDB Developer Guide*.
+ `filter_expression(impl Into<String>)` / `set_filter_expression(Option<String>)`: A string that contains conditions that DynamoDB applies after the `Query` operation, but before the data is returned to you. Items that do not satisfy the `FilterExpression` criteria are not returned.
A `FilterExpression` does not allow key attributes. You cannot define a filter expression based on a partition key or a sort key.
A `FilterExpression` is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.
For more information, see Filter Expressions in the *Amazon DynamoDB Developer Guide*.
+ `key_condition_expression(impl Into<String>)` / `set_key_condition_expression(Option<String>)`: The condition that specifies the key values for items to be retrieved by the `Query` action.
The condition must perform an equality test on a single partition key value.
The condition can optionally perform one of several comparison tests on a single sort key value. This allows `Query` to retrieve one item with a given partition key value and sort key value, or several items that have the same partition key value but different sort key values.
The partition key equality test is required, and must be specified in the following format:
`partitionKeyName = :partitionkeyval`
If you also want to provide a condition for the sort key, it must be combined using `AND` with the condition for the partition key. Following is an example, using the **=** comparison operator for the sort key:
`partitionKeyName = :partitionkeyval AND sortKeyName = :sortkeyval`
Valid comparisons for the sort key condition are as follows:
- `sortKeyName` `=` `:sortkeyval` - true if the sort key value is equal to `:sortkeyval`.
- `sortKeyName` `<` `:sortkeyval` - true if the sort key value is less than `:sortkeyval`.
- `sortKeyName` `<=` `:sortkeyval` - true if the sort key value is less than or equal to `:sortkeyval`.
- `sortKeyName` `>` `:sortkeyval` - true if the sort key value is greater than `:sortkeyval`.
- `sortKeyName` `>=` `:sortkeyval` - true if the sort key value is greater than or equal to `:sortkeyval`.
- `sortKeyName` `BETWEEN` `:sortkeyval1` `AND` `:sortkeyval2` - true if the sort key value is greater than or equal to `:sortkeyval1`, and less than or equal to `:sortkeyval2`.
- `begins_with ( sortKeyName, :sortkeyval )` - true if the sort key value begins with a particular operand. (You cannot use this function with a sort key that is of type Number.) Note that the function name `begins_with` is case-sensitive.
Use the `ExpressionAttributeValues` parameter to replace tokens such as `:partitionkeyval` and `:sortkeyval` with actual values at runtime.
You can optionally use the `ExpressionAttributeNames` parameter to replace the names of the partition key and sort key with placeholder tokens. This option might be necessary if an attribute name conflicts with a DynamoDB reserved word. For example, the following `KeyConditionExpression` parameter causes an error because *Size* is a reserved word:
- `Size = :myval`
To work around this, define a placeholder (such as `#S`) to represent the attribute name *Size*. `KeyConditionExpression` then is as follows:
- `#S = :myval`
For a list of reserved words, see Reserved Words in the *Amazon DynamoDB Developer Guide*.
For more information on `ExpressionAttributeNames` and `ExpressionAttributeValues`, see Using Placeholders for Attribute Names and Values in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_names(impl Into<String>, impl Into<String>)` / `set_expression_attribute_names(Option<HashMap<String, String>>)`: One or more substitution tokens for attribute names in an expression. The following are some use cases for using `ExpressionAttributeNames`:
- To access an attribute whose name conflicts with a DynamoDB reserved word.
- To create a placeholder for repeating occurrences of an attribute name in an expression.
- To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the **#** character in an expression to dereference an attribute name. For example, consider the following attribute name:
- `Percentile`
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the *Amazon DynamoDB Developer Guide*). To work around this, you could specify the following for `ExpressionAttributeNames`:
- `{"#P":"Percentile"}`
You could then use this substitution in an expression, as in this example:
- `#P = :val`
Tokens that begin with the **:** character are *expression attribute values*, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Specifying Item Attributes in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_values(impl Into<String>, AttributeValue)` / `set_expression_attribute_values(Option<HashMap<String, AttributeValue>>)`: One or more values that can be substituted in an expression.
Use the **:** (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the *ProductStatus* attribute was one of the following:
`Available | Backordered | Discontinued`
You would first need to specify `ExpressionAttributeValues` as follows:
`{ ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }`
You could then use these values in an expression, such as this:
`ProductStatus IN (:avail, :back, :disc)`
For more information on expression attribute values, see Specifying Conditions in the *Amazon DynamoDB Developer Guide*.
* On success, responds with `QueryOutput` with field(s):
+ `items(Option<Vec<HashMap<String, AttributeValue>>>)`: An array of item attributes that match the query criteria. Each element in this array consists of an attribute name and the value for that attribute.
+ `count(i32)`: The number of items in the response.
If you used a `QueryFilter` in the request, then `Count` is the number of items returned after the filter was applied, and `ScannedCount` is the number of matching items before the filter was applied.
If you did not use a filter in the request, then `Count` and `ScannedCount` are the same.
+ `scanned_count(i32)`: The number of items evaluated, before any `QueryFilter` is applied. A high `ScannedCount` value with few, or no, `Count` results indicates an inefficient `Query` operation. For more information, see Count and ScannedCount in the *Amazon DynamoDB Developer Guide*.
If you did not use a filter in the request, then `ScannedCount` is the same as `Count`.
+ `last_evaluated_key(Option<HashMap<String, AttributeValue>>)`: The primary key of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request.
If `LastEvaluatedKey` is empty, then the “last page” of results has been processed and there is no more data to be retrieved.
If `LastEvaluatedKey` is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when `LastEvaluatedKey` is empty.
+ `consumed_capacity(Option<ConsumedCapacity>)`: The capacity units consumed by the `Query` operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. `ConsumedCapacity` is only returned if the `ReturnConsumedCapacity` parameter was specified. For more information, see Provisioned Throughput in the *Amazon DynamoDB Developer Guide*.
* On failure, responds with `SdkError<QueryError>`
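##### Example

A sketch combining a key condition, expression attribute values, and descending traversal (the table and attribute names are hypothetical).

```rust
use aws_sdk_dynamodb::{types::AttributeValue, Client};

// All songs in one partition whose titles start with "Ha",
// traversed in descending sort-key order.
async fn query_songs(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .query()
        .table_name("Music")
        .key_condition_expression("Artist = :artist AND begins_with(SongTitle, :prefix)")
        .expression_attribute_values(":artist", AttributeValue::S("Acme Band".to_string()))
        .expression_attribute_values(":prefix", AttributeValue::S("Ha".to_string()))
        .scan_index_forward(false) // descending sort-key order
        .send()
        .await?;
    println!("matched {} of {} evaluated", resp.count(), resp.scanned_count());
    for item in resp.items() {
        println!("{item:?}");
    }
    Ok(())
}
```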
### impl Client
#### pub fn restore_table_from_backup(&self) -> RestoreTableFromBackupFluentBuilder
Constructs a fluent builder for the `RestoreTableFromBackup` operation.
* The fluent builder is configurable:
+ `target_table_name(impl Into<String>)` / `set_target_table_name(Option<String>)`: The name of the new table to which the backup must be restored.
+ `backup_arn(impl Into<String>)` / `set_backup_arn(Option<String>)`: The Amazon Resource Name (ARN) associated with the backup.
+ `billing_mode_override(BillingMode)` / `set_billing_mode_override(Option<BillingMode>)`: The billing mode of the restored table.
+ `global_secondary_index_override(GlobalSecondaryIndex)` / `set_global_secondary_index_override(Option<Vec<GlobalSecondaryIndex>>)`: List of global secondary indexes for the restored table. The indexes provided should match existing secondary indexes. You can choose to exclude some or all of the indexes at the time of restore.
+ `local_secondary_index_override(LocalSecondaryIndex)` / `set_local_secondary_index_override(Option<Vec<LocalSecondaryIndex>>)`: List of local secondary indexes for the restored table. The indexes provided should match existing secondary indexes. You can choose to exclude some or all of the indexes at the time of restore.
+ `provisioned_throughput_override(ProvisionedThroughput)` / `set_provisioned_throughput_override(Option<ProvisionedThroughput>)`: Provisioned throughput settings for the restored table.
+ `sse_specification_override(SseSpecification)` / `set_sse_specification_override(Option<SseSpecification>)`: The new server-side encryption settings for the restored table.
* On success, responds with `RestoreTableFromBackupOutput` with field(s):
+ `table_description(Option<TableDescription>)`: The description of the table created from an existing backup.
* On failure, responds with `SdkError<RestoreTableFromBackupError>`
### impl Client
#### pub fn restore_table_to_point_in_time(
&self
) -> RestoreTableToPointInTimeFluentBuilder
Constructs a fluent builder for the `RestoreTableToPointInTime` operation.
* The fluent builder is configurable:
+ `source_table_arn(impl Into<String>)` / `set_source_table_arn(Option<String>)`: The DynamoDB table that will be restored. This value is an Amazon Resource Name (ARN).
+ `source_table_name(impl Into<String>)` / `set_source_table_name(Option<String>)`: Name of the source table that is being restored.
+ `target_table_name(impl Into<String>)` / `set_target_table_name(Option<String>)`: The name of the new table to which the source table must be restored.
+ `use_latest_restorable_time(bool)` / `set_use_latest_restorable_time(Option<bool>)`: Restore the table to the latest possible time. `LatestRestorableDateTime` is typically 5 minutes before the current time.
+ `restore_date_time(DateTime)` / `set_restore_date_time(Option<DateTime>)`: Time in the past to restore the table to.
+ `billing_mode_override(BillingMode)` / `set_billing_mode_override(Option<BillingMode>)`: The billing mode of the restored table.
+ `global_secondary_index_override(GlobalSecondaryIndex)` / `set_global_secondary_index_override(Option<Vec<GlobalSecondaryIndex>>)`: List of global secondary indexes for the restored table. The indexes provided should match existing secondary indexes. You can choose to exclude some or all of the indexes at the time of restore.
+ `local_secondary_index_override(LocalSecondaryIndex)` / `set_local_secondary_index_override(Option<Vec<LocalSecondaryIndex>>)`: List of local secondary indexes for the restored table. The indexes provided should match existing secondary indexes. You can choose to exclude some or all of the indexes at the time of restore.
+ `provisioned_throughput_override(ProvisionedThroughput)` / `set_provisioned_throughput_override(Option<ProvisionedThroughput>)`: Provisioned throughput settings for the restored table.
+ `sse_specification_override(SseSpecification)` / `set_sse_specification_override(Option<SseSpecification>)`: The new server-side encryption settings for the restored table.
* On success, responds with `RestoreTableToPointInTimeOutput` with field(s):
+ `table_description(Option<TableDescription>)`: Represents the properties of a table.
* On failure, responds with `SdkError<RestoreTableToPointInTimeError>`
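##### Example

A minimal sketch restoring to the latest restorable time under a new name (hypothetical table names).

```rust
use aws_sdk_dynamodb::Client;

// Restore "Music" to its latest restorable time as "Music-Restored".
async fn restore_latest(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .restore_table_to_point_in_time()
        .source_table_name("Music")
        .target_table_name("Music-Restored")
        .use_latest_restorable_time(true)
        .send()
        .await?;
    println!(
        "status: {:?}",
        resp.table_description().and_then(|t| t.table_status())
    );
    Ok(())
}
```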
### impl Client
#### pub fn scan(&self) -> ScanFluentBuilder
Constructs a fluent builder for the `Scan` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table containing the requested items; or, if you provide `IndexName`, the name of the table to which that index belongs.
+ `index_name(impl Into<String>)` / `set_index_name(Option<String>)`: The name of a secondary index to scan. This index can be any local secondary index or global secondary index. Note that if you use the `IndexName` parameter, you must also provide `TableName`.
+ `attributes_to_get(impl Into<String>)` / `set_attributes_to_get(Option<Vec<String>>)`: This is a legacy parameter. Use `ProjectionExpression` instead. For more information, see AttributesToGet in the *Amazon DynamoDB Developer Guide*.
+ `limit(i32)` / `set_limit(Option<i32>)`: The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, and a key in `LastEvaluatedKey` to apply in a subsequent operation, so that you can pick up where you left off. Also, if the processed dataset size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a key in `LastEvaluatedKey` to apply in a subsequent operation to continue the operation. For more information, see Working with Queries in the *Amazon DynamoDB Developer Guide*.
+ `select(Select)` / `set_select(Option<Select>)`: The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index.
- `ALL_ATTRIBUTES` - Returns all of the item attributes from the specified table or index. If you query a local secondary index, then for each matching item in the index, DynamoDB fetches the entire item from the parent table. If the index is configured to project all item attributes, then all of the data can be obtained from the local secondary index, and no fetching is required.
- `ALL_PROJECTED_ATTRIBUTES` - Allowed only when querying an index. Retrieves all attributes that have been projected into the index. If the index is configured to project all attributes, this return value is equivalent to specifying `ALL_ATTRIBUTES`.
- `COUNT` - Returns the number of matching items, rather than the matching items themselves. Note that this uses the same quantity of read capacity units as getting the items, and is subject to the same item size calculations.
- `SPECIFIC_ATTRIBUTES` - Returns only the attributes listed in `ProjectionExpression`. This return value is equivalent to specifying `ProjectionExpression` without specifying any value for `Select`.
If you query or scan a local secondary index and request only attributes that are projected into that index, the operation reads only the index and not the table. If any of the requested attributes are not projected into the local secondary index, DynamoDB fetches each of these attributes from the parent table. This extra fetching incurs additional throughput cost and latency.
If you query or scan a global secondary index, you can only request attributes that are projected into the index. Global secondary index queries cannot fetch attributes from the parent table.
If neither `Select` nor `ProjectionExpression` is specified, DynamoDB defaults to `ALL_ATTRIBUTES` when accessing a table, and `ALL_PROJECTED_ATTRIBUTES` when accessing an index. You cannot use both `Select` and `ProjectionExpression` together in a single request, unless the value for `Select` is `SPECIFIC_ATTRIBUTES`. (This usage is equivalent to specifying `ProjectionExpression` without any value for `Select`.)
If you use the `ProjectionExpression` parameter, then the value for `Select` can only be `SPECIFIC_ATTRIBUTES`. Any other value for `Select` will return an error.
+ `scan_filter(impl Into<String>, Condition)` / `set_scan_filter(Option<HashMap<String, Condition>>)`: This is a legacy parameter. Use `FilterExpression` instead. For more information, see ScanFilter in the *Amazon DynamoDB Developer Guide*.
+ `conditional_operator(ConditionalOperator)` / `set_conditional_operator(Option<ConditionalOperator>)`: This is a legacy parameter. Use `FilterExpression` instead. For more information, see ConditionalOperator in the *Amazon DynamoDB Developer Guide*.
+ `exclusive_start_key(impl Into<String>, AttributeValue)` / `set_exclusive_start_key(Option<HashMap<String, AttributeValue>>)`: The primary key of the first item that this operation will evaluate. Use the value that was returned for `LastEvaluatedKey` in the previous operation.
The data type for `ExclusiveStartKey` must be String, Number, or Binary. No set data types are allowed.
In a parallel scan, a `Scan` request that includes `ExclusiveStartKey` must specify the same segment whose previous `Scan` returned the corresponding value of `LastEvaluatedKey`.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `total_segments(i32)` / `set_total_segments(Option<i32>)`: For a parallel `Scan` request, `TotalSegments` represents the total number of segments into which the `Scan` operation will be divided. The value of `TotalSegments` corresponds to the number of application workers that will perform the parallel scan. For example, if you want to use four application threads to scan a table or an index, specify a `TotalSegments` value of 4.
The value for `TotalSegments` must be greater than or equal to 1, and less than or equal to 1000000. If you specify a `TotalSegments` value of 1, the `Scan` operation will be sequential rather than parallel.
If you specify `TotalSegments`, you must also specify `Segment`.
+ `segment(i32)` / `set_segment(Option<i32>)`: For a parallel `Scan` request, `Segment` identifies an individual segment to be scanned by an application worker.
Segment IDs are zero-based, so the first segment is always 0. For example, if you want to use four application threads to scan a table or an index, then the first thread specifies a `Segment` value of 0, the second thread specifies 1, and so on.
The value of `LastEvaluatedKey` returned from a parallel `Scan` request must be used as `ExclusiveStartKey` with the same segment ID in a subsequent `Scan` operation.
The value for `Segment` must be greater than or equal to 0, and less than the value provided for `TotalSegments`.
If you provide `Segment`, you must also provide `TotalSegments`.
+ `projection_expression(impl Into<String>)` / `set_projection_expression(Option<String>)`: A string that identifies one or more attributes to retrieve from the specified table or index. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.
If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.
For more information, see Specifying Item Attributes in the *Amazon DynamoDB Developer Guide*.
+ `filter_expression(impl Into<String>)` / `set_filter_expression(Option<String>)`: A string that contains conditions that DynamoDB applies after the `Scan` operation, but before the data is returned to you. Items that do not satisfy the `FilterExpression` criteria are not returned.
A `FilterExpression` is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.
For more information, see Filter Expressions in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_names(impl Into<String>, impl Into<String>)` / `set_expression_attribute_names(Option<HashMap<String, String>>)`: One or more substitution tokens for attribute names in an expression. The following are some use cases for using `ExpressionAttributeNames`:
- To access an attribute whose name conflicts with a DynamoDB reserved word.
- To create a placeholder for repeating occurrences of an attribute name in an expression.
- To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the **#** character in an expression to dereference an attribute name. For example, consider the following attribute name:
- `Percentile`
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the *Amazon DynamoDB Developer Guide*). To work around this, you could specify the following for `ExpressionAttributeNames`:
- `{"#P":"Percentile"}`
You could then use this substitution in an expression, as in this example:
- `#P = :val`
Tokens that begin with the **:** character are *expression attribute values*, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Specifying Item Attributes in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_values(impl Into<String>, AttributeValue)` / `set_expression_attribute_values(Option<HashMap<String, AttributeValue>>)`: One or more values that can be substituted in an expression.
Use the **:** (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the `ProductStatus` attribute was one of the following:
`Available | Backordered | Discontinued`
You would first need to specify `ExpressionAttributeValues` as follows:
`{ ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }`
You could then use these values in an expression, such as this:
`ProductStatus IN (:avail, :back, :disc)`
For more information on expression attribute values, see Condition Expressions in the *Amazon DynamoDB Developer Guide*.
+ `consistent_read(bool)` / `set_consistent_read(Option<bool>)`: A Boolean value that determines the read consistency model during the scan:
- If `ConsistentRead` is `false`, then the data returned from `Scan` might not contain the results from other recently completed write operations (`PutItem`, `UpdateItem`, or `DeleteItem`).
- If `ConsistentRead` is `true`, then all of the write operations that completed before the `Scan` began are guaranteed to be contained in the `Scan` response.
The default setting for `ConsistentRead` is `false`.
The `ConsistentRead` parameter is not supported on global secondary indexes. If you scan a global secondary index with `ConsistentRead` set to `true`, you will receive a `ValidationException`.
* On success, responds with `ScanOutput` with field(s):
+ `items(Option<Vec<HashMap<String, AttributeValue>>>)`: An array of item attributes that match the scan criteria. Each element in this array consists of an attribute name and the value for that attribute.
+ `count(i32)`: The number of items in the response.
If you set `ScanFilter` in the request, then `Count` is the number of items returned after the filter was applied, and `ScannedCount` is the number of matching items before the filter was applied.
If you did not use a filter in the request, then `Count` is the same as `ScannedCount`.
+ `scanned_count(i32)`: The number of items evaluated, before any `ScanFilter` is applied. A high `ScannedCount` value with few, or no, `Count` results indicates an inefficient `Scan` operation. For more information, see Count and ScannedCount in the *Amazon DynamoDB Developer Guide*.
If you did not use a filter in the request, then `ScannedCount` is the same as `Count`.
+ `last_evaluated_key(Option<HashMap<String, AttributeValue>>)`: The primary key of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request.
If `LastEvaluatedKey` is empty, then the “last page” of results has been processed and there is no more data to be retrieved.
If `LastEvaluatedKey` is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when `LastEvaluatedKey` is empty.
+ `consumed_capacity(Option<ConsumedCapacity>)`: The capacity units consumed by the `Scan` operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. `ConsumedCapacity` is only returned if the `ReturnConsumedCapacity` parameter was specified. For more information, see Provisioned Throughput in the *Amazon DynamoDB Developer Guide*.
* On failure, responds with `SdkError<ScanError>`
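##### Example

A sketch of a filtered `Scan` (hypothetical names); because the filter runs after the read, `ScannedCount` can far exceed `Count` without reducing consumed capacity.

```rust
use aws_sdk_dynamodb::{types::AttributeValue, Client};

// Scan a table, keeping only items in certain statuses.
async fn scan_available(client: &Client) -> Result<(), aws_sdk_dynamodb::Error> {
    let resp = client
        .scan()
        .table_name("ProductCatalog")
        .filter_expression("ProductStatus IN (:avail, :back)")
        .expression_attribute_values(":avail", AttributeValue::S("Available".to_string()))
        .expression_attribute_values(":back", AttributeValue::S("Backordered".to_string()))
        .send()
        .await?;
    println!("kept {} of {} scanned items", resp.count(), resp.scanned_count());
    Ok(())
}
```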
### impl Client
#### pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the `TagResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: Identifies the Amazon DynamoDB resource to which tags should be added. This value is an Amazon Resource Name (ARN).
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: The tags to be assigned to the Amazon DynamoDB resource.
* On success, responds with `TagResourceOutput`
* On failure, responds with `SdkError<TagResourceError>`
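##### Example

A minimal sketch (the ARN and tag values are hypothetical; in recent SDK versions `Tag::builder().build()` returns a `Result` because both `key` and `value` are required).

```rust
use aws_sdk_dynamodb::{types::Tag, Client};

// Attach one tag to a table.
async fn tag_table(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let tag = Tag::builder().key("environment").value("staging").build()?;
    client
        .tag_resource()
        .resource_arn("arn:aws:dynamodb:us-east-1:111122223333:table/Music")
        .tags(tag)
        .send()
        .await?;
    Ok(())
}
```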
### impl Client
#### pub fn transact_get_items(&self) -> TransactGetItemsFluentBuilder
Constructs a fluent builder for the `TransactGetItems` operation.
* The fluent builder is configurable:
+ `transact_items(TransactGetItem)` / `set_transact_items(Option<Vec<TransactGetItem>>)`: An ordered array of up to 100 `TransactGetItem` objects, each of which contains a `Get` structure.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: A value of `TOTAL` causes consumed capacity information to be returned, and a value of `NONE` prevents that information from being returned. No other value is valid.
* On success, responds with `TransactGetItemsOutput` with field(s):
+ `consumed_capacity(Option<Vec<ConsumedCapacity>>)`: If the *ReturnConsumedCapacity* value was `TOTAL`, this is an array of `ConsumedCapacity` objects, one for each table addressed by `TransactGetItem` objects in the *TransactItems* parameter. These `ConsumedCapacity` objects report the read-capacity units consumed by the `TransactGetItems` call in that table.
+ `responses(Option<Vec<ItemResponse>>)`: An ordered array of up to 100 `ItemResponse` objects, each of which corresponds to the `TransactGetItem` object in the same position in the *TransactItems* array. Each `ItemResponse` object contains a Map of the name-value pairs that are the projected attributes of the requested item.
If a requested item could not be retrieved, the corresponding `ItemResponse` object is null; if the requested item has no projected attributes, the corresponding `ItemResponse` object is an empty map.
* On failure, responds with `SdkError<TransactGetItemsError>`
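##### Example

A sketch reading two items from different tables in one call (hypothetical tables and keys; aws-sdk-dynamodb 1.x assumed, where the `Get` and `TransactGetItem` builders enforce required fields via `Result`).

```rust
use aws_sdk_dynamodb::types::{AttributeValue, Get, TransactGetItem};
use aws_sdk_dynamodb::Client;

// Read an order and its customer in a single transactional snapshot.
async fn read_pair(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let get_order = Get::builder()
        .table_name("Orders")
        .key("OrderId", AttributeValue::S("o-1001".to_string()))
        .build()?;
    let get_customer = Get::builder()
        .table_name("Customers")
        .key("CustomerId", AttributeValue::S("c-42".to_string()))
        .build()?;
    let resp = client
        .transact_get_items()
        .transact_items(TransactGetItem::builder().get(get_order).build()?)
        .transact_items(TransactGetItem::builder().get(get_customer).build()?)
        .send()
        .await?;
    // Responses come back in the same order as the requests.
    for item_resp in resp.responses() {
        println!("{:?}", item_resp.item());
    }
    Ok(())
}
```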
### impl Client
#### pub fn transact_write_items(&self) -> TransactWriteItemsFluentBuilder
Constructs a fluent builder for the `TransactWriteItems` operation.
* The fluent builder is configurable:
+ `transact_items(TransactWriteItem)` / `set_transact_items(Option<Vec<TransactWriteItem>>)`: An ordered array of up to 100 `TransactWriteItem` objects, each of which contains a `ConditionCheck`, `Put`, `Update`, or `Delete` object. These can operate on items in different tables, but the tables must reside in the same Amazon Web Services account and Region, and no two of them can operate on the same item.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `return_item_collection_metrics(ReturnItemCollectionMetrics)` / `set_return_item_collection_metrics(Option<ReturnItemCollectionMetrics>)`: Determines whether item collection metrics are returned. If set to `SIZE`, the response includes statistics about item collections (if any) that were modified during the operation. If set to `NONE` (the default), no statistics are returned.
+ `client_request_token(impl Into<String>)` / `set_client_request_token(Option<String>)`: Providing a `ClientRequestToken` makes the call to `TransactWriteItems` idempotent, meaning that multiple identical calls have the same effect as one single call.
Although multiple identical calls using the same client request token produce the same result on the server (no side effects), the responses to the calls might not be the same. If the `ReturnConsumedCapacity` parameter is set, then the initial `TransactWriteItems` call returns the amount of write capacity units consumed in making the changes. Subsequent `TransactWriteItems` calls with the same client token return the number of read capacity units consumed in reading the item.
A client request token is valid for 10 minutes after the first request that uses it is completed. After 10 minutes, any request with the same client token is treated as a new request. Do not resubmit the same request with the same client token for more than 10 minutes, or the result might not be idempotent.
If you submit a request with the same client token but a change in other parameters within the 10-minute idempotency window, DynamoDB returns an `IdempotentParameterMismatch` exception.
* On success, responds with `TransactWriteItemsOutput` with field(s):
+ `consumed_capacity(Option<Vec<ConsumedCapacity>>)`: The capacity units consumed by the entire `TransactWriteItems` operation. The values of the list are ordered according to the ordering of the `TransactItems` request parameter.
+ `item_collection_metrics(Option<HashMap<String, Vec<ItemCollectionMetrics>>>)`: A list of tables that were processed by `TransactWriteItems` and, for each table, information about any item collections that were affected by individual `UpdateItem`, `PutItem`, or `DeleteItem` operations.
* On failure, responds with `SdkError<TransactWriteItemsError>`
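A hedged sketch of an atomic two-item write with an idempotency token follows; the `ledger` table, item attributes, and token value are hypothetical, and the infallible `build()` calls assume an SDK release with infallible builders:

```
use aws_sdk_dynamodb::{
    types::{AttributeValue, Put, TransactWriteItem},
    Client,
};

// Writes two items atomically; the client request token makes retries
// of this exact request idempotent for roughly 10 minutes.
async fn transfer(client: &Client, token: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    let debit = TransactWriteItem::builder()
        .put(
            Put::builder()
                .table_name("ledger") // hypothetical table
                .item("entry_id", AttributeValue::S("debit-1".into()))
                .build(),
        )
        .build();
    let credit = TransactWriteItem::builder()
        .put(
            Put::builder()
                .table_name("ledger")
                .item("entry_id", AttributeValue::S("credit-1".into()))
                .build(),
        )
        .build();
    client
        .transact_write_items()
        .transact_items(debit)
        .transact_items(credit)
        .client_request_token(token)
        .send()
        .await?;
    Ok(())
}
```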
### impl Client
#### pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the `UntagResource` operation.
* The fluent builder is configurable:
+ `resource_arn(impl Into<String>)` / `set_resource_arn(Option<String>)`: The DynamoDB resource that the tags will be removed from. This value is an Amazon Resource Name (ARN).
+ `tag_keys(impl Into<String>)` / `set_tag_keys(Option<Vec<String>>)`: A list of tag keys. Existing tags of the resource whose keys are members of this list will be removed from the DynamoDB resource.
* On success, responds with `UntagResourceOutput`
* On failure, responds with `SdkError<UntagResourceError>`
### impl Client
#### pub fn update_continuous_backups(&self) -> UpdateContinuousBackupsFluentBuilder
Constructs a fluent builder for the `UpdateContinuousBackups` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table.
+ `point_in_time_recovery_specification(PointInTimeRecoverySpecification)` / `set_point_in_time_recovery_specification(Option<PointInTimeRecoverySpecification>)`: Represents the settings used to enable point in time recovery.
* On success, responds with `UpdateContinuousBackupsOutput` with field(s):
+ `continuous_backups_description(Option<ContinuousBackupsDescription>)`: Represents the continuous backups and point in time recovery settings on the table.
* On failure, responds with `SdkError<UpdateContinuousBackupsError>`
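A minimal sketch of enabling point-in-time recovery through this builder (the table name comes from the caller; the infallible `build()` assumes an SDK release with infallible builders):

```
use aws_sdk_dynamodb::{types::PointInTimeRecoverySpecification, Client};

// Enables point-in-time recovery on a table.
async fn enable_pitr(client: &Client, table: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    let output = client
        .update_continuous_backups()
        .table_name(table)
        .point_in_time_recovery_specification(
            PointInTimeRecoverySpecification::builder()
                .point_in_time_recovery_enabled(true)
                .build(),
        )
        .send()
        .await?;
    println!("{:?}", output.continuous_backups_description());
    Ok(())
}
```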
### impl Client
#### pub fn update_contributor_insights(
&self
) -> UpdateContributorInsightsFluentBuilder
Constructs a fluent builder for the `UpdateContributorInsights` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table.
+ `index_name(impl Into<String>)` / `set_index_name(Option<String>)`: The global secondary index name, if applicable.
+ `contributor_insights_action(ContributorInsightsAction)` / `set_contributor_insights_action(Option<ContributorInsightsAction>)`: Represents the contributor insights action.
* On success, responds with `UpdateContributorInsightsOutput` with field(s):
+ `table_name(Option<String>)`: The name of the table.
+ `index_name(Option<String>)`: The name of the global secondary index, if applicable.
+ `contributor_insights_status(Option<ContributorInsightsStatus>)`: The status of contributor insights.
* On failure, responds with `SdkError<UpdateContributorInsightsError>`
### impl Client
#### pub fn update_global_table(&self) -> UpdateGlobalTableFluentBuilder
Constructs a fluent builder for the `UpdateGlobalTable` operation.
* The fluent builder is configurable:
+ `global_table_name(impl Into<String>)` / `set_global_table_name(Option<String>)`: The global table name.
+ `replica_updates(ReplicaUpdate)` / `set_replica_updates(Option<Vec<ReplicaUpdate>>)`: A list of Regions that should be added or removed from the global table.
* On success, responds with `UpdateGlobalTableOutput` with field(s):
+ `global_table_description(Option<GlobalTableDescription>)`: Contains the details of the global table.
* On failure, responds with `SdkError<UpdateGlobalTableError>`
### impl Client
#### pub fn update_global_table_settings(
&self
) -> UpdateGlobalTableSettingsFluentBuilder
Constructs a fluent builder for the `UpdateGlobalTableSettings` operation.
* The fluent builder is configurable:
+ `global_table_name(impl Into<String>)` / `set_global_table_name(Option<String>)`: The name of the global table
+ `global_table_billing_mode(BillingMode)` / `set_global_table_billing_mode(Option<BillingMode>)`: The billing mode of the global table. If `GlobalTableBillingMode` is not specified, the global table defaults to `PROVISIONED` capacity billing mode.
- `PROVISIONED` - We recommend using `PROVISIONED` for predictable workloads. `PROVISIONED` sets the billing mode to Provisioned Mode.
- `PAY_PER_REQUEST` - We recommend using `PAY_PER_REQUEST` for unpredictable workloads. `PAY_PER_REQUEST` sets the billing mode to On-Demand Mode.
+ `global_table_provisioned_write_capacity_units(i64)` / `set_global_table_provisioned_write_capacity_units(Option<i64>)`: The maximum number of writes consumed per second before DynamoDB returns a `ThrottlingException`.
+ `global_table_provisioned_write_capacity_auto_scaling_settings_update(AutoScalingSettingsUpdate)` / `set_global_table_provisioned_write_capacity_auto_scaling_settings_update(Option<AutoScalingSettingsUpdate>)`: Auto scaling settings for managing provisioned write capacity for the global table.
+ `global_table_global_secondary_index_settings_update(GlobalTableGlobalSecondaryIndexSettingsUpdate)` / `set_global_table_global_secondary_index_settings_update(Option<Vec<GlobalTableGlobalSecondaryIndexSettingsUpdate>>)`: Represents the settings of a global secondary index for a global table that will be modified.
+ `replica_settings_update(ReplicaSettingsUpdate)` / `set_replica_settings_update(Option<Vec<ReplicaSettingsUpdate>>)`: Represents the settings for a global table in a Region that will be modified.
* On success, responds with `UpdateGlobalTableSettingsOutput` with field(s):
+ `global_table_name(Option<String>)`: The name of the global table.
+ `replica_settings(Option<Vec<ReplicaSettingsDescription>>)`: The Region-specific settings for the global table.
* On failure, responds with `SdkError<UpdateGlobalTableSettingsError>`
### impl Client
#### pub fn update_item(&self) -> UpdateItemFluentBuilder
Constructs a fluent builder for the `UpdateItem` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table containing the item to update.
+ `key(impl Into<String>, AttributeValue)` / `set_key(Option<HashMap<String, AttributeValue>>)`: The primary key of the item to be updated. Each element consists of an attribute name and a value for that attribute.
For the primary key, you must provide all of the attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.
+ `attribute_updates(impl Into<String>, AttributeValueUpdate)` / `set_attribute_updates(Option<HashMap<String, AttributeValueUpdate>>)`: This is a legacy parameter. Use `UpdateExpression` instead. For more information, see AttributeUpdates in the *Amazon DynamoDB Developer Guide*.
+ `expected(impl Into<String>, ExpectedAttributeValue)` / `set_expected(Option<HashMap<String, ExpectedAttributeValue>>)`: This is a legacy parameter. Use `ConditionExpression` instead. For more information, see Expected in the *Amazon DynamoDB Developer Guide*.
+ `conditional_operator(ConditionalOperator)` / `set_conditional_operator(Option<ConditionalOperator>)`: This is a legacy parameter. Use `ConditionExpression` instead. For more information, see ConditionalOperator in the *Amazon DynamoDB Developer Guide*.
+ `return_values(ReturnValue)` / `set_return_values(Option<ReturnValue>)`: Use `ReturnValues` if you want to get the item attributes as they appear before or after they are successfully updated. For `UpdateItem`, the valid values are:
- `NONE` - If `ReturnValues` is not specified, or if its value is `NONE`, then nothing is returned. (This setting is the default for `ReturnValues`.)
- `ALL_OLD` - Returns all of the attributes of the item, as they appeared before the UpdateItem operation.
- `UPDATED_OLD` - Returns only the updated attributes, as they appeared before the UpdateItem operation.
- `ALL_NEW` - Returns all of the attributes of the item, as they appear after the UpdateItem operation.
- `UPDATED_NEW` - Returns only the updated attributes, as they appear after the UpdateItem operation.
There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.
The values returned are strongly consistent.
+ `return_consumed_capacity(ReturnConsumedCapacity)` / `set_return_consumed_capacity(Option<ReturnConsumedCapacity>)`: Determines the level of detail about either provisioned or on-demand throughput consumption that is returned in the response:
- `INDEXES` - The response includes the aggregate `ConsumedCapacity` for the operation, together with `ConsumedCapacity` for each table and secondary index that was accessed.
Note that some operations, such as `GetItem` and `BatchGetItem`, do not access any indexes at all. In these cases, specifying `INDEXES` will only return `ConsumedCapacity` information for table(s).
- `TOTAL` - The response includes only the aggregate `ConsumedCapacity` for the operation.
- `NONE` - No `ConsumedCapacity` details are included in the response.
+ `return_item_collection_metrics(ReturnItemCollectionMetrics)` / `set_return_item_collection_metrics(Option<ReturnItemCollectionMetrics>)`: Determines whether item collection metrics are returned. If set to `SIZE`, the response includes statistics about item collections, if any, that were modified during the operation. If set to `NONE` (the default), no statistics are returned.
+ `update_expression(impl Into<String>)` / `set_update_expression(Option<String>)`: An expression that defines one or more attributes to be updated, the action to be performed on them, and new values for them.
The following action values are available for `UpdateExpression`.
- `SET` - Adds one or more attributes and values to an item. If any of these attributes already exist, they are replaced by the new values. You can also use `SET` to add or subtract from an attribute that is of type Number. For example: `SET myNum = myNum + :val`
`SET` supports the following functions:
* `if_not_exists (path, operand)` - if the item does not contain an attribute at the specified path, then `if_not_exists` evaluates to operand; otherwise, it evaluates to path. You can use this function to avoid overwriting an attribute that may already be present in the item.
* `list_append (operand, operand)` - evaluates to a list with a new element added to it. You can append the new element to the start or the end of the list by reversing the order of the operands.
These function names are case-sensitive.
- `REMOVE` - Removes one or more attributes from an item.
- `ADD` - Adds the specified value to the item, if the attribute does not already exist. If the attribute does exist, then the behavior of `ADD` depends on the data type of the attribute:
* If the existing attribute is a number, and if `Value` is also a number, then `Value` is mathematically added to the existing attribute. If `Value` is a negative number, then it is subtracted from the existing attribute.
If you use `ADD` to increment or decrement a number value for an item that doesn’t exist before the update, DynamoDB uses `0` as the initial value.
Similarly, if you use `ADD` for an existing item to increment or decrement an attribute value that doesn’t exist before the update, DynamoDB uses `0` as the initial value. For example, suppose that the item you want to update doesn’t have an attribute named `itemcount`, but you decide to `ADD` the number `3` to this attribute anyway. DynamoDB will create the `itemcount` attribute, set its initial value to `0`, and finally add `3` to it. The result will be a new `itemcount` attribute in the item, with a value of `3`.
* If the existing data type is a set and if `Value` is also a set, then `Value` is added to the existing set. For example, if the attribute value is the set `[1,2]`, and the `ADD` action specified `[3]`, then the final attribute value is `[1,2,3]`. An error occurs if an `ADD` action is specified for a set attribute and the attribute type specified does not match the existing set type.
Both sets must have the same primitive data type. For example, if the existing data type is a set of strings, the `Value` must also be a set of strings. The `ADD` action only supports Number and set data types. In addition, `ADD` can only be used on top-level attributes, not nested attributes.
- `DELETE` - Deletes an element from a set.
If a set of values is specified, then those values are subtracted from the old set. For example, if the attribute value was the set `[a,b,c]` and the `DELETE` action specifies `[a,c]`, then the final attribute value is `[b]`. Specifying an empty set is an error.
The `DELETE` action only supports set data types. In addition, `DELETE` can only be used on top-level attributes, not nested attributes.
You can have many actions in a single expression, such as the following: `SET a=:value1, b=:value2 DELETE :value3, :value4, :value5`
For more information on update expressions, see Modifying Items and Attributes in the *Amazon DynamoDB Developer Guide*.
+ `condition_expression(impl Into<String>)` / `set_condition_expression(Option<String>)`: A condition that must be satisfied in order for a conditional update to succeed.
An expression can contain any of the following:
- Functions: `attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size`
These function names are case-sensitive.
- Comparison operators: `= | <> | < | > | <= | >= | BETWEEN | IN`
- Logical operators: `AND | OR | NOT`
For more information about condition expressions, see Specifying Conditions in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_names(impl Into<String>, impl Into<String>)` / `set_expression_attribute_names(Option<HashMap<String, String>>)`: One or more substitution tokens for attribute names in an expression. The following are some use cases for using `ExpressionAttributeNames`:
- To access an attribute whose name conflicts with a DynamoDB reserved word.
- To create a placeholder for repeating occurrences of an attribute name in an expression.
- To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the **#** character in an expression to dereference an attribute name. For example, consider the following attribute name:
- `Percentile`
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the *Amazon DynamoDB Developer Guide*.) To work around this, you could specify the following for `ExpressionAttributeNames`:
- `{"#P":"Percentile"}`
You could then use this substitution in an expression, as in this example:
- `#P = :val`
Tokens that begin with the **:** character are *expression attribute values*, which are placeholders for the actual value at runtime.
For more information about expression attribute names, see Specifying Item Attributes in the *Amazon DynamoDB Developer Guide*.
+ `expression_attribute_values(impl Into<String>, AttributeValue)` / `set_expression_attribute_values(Option<HashMap<String, AttributeValue>>)`: One or more values that can be substituted in an expression.
Use the **:** (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the `ProductStatus` attribute was one of the following:
`Available | Backordered | Discontinued`
You would first need to specify `ExpressionAttributeValues` as follows:
`{ ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }`
You could then use these values in an expression, such as this:
`ProductStatus IN (:avail, :back, :disc)`
For more information on expression attribute values, see Condition Expressions in the *Amazon DynamoDB Developer Guide*.
+ `return_values_on_condition_check_failure(ReturnValuesOnConditionCheckFailure)` / `set_return_values_on_condition_check_failure(Option<ReturnValuesOnConditionCheckFailure>)`: An optional parameter that returns the item attributes for an `UpdateItem` operation that failed a condition check.
There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.
* On success, responds with `UpdateItemOutput` with field(s):
+ `attributes(Option<HashMap<String, AttributeValue>>)`: A map of attribute values as they appear before or after the `UpdateItem` operation, as determined by the `ReturnValues` parameter.
The `Attributes` map is only present if the update was successful and `ReturnValues` was specified as something other than `NONE` in the request. Each element represents one attribute.
+ `consumed_capacity(Option<ConsumedCapacity>)`: The capacity units consumed by the `UpdateItem` operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. `ConsumedCapacity` is only returned if the `ReturnConsumedCapacity` parameter was specified. For more information, see Provisioned Throughput in the *Amazon DynamoDB Developer Guide*.
+ `item_collection_metrics(Option<ItemCollectionMetrics>)`: Information about item collections, if any, that were affected by the `UpdateItem` operation. `ItemCollectionMetrics` is only returned if the `ReturnItemCollectionMetrics` parameter was specified. If the table does not have any local secondary indexes, this information is not returned in the response.
Each `ItemCollectionMetrics` element consists of:
- `ItemCollectionKey` - The partition key value of the item collection. This is the same as the partition key value of the item itself.
- `SizeEstimateRangeGB` - An estimate of item collection size, in gigabytes. This value is a two-element array containing a lower bound and an upper bound for the estimate. The estimate includes the size of all the items in the table, plus the size of all attributes projected into all of the local secondary indexes on that table. Use this estimate to measure whether a local secondary index is approaching its size limit.
The estimate is subject to change over time; therefore, do not rely on the precision or accuracy of the estimate.
* On failure, responds with `SdkError<UpdateItemError>`
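As a hedged sketch of how these pieces compose: the table name `pages`, key `page_id`, and the `views`/`archived` attributes below are hypothetical; `#v` is substituted via `ExpressionAttributeNames`, and `:one` and `:archived` via `ExpressionAttributeValues`:

```
use aws_sdk_dynamodb::{
    types::{AttributeValue, ReturnValue},
    Client,
};

// Atomically increments a hypothetical "views" counter, guarded by a
// condition expression, and prints the attributes after the update.
async fn increment_views(client: &Client, id: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    let output = client
        .update_item()
        .table_name("pages") // hypothetical table
        .key("page_id", AttributeValue::S(id.to_string()))
        .update_expression("ADD #v :one")
        .condition_expression("attribute_not_exists(archived) OR archived = :archived")
        .expression_attribute_names("#v", "views")
        .expression_attribute_values(":one", AttributeValue::N("1".to_string()))
        .expression_attribute_values(":archived", AttributeValue::Bool(false))
        .return_values(ReturnValue::UpdatedNew)
        .send()
        .await?;
    println!("updated attributes: {:?}", output.attributes());
    Ok(())
}
```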
### impl Client
#### pub fn update_table(&self) -> UpdateTableFluentBuilder
Constructs a fluent builder for the `UpdateTable` operation.
* The fluent builder is configurable:
+ `attribute_definitions(AttributeDefinition)` / `set_attribute_definitions(Option<Vec<AttributeDefinition>>)`: An array of attributes that describe the key schema for the table and indexes. If you are adding a new global secondary index to the table, `AttributeDefinitions` must include the key element(s) of the new index.
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table to be updated.
+ `billing_mode(BillingMode)` / `set_billing_mode(Option<BillingMode>)`: Controls how you are charged for read and write throughput and how you manage capacity. When switching from pay-per-request to provisioned capacity, initial provisioned capacity values must be set. The initial provisioned capacity values are estimated based on the consumed read and write capacity of your table and global secondary indexes over the past 30 minutes.
- `PROVISIONED` - We recommend using `PROVISIONED` for predictable workloads. `PROVISIONED` sets the billing mode to Provisioned Mode.
- `PAY_PER_REQUEST` - We recommend using `PAY_PER_REQUEST` for unpredictable workloads. `PAY_PER_REQUEST` sets the billing mode to On-Demand Mode.
+ `provisioned_throughput(ProvisionedThroughput)` / `set_provisioned_throughput(Option<ProvisionedThroughput>)`: The new provisioned throughput settings for the specified table or index.
+ `global_secondary_index_updates(GlobalSecondaryIndexUpdate)` / `set_global_secondary_index_updates(Option<Vec<GlobalSecondaryIndexUpdate>>)`: An array of one or more global secondary indexes for the table. For each index in the array, you can request one action:
- `Create` - add a new global secondary index to the table.
- `Update` - modify the provisioned throughput settings of an existing global secondary index.
- `Delete` - remove a global secondary index from the table.
You can create or delete only one global secondary index per `UpdateTable` operation.
For more information, see Managing Global Secondary Indexes in the *Amazon DynamoDB Developer Guide*.
+ `stream_specification(StreamSpecification)` / `set_stream_specification(Option<StreamSpecification>)`: Represents the DynamoDB Streams configuration for the table.
You receive a `ResourceInUseException` if you try to enable a stream on a table that already has a stream, or if you try to disable a stream on a table that doesn’t have a stream.
+ `sse_specification(SseSpecification)` / `set_sse_specification(Option<SseSpecification>)`: The new server-side encryption settings for the specified table.
+ `replica_updates(ReplicationGroupUpdate)` / `set_replica_updates(Option<Vec<ReplicationGroupUpdate>>)`: A list of replica update actions (create, delete, or update) for the table.
This property only applies to Version 2019.11.21 (Current) of global tables.
+ `table_class(TableClass)` / `set_table_class(Option<TableClass>)`: The table class of the table to be updated. Valid values are `STANDARD` and `STANDARD_INFREQUENT_ACCESS`.
+ `deletion_protection_enabled(bool)` / `set_deletion_protection_enabled(Option<bool>)`: Indicates whether deletion protection is to be enabled (true) or disabled (false) on the table.
* On success, responds with `UpdateTableOutput` with field(s):
+ `table_description(Option<TableDescription>)`: Represents the properties of the table.
* On failure, responds with `SdkError<UpdateTableError>`
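For example, a hedged sketch that flips a table to on-demand billing and turns on deletion protection (the table name is supplied by the caller):

```
use aws_sdk_dynamodb::{types::BillingMode, Client};

// Switches a table to on-demand capacity and enables deletion protection.
async fn to_on_demand(client: &Client, table: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    client
        .update_table()
        .table_name(table)
        .billing_mode(BillingMode::PayPerRequest)
        .deletion_protection_enabled(true)
        .send()
        .await?;
    Ok(())
}
```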
### impl Client
#### pub fn update_table_replica_auto_scaling(
&self
) -> UpdateTableReplicaAutoScalingFluentBuilder
Constructs a fluent builder for the `UpdateTableReplicaAutoScaling` operation.
* The fluent builder is configurable:
+ `global_secondary_index_updates(GlobalSecondaryIndexAutoScalingUpdate)` / `set_global_secondary_index_updates(Option<Vec<GlobalSecondaryIndexAutoScalingUpdate>>)`: Represents the auto scaling settings of the global secondary indexes of the replica to be updated.
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the global table to be updated.
+ `provisioned_write_capacity_auto_scaling_update(AutoScalingSettingsUpdate)` / `set_provisioned_write_capacity_auto_scaling_update(Option<AutoScalingSettingsUpdate>)`: Represents the auto scaling settings to be modified for a global table or global secondary index.
+ `replica_updates(ReplicaAutoScalingUpdate)` / `set_replica_updates(Option<Vec<ReplicaAutoScalingUpdate>>)`: Represents the auto scaling settings of replicas of the table that will be modified.
* On success, responds with `UpdateTableReplicaAutoScalingOutput` with field(s):
+ `table_auto_scaling_description(Option<TableAutoScalingDescription>)`: Returns information about the auto scaling settings of a table with replicas.
* On failure, responds with `SdkError<UpdateTableReplicaAutoScalingError>`
### impl Client
#### pub fn update_time_to_live(&self) -> UpdateTimeToLiveFluentBuilder
Constructs a fluent builder for the `UpdateTimeToLive` operation.
* The fluent builder is configurable:
+ `table_name(impl Into<String>)` / `set_table_name(Option<String>)`: The name of the table to be configured.
+ `time_to_live_specification(TimeToLiveSpecification)` / `set_time_to_live_specification(Option<TimeToLiveSpecification>)`: Represents the settings used to enable or disable Time to Live for the specified table.
* On success, responds with `UpdateTimeToLiveOutput` with field(s):
+ `time_to_live_specification(Option<TimeToLiveSpecification>)`: Represents the output of an `UpdateTimeToLive` operation.
* On failure, responds with `SdkError<UpdateTimeToLiveError>`
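A minimal sketch, assuming a hypothetical epoch-seconds attribute named `expires_at` and an SDK release with an infallible `build()` for this type:

```
use aws_sdk_dynamodb::{types::TimeToLiveSpecification, Client};

// Turns on TTL using an epoch-seconds attribute named "expires_at".
async fn enable_ttl(client: &Client, table: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    client
        .update_time_to_live()
        .table_name(table)
        .time_to_live_specification(
            TimeToLiveSpecification::builder()
                .enabled(true)
                .attribute_name("expires_at")
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```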
### impl Client
#### pub fn from_conf(conf: Config) -> Self
Creates a new client from the service `Config`.
##### Panics
This method will panic if the `conf` has retry or timeouts enabled without a `sleep_impl`.
If you experience this panic, it can be fixed by setting the `sleep_impl`, or by disabling retries and timeouts.
#### pub fn config(&self) -> &Config
Returns the client’s configuration.
### impl Client
#### pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
##### Panics
* This method will panic if the `sdk_config` is missing an async sleep implementation. If you experience this panic, set the `sleep_impl` on the Config passed into this function to fix it.
* This method will panic if the `sdk_config` is missing an HTTP connector. If you experience this panic, set the
`http_connector` on the Config passed into this function to fix it.
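Putting the two constructors side by side, a hedged sketch (assumes the `aws-config` and `tokio` crates alongside this one; the Region override is hypothetical):

```
use aws_sdk_dynamodb::{config::Region, Client};

#[tokio::main]
async fn main() {
    // Common path: build the client straight from the shared SDK config.
    let shared_config = aws_config::from_env().load().await;
    let client = Client::new(&shared_config);

    // Alternative: start a service-config builder from the shared config,
    // tweak it, and use `from_conf`. Because the builder inherits the shared
    // config's sleep implementation, the panics described above are avoided.
    let conf = aws_sdk_dynamodb::config::Builder::from(&shared_config)
        .region(Region::new("us-east-1")) // hypothetical Region override
        .build();
    let _client_with_override = Client::from_conf(conf);

    let _ = client.config(); // inspect the resulting configuration
}
```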
Trait Implementations
---
### impl Clone for Client
#### fn clone(&self) -> Client
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Client
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Client
### impl Send for Client
### impl Sync for Client
### impl Unpin for Client
### impl !UnwindSafe for Client
Blanket Implementations
---
### impl<T> Any for T where
    T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
    T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
    T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
    U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where
    T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
    U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Type Alias aws_sdk_dynamodb::error::SdkError
===
```
pub type SdkError<E, R = HttpResponse> = SdkError<E, R>;
```
Error type returned by the client.
Aliased Type
---
```
enum SdkError<E, R = HttpResponse> {
ConstructionFailure(ConstructionFailure),
TimeoutError(TimeoutError),
DispatchFailure(DispatchFailure),
ResponseError(ResponseError<R>),
ServiceError(ServiceError<E, R>),
}
```
Variants
---
### ConstructionFailure(ConstructionFailure)
The request failed during construction. It was not dispatched over the network.
### TimeoutError(TimeoutError)
The request failed due to a timeout. The request MAY have been sent and received.
### DispatchFailure(DispatchFailure)
The request failed during dispatch. An HTTP response was not received. The request MAY have been sent.
### ResponseError(ResponseError<R>)
A response was received but it was not parseable according to the protocol (for example, the server hung up without sending a complete response)
### ServiceError(ServiceError<E, R>)
An error response was received from the service.
Trait Implementations
---
### impl<E, R> ProvideErrorMetadata for SdkError<E, R> where
    E: ProvideErrorMetadata,
#### fn meta(&self) -> &ErrorMetadata
Returns error metadata, which includes the error code, message, request ID, and potentially additional information.
#### fn code(&self) -> Option<&str>
Returns the error code if it's available.
#### fn message(&self) -> Option<&str>
Returns the error message, if there is one.
### impl<E, R> RequestId for SdkError<E, R> where
    R: HttpHeaders,
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
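Taken together with the variant list above, a hedged sketch of inspecting an `SdkError` (the `UpdateItemError` type and the printed strings are illustrative):

```
use aws_sdk_dynamodb::{
    error::{ProvideErrorMetadata, SdkError},
    operation::update_item::UpdateItemError,
};

// Separates transport-level failures from service errors, pulling out the
// error code and message when the service actually responded.
fn classify(err: SdkError<UpdateItemError>) {
    match &err {
        SdkError::ConstructionFailure(_) => println!("request never left the client"),
        SdkError::TimeoutError(_) => println!("timed out; the request MAY have been sent"),
        SdkError::DispatchFailure(_) => println!("dispatch failed; no HTTP response received"),
        SdkError::ResponseError(_) => println!("response received but not parseable"),
        SdkError::ServiceError(_) => println!(
            "service error: code={:?} message={:?}",
            err.code(),
            err.message()
        ),
        // The enum is non-exhaustive, so a catch-all arm keeps this
        // forward-compatible with future variants.
        _ => println!("unrecognized failure"),
    }
}
```

When only the typed operation error matters, the `into_service_error()` helper on `SdkError` converts directly into the operation's error type.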
Module aws_sdk_dynamodb::types
===
Data structures used by operation inputs/outputs.
Modules
---
* `builders`: Builders
* `error`: Error types that Amazon DynamoDB can respond with.
Structs
---
* `ArchivalSummary`: Contains details of a table archival operation.
* `AttributeDefinition`: Represents an attribute for describing the key schema for the table and indexes.
* `AttributeValueUpdate`: For the `UpdateItem` operation, represents the attributes to be modified, the action to perform on each, and the new value for each.
* `AutoScalingPolicyDescription`: Represents the properties of the scaling policy.
* `AutoScalingPolicyUpdate`: Represents the auto scaling policy to be modified.
* `AutoScalingSettingsDescription`: Represents the auto scaling settings for a global table or global secondary index.
* `AutoScalingSettingsUpdate`: Represents the auto scaling settings to be modified for a global table or global secondary index.
* `AutoScalingTargetTrackingScalingPolicyConfigurationDescription`: Represents the properties of a target tracking scaling policy.
* `AutoScalingTargetTrackingScalingPolicyConfigurationUpdate`: Represents the settings of a target tracking scaling policy that will be modified.
* `BackupDescription`: Contains the description of the backup created for the table.
* `BackupDetails`: Contains the details of the backup created for the table.
* `BackupSummary`: Contains details for the backup.
* `BatchStatementError`: An error associated with a statement in a PartiQL batch that was run.
* `BatchStatementRequest`: A PartiQL batch statement request.
* `BatchStatementResponse`: A PartiQL batch statement response.
* `BillingModeSummary`: Contains the details for the read/write capacity mode. This page talks about `PROVISIONED` and `PAY_PER_REQUEST` billing modes. For more information about these modes, see Read/write capacity mode.
* `CancellationReason`: An ordered list of errors for each item in the request which caused the transaction to get cancelled. The values of the list are ordered according to the ordering of the `TransactWriteItems` request parameter. If no error occurred for the associated item an error with a Null code and Null message will be present.
* `Capacity`: Represents the amount of provisioned throughput capacity consumed on a table or an index.
* `Condition`: Represents the selection criteria for a `Query` or `Scan` operation:
* `ConditionCheck`: Represents a request to perform a check that an item exists or to check the condition of specific attributes of the item.
* `ConsumedCapacity`: The capacity units consumed by an operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation. `ConsumedCapacity` is only returned if the request asked for it. For more information, see Provisioned Throughput in the *Amazon DynamoDB Developer Guide*.
* `ContinuousBackupsDescription`: Represents the continuous backups and point in time recovery settings on the table.
* `ContributorInsightsSummary`: Represents a Contributor Insights summary entry.
* `CreateGlobalSecondaryIndexAction`: Represents a new global secondary index to be added to an existing table.
* `CreateReplicaAction`: Represents a replica to be added.
* `CreateReplicationGroupMemberAction`: Represents a replica to be created.
* `CsvOptions`: Processing options for the CSV file being imported.
* `Delete`: Represents a request to perform a `DeleteItem` operation.
* `DeleteGlobalSecondaryIndexAction`: Represents a global secondary index to be deleted from an existing table.
* `DeleteReplicaAction`: Represents a replica to be removed.
* `DeleteReplicationGroupMemberAction`: Represents a replica to be deleted.
* `DeleteRequest`: Represents a request to perform a `DeleteItem` operation on an item.
* `Endpoint`: Endpoint information details.
* `ExpectedAttributeValue`: Represents a condition to be compared with an attribute value. This condition can be used with `DeleteItem`, `PutItem`, or `UpdateItem` operations; if the comparison evaluates to true, the operation succeeds; if not, the operation fails. You can use `ExpectedAttributeValue` in one of two different ways:
* `ExportDescription`: Represents the properties of the exported table.
* `ExportSummary`: Summary information about an export task.
* `FailureException`: Represents a failure in a contributor insights operation.
* `Get`: Specifies an item and related attribute values to retrieve in a `TransactGetItem` object.
* `GlobalSecondaryIndex`: Represents the properties of a global secondary index.
* `GlobalSecondaryIndexAutoScalingUpdate`: Represents the auto scaling settings of a global secondary index for a global table that will be modified.
* `GlobalSecondaryIndexDescription`: Represents the properties of a global secondary index.
* `GlobalSecondaryIndexInfo`: Represents the properties of a global secondary index for the table when the backup was created.
* `GlobalSecondaryIndexUpdate`: Represents one of the following:
* `GlobalTable`: Represents the properties of a global table.
* `GlobalTableDescription`: Contains details about the global table.
* `GlobalTableGlobalSecondaryIndexSettingsUpdate`: Represents the settings of a global secondary index for a global table that will be modified.
* `ImportSummary`: Summary information about the source file for the import.
* `ImportTableDescription`: Represents the properties of the table being imported into.
* `IncrementalExportSpecification`: Optional object containing the parameters specific to an incremental export.
* `InputFormatOptions`: The format options for the data that was imported into the target table. There is one value, CsvOption.
* `ItemCollectionMetrics`: Information about item collections, if any, that were affected by the operation. `ItemCollectionMetrics` is only returned if the request asked for it. If the table does not have any local secondary indexes, this information is not returned in the response.
* `ItemResponse`: Details for the requested item.
* `KeySchemaElement`: Represents *a single element* of a key schema. A key schema specifies the attributes that make up the primary key of a table, or the key attributes of an index.
* `KeysAndAttributes`: Represents a set of primary keys and, for each key, the attributes to retrieve from the table.
* `KinesisDataStreamDestination`: Describes a Kinesis data stream destination.
* `LocalSecondaryIndex`: Represents the properties of a local secondary index.
* `LocalSecondaryIndexDescription`: Represents the properties of a local secondary index.
* `LocalSecondaryIndexInfo`: Represents the properties of a local secondary index for the table when the backup was created.
* `ParameterizedStatement`: Represents a PartiQL statement that uses parameters.
* `PointInTimeRecoveryDescription`: The description of the point in time settings applied to the table.
* `PointInTimeRecoverySpecification`: Represents the settings used to enable point in time recovery.
* `Projection`: Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
* `ProvisionedThroughput`: Represents the provisioned throughput settings for a specified table or index. The settings can be modified using the `UpdateTable` operation.
* `ProvisionedThroughputDescription`: Represents the provisioned throughput settings for the table, consisting of read and write capacity units, along with data about increases and decreases.
* `ProvisionedThroughputOverride`: Replica-specific provisioned throughput settings. If not specified, uses the source table's provisioned throughput settings.
* `Put`: Represents a request to perform a `PutItem` operation.
* `PutRequest`: Represents a request to perform a `PutItem` operation on an item.
* `Replica`: Represents the properties of a replica.
* `ReplicaAutoScalingDescription`: Represents the auto scaling settings of the replica.
* `ReplicaAutoScalingUpdate`: Represents the auto scaling settings of a replica that will be modified.
* `ReplicaDescription`: Contains the details of the replica.
* `ReplicaGlobalSecondaryIndex`: Represents the properties of a replica global secondary index.
* `ReplicaGlobalSecondaryIndexAutoScalingDescription`: Represents the auto scaling configuration for a replica global secondary index.
* `ReplicaGlobalSecondaryIndexAutoScalingUpdate`: Represents the auto scaling settings of a global secondary index for a replica that will be modified.
* `ReplicaGlobalSecondaryIndexDescription`: Represents the properties of a replica global secondary index.
* `ReplicaGlobalSecondaryIndexSettingsDescription`: Represents the properties of a global secondary index.
* `ReplicaGlobalSecondaryIndexSettingsUpdate`: Represents the settings of a global secondary index for a global table that will be modified.
* `ReplicaSettingsDescription`: Represents the properties of a replica.
* `ReplicaSettingsUpdate`: Represents the settings for a global table in a Region that will be modified.
* `ReplicaUpdate`: Represents one of the following:
* `ReplicationGroupUpdate`: Represents one of the following:
* `RestoreSummary`: Contains details for the restore.
* `S3BucketSource`: The S3 bucket that is being imported from.
* `SourceTableDetails`: Contains the details of the table when the backup was created.
* `SourceTableFeatureDetails`: Contains the details of the features enabled on the table when the backup was created. For example, LSIs, GSIs, streams, TTL.
* `SseDescription`: The description of the server-side encryption status on the specified table.
* `SseSpecification`: Represents the settings used to enable server-side encryption.
* `StreamSpecification`: Represents the DynamoDB Streams configuration for a table in DynamoDB.
* `TableAutoScalingDescription`: Represents the auto scaling configuration for a global table.
* `TableClassSummary`: Contains details of the table class.
* `TableCreationParameters`: The parameters for the table created as part of the import operation.
* `TableDescription`: Represents the properties of a table.
* `Tag`: Describes a tag. A tag is a key-value pair. You can add up to 50 tags to a single DynamoDB table.
* `TimeToLiveDescription`: The description of the Time to Live (TTL) status on the specified table.
* `TimeToLiveSpecification`: Represents the settings used to enable or disable Time to Live (TTL) for the specified table.
* `TransactGetItem`: Specifies an item to be retrieved as part of the transaction.
* `TransactWriteItem`: A list of requests that can perform update, put, delete, or check operations on multiple items in one or more tables atomically.
* `Update`: Represents a request to perform an `UpdateItem` operation.
* `UpdateGlobalSecondaryIndexAction`: Represents the new provisioned throughput settings to be applied to a global secondary index.
* `UpdateReplicationGroupMemberAction`: Represents a replica to be modified.
* `WriteRequest`: Represents an operation to perform - either `DeleteItem` or `PutItem`. You can only request one of these operations, not both, in a single `WriteRequest`. If you do need to perform both of these operations, you need to provide two separate `WriteRequest` objects.
Enums
---
* AttributeActionWhen writing a match expression against `AttributeAction`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* AttributeValueRepresents the data for an attribute.
* BackupStatusWhen writing a match expression against `BackupStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* BackupTypeWhen writing a match expression against `BackupType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* BackupTypeFilterWhen writing a match expression against `BackupTypeFilter`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* BatchStatementErrorCodeEnumWhen writing a match expression against `BatchStatementErrorCodeEnum`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* BillingModeWhen writing a match expression against `BillingMode`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ComparisonOperatorWhen writing a match expression against `ComparisonOperator`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ConditionalOperatorWhen writing a match expression against `ConditionalOperator`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ContinuousBackupsStatusWhen writing a match expression against `ContinuousBackupsStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ContributorInsightsActionWhen writing a match expression against `ContributorInsightsAction`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ContributorInsightsStatusWhen writing a match expression against `ContributorInsightsStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* DestinationStatusWhen writing a match expression against `DestinationStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ExportFormatWhen writing a match expression against `ExportFormat`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ExportStatusWhen writing a match expression against `ExportStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ExportTypeWhen writing a match expression against `ExportType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ExportViewTypeWhen writing a match expression against `ExportViewType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* GlobalTableStatusWhen writing a match expression against `GlobalTableStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ImportStatusWhen writing a match expression against `ImportStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* IndexStatusWhen writing a match expression against `IndexStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* InputCompressionTypeWhen writing a match expression against `InputCompressionType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* InputFormatWhen writing a match expression against `InputFormat`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* KeyTypeWhen writing a match expression against `KeyType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* PointInTimeRecoveryStatusWhen writing a match expression against `PointInTimeRecoveryStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ProjectionTypeWhen writing a match expression against `ProjectionType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ReplicaStatusWhen writing a match expression against `ReplicaStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* ReturnConsumedCapacityWhen writing a match expression against `ReturnConsumedCapacity`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ReturnItemCollectionMetrics`: When writing a match expression against `ReturnItemCollectionMetrics`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ReturnValue`: When writing a match expression against `ReturnValue`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ReturnValuesOnConditionCheckFailure`: When writing a match expression against `ReturnValuesOnConditionCheckFailure`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `S3SseAlgorithm`: When writing a match expression against `S3SseAlgorithm`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ScalarAttributeType`: When writing a match expression against `ScalarAttributeType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `Select`: When writing a match expression against `Select`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `SseStatus`: When writing a match expression against `SseStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `SseType`: When writing a match expression against `SseType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `StreamViewType`: When writing a match expression against `StreamViewType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `TableClass`: When writing a match expression against `TableClass`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `TableStatus`: When writing a match expression against `TableStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `TimeToLiveStatus`: When writing a match expression against `TimeToLiveStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
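As a concrete illustration of the note above, here is a minimal sketch of a forward-compatible match using `TableStatus`; the `types` module path is an assumption based on this SDK generation's layout:

```rust
// Minimal sketch: a forward-compatible match on `TableStatus`.
// The wildcard arm covers variants added in future SDK versions,
// including the generated `Unknown` variant.
use aws_sdk_dynamodb::types::TableStatus;

fn table_is_ready(status: &TableStatus) -> bool {
    match status {
        TableStatus::Active => true,
        TableStatus::Creating | TableStatus::Updating => false,
        // Forward compatibility: unknown or future variants fall through here.
        _ => false,
    }
}
```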
Module aws_sdk_dynamodb::primitives
===
Primitives such as `Blob` or `DateTime` used by other types.
Structs
---
* `Blob`: Binary Blob Type
* `DateTime`: DateTime in time.
* `UnknownVariantValue`: Opaque struct used as inner data for the `Unknown` variant defined in enums in the crate
Enums
---
* `DateTimeFormat`: Formats for representing a `DateTime` in the Smithy protocols.
Struct aws_sdk_dynamodb::Config
===
```
pub struct Config { /* private fields */ }
```
Configuration for an aws_sdk_dynamodb service client.
Service configuration allows for customization of endpoints, region, credentials providers,
and retry configuration. Generally, it is constructed automatically for you from a shared configuration loaded by the `aws-config` crate. For example:
```
// Load a shared config from the environment
let shared_config = aws_config::from_env().load().await;
// The client constructor automatically converts the shared config into the service config
let client = Client::new(&shared_config);
```
The service config can also be constructed manually using its builder.
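For instance, a minimal sketch of manual construction might look as follows (the region value is illustrative):

```rust
// Sketch: constructing a service Config by hand rather than from an SdkConfig.
use aws_sdk_dynamodb::config::Region;

let config = aws_sdk_dynamodb::Config::builder()
    .region(Region::new("us-east-1")) // illustrative region
    .build();
// Build a client directly from the hand-made config.
let client = aws_sdk_dynamodb::Client::from_conf(config);
```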
Implementations
---
### impl Config
#### pub fn builder() -> Builder
Constructs a config builder.
#### pub fn to_builder(&self) -> Builder
Converts this config back into a builder so that it can be tweaked.
#### pub fn idempotency_token_provider(&self) -> IdempotencyTokenProvider
Returns a copy of the idempotency token provider.
If a random token provider was configured,
a newly-randomized token provider will be returned.
#### pub fn http_connector(&self) -> Option<SharedHttpConnector>
Return the `SharedHttpConnector` to use when making requests, if any.
#### pub fn endpoint_resolver(&self) -> SharedEndpointResolver
Returns the endpoint resolver.
#### pub fn retry_config(&self) -> Option<&RetryConfig>
Return a reference to the retry configuration contained in this config, if any.
#### pub fn sleep_impl(&self) -> Option<SharedAsyncSleep>
Return a cloned shared async sleep implementation from this config, if any.
#### pub fn timeout_config(&self) -> Option<&TimeoutConfig>
Return a reference to the timeout configuration contained in this config, if any.
#### pub fn interceptors(&self) -> impl Iterator<Item = SharedInterceptor> + '_
Returns interceptors currently registered by the user.
#### pub fn time_source(&self) -> Option<SharedTimeSource>
Return the time source used for this service.
#### pub fn app_name(&self) -> Option<&AppName>
Returns the name of the app that is using the client, if it was provided.
This *optional* name is used to identify the application in the user agent that gets sent along with requests.
#### pub fn invocation_id_generator(&self) -> Option<SharedInvocationIdGenerator>
Returns the invocation ID generator if one was given in config.
The invocation ID generator generates ID values for the `amz-sdk-invocation-id` header. By default, this will be a random UUID. Overriding it may be useful in tests that examine the HTTP request and need to be deterministic.
#### pub fn new(config: &SdkConfig) -> Self
Creates a new service config from a shared `config`.
#### pub fn signing_service(&self) -> &'static str
The signature version 4 service signing name to use in the credential scope when signing requests.
The signing service may be overridden by the `Endpoint`, or by specifying a custom
`SigningService` during operation construction.
#### pub fn region(&self) -> Option<&Region>
Returns the AWS region, if it was provided.
#### pub fn credentials_cache(&self) -> Option<SharedCredentialsCache>
Returns the credentials cache.
Trait Implementations
---
### impl Clone for Config
#### fn clone(&self) -> Config
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Config
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<&SdkConfig> for Config
#### fn from(sdk_config: &SdkConfig) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Config
### impl Send for Config
### impl Sync for Config
### impl Unpin for Config
### impl !UnwindSafe for Config
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_dynamodb::config
===
Configuration for Amazon DynamoDB.
Modules
---
* `endpoint`: Types needed to configure endpoint resolution.
* `interceptors`: Types needed to implement `Interceptor`.
* `retry`: Retry configuration.
* `timeout`: Timeout configuration.
Structs
---
* `AppName`: App name that can be configured with an AWS SDK client to become part of the user agent string.
* `Builder`: Builder for creating a `Config`.
* `Config`: Configuration for an aws_sdk_dynamodb service client.
* `ConfigBag`: Layered configuration structure
* `Credentials`: AWS SDK Credentials
* `Region`: The region to send requests to.
* `RuntimeComponents`: Components that can only be set in runtime plugins that the orchestrator uses directly to call an operation.
* `SharedAsyncSleep`: Wrapper type for sharable `AsyncSleep`
* `SharedInterceptor`: Interceptor wrapper that may be shared
* `Sleep`: Future returned by `AsyncSleep`.
Traits
---
* `AsyncSleep`: Async trait with a `sleep` function.
* `Interceptor`: An interceptor allows injecting code into the SDK's request execution pipeline.
Module aws_sdk_dynamodb::operation
===
All operations that this crate can perform.
Modules
---
* `batch_execute_statement`: Types for the `BatchExecuteStatement` operation.
* `batch_get_item`: Types for the `BatchGetItem` operation.
* `batch_write_item`: Types for the `BatchWriteItem` operation.
* `create_backup`: Types for the `CreateBackup` operation.
* `create_global_table`: Types for the `CreateGlobalTable` operation.
* `create_table`: Types for the `CreateTable` operation.
* `delete_backup`: Types for the `DeleteBackup` operation.
* `delete_item`: Types for the `DeleteItem` operation.
* `delete_table`: Types for the `DeleteTable` operation.
* `describe_backup`: Types for the `DescribeBackup` operation.
* `describe_continuous_backups`: Types for the `DescribeContinuousBackups` operation.
* `describe_contributor_insights`: Types for the `DescribeContributorInsights` operation.
* `describe_endpoints`: Types for the `DescribeEndpoints` operation.
* `describe_export`: Types for the `DescribeExport` operation.
* `describe_global_table`: Types for the `DescribeGlobalTable` operation.
* `describe_global_table_settings`: Types for the `DescribeGlobalTableSettings` operation.
* `describe_import`: Types for the `DescribeImport` operation.
* `describe_kinesis_streaming_destination`: Types for the `DescribeKinesisStreamingDestination` operation.
* `describe_limits`: Types for the `DescribeLimits` operation.
* `describe_table`: Types for the `DescribeTable` operation.
* `describe_table_replica_auto_scaling`: Types for the `DescribeTableReplicaAutoScaling` operation.
* `describe_time_to_live`: Types for the `DescribeTimeToLive` operation.
* `disable_kinesis_streaming_destination`: Types for the `DisableKinesisStreamingDestination` operation.
* `enable_kinesis_streaming_destination`: Types for the `EnableKinesisStreamingDestination` operation.
* `execute_statement`: Types for the `ExecuteStatement` operation.
* `execute_transaction`: Types for the `ExecuteTransaction` operation.
* `export_table_to_point_in_time`: Types for the `ExportTableToPointInTime` operation.
* `get_item`: Types for the `GetItem` operation.
* `import_table`: Types for the `ImportTable` operation.
* `list_backups`: Types for the `ListBackups` operation.
* `list_contributor_insights`: Types for the `ListContributorInsights` operation.
* `list_exports`: Types for the `ListExports` operation.
* `list_global_tables`: Types for the `ListGlobalTables` operation.
* `list_imports`: Types for the `ListImports` operation.
* `list_tables`: Types for the `ListTables` operation.
* `list_tags_of_resource`: Types for the `ListTagsOfResource` operation.
* `put_item`: Types for the `PutItem` operation.
* `query`: Types for the `Query` operation.
* `restore_table_from_backup`: Types for the `RestoreTableFromBackup` operation.
* `restore_table_to_point_in_time`: Types for the `RestoreTableToPointInTime` operation.
* `scan`: Types for the `Scan` operation.
* `tag_resource`: Types for the `TagResource` operation.
* `transact_get_items`: Types for the `TransactGetItems` operation.
* `transact_write_items`: Types for the `TransactWriteItems` operation.
* `untag_resource`: Types for the `UntagResource` operation.
* `update_continuous_backups`: Types for the `UpdateContinuousBackups` operation.
* `update_contributor_insights`: Types for the `UpdateContributorInsights` operation.
* `update_global_table`: Types for the `UpdateGlobalTable` operation.
* `update_global_table_settings`: Types for the `UpdateGlobalTableSettings` operation.
* `update_item`: Types for the `UpdateItem` operation.
* `update_table`: Types for the `UpdateTable` operation.
* `update_table_replica_auto_scaling`: Types for the `UpdateTableReplicaAutoScaling` operation.
* `update_time_to_live`: Types for the `UpdateTimeToLive` operation.
Traits
---
* `RequestId`: Implementers add a function to return an AWS request ID
Enum aws_sdk_dynamodb::Error
===
```
#[non_exhaustive]pub enum Error {
BackupInUseException(BackupInUseException),
BackupNotFoundException(BackupNotFoundException),
ConditionalCheckFailedException(ConditionalCheckFailedException),
ContinuousBackupsUnavailableException(ContinuousBackupsUnavailableException),
DuplicateItemException(DuplicateItemException),
ExportConflictException(ExportConflictException),
ExportNotFoundException(ExportNotFoundException),
GlobalTableAlreadyExistsException(GlobalTableAlreadyExistsException),
GlobalTableNotFoundException(GlobalTableNotFoundException),
IdempotentParameterMismatchException(IdempotentParameterMismatchException),
ImportConflictException(ImportConflictException),
ImportNotFoundException(ImportNotFoundException),
IndexNotFoundException(IndexNotFoundException),
InternalServerError(InternalServerError),
InvalidEndpointException(InvalidEndpointException),
InvalidExportTimeException(InvalidExportTimeException),
InvalidRestoreTimeException(InvalidRestoreTimeException),
ItemCollectionSizeLimitExceededException(ItemCollectionSizeLimitExceededException),
LimitExceededException(LimitExceededException),
PointInTimeRecoveryUnavailableException(PointInTimeRecoveryUnavailableException),
ProvisionedThroughputExceededException(ProvisionedThroughputExceededException),
ReplicaAlreadyExistsException(ReplicaAlreadyExistsException),
ReplicaNotFoundException(ReplicaNotFoundException),
RequestLimitExceeded(RequestLimitExceeded),
ResourceInUseException(ResourceInUseException),
ResourceNotFoundException(ResourceNotFoundException),
TableAlreadyExistsException(TableAlreadyExistsException),
TableInUseException(TableInUseException),
TableNotFoundException(TableNotFoundException),
TransactionCanceledException(TransactionCanceledException),
TransactionConflictException(TransactionConflictException),
TransactionInProgressException(TransactionInProgressException),
Unhandled(Unhandled),
}
```
All possible error types for this service.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### BackupInUseException(BackupInUseException)
There is another ongoing conflicting backup control plane operation on the table. The backup is either being created, deleted or restored to a table.
### BackupNotFoundException(BackupNotFoundException)
Backup not found for the given BackupARN.
### ConditionalCheckFailedException(ConditionalCheckFailedException)
A condition specified in the operation could not be evaluated.
### ContinuousBackupsUnavailableException(ContinuousBackupsUnavailableException)
Backups have not yet been enabled for this table.
### DuplicateItemException(DuplicateItemException)
There was an attempt to insert an item with the same primary key as an item that already exists in the DynamoDB table.
### ExportConflictException(ExportConflictException)
There was a conflict when writing to the specified S3 bucket.
### ExportNotFoundException(ExportNotFoundException)
The specified export was not found.
### GlobalTableAlreadyExistsException(GlobalTableAlreadyExistsException)
The specified global table already exists.
### GlobalTableNotFoundException(GlobalTableNotFoundException)
The specified global table does not exist.
### IdempotentParameterMismatchException(IdempotentParameterMismatchException)
DynamoDB rejected the request because you retried a request with a different payload but with an idempotent token that was already used.
### ImportConflictException(ImportConflictException)
There was a conflict when importing from the specified S3 source. This can occur when the current import conflicts with a previous import request that had the same client token.
### ImportNotFoundException(ImportNotFoundException)
The specified import was not found.
### IndexNotFoundException(IndexNotFoundException)
The operation tried to access a nonexistent index.
### InternalServerError(InternalServerError)
An error occurred on the server side.
### InvalidEndpointException(InvalidEndpointException)
### InvalidExportTimeException(InvalidExportTimeException)
The specified `ExportTime` is outside of the point in time recovery window.
### InvalidRestoreTimeException(InvalidRestoreTimeException)
An invalid restore time was specified. RestoreDateTime must be between EarliestRestorableDateTime and LatestRestorableDateTime.
### ItemCollectionSizeLimitExceededException(ItemCollectionSizeLimitExceededException)
An item collection is too large. This exception is only returned for tables that have one or more local secondary indexes.
### LimitExceededException(LimitExceededException)
There is no limit to the number of daily on-demand backups that can be taken.
For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include `CreateTable`, `UpdateTable`, `DeleteTable`,`UpdateTimeToLive`, `RestoreTableFromBackup`, and `RestoreTableToPointInTime`.
When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations.
When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
There is a soft account quota of 2,500 tables.
GetRecords was called with a value of more than 1000 for the limit request parameter.
More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling.
### PointInTimeRecoveryUnavailableException(PointInTimeRecoveryUnavailableException)
Point in time recovery has not yet been enabled for this source table.
### ProvisionedThroughputExceededException(ProvisionedThroughputExceededException)
Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful, unless your retry queue is too large to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the *Amazon DynamoDB Developer Guide*.
### ReplicaAlreadyExistsException(ReplicaAlreadyExistsException)
The specified replica is already part of the global table.
### ReplicaNotFoundException(ReplicaNotFoundException)
The specified replica is no longer part of the global table.
### RequestLimitExceeded(RequestLimitExceeded)
Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
### ResourceInUseException(ResourceInUseException)
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the `CREATING` state.
### ResourceNotFoundException(ResourceNotFoundException)
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be `ACTIVE`.
### TableAlreadyExistsException(TableAlreadyExistsException)
A target table with the specified name already exists.
### TableInUseException(TableInUseException)
A target table with the specified name is either being created or deleted.
### TableNotFoundException(TableNotFoundException)
A source table with the name `TableName` does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
### TransactionCanceledException(TransactionCanceledException)
The entire transaction request was canceled.
DynamoDB cancels a `TransactWriteItems` request under the following circumstances:
* A condition in one of the condition expressions is not met.
* A table in the `TransactWriteItems` request is in a different account or region.
* More than one action in the `TransactWriteItems` operation targets the same item.
* There is insufficient provisioned capacity for the transaction to be completed.
* An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
* There is a user error, such as an invalid data format.
DynamoDB cancels a `TransactGetItems` request under the following circumstances:
* There is an ongoing `TransactGetItems` operation that conflicts with a concurrent `PutItem`, `UpdateItem`, `DeleteItem` or `TransactWriteItems` request. In this case the `TransactGetItems` operation fails with a `TransactionCanceledException`.
* A table in the `TransactGetItems` request is in a different account or region.
* There is insufficient provisioned capacity for the transaction to be completed.
* There is a user error, such as an invalid data format.
If using Java, DynamoDB lists the cancellation reasons on the `CancellationReasons` property. This property is not set for other languages. Transaction cancellation reasons are ordered in the order of requested items, if an item has no error it will have `None` code and `Null` message.
Cancellation reason codes and possible error messages:
* No Errors:
+ Code: `None`
+ Message: `null`
* Conditional Check Failed:
+ Code: `ConditionalCheckFailed`
+ Message: The conditional request failed.
* Item Collection Size Limit Exceeded:
+ Code: `ItemCollectionSizeLimitExceeded`
+ Message: Collection size exceeded.
* Transaction Conflict:
+ Code: `TransactionConflict`
+ Message: Transaction is ongoing for the item.
* Provisioned Throughput Exceeded:
+ Code: `ProvisionedThroughputExceeded`
+ Messages:
- The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
This message is received when provisioned throughput is exceeded on a provisioned DynamoDB table.
- The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.
This message is returned when provisioned throughput is exceeded on a provisioned GSI.
* Throttling Error:
+ Code: `ThrottlingError`
+ Messages:
- Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.
- Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.
This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.
* Validation Error:
+ Code: `ValidationError`
+ Messages:
- One or more parameter values were invalid.
- The update expression attempted to update the secondary index key beyond allowed size limits.
- The update expression attempted to update the secondary index key to unsupported type.
- An operand in the update expression has an incorrect data type.
- Item size to update has exceeded the maximum allowed size.
- Number overflow. Attempting to store a number with magnitude larger than supported range.
- Type mismatch for attribute to update.
- Nesting Levels have exceeded supported limits.
- The document path provided in the update expression is invalid for update.
- The provided expression refers to an attribute that does not exist in the item.
### TransactionConflictException(TransactionConflictException)
Operation was rejected because there is an ongoing transaction for the item.
### TransactionInProgressException(TransactionInProgressException)
The transaction with the given request token is already in progress.
Recommended Settings
This is a general recommendation for handling the `TransactionInProgressException`. These settings help ensure that the client retries will trigger completion of the ongoing `TransactWriteItems` request.
* Set `clientExecutionTimeout` to a value that allows at least one retry to be processed after 5 seconds have elapsed since the first attempt for the `TransactWriteItems` operation.
* Set `socketTimeout` to a value a little lower than the `requestTimeout` setting.
* `requestTimeout` should be set based on the time taken for the individual retries of a single HTTP request for your use case, but setting it to 1 second or higher should work well to reduce chances of retries and `TransactionInProgressException` errors.
* Use exponential backoff when retrying and tune backoff if needed.
Assuming the default retry policy, example timeout settings based on the guidelines above are as follows (see the configuration sketch after the timeline):
Example timeline:
* 0-1000 first attempt
* 1000-1500 first sleep/delay (default retry policy uses 500 ms as base delay for 4xx errors)
* 1500-2500 second attempt
* 2500-3500 second sleep/delay (500 * 2, exponential backoff)
* 3500-4500 third attempt
* 4500-6500 third sleep/delay (500 * 2^2)
* 6500-7500 fourth attempt (this can trigger inline recovery since 5 seconds have elapsed since the first attempt reached TC)
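A rough Rust-SDK analogue of these settings, as a sketch only: the durations are illustrative, and `clientExecutionTimeout`/`socketTimeout`/`requestTimeout` are Java SDK names that map only loosely onto the Rust timeout types.

```rust
// Sketch: timeout and retry settings in the spirit of the recommendations
// above, using the types re-exported from aws_sdk_dynamodb::config.
use std::time::Duration;
use aws_sdk_dynamodb::config::{retry::RetryConfig, timeout::TimeoutConfig};

fn tuned_config() -> aws_sdk_dynamodb::Config {
    let timeouts = TimeoutConfig::builder()
        // Overall bound for an operation, retries included (~ clientExecutionTimeout).
        .operation_timeout(Duration::from_secs(8))
        // Bound for each individual HTTP attempt (~ requestTimeout).
        .operation_attempt_timeout(Duration::from_secs(2))
        .build();
    aws_sdk_dynamodb::Config::builder()
        .timeout_config(timeouts)
        // Up to 4 attempts with exponential backoff, per the guidance above.
        .retry_config(RetryConfig::standard().with_max_attempts(4))
        .build()
}
```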
### Unhandled(Unhandled)
An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
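Putting the variants above to use, a minimal error-handling sketch might look like this; the operation and handling logic are illustrative, and the `From<SdkError<_, R>>` conversions documented below are what make `Error::from` available:

```rust
// Sketch: converting an operation's SdkError into the service-wide Error
// enum and matching on a few variants. Error is non-exhaustive, so a
// wildcard arm is required.
use aws_sdk_dynamodb::{Client, Error};

async fn list_tables(client: &Client) -> Result<(), Error> {
    match client.list_tables().send().await {
        Ok(output) => {
            println!("tables: {:?}", output.table_names());
            Ok(())
        }
        Err(sdk_err) => match Error::from(sdk_err) {
            Error::ResourceNotFoundException(e) => {
                eprintln!("missing resource: {e}");
                Ok(())
            }
            other => Err(other), // future/unknown variants included
        },
    }
}
```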
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
Nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports.
### impl From<…> for Error
`Error` implements `From` for every operation error type in this crate, with `fn from(err: …) -> Self` converting the operation-specific error into this service-wide enum. The covered types are:
`BatchExecuteStatementError`, `BatchGetItemError`, `BatchWriteItemError`, `CreateBackupError`, `CreateGlobalTableError`, `CreateTableError`, `DeleteBackupError`, `DeleteItemError`, `DeleteTableError`, `DescribeBackupError`, `DescribeContinuousBackupsError`, `DescribeContributorInsightsError`, `DescribeEndpointsError`, `DescribeExportError`, `DescribeGlobalTableError`, `DescribeGlobalTableSettingsError`, `DescribeImportError`, `DescribeKinesisStreamingDestinationError`, `DescribeLimitsError`, `DescribeTableError`, `DescribeTableReplicaAutoScalingError`, `DescribeTimeToLiveError`, `DisableKinesisStreamingDestinationError`, `EnableKinesisStreamingDestinationError`, `ExecuteStatementError`, `ExecuteTransactionError`, `ExportTableToPointInTimeError`, `GetItemError`, `ImportTableError`, `ListBackupsError`, `ListContributorInsightsError`, `ListExportsError`, `ListGlobalTablesError`, `ListImportsError`, `ListTablesError`, `ListTagsOfResourceError`, `PutItemError`, `QueryError`, `RestoreTableFromBackupError`, `RestoreTableToPointInTimeError`, `ScanError`, `TagResourceError`, `TransactGetItemsError`, `TransactWriteItemsError`, `UntagResourceError`, `UpdateContinuousBackupsError`, `UpdateContributorInsightsError`, `UpdateGlobalTableError`, `UpdateGlobalTableSettingsError`, `UpdateItemError`, `UpdateTableError`, `UpdateTableReplicaAutoScalingError`, `UpdateTimeToLiveError`.
### impl<R> From<SdkError<…, R>> for Error where R: Send + Sync + Debug + 'static
For each operation error type listed above, `Error` also implements `From<SdkError<…, R>>`, with `fn from(err: SdkError<…, R>) -> Self` converting the transport-level wrapper into this enum.
### impl RequestId for Error
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_dynamodb::client
===
Client for calling Amazon DynamoDB.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_dynamodb::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` but are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_dynamodb::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `BatchExecuteStatement` operation has a `Client::batch_execute_statement` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.batch_execute_statement()
.return_consumed_capacity("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Modules
---
* `customize`: Operation customization and supporting types.
Structs
---
* `Client`: Client for Amazon DynamoDB
Module aws_sdk_dynamodb::error
===
Common errors and error handling utilities.
Structs
---
* `DisplayErrorContext`: Provides a `Display` impl for an `Error` that outputs the full error context
Traits
---
* `ProvideErrorMetadata`: Trait to retrieve error metadata from a result
Type Aliases
---
* `BoxError`: A boxed error that is `Send` and `Sync`.
* `SdkError`: Error type returned by the client.
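As a sketch of how these fit together, `DisplayErrorContext` can wrap an `SdkError` to print its full chain of causes; the snippet assumes an async context, a constructed `client`, and an illustrative table name:

```rust
// Sketch: printing the full error context of a failed request.
use aws_sdk_dynamodb::error::DisplayErrorContext;

if let Err(err) = client.describe_table().table_name("example").send().await {
    // DisplayErrorContext is a tuple struct wrapping any error for display.
    eprintln!("request failed: {}", DisplayErrorContext(&err));
}
```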
Module aws_sdk_dynamodb::meta
===
Information about this crate.
Statics
---
* `PKG_VERSION`: Crate version number.
# REST API
Date: 2020-01-20
The REST API provides access for authenticated users to their data inside the Deck app. To get a better understanding of Deck's data models and their relations, please have a look at the data structure documentation.

## Prerequisites

* All requests require an `OCS-APIRequest` HTTP header to be set to `true` and a `Content-Type` of `application/json`.
* The API is located at https://nextcloud.local/index.php/apps/deck/api/v1.0
* All request parameters are required, unless otherwise specified

## Naming

* **Board** is the project-like grouping of tasks that can be shared to different users and groups
* **Stack** is the grouping of cards which is rendered in vertical columns in the UI
* **Card** is the representation of a single task
* **Labels** are defined on a board level and can be assigned to any number of cards

## Global responses

### 400 Bad request

In case the request is invalid, e.g. because a parameter is missing or an invalid value has been transmitted, a 400 error will be returned:

```
{
    "status": 400,
    "message": "title must be provided"
}
```

### 403 Permission denied

In any case where a user doesn't have access to a requested entity, a 403 error will be returned:

```
{
    "status": 403,
    "message": "Permission denied"
}
```

## Formats

### Date

Datetime values in request data need to be provided in ISO-8601. Example: `2020-01-20T09:52:43+00:00`

### If-Modified-Since

Some index endpoints support limiting the result set to entries that have been changed since the given time. The supported date formats are:

* IMF-fixdate: `Sun, 03 Aug 2019 10:34:12 GMT`
* (obsolete) RFC 850: `Sunday, 03-Aug-19 10:34:12 GMT`
* (obsolete) ANSI C asctime(): `Sun Aug 3 10:34:12 2019`

It is highly recommended to only use the IMF-fixdate format. Note that according to RFC2616 all HTTP date/time stamps MUST be represented in Greenwich Mean Time (GMT), without exception.

Example curl request:

```
curl -u admin:admin -X GET \
    'http://localhost:8000/index.php/apps/deck/api/v1.0/boards/2/stacks' \
    -H "OCS-APIRequest: true" \
    -H "If-Modified-Since: Mon, 05 Nov 2018 09:28:00 GMT"
```

### ETag

An ETag header is returned in order to determine if further child elements have been updated for the following endpoints:

* Fetch all user boards: `GET /api/v1.0/boards`
* Fetch a single board: `GET /api/v1.0/boards/{boardId}`
* Fetch all stacks of a board: `GET /api/v1.0/boards/{boardId}/stacks`
* Fetch a single stack of a board: `GET /api/v1.0/boards/{boardId}/stacks/{stackId}`
* Fetch a single card of a board: `GET /api/v1.0/boards/{boardId}/stacks/{stackId}/cards/{cardId}`
* Fetch attachments of a card: `GET /api/v1.0/boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments`

If an `If-None-Match` header is provided and the requested element has not changed, a `304 Not Modified` response will be returned.

Changes of child elements will propagate to their parents and also cause an update of the ETag, which is useful for determining if a sync is necessary on any client integration side. As an example, if a label is added to a card, the ETag of all related entities (the card, stack and board) will change. If available, the ETag will also be part of JSON response objects, as shown below for a card:

```
{
    "id": 81,
    "ETag": "bdb10fa2d2aeda092a2b6b469454dc90",
    "title": "Test card"
}
```
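To illustrate the conditional-request flow, here is a hypothetical Rust sketch using the `reqwest` and `tokio` crates; the crate choice, credentials and stored ETag value are all assumptions for illustration, and any HTTP client works the same way:

```rust
// Hypothetical sketch: conditional fetch of a board, sending a previously
// stored ETag via If-None-Match. A 304 means the cached copy is current.
use reqwest::StatusCode;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let resp = reqwest::Client::new()
        .get("https://nextcloud.local/index.php/apps/deck/api/v1.0/boards/2")
        .basic_auth("admin", Some("admin")) // illustrative credentials
        .header("OCS-APIRequest", "true")
        .header("Content-Type", "application/json")
        .header("If-None-Match", "\"bdb10fa2d2aeda092a2b6b469454dc90\"")
        .send()
        .await?;
    if resp.status() == StatusCode::NOT_MODIFIED {
        println!("board unchanged, use the cached copy");
    }
    Ok(())
}
```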
## Changelog

### API version 1.0

* Deck >=1.0.0: The maximum length of the card title has been extended from 100 to 255 characters
* Deck >=1.0.0: The API will now return a 400 Bad request response if the length limitation of a board, stack or card title is exceeded

### API version 1.1

This API version has become available with Deck 1.3.0.

* The maximum length of the card title has been extended from 100 to 255 characters
* The API will now return a 400 Bad request response if the length limitation of a board, stack or card title is exceeded
* The attachments API endpoints will return other attachment types than `deck_file`
* Prior to Deck version 1.3.0 (API v1.0), attachments were stored within Deck. For this type of attachment, `deck_file` was used as the default attachment type
* Starting with Deck version 1.3.0 (API v1.1), files are stored within the user's regular Nextcloud files and the type `file` has been introduced for that

### API version 1.2 (unreleased)

## Endpoints

### Boards

#### GET /boards - Get a list of boards

The board list endpoint supports setting an `If-Modified-Since` header to limit the results to entities that are changed after the provided time.

**Request parameters**

| Parameter | Type | Description |
| --- | --- | --- |
| details | Bool | *Optional* Enhance boards with details about labels, stacks and users |

**Response**

200 Success. Returns an array of board items:

```
[
    {
        "title": "Board title",
        "owner": {
            "primaryKey": "admin",
            "uid": "admin",
            "displayname": "Administrator"
        },
        "color": "ff0000",
        "archived": false,
        "labels": [],
        "acl": [],
        "permissions": {
            "PERMISSION_READ": true,
            "PERMISSION_EDIT": true,
            "PERMISSION_MANAGE": true,
            "PERMISSION_SHARE": true
        },
        "users": [],
        "shared": 0,
        "deletedAt": 0,
        "id": 10,
        "lastModified": 1586269585,
        "settings": {
            "notify-due": "off",
            "calendar": true
        }
    }
]
```

#### POST /boards - Create a new board

**Request body**

| Parameter | Type | Description |
| --- | --- | --- |
| title | String | The title of the new board, maximum length is limited to 100 characters |
| color | String | The hexadecimal color of the new board (e.g. FF0000) |

```
{
    "title": "Board title",
    "color": "ff0000"
}
```

**Response**

200 Success:

```
{
    "title": "Board title",
    "owner": {
        "primaryKey": "admin",
        "uid": "admin",
        "displayname": "Administrator"
    },
    "color": "ff0000",
    "archived": false,
    "labels": [
        { "title": "Finished", "color": "31CC7C", "boardId": 10, "cardId": null, "id": 37 },
        { "title": "To review", "color": "317CCC", "boardId": 10, "cardId": null, "id": 38 },
        { "title": "Action needed", "color": "FF7A66", "boardId": 10, "cardId": null, "id": 39 },
        { "title": "Later", "color": "F1DB50", "boardId": 10, "cardId": null, "id": 40 }
    ],
    "acl": [],
    "permissions": {
        "PERMISSION_READ": true,
        "PERMISSION_EDIT": true,
        "PERMISSION_MANAGE": true,
        "PERMISSION_SHARE": true
    },
    "users": [],
    "deletedAt": 0,
    "id": 10,
    "lastModified": 1586269585
}
```

403 Forbidden: A 403 response might be returned if the user's ability to create new boards has been disabled by the administrator. For checking this beforehand, see the `canCreateBoards` value in the Nextcloud capabilities.
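A hypothetical companion sketch for the create-board call, again with `reqwest` (requires its `json` feature plus `serde_json`; credentials and values are illustrative):

```rust
// Hypothetical sketch: creating a board via POST /boards.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let resp = reqwest::Client::new()
        .post("https://nextcloud.local/index.php/apps/deck/api/v1.0/boards")
        .basic_auth("admin", Some("admin")) // illustrative credentials
        .header("OCS-APIRequest", "true")
        // .json() serializes the body and sets Content-Type: application/json.
        .json(&json!({ "title": "Board title", "color": "ff0000" }))
        .send()
        .await?;
    println!("status: {}", resp.status());
    Ok(())
}
```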
#### GET /boards/{boardId} - Get board details

**Request parameters**

| Parameter | Type | Description |
| --- | --- | --- |
| boardId | Integer | The id of the board to fetch |

**Response**

200 Success:

```
{
    "title": "Board title",
    "owner": {
        "primaryKey": "admin",
        "uid": "admin",
        "displayname": "Administrator"
    },
    "color": "ff0000",
    "archived": false,
    "labels": [
        { "title": "Finished", "color": "31CC7C", "boardId": "10", "cardId": null, "id": 37 },
        { "title": "To review", "color": "317CCC", "boardId": "10", "cardId": null, "id": 38 },
        { "title": "Action needed", "color": "FF7A66", "boardId": "10", "cardId": null, "id": 39 },
        { "title": "Later", "color": "F1DB50", "boardId": "10", "cardId": null, "id": 40 }
    ],
    "acl": [],
    "permissions": {
        "PERMISSION_READ": true,
        "PERMISSION_EDIT": true,
        "PERMISSION_MANAGE": true,
        "PERMISSION_SHARE": true
    },
    "users": [
        {
            "primaryKey": "admin",
            "uid": "admin",
            "displayname": "Administrator"
        }
    ],
    "deletedAt": 0,
    "id": 10
}
```

#### PUT /boards/{boardId} - Update board details

**Request body**

| Parameter | Type | Description |
| --- | --- | --- |
| title | String | The title of the board, maximum length is limited to 100 characters |
| color | String | The hexadecimal color of the board (e.g. FF0000) |
| archived | Bool | Whether or not this board should be archived. |

```
{
    "title": "Board title",
    "color": "ff0000",
    "archived": false
}
```

**Response**

200 Success

#### DELETE /boards/{boardId} - Delete a board

**Request parameters**

| Parameter | Type | Description |
| --- | --- | --- |
| boardId | Integer | The id of the board to fetch |

**Response**

200 Success

#### POST /boards/{boardId}/undo_delete - Restore a deleted board

**Request parameters**

| Parameter | Type | Description |
| --- | --- | --- |
| boardId | Integer | The id of the board to fetch |

**Response**

200 Success

#### POST /boards/{boardId}/acl - Add new acl rule

**Request body**

| Parameter | Type | Description |
| --- | --- | --- |
| type | Integer | Type of the participant |
| participant | String | The uid of the participant |
| permissionEdit | Bool | Setting if the participant has edit permissions |
| permissionShare | Bool | Setting if the participant has sharing permissions |
| permissionManage | Bool | Setting if the participant has management permissions |

Supported participant types:

**Response**

200 Success:

```
[{
    "participant": {
        "primaryKey": "userid",
        "uid": "userid",
        "displayname": "<NAME>"
    },
    "type": 0,
    "boardId": 1,
    "permissionEdit": true,
    "permissionShare": false,
    "permissionManage": true,
    "owner": false,
    "id": 1
}]
```

#### PUT /boards/{boardId}/acl/{aclId} - Update an acl rule

**Request parameters**

| Parameter | Type | Description |
| --- | --- | --- |
| permissionEdit | Bool | Setting if the participant has edit permissions |
| permissionShare | Bool | Setting if the participant has sharing permissions |
| permissionManage | Bool | Setting if the participant has management permissions |

**Response**

200 Success

#### DELETE /boards/{boardId}/acl/{aclId} - Delete an acl rule

**Response**

200 Success

### Stacks

#### GET /boards/{boardId}/stacks - Get stacks

The stack list endpoint supports setting an `If-Modified-Since` header to limit the results to entities that are changed after the provided time.

**Request parameters**

| Parameter | Type | Description |
| --- | --- | --- |
| boardId | Integer | The id of the board to fetch |
Integer\n The id of the board to fetch\n \n \n Response\n \n [\n {\n "title": "ToDo",\n "boardId": 2,\n "deletedAt": 0,\n "lastModified": 1541426139,\n "cards": [...],\n "order": 999,\n "id": 4\n }\n]\n\n \n 200 Success\n GET /boards/{boardId}/stacks/archived - Get list of archived stacks\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board to fetch\n \n \n Response\n \n [\n {\n "title": "ToDo",\n "boardId": 2,\n "deletedAt": 0,\n "lastModified": 1541426139,\n "cards": [...],\n "order": 999,\n "id": 4\n }\n]\n\n \n 200 Success\n GET /boards/{boardId}/stacks/{stackId} - Get stack details\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the stack belongs to\n \n \n stackId\n Integer\n The id of the stack\n \n \n Response\n 200 Success\n POST /boards/{boardId}/stacks - Create a new stack\n Request body\n \n \n Parameter\n Type\n Description\n \n \n title\n String\n The title of the new stack, maximum length is limited to 100 characters\n \n \n order\n Integer\n Order for sorting the stacks\n \n \n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board to fetch\n \n \n Response\n 200 Success\n PUT /boards/{boardId}/stacks/{stackId} - Update stack details\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the stack belongs to\n \n \n stackId\n Integer\n The id of the stack\n \n \n Request body\n \n \n Parameter\n Type\n Description\n \n \n title\n String\n The title of the stack, maximum length is limited to 100 characters\n \n \n order\n Integer\n Order for sorting the stacks\n \n \n Response\n 200 Success\n DELETE /boards/{boardId}/stacks/{stackId} - Delete a stack\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the stack belongs to\n \n \n stackId\n Integer\n The id of the stack\n \n \n Response\n 200 Success\n Cards\n GET /boards/{boardId}/stacks/{stackId}/cards/{cardId} - Get card details\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n \n Response\n 200 Success\n POST /boards/{boardId}/stacks/{stackId}/cards - Create a new card\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n Request body\n \n \n Parameter\n Type\n Description\n \n \n title\n String\n The title of the card, maximum length is limited to 255 characters\n \n \n type\n String\n Type of the card (for later use) use \'plain\' for now\n \n \n order\n Integer\n Order for sorting the stacks\n \n \n description\n String\n (optional) The markdown description of the card\n \n \n duedate\n timestamp\n (optional) The duedate of the card or null\n \n \n Response\n \n { \n "title":"Test",\n "description":null,\n "stackId":6,\n "type":"plain",\n "lastModified":1541528026,\n "createdAt":1541528026,\n "labels":null,\n "assignedUsers":null,\n "attachments":null,\n "attachmentCount":null,\n "owner":"admin",\n "order":999,\n "archived":false,\n "duedate": "2019-12-24T19:29:30+00:00",\n "deletedAt":0,\n "commentsUnread":0,\n "id":10,\n "overdue":0\n}\n\n \n 200 Success\n PUT 
/boards/{boardId}/stacks/{stackId}/cards/{cardId} - Update card details\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n \n Request data\n \n \n Parameter\n Type\n Description\n \n \n title\n String\n The title of the card, maximum length is limited to 255 characters\n \n \n description\n String\n The markdown description of the card\n \n \n type\n String\n Type of the card (for later use) use \'plain\' for now\n \n \n order\n Integer\n Order for sorting the stacks\n \n \n duedate\n timestamp\n The ISO-8601 formatted duedate of the card or null\n \n \n \n { \n "title": "Test card",\n "description": "A card description",\n "type": "plain",\n "order": 999,\n "duedate": "2019-12-24T19:29:30+00:00",\n}\n\n \n Response\n 200 Success\n DELETE /boards/{boardId}/stacks/{stackId}/cards/{cardId} - Delete a card\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n \n Response\n 200 Success\n PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/assignLabel - Assign a label to a card\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n \n Request data\n \n \n Parameter\n Type\n Description\n \n \n labelId\n Integer\n The label id to assign to the card\n \n \n #### Response\n \n \n 200 Success\n PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/removeLabel - Remove a label to a card\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n \n Request data\n \n \n Parameter\n Type\n Description\n \n \n labelId\n Integer\n The label id to remove to the card\n \n \n Response\n 200 Success\n PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/assignUser - Assign a user to a card\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n \n Request data\n \n \n Parameter\n Type\n Description\n \n \n userId\n String\n The user id to assign to the card\n \n \n Response\n 200 Success\n \n {\n "id": 3,\n "participant": {\n "primaryKey": "admin",\n "uid": "admin",\n "displayname": "admin"\n },\n "cardId": 1\n}\n\n \n 400 Bad request\n \n {\n "status": 400,\n "message": "The user is already assigned to the card"\n}\n\n \n The request can fail with a bad request response for the following reasons: - Missing or wrongly formatted request parameters - The user is already assigned to the card - The user is not part of the board\n PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/unassignUser - Unassign a user from a card\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n 
\n Request data\n \n \n Parameter\n Type\n Description\n \n \n userId\n String\n The user id to unassign from the card\n \n \n Response\n 200 Success\n PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/reorder - Change the sorting order of a card\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n \n Request data\n \n \n Parameter\n Type\n Description\n \n \n order\n Integer\n The position in the stack where the card should be moved to\n \n \n stackId\n Integer\n The id of the stack where the card should be moved to\n \n \n Response\n 200 Success\n Labels\n GET /boards/{boardId}/labels/{labelId} - Get label details\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the label belongs to\n \n \n labelId\n Integer\n The id of the label\n \n \n Response\n 200 Success\n \n {\n "title": "Abgeschlossen",\n "color": "31CC7C",\n "boardId": "2",\n "cardId": null,\n "id": 5\n}\n\n \n POST /boards/{boardId}/labels - Create a new label\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the label belongs to\n \n \n Request data\n \n {\n "title": "Finished",\n "color": "31CC7C"\n}\n\n \n Response\n 200 Success\n PUT /boards/{boardId}/labels/{labelId} - Update label details\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the label belongs to\n \n \n labelId\n Integer\n The id of the label\n \n \n Request data\n \n {\n "title": "Finished",\n "color": "31CC7C"\n}\n\n \n Response\n 200 Success\n DELETE /boards/{boardId}/labels/{labelId} - Delete a label\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the label belongs to\n \n \n labelId\n Integer\n The id of the label\n \n \n Response\n 200 Success\n Attachments\n GET /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments - Get a list of attachments\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the card belongs to\n \n \n stackId\n Integer\n The id of the stack the card belongs to\n \n \n cardId\n Integer\n The id of the card\n \n \n Response\n 200 Success\n \n [\n {\n "cardId": 5,\n "type": "deck_file",\n "data": "6DADC2C69F4.eml",\n "lastModified": 1541529048,\n "createdAt": 1541529048,\n "createdBy": "admin",\n "deletedAt": 0,\n "extendedData": {\n "filesize": 922258,\n "mimetype": "application/octet-stream",\n "info": {\n "dirname": ".",\n "basename": "6DADC2C69F4.eml",\n "extension": "eml",\n "filename": "6DADC2C69F4"\n }\n },\n "id": 6\n }\n]\n\n\n \n GET /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments/{attachmentId} - Get the attachment file\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the attachment belongs to\n \n \n stackId\n Integer\n The id of the stack the attachment belongs to\n \n \n cardId\n Integer\n The id of the card the attachment belongs to\n \n \n attachmentId\n Integer\n The id of the attachment\n \n \n Response\n 200 Success\n POST /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments - Upload an attachment\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the attachment belongs to\n \n \n stackId\n 
Integer\n The id of the stack the attachment belongs to\n \n \n cardId\n Integer\n The id of the card the attachment belongs to\n \n \n Request data\n \n \n Parameter\n Type\n Description\n \n \n type\n String\n The type of the attachement\n \n \n file\n Binary\n File data to add as an attachment\n \n \n \n Prior to Deck version v1.3.0 (API v1.0), attachments were stored within deck. For this type of attachments deck_file was used as the default type of attachments\n Starting with Deck version 1.3.0 (API v1.1) files are stored within the users regular Nextcloud files and the type file has been introduced for that\n \n Response\n 200 Success\n PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments/{attachmentId} - Update an attachment\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the attachment belongs to\n \n \n stackId\n Integer\n The id of the stack the attachment belongs to\n \n \n cardId\n Integer\n The id of the card the attachment belongs to\n \n \n attachmentId\n Integer\n The id of the attachment\n \n \n Request data\n \n \n Parameter\n Type\n Description\n \n \n type\n String\n The type of the attachement\n \n \n file\n Binary\n File data to add as an attachment\n \n \n For now only deck_file is supported as an attachment type.\n Response\n 200 Success\n DELETE /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments/{attachmentId} - Delete an attachment\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the attachment belongs to\n \n \n stackId\n Integer\n The id of the stack the attachment belongs to\n \n \n cardId\n Integer\n The id of the card the attachment belongs to\n \n \n attachmentId\n Integer\n The id of the attachment\n \n \n Response\n 200 Success\n PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments/{attachmentId}/restore - Resore a deleted attachment\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the board the attachment belongs to\n \n \n stackId\n Integer\n The id of the stack the attachment belongs to\n \n \n cardId\n Integer\n The id of the card the attachment belongs to\n \n \n attachmentId\n Integer\n The id of the attachment\n \n \n Response\n 200 Success\n GET /boards/import/getSystems - Import a board\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n system\n Integer\n The system name. Example: trello\n \n \n Response\n Make a request to see the json schema of system\n \n {\n}\n\n \n GET /boards/import/config/system/{schema} - Import a board\n Request parameters\n Response\n \n [\n "trello"\n]\n\n \n POST /boards/import - Import a board\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n system\n string\n The allowed name of system to import from\n \n \n config\n Object\n The config object (JSON)\n \n \n data\n Object\n The data object to import (JSON)\n \n \n Response\n 200 Success\n OCS API\n The following endpoints are available through the Nextcloud OCS endpoint, which is available at /ocs/v2.php/apps/deck/api/v1.0/. \nThis has the benefit that both the web UI as well as external integrations can use the same API.\n Config\n Deck stores user and app configuration values globally and per board. The GET endpoint allows to fetch the current global configuration while board settings will be exposed through the board element on the regular API endpoints. 
\n GET /api/v1.0/config - Fetch app configuration values\n Response\n \n \n Config key\n Description\n \n \n calendar\n Determines if the calendar/tasks integration through the CalDAV backend is enabled for the user (boolean)\n \n \n cardDetailsInModal\n Determines if the bigger view is used (boolean)\n \n \n cardIdBadge\n Determines if the ID badges are displayed on cards (boolean)\n \n \n groupLimit\n Determines if creating new boards is limited to certain groups of the instance. The resulting output is an array of group objects with the id and the displayname (Admin only)\n \n \n \n {\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": {\n "calendar": true,\n "cardDetailsInModal": true,\n "cardIdBadge": true,\n "groupLimit": [\n {\n "id": "admin",\n "displayname": "admin"\n }\n ]\n }\n }\n}\n\n\n \n POST /api/v1.0/config/{id}/{key} - Set a config value\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n id\n Integer\n The id of the board\n \n \n key\n String\n The config key to set, prefixed with board:{boardId}: for board specific settings\n \n \n value\n String\n The value that should be stored for the config key\n \n \n Board configuration\n \n \n Key\n Value\n \n \n notify-due\n off, assigned or all\n \n \n calendar\n Boolean\n \n \n cardDetailsInModal\n Boolean\n \n \n cardIdBadge\n Boolean\n \n \n Example request\n \n curl -X POST \'https://admin:[email protected]/ocs/v2.php/apps/deck/api/v1.0/config/calendar\' -H \'Accept: application/json\' -H "Content-Type: application/json" -H \'OCS-APIRequest: true\' --data-raw \'{"value":false}\'\n\n{\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": false\n }\n}\n\n\n \n Request parameters\n string $cardId, int $limit = 20, int $offset = 0\n \n \n Parameter\n Type\n Description\n \n \n cardId\n Integer\n The id of the card\n \n \n limit\n Integer\n The maximum number of comments that should be returned, defaults to 20\n \n \n offset\n Integer\n The start offset used for pagination, defaults to 0\n \n \n \n curl \'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/cards/12/comments\' \\\n -H \'Accept: application/json\' -H \'OCS-APIRequest: true\'\n\n \n Response\n A list of comments will be provided under the ocs.data key. If no or no more comments are available the list will be empty.\n 200 Success\n \n {\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": [\n {\n "id": 175,\n "objectId": 12,\n "message": "This is a comment with a mention to @alice",\n "actorId": "admin",\n "actorType": "users",\n "actorDisplayName": "Administrator",\n "creationDateTime": "2020-03-10T10:23:07+00:00",\n "mentions": [\n {\n "mentionId": "alice",\n "mentionType": "user",\n "mentionDisplayName": "alice"\n }\n ]\n }\n ]\n }\n}\n\n \n In case a comment is marked as a reply to another comment object, the parent comment will be added as replyTo entry to the response. Only the next parent node is added, nested replies are not exposed directly. 
\n \n [\n {\n "id": 175,\n "objectId": 12,\n "message": "This is a comment with a mention to @alice",\n "actorId": "admin",\n "actorType": "users",\n "actorDisplayName": "Administrator",\n "creationDateTime": "2020-03-10T10:23:07+00:00",\n "mentions": [\n {\n "mentionId": "alice",\n "mentionType": "user",\n "mentionDisplayName": "alice"\n }\n ],\n "replyTo": {\n "id": 175,\n "objectId": 12,\n "message": "This is a comment with a mention to @alice",\n "actorId": "admin",\n "actorType": "users",\n "actorDisplayName": "Administrator",\n "creationDateTime": "2020-03-10T10:23:07+00:00",\n "mentions": [\n {\n "mentionId": "alice",\n "mentionType": "user",\n "mentionDisplayName": "alice"\n }\n ]\n }\n }\n]\n\n \n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n cardId\n Integer\n The id of the card\n \n \n message\n String\n The message of the comment, maximum length is limited to 1000 characters\n \n \n parentId\n Integer\n (optional) The start offset used for pagination, defaults to null\n \n \n Mentions will be parsed by the server. The server will return a list of mentions in the response to this request as shown below.\n \n curl -X POST \'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/cards/12/comments\' \\\n -H \'Accept: application/json\' -H \'OCS-APIRequest: true\'\n -H \'Content-Type: application/json;charset=utf-8\'\n --data \'{"message":"My message to @bob","parentId":null}\'\n\n \n Response\n A list of comments will be provided under the ocs.data key. If no or no more comments are available the list will be empty.\n 200 Success\n \n {\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": {\n "id": "177",\n "objectId": "13",\n "message": "My message to @bob",\n "actorId": "admin",\n "actorType": "users",\n "actorDisplayName": "Administrator",\n "creationDateTime": "2020-03-10T10:30:17+00:00",\n "mentions": [\n {\n "mentionId": "bob",\n "mentionType": "user",\n "mentionDisplayName": "bob"\n }\n ]\n }\n }\n}\n\n \n 400 Bad request\n A bad request response is returned if invalid input values are provided. The response message will contain details about which part was not valid.\n 404 Not found\n A not found response might be returned if: - The card for the given cardId could not be found - The parent comment could not be found\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n cardId\n Integer\n The id of the card\n \n \n commentId\n Integer\n The id of the comment\n \n \n message\n String\n The message of the comment, maximum length is limited to 1000 characters\n \n \n Mentions will be parsed by the server. The server will return a list of mentions in the response to this request as shown below.\n Updating comments is limited to the current user being the same as the comment author specified in the actorId of the comment.\n \n curl -X POST \'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/cards/12/comments\' \\\n -H \'Accept: application/json\' -H \'OCS-APIRequest: true\'\n -H \'Content-Type: application/json;charset=utf-8\'\n --data \'{"message":"My message"}\'\n\n \n Response\n A list of comments will be provided under the ocs.data key. 
If no or no more comments are available the list will be empty.\n 200 Success\n \n {\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": {\n "id": "177",\n "objectId": "13",\n "message": "My message",\n "actorId": "admin",\n "actorType": "users",\n "actorDisplayName": "Administrator",\n "creationDateTime": "2020-03-10T10:30:17+00:00",\n "mentions": []\n }\n }\n}\n\n \n 400 Bad request\n A bad request response is returned if invalid input values are provided. The response message will contain details about which part was not valid.\n 404 Not found\n A not found response might be returned if: - The card for the given cardId could not be found - The comment could not be found\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n cardId\n Integer\n The id of the card\n \n \n commentId\n Integer\n The id of the comment\n \n \n Deleting comments is limited to the current user being the same as the comment author specified in the actorId of the comment.\n \n curl -X DELETE \'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/cards/12/comments\' \\\n -H \'Accept: application/json\' -H \'OCS-APIRequest: true\'\n -H \'Content-Type: application/json;charset=utf-8\'\n\n \n Response\n A list of comments will be provided under the ocs.data key. If no or no more comments are available the list will be empty.\n 200 Success\n \n {\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": []\n }\n}\n\n \n 400 Bad request\n A bad request response is returned if invalid input values are provided. The response message will contain details about which part was not valid.\n 404 Not found\n A not found response might be returned if: - The card for the given cardId could not be found - The comment could not be found\n Sessions\n PUT /session/create - creates a new session\n Request parameters\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the opened board\n \n \n \n curl -X PUT \'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/session/create\' \\\n -H \'Accept: application/json\' -H \'OCS-APIRequest: true\' \\\n -H \'Content-Type: application/json;charset=utf-8\' \\\n --data \'{"boardId":1}\'\n\n \n Response\n 200 Success\n \n {\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": {\n "token": <KEY>"\n }\n }\n}\n\n \n POST /session/sync - notifies the server, that the session is still open\n Request body\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the opened board\n \n \n token\n String\n The session token from the /sessions/create response\n \n \n \n curl -X POST \'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/session/create\' \\\n -H \'Accept: application/json\' -H \'OCS-APIRequest: true\' \\\n -H \'Content-Type: application/json;charset=utf-8\' \\\n --data \'{"boardId":1, "token":"<KEY>"}\'\n\n \n Response\n 200 Success\n \n {\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": []\n }\n}\n\n \n 404 Not Found\n the provided token is invalid or expired\n POST /session/close - closes the session\n Request body\n \n \n Parameter\n Type\n Description\n \n \n boardId\n Integer\n The id of the opened board\n \n \n token\n String\n The session token from the /sessions/create response\n \n \n \n curl -X POST \'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/session/close\' \\\n -H \'Accept: application/json\' -H \'OCS-APIRequest: true\' 
\\\n -H \'Content-Type: application/json;charset=utf-8\' \\\n --data \'{"boardId":1, "token":"<KEY>"}\'\n\n \n Response\n 200 Success\n \n {\n "ocs": {\n "meta": {\n "status": "ok",\n "statuscode": 200,\n "message": "OK"\n },\n "data": []\n }\n}'
The REST API provides access for authenticated users to their data inside the Deck app. To get a better understanding of Deck's data models and their relations, please have a look at the data structure documentation.
# Prerequisites
* All requests require an `OCS-APIRequest` HTTP header to be set to `true` and a `Content-Type` of `application/json`.
* The API is located at https://nextcloud.local/index.php/apps/deck/api/v1.0
* All request parameters are required, unless otherwise specified
## Naming
* Board is the project-like grouping of tasks that can be shared to different users and groups
* Stack is the grouping of cards which is rendered in vertical columns in the UI
* Card is the representation of a single task
* Labels are defined on a board level and can be assigned to any number of cards
## Global responses
### 400 Bad request
In case the request is invalid, e.g. because a parameter is missing or an invalid value has been transmitted, a 400 error will be returned:
```
{
"status": 400,
"message": "title must be provided"
}
```
### 403 Permission denied
Whenever a user doesn't have access to a requested entity, a 403 error will be returned:
```
{
"status": 403,
"message": "Permission denied"
}
```
## Formats
### Date
Datetime values in request data need to be provided in ISO-8601. Example: 2020-01-20T09:52:43+00:00
### If-Modified-Since
Some index endpoints support limiting the result set to entries that have been changed since the given time. The supported date formats are:
* IMF-fixdate:
```
Sun, 03 Aug 2019 10:34:12 GMT
```
* (obsolete) RFC 850:
```
Sunday, 03-Aug-19 10:34:12 GMT
```
* (obsolete) ANSI C asctime():
```
Sun Aug 3 10:34:12 2019
```
It is highly recommended to only use the IMF-fixdate format. Note that according to RFC2616 all HTTP date/time stamps MUST be represented in Greenwich Mean Time (GMT), without exception.
Example curl request:
```
curl -u admin:admin -X GET \
'http://localhost:8000/index.php/apps/deck/api/v1.0/boards/2/stacks' \
-H "OCS-APIRequest: true" \
-H "If-Modified-Since: Mon, 05 Nov 2018 09:28:00 GMT"
```
### ETag
An ETag header is returned in order to determine if further child elements have been updated for the following endpoints:
* Fetch all user boards: `GET /api/v1.0/boards`
* Fetch a single board: `GET /api/v1.0/boards/{boardId}`
* Fetch the stacks of a board: `GET /api/v1.0/boards/{boardId}/stacks`
* Fetch a single stack of a board: `GET /api/v1.0/boards/{boardId}/stacks/{stackId}`
* Fetch a single card of a board: `GET /api/v1.0/boards/{boardId}/stacks/{stackId}/cards/{cardId}`
* Fetch attachments of a card: `GET /api/v1.0/boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments`
If an `If-None-Match` header is provided and the requested element has not changed, a `304 Not Modified` response will be returned.
Changes of child elements will propagate to their parents and also cause an update of the ETag which will be useful for determining if a sync is necessary on any client integration side. As an example, if a label is added to a card, the ETag of all related entities (the card, stack and board) will change.
If available the ETag will also be part of JSON response objects as shown below for a card:
```
{
"id": 81,
"ETag": "bdb10fa2d2aeda092a2b6b469454dc90",
"title": "Test card"
}
```
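As a sketch of how a client can combine these headers, the request below re-fetches a board only if it has changed since the last sync; the host, board id and ETag value are illustrative:
```
curl -u admin:admin -X GET \
  'http://localhost:8000/index.php/apps/deck/api/v1.0/boards/2' \
  -H "OCS-APIRequest: true" \
  -H 'If-None-Match: "bdb10fa2d2aeda092a2b6b469454dc90"'
```
A `304 Not Modified` status with an empty body means the cached copy is still current.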
# Changelog
## API version 1.0
* Deck >=1.0.0: The maximum length of the card title has been extended from 100 to 255 characters
* Deck >=1.0.0: The API will now return a 400 Bad request response if the length limitation of a board, stack or card title is exceeded
## API version 1.1
This API version has become available with Deck 1.3.0.
* The maximum length of the card title has been extended from 100 to 255 characters
* The API will now return a 400 Bad request response if the length limitation of a board, stack or card title is exceeded
* The attachments API endpoints will return other attachment types than deck_file
* Prior to Deck version v1.3.0 (API v1.0), attachments were stored within Deck. For this type of attachment `deck_file` was used as the default type
* Starting with Deck version 1.3.0 (API v1.1) files are stored within the user's regular Nextcloud files and the type `file` has been introduced for that
## API version 1.2 (unreleased)
# Endpoints
## Boards
### GET /boards - Get a list of boards
The board list endpoint supports setting an `If-Modified-Since` header to limit the results to entities that were changed after the provided time.
Parameter | Type | Description |
| --- | --- | --- |
details | Bool | (optional) Enhance boards with details about labels, stacks and users |
Returns an array of board items
```
[
{
"title": "Board title",
"owner": {
"primaryKey": "admin",
"uid": "admin",
"displayname": "Administrator"
},
"color": "ff0000",
"archived": false,
"labels": [],
"acl": [],
"permissions": {
"PERMISSION_READ": true,
"PERMISSION_EDIT": true,
"PERMISSION_MANAGE": true,
"PERMISSION_SHARE": true
},
"users": [],
"shared": 0,
"deletedAt": 0,
"id": 10,
"lastModified": 1586269585,
"settings": {
"notify-due": "off",
"calendar": true
}
}
]
```
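A minimal example request for this endpoint, using the optional `details` parameter (host and credentials are illustrative):
```
curl -u admin:admin -X GET \
  'http://localhost:8000/index.php/apps/deck/api/v1.0/boards?details=true' \
  -H "OCS-APIRequest: true"
```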
### POST /boards - Create a new board
Parameter | Type | Description |
| --- | --- | --- |
title | String | The title of the new board, maximum length is limited to 100 characters |
color | String | The hexadecimal color of the new board (e.g. FF0000) |
```
{
"title": "Board title",
"color": "ff0000"
}
```
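A sketch of the corresponding request (host and credentials are illustrative):
```
curl -u admin:admin -X POST \
  'http://localhost:8000/index.php/apps/deck/api/v1.0/boards' \
  -H "OCS-APIRequest: true" -H "Content-Type: application/json" \
  -d '{"title": "Board title", "color": "ff0000"}'
```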
#### 403 Forbidden
A 403 response might be returned if the user's ability to create new boards has been disabled by the administrator. To check this beforehand, see the `canCreateBoards` value in the Nextcloud capabilities.
### GET /boards/{boardId} - Get board details
### PUT /boards/{boardId} - Update board details
Parameter | Type | Description |
| --- | --- | --- |
title | String | The title of the board, maximum length is limited to 100 characters |
color | String | The hexadecimal color of the board (e.g. FF0000) |
archived | Bool | Whether or not this board should be archived. |
```
{
"title": "Board title",
"color": "ff0000",
"archived": false
}
```
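Sent, for example, as (host, credentials and board id are illustrative):
```
curl -u admin:admin -X PUT \
  'http://localhost:8000/index.php/apps/deck/api/v1.0/boards/10' \
  -H "OCS-APIRequest: true" -H "Content-Type: application/json" \
  -d '{"title": "Board title", "color": "ff0000", "archived": false}'
```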
### DELETE /boards/{boardId} - Delete a board
### POST /boards/{boardId}/undo_delete - Restore a deleted board
### POST /boards/{boardId}/acl - Add new acl rule
Parameter | Type | Description |
| --- | --- | --- |
type | Integer | Type of the participant |
participant | String | The uid of the participant |
permissionEdit | Bool | Setting if the participant has edit permissions |
permissionShare | Bool | Setting if the participant has sharing permissions |
permissionManage | Bool | Setting if the participant has management permissions |
Supported participant types:
```
[{
"participant": {
"primaryKey": "userid",
"uid": "userid",
"displayname": "<NAME>"
},
"type": 0,
"boardId": 1,
"permissionEdit": true,
"permissionShare": false,
"permissionManage": true,
"owner": false,
"id": 1
}]
```
### PUT /boards/{boardId}/acl/{aclId} - Update an acl rule
Parameter | Type | Description |
| --- | --- | --- |
permissionEdit | Bool | Setting if the participant has edit permissions |
permissionShare | Bool | Setting if the participant has sharing permissions |
permissionManage | Bool | Setting if the participant has management permissions |
### DELETE /boards/{boardId}/acl/{aclId} - Delete an acl rule
## Stacks
### GET /boards/{boardId}/stacks - Get stacks
The stack list endpoint supports setting an `If-Modified-Since` header to limit the results to entities that were changed after the provided time.
### GET /boards/{boardId}/stacks/archived - Get list of archived stacks
### GET /boards/{boardId}/stacks/{stackId} - Get stack details
### POST /boards/{boardId}/stacks - Create a new stack
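The create and update endpoints accept a `title` (maximum length 100 characters) and an integer `order` used for sorting. A sketch of a create request (host, credentials and board id are illustrative):
```
curl -u admin:admin -X POST \
  'http://localhost:8000/index.php/apps/deck/api/v1.0/boards/2/stacks' \
  -H "OCS-APIRequest: true" -H "Content-Type: application/json" \
  -d '{"title": "ToDo", "order": 999}'
```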
### PUT /boards/{boardId}/stacks/{stackId} - Update stack details
### DELETE /boards/{boardId}/stacks/{stackId} - Delete a stack
## Cards
### GET /boards/{boardId}/stacks/{stackId}/cards/{cardId} - Get card details
### POST /boards/{boardId}/stacks/{stackId}/cards - Create a new card
Parameter | Type | Description |
| --- | --- | --- |
title | String | The title of the card, maximum length is limited to 255 characters |
type | String | Type of the card (for later use), use 'plain' for now |
order | Integer | Order for sorting the stacks |
description | String | (optional) The markdown description of the card |
duedate | timestamp | (optional) The duedate of the card or null |
The response returns the created card:
```
{
"title":"Test",
"description":null,
"stackId":6,
"type":"plain",
"lastModified":1541528026,
"createdAt":1541528026,
"labels":null,
"assignedUsers":null,
"attachments":null,
"attachmentCount":null,
"owner":"admin",
"order":999,
"archived":false,
"duedate": "2019-12-24T19:29:30+00:00",
"deletedAt":0,
"commentsUnread":0,
"id":10,
"overdue":0
}
```
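A sketch of a matching create request (host, credentials and ids are illustrative):
```
curl -u admin:admin -X POST \
  'http://localhost:8000/index.php/apps/deck/api/v1.0/boards/2/stacks/6/cards' \
  -H "OCS-APIRequest: true" -H "Content-Type: application/json" \
  -d '{"title": "Test", "type": "plain", "order": 999}'
```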
### PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId} - Update card details
```
{
"title": "Test card",
"description": "A card description",
"type": "plain",
"order": 999,
"duedate": "2019-12-24T19:29:30+00:00",
}
```
### DELETE /boards/{boardId}/stacks/{stackId}/cards/{cardId} - Delete a card
### PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/assignLabel - Assign a label to a card
Parameter | Type | Description |
| --- | --- | --- |
labelId | Integer | The label id to assign to the card |
### PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/removeLabel - Remove a label from a card
### PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/assignUser - Assign a user to a card
```
{
"id": 3,
"participant": {
"primaryKey": "admin",
"uid": "admin",
"displayname": "admin"
},
"cardId": 1
}
```
```
{
"status": 400,
"message": "The user is already assigned to the card"
}
```
The request can fail with a bad request response for the following reasons:
* Missing or wrongly formatted request parameters
* The user is already assigned to the card
* The user is not part of the board
### PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/unassignUser - Unassign a user from a card
### PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/reorder - Change the sorting order of a card
Parameter | Type | Description |
| --- | --- | --- |
order | Integer | The position in the stack where the card should be moved to |
stackId | Integer | The id of the stack where the card should be moved to |
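A sketch of a reorder request that moves a card to the top of another stack (host, credentials and ids are illustrative):
```
curl -u admin:admin -X PUT \
  'http://localhost:8000/index.php/apps/deck/api/v1.0/boards/2/stacks/4/cards/10/reorder' \
  -H "OCS-APIRequest: true" -H "Content-Type: application/json" \
  -d '{"order": 0, "stackId": 5}'
```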
## Labels
### GET /boards/{boardId}/labels/{labelId} - Get label details
```
{
"title": "Abgeschlossen",
"color": "31CC7C",
"boardId": "2",
"cardId": null,
"id": 5
}
```
### POST /boards/{boardId}/labels - Create a new label
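Request data takes the same shape as for updating a label:
```
{
    "title": "Finished",
    "color": "31CC7C"
}
```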
### PUT /boards/{boardId}/labels/{labelId} - Update label details
### DELETE /boards/{boardId}/labels/{labelId} - Delete a label
## Attachments
### GET /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments - Get a list of attachments
```
[
{
"cardId": 5,
"type": "deck_file",
"data": "6DADC2C69F4.eml",
"lastModified": 1541529048,
"createdAt": 1541529048,
"createdBy": "admin",
"deletedAt": 0,
"extendedData": {
"filesize": 922258,
"mimetype": "application/octet-stream",
"info": {
"dirname": ".",
"basename": "6DADC2C69F4.eml",
"extension": "eml",
"filename": "6DADC2C69F4"
}
},
"id": 6
}
]
```
### GET /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments/{attachmentId} - Get the attachment file
### POST /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments - Upload an attachment
* Prior to Deck version v1.3.0 (API v1.0), attachments were stored within Deck. For this type of attachment `deck_file` was used as the default type
* Starting with Deck version 1.3.0 (API v1.1) files are stored within the user's regular Nextcloud files and the type `file` has been introduced for that
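Since the upload endpoint takes binary file data, a multipart form request is the natural fit; the sketch below assumes API v1.1 with the `file` attachment type (host, credentials, ids and file name are illustrative):
```
curl -u admin:admin -X POST \
  'http://localhost:8000/index.php/apps/deck/api/v1.0/boards/2/stacks/4/cards/5/attachments' \
  -H "OCS-APIRequest: true" \
  -F 'type=file' \
  -F 'file=@./report.pdf'
```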
### PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments/{attachmentId} - Update an attachment
For now only `deck_file` is supported as an attachment type.
### DELETE /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments/{attachmentId} - Delete an attachment
### PUT /boards/{boardId}/stacks/{stackId}/cards/{cardId}/attachments/{attachmentId}/restore - Restore a deleted attachment
### GET /boards/import/getSystems - Get the systems available for import
Parameter | Type | Description |
| --- | --- | --- |
system | String | The system name. Example: trello |
Make a request to see the JSON schema of the system.
```
{
}
```
### GET /boards/import/config/system/{schema} - Get the import config of a system
```
[
"trello"
]
```
### POST /boards/import - Import a board
Parameter | Type | Description |
| --- | --- | --- |
system | String | The allowed name of the system to import from |
config | Object | The config object (JSON) |
data | Object | The data object to import (JSON) |
# OCS API
The following endpoints are available through the Nextcloud OCS endpoint, which is available at `/ocs/v2.php/apps/deck/api/v1.0/`.
This has the benefit that both the web UI as well as external integrations can use the same API.
## Config
Deck stores user and app configuration values globally and per board. The GET endpoint allows fetching the current global configuration, while board settings are exposed through the board element on the regular API endpoints.
### GET /api/v1.0/config - Fetch app configuration values
Config key | Description |
| --- | --- |
calendar | Determines if the calendar/tasks integration through the CalDAV backend is enabled for the user (boolean) |
cardDetailsInModal | Determines if the bigger view is used (boolean) |
cardIdBadge | Determines if the ID badges are displayed on cards (boolean) |
groupLimit | Determines if creating new boards is limited to certain groups of the instance. The resulting output is an array of group objects with the id and the displayname (Admin only) |
```
{
"ocs": {
"meta": {
"status": "ok",
"statuscode": 200,
"message": "OK"
},
"data": {
"calendar": true,
"cardDetailsInModal": true,
"cardIdBadge": true,
"groupLimit": [
{
"id": "admin",
"displayname": "admin"
}
]
}
}
}
```
### POST /api/v1.0/config/{id}/{key} - Set a config value
Parameter | Type | Description |
| --- | --- | --- |
id | Integer | The id of the board |
key | String | The config key to set, prefixed with `board:{boardId}:` for board specific settings |
value | String | The value that should be stored for the config key |
#### Board configuration
Key | Value |
| --- | --- |
notify-due | off, assigned or all |
calendar | Boolean |
cardDetailsInModal | Boolean |
cardIdBadge | Boolean |
#### Example request
```
curl -X POST 'https://admin:[email protected]/ocs/v2.php/apps/deck/api/v1.0/config/calendar' -H 'Accept: application/json' -H "Content-Type: application/json" -H 'OCS-APIRequest: true' --data-raw '{"value":false}'
```
```
{
    "ocs": {
        "meta": {
            "status": "ok",
            "statuscode": 200,
            "message": "OK"
        },
        "data": false
    }
}
```
## Comments
### GET /cards/{cardId}/comments - List comments
Request parameters (`string $cardId, int $limit = 20, int $offset = 0`):
Parameter | Type | Description |
| --- | --- | --- |
cardId | Integer | The id of the card |
limit | Integer | The maximum number of comments that should be returned, defaults to 20 |
offset | Integer | The start offset used for pagination, defaults to 0 |
```
curl 'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/cards/12/comments' \
  -H 'Accept: application/json' -H 'OCS-APIRequest: true'
```
```
{
"ocs": {
"meta": {
"status": "ok",
"statuscode": 200,
"message": "OK"
},
"data": [
{
"id": 175,
"objectId": 12,
"message": "This is a comment with a mention to @alice",
"actorId": "admin",
"actorType": "users",
"actorDisplayName": "Administrator",
"creationDateTime": "2020-03-10T10:23:07+00:00",
"mentions": [
{
"mentionId": "alice",
"mentionType": "user",
"mentionDisplayName": "alice"
}
]
}
]
}
}
```
In case a comment is marked as a reply to another comment object, the parent comment will be added as a `replyTo` entry to the response. Only the next parent node is added; nested replies are not exposed directly.
```
[
{
"id": 175,
"objectId": 12,
"message": "This is a comment with a mention to @alice",
"actorId": "admin",
"actorType": "users",
"actorDisplayName": "Administrator",
"creationDateTime": "2020-03-10T10:23:07+00:00",
"mentions": [
{
"mentionId": "alice",
"mentionType": "user",
"mentionDisplayName": "alice"
}
],
"replyTo": {
"id": 175,
"objectId": 12,
"message": "This is a comment with a mention to @alice",
"actorId": "admin",
"actorType": "users",
"actorDisplayName": "Administrator",
"creationDateTime": "2020-03-10T10:23:07+00:00",
"mentions": [
{
"mentionId": "alice",
"mentionType": "user",
"mentionDisplayName": "alice"
}
]
}
}
]
```
### POST /cards/{cardId}/comments - Create a new comment
Parameter | Type | Description |
| --- | --- | --- |
cardId | Integer | The id of the card |
message | String | The message of the comment, maximum length is limited to 1000 characters |
parentId | Integer | (optional) The id of the comment this comment replies to, defaults to null |
Mentions will be parsed by the server. The server will return a list of mentions in the response to this request as shown below.
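Example request:
```
curl -X POST 'https://admin:admin@nextcloud/ocs/v2.php/apps/deck/api/v1.0/cards/12/comments' \
  -H 'Accept: application/json' -H 'OCS-APIRequest: true' \
  -H 'Content-Type: application/json;charset=utf-8' \
  --data '{"message":"My message to @bob","parentId":null}'
```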
```
{
"ocs": {
"meta": {
"status": "ok",
"statuscode": 200,
"message": "OK"
},
"data": {
"id": "177",
"objectId": "13",
"message": "My message to @bob",
"actorId": "admin",
"actorType": "users",
"actorDisplayName": "Administrator",
"creationDateTime": "2020-03-10T10:30:17+00:00",
"mentions": [
{
"mentionId": "bob",
"mentionType": "user",
"mentionDisplayName": "bob"
}
]
}
}
}
```
### PUT /cards/{cardId}/comments/{commentId} - Update a comment
Parameter | Type | Description |
| --- | --- | --- |
cardId | Integer | The id of the card |
commentId | Integer | The id of the comment |
message | String | The message of the comment, maximum length is limited to 1000 characters |
Mentions will be parsed by the server. The server will return a list of mentions in the response to this request as shown below.
Updating comments is limited to the current user being the same as the comment author specified in the `actorId` of the comment.
```
{
"ocs": {
"meta": {
"status": "ok",
"statuscode": 200,
"message": "OK"
},
"data": {
"id": "177",
"objectId": "13",
"message": "My message",
"actorId": "admin",
"actorType": "users",
"actorDisplayName": "Administrator",
"creationDateTime": "2020-03-10T10:30:17+00:00",
"mentions": []
}
}
}
```
### DELETE /cards/{cardId}/comments/{commentId} - Delete a comment
Parameter | Type | Description |
| --- | --- | --- |
cardId | Integer | The id of the card |
commentId | Integer | The id of the comment |
Deleting comments is limited to the current user being the same as the comment author specified in the `actorId` of the comment.
## Sessions
### PUT /session/create - creates a new session
Parameter | Type | Description |
| --- | --- | --- |
boardId | Integer | The id of the opened board |
```
{
"ocs": {
"meta": {
"status": "ok",
"statuscode": 200,
"message": "OK"
},
"data": {
"token": <KEY>"
}
}
}
```
### POST /session/sync - notifies the server that the session is still open
#### 404 Not Found
The provided token is invalid or expired.
### POST /session/close - closes the session
github.com/rakyll/gotest | go | Go | README
---
### gotest
[![CircleCI](https://circleci.com/gh/rakyll/gotest.svg?style=svg)](https://circleci.com/gh/rakyll/gotest)
Like `go test` but with colors.
#### Installation
Use the pre-built binary for Linux 64-bit:
```
$ curl https://gotest-release.s3.amazonaws.com/gotest_linux > gotest && chmod +x gotest
```
Alternatively:
```
$ go get -u github.com/rakyll/gotest
```
#### Usage
Accepts all the arguments and flags `go test` works with.
Example:
```
$ gotest -v github.com/jonasbn/go-test-demo
```
![gotest output example screenshot](https://raw.githubusercontent.com/jonasbn/go-test-demo/1.0.0/gotest-go-test-demo.png)
gotest comes with many colors! Configure the color of the output by setting the following env variable:
```
$ GOTEST_PALETTE="magenta,white"
```
The output will have magenta for failed cases, white for success.
Available colors: black, hiblack, red, hired, green, higreen, yellow, hiyellow, blue, hiblue, magenta, himagenta, cyan, hicyan, white, hiwhite.
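For example, to apply a custom palette for a single run (the package path is illustrative):
```
$ GOTEST_PALETTE="magenta,white" gotest -v ./...
```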
Documentation
---
### Overview
gotest is a tiny program that shells out to `go test`
and prints the output in color.
PrevMap | cran | R | Package ‘PrevMap’
October 12, 2022
Type Package
Title Geostatistical Modelling of Spatially Referenced Prevalence Data
Version 1.5.4
Date 2021-10-06
Author <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Imports splancs, lme4, truncnorm, methods, numDeriv
Depends maxLik, raster, pdist, Matrix
Description Provides functions for both likelihood-based
and Bayesian analysis of spatially referenced prevalence data. For a tuto-
rial on the use of the R package, see Giorgi and Diggle (2017) <doi:10.18637/jss.v078.i08>.
Encoding UTF-8
LazyData true
License GPL (>= 2)
Suggests geoR, R.rsp, INLA, knitr, rmarkdown
Additional_repositories https://inla.r-inla-download.org/R/testing/
RoxygenNote 7.1.1
NeedsCompilation no
Repository CRAN
Date/Publication 2021-10-07 14:30:02 UTC
R topics documented:
adjust.sigma2
autocor.plot
binary.probit.Bayes
binomial.logistic.Bayes
binomial.logistic.MCML
coef.PrevMap
coef.PrevMap.ps
continuous.sample
contour.pred.PrevMap
control.mcmc.Bayes
control.mcmc.Bayes.SPDE
control.mcmc.MCML
control.prior
control.profile
create.ID.coords
data_sim
dens.plot
discrete.sample
galicia
galicia.boundary
glgm.LA
Laplace.sampling
Laplace.sampling.lr
Laplace.sampling.SPDE
linear.model.Bayes
linear.model.MLE
lm.ps.MCML
loaloa
loglik.ci
loglik.linear.model
matern.kernel
plot.pred.PrevMap
plot.pred.PrevMap.ps
plot.PrevMap.diagnostic
plot.profile.PrevMap
plot.shape.matern
point.map
poisson.log.MCML
set.par.ps
shape.matern
spat.corr.diagnostic
spatial.pred.binomial.Bayes
spatial.pred.binomial.MCML
spatial.pred.linear.Bayes
spatial.pred.linear.MLE
spatial.pred.lm.ps
spatial.pred.poisson.MCML
summary.Bayes.PrevMap
summary.PrevMap
summary.PrevMap.ps
trace.plot
trace.plot.MCML
trend.plot
variog.diagnostic.glgm
variog.diagnostic.lm
variogram
adjust.sigma2 Adjustment factor for the variance of the convolution of Gaussian noise
Description
This function computes the multiplicative constant used to adjust the value of sigma2 in the low-
rank approximation of a Gaussian process.
Usage
adjust.sigma2(knots.dist, phi, kappa)
Arguments
knots.dist a matrix of the distances between the observed coordinates and the spatial knots.
phi scale parameter of the Matern covariance function.
kappa shape parameter of the Matern covariance function.
Details
Let U denote the n by m matrix of the distances between the n observed coordinates and m pre-
defined spatial knots. This function computes the following quantity
$$\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m} K(u_{ij}; \phi, \kappa)^2,$$
where K(·; φ, κ) is the Matern kernel (see matern.kernel) and u_ij is the distance between the i-th
sampled location and the j-th spatial knot.
Value
A value corresponding to the adjustment factor for sigma2.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
See Also
matern.kernel, pdist.
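A minimal sketch of how this adjustment is typically used; the coordinates, knots and parameter values are illustrative:
```r
library(PrevMap)
library(pdist)

# Illustrative sampling locations and a small grid of spatial knots
coords <- cbind(runif(100), runif(100))
knots <- as.matrix(expand.grid(seq(0, 1, length = 5), seq(0, 1, length = 5)))

# Distances between the n observed coordinates and the m knots
knots.dist <- as.matrix(pdist(coords, knots))

# Multiplicative adjustment factor for sigma2 in the low-rank approximation
const.sigma2 <- adjust.sigma2(knots.dist, phi = 0.15, kappa = 2)

# If the target variance of the Gaussian process is 0.5, an approximate
# value for sigma2 in the low-rank model is:
0.5 / const.sigma2
```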
autocor.plot Plot of the autocorrelogram for posterior samples
Description
Plots the autocorrelogram for the posterior samples of the model parameters and spatial random
effects.
Usage
autocor.plot(object, param, component.beta = NULL, component.S = NULL)
Arguments
object an object of class ’Bayes.PrevMap’.
param a character indicating for which component of the model the autocorrelation plot
is required: param="beta" for the regression coefficients; param="sigma2" for
the variance of the spatial random effect; param="phi" for the scale parameter
of the Matern correlation function; param="tau2" for the variance of the nugget
effect; param="S" for the spatial random effect.
component.beta if param="beta", component.beta is a numeric value indicating the component
of the regression coefficients; default is NULL.
component.S if param="S", component.S can be a numeric value indicating the component
of the spatial random effect, or set equal to "all" if the autocorrelogram should
be plotted for all the components. Default is NULL.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
binary.probit.Bayes Bayesian estimation for the two-levels binary probit model
Description
This function performs Bayesian estimation for a geostatistical binary probit model. It also allows
the user to specify a two-levels model so as to include individual-level and household-level (or any
other unit comprising a group of individuals, e.g. village, school, compound, etc.) variables.
Usage
binary.probit.Bayes(
formula,
coords,
data,
ID.coords,
control.prior,
control.mcmc,
kappa,
low.rank = FALSE,
knots = NULL,
messages = TRUE
)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
coords an object of class formula indicating the geographic coordinates.
data a data frame containing the variables in the model.
ID.coords vector of ID values for the unique set of spatial coordinates obtained from
create.ID.coords. These must be provided in order to specify spatial ran-
dom effects at household-level. Warning: the household coordinates must all
be distinct otherwise see jitterDupCoords. Default is NULL.
control.prior output from control.prior.
control.mcmc output from control.mcmc.Bayes.
kappa value for the shape parameter of the Matern covariance function.
low.rank logical; if low.rank=TRUE a low-rank approximation is required. Default is
low.rank=FALSE.
knots if low.rank=TRUE, knots is a matrix of spatial knots used in the low-rank ap-
proximation. Default is knots=NULL.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
Details
This function performs Bayesian estimation for the parameters of the geostatistical binary probit
model. Let i and j denote the indices of the i-th household and j-th individual within that house-
hold. The response variable Yij is a binary indicator taking value 1 if the individual has been
tested positive for the disease of interest and 0 otherwise. Conditionally on a zero-mean stationary
Gaussian process S(xi ), Yij are mutually independent Bernoulli variables with probit link function
Φ^(−1)(·), i.e.
$$\Phi^{-1}(p_{ij}) = d_{ij}'\beta + S(x_i),$$
where dij is a vector of covariates, both at individual- and household-level, with associated re-
gression coefficients β. The Gaussian process S(x) has isotropic Matern covariance function (see
matern) with variance sigma2, scale parameter phi and shape parameter kappa.
Priors definition. Priors can be defined through the function control.prior. The hierarchi-
cal structure of the priors is the following. Let θ be the vector of the covariance parameters
c(sigma2,phi); each component of θ has independent priors that can be freely defined by the user.
However, in control.prior uniform and log-normal priors are also available as default priors for
each of the covariance parameters. The vector of regression coefficients beta has a multivariate
Gaussian prior with mean beta.mean and covariance matrix beta.covar.
Updating regression coefficients and random effects using auxiliary variables. To update β and
S(xi ), we use an auxiliary variable technique based on Rue and Held (2005). Let Vij denote a set
of random variables that conditionally on β and S(xi ), are mutually independent Gaussian with
mean d′ij β + S(xi) and unit variance. Then, Yij = 1 if Vij > 0 and Yij = 0 otherwise. Using
this representation of the model, we use a Gibbs sampler to simulate from the full conditionals of
β, S(xi ) and Vij . See Section 4.3 of Rue and Held (2005) for more details.
Updating the covariance parameters with a Metropolis-Hastings algorithm. In the MCMC
algorithm implemented in binary.probit.Bayes, the transformed parameters
$$(\theta_1, \theta_2) = (\log(\sigma^2)/2, \log(\sigma^2/\phi^{2\kappa}))$$
are independently updated using a Metropolis Hastings algorithm. At the i-th iteration, a new value
is proposed for each parameter from a univariate Gaussian distribution with variance h_i^2. This is
tuned using the following adaptive scheme
$$h_i = h_{i-1} + c_1 i^{-c_2}(\alpha_i - 0.45),$$
where αi is the acceptance rate at the i-th iteration, 0.45 is the optimal acceptance rate for a uni-
variate Gaussian distribution, whilst c1 > 0 and 0 < c2 < 1 are pre-defined constants. The starting
values h1 for each of the parameters θ1 and θ2 can be set using the function control.mcmc.Bayes
through the arguments h.theta1, h.theta2 and h.theta3. To define values for c1 and c2 , see the
documentation of control.mcmc.Bayes.
Low-rank approximation. In the case of very large spatial data-sets, a low-rank approximation
of the Gaussian spatial process S(x) might be computationally beneficial. Let (x1, . . . , xn) and
(t1, . . . , tm) denote the set of sampling locations and a grid of spatial knots covering the area of
interest, respectively. Then S(x) is approximated as $\sum_{i=1}^{m} K(\|x - t_i\|; \phi, \kappa)U_i$, where Ui are
zero-mean mutually independent Gaussian variables with variance sigma2 and K(.; φ, κ) is the
isotropic Matern kernel (see matern.kernel). Since the resulting approximation is no longer a
stationary process (but only approximately), sigma2 may take very different values from the actual
variance of the Gaussian process to approximate. The function adjust.sigma2 can then be used to
(approximately) explore the range for sigma2. For example if the variance of the Gaussian process
is 0.5, then an approximate value for sigma2 is 0.5/const.sigma2, where const.sigma2 is the
value obtained with adjust.sigma2.
Value
An object of class "Bayes.PrevMap". The function summary.Bayes.PrevMap is used to print a
summary of the fitted model. The object is a list with the following components:
estimate: matrix of the posterior samples of the model parameters.
S: matrix of the posterior samples for each component of the random effect.
const.sigma2: vector of the values of the multiplicative factor used to adjust the values of sigma2
in the low-rank approximation.
y: binary observations.
D: matrix of covariates.
coords: matrix of the observed sampling locations.
kappa: shape parameter of the Matern function.
ID.coords: set of ID values defined through the argument ID.coords.
knots: matrix of spatial knots used in the low-rank approximation.
h1: vector of values taken by the tuning parameter h.theta1 at each iteration.
h2: vector of values taken by the tuning parameter h.theta2 at each iteration.
call: the matched call.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2019). Model-based Geostatistics for Global Public Health. CRC/Chapman
& Hall.
<NAME>., <NAME>. (2017). PrevMap: an R package for prevalence mapping. Journal of Statis-
tical Software. 78(8), 1-29. doi: 10.18637/jss.v078.i08
<NAME>., <NAME>. (2005). Gaussian Markov Random Fields: Theory and Applications. Chapman &
Hall, London.
<NAME>. (1998). A process-convolution approach to modeling temperatures in the North Atlantic
Ocean. Environmental and Ecological Statistics 5, 173-190.
See Also
control.mcmc.Bayes, control.prior,summary.Bayes.PrevMap, matern, matern.kernel, create.ID.coords.
binomial.logistic.Bayes
Bayesian estimation for the binomial logistic model
Description
This function performs Bayesian estimation for a geostatistical binomial logistic model.
Usage
binomial.logistic.Bayes(
formula,
units.m,
coords,
data,
ID.coords = NULL,
control.prior,
control.mcmc,
kappa,
low.rank = FALSE,
knots = NULL,
messages = TRUE,
mesh = NULL,
SPDE = FALSE
)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
units.m an object of class formula indicating the binomial denominators.
coords an object of class formula indicating the geographic coordinates.
data a data frame containing the variables in the model.
ID.coords vector of ID values for the unique set of spatial coordinates obtained from
create.ID.coords. These must be provided if, for example, spatial random
effects are defined at household level but some of the covariates are at individ-
ual level. Warning: the household coordinates must all be distinct otherwise
see jitterDupCoords. Default is NULL.
control.prior output from control.prior.
control.mcmc output from control.mcmc.Bayes.
kappa value for the shape parameter of the Matern covariance function.
low.rank logical; if low.rank=TRUE a low-rank approximation is required. Default is
low.rank=FALSE.
knots if low.rank=TRUE, knots is a matrix of spatial knots used in the low-rank ap-
proximation. Default is knots=NULL.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
mesh an object obtained as result of a call to the function inla.mesh.2d.
SPDE logical; if SPDE=TRUE the SPDE approximation for the Gaussian spatial model
is used. Default is SPDE=FALSE.
Details
This function performs Bayesian estimation for the parameters of the geostatistical binomial logistic
model. Conditionally on a zero-mean stationary Gaussian process S(x) and mutually independent
zero-mean Gaussian variables Z with variance tau2, the linear predictor assumes the form
$$\log\{p/(1-p)\} = d'\beta + S(x) + Z,$$
where d is a vector of covariates with associated regression coefficients β. The Gaussian process
S(x) has isotropic Matern covariance function (see matern) with variance sigma2, scale parameter
phi and shape parameter kappa.
Priors definition. Priors can be defined through the function control.prior. The hierarchi-
cal structure of the priors is the following. Let θ be the vector of the covariance parameters
c(sigma2,phi,tau2); then each component of θ has independent priors freely defined by the user.
However, in control.prior uniform and log-normal priors are also available as default priors for
each of the covariance parameters. To remove the nugget effect Z, no prior should be defined for
tau2. Conditionally on sigma2, the vector of regression coefficients beta has a multivariate Gaus-
sian prior with mean beta.mean and covariance matrix sigma2*beta.covar, while in the low-rank
approximation the covariance matrix is simply beta.covar.
Updating the covariance parameters with a Metropolis-Hastings algorithm. In the MCMC algorithm implemented in binomial.logistic.Bayes, the transformed parameters
(θ₁, θ₂, θ₃) = (log(σ²)/2, log(σ²/φ^(2κ)), log(τ²))
are independently updated using a Metropolis-Hastings algorithm. At the i-th iteration, a new value is proposed for each from a univariate Gaussian distribution with variance h_i², which is tuned using the following adaptive scheme
h_i = h_{i−1} + c₁ i^(−c₂) (α_i − 0.45),
where α_i is the acceptance rate at the i-th iteration, 0.45 is the optimal acceptance rate for a univariate Gaussian distribution, whilst c₁ > 0 and 0 < c₂ < 1 are pre-defined constants. The starting values h₁ for each of the parameters θ₁, θ₂ and θ₃ can be set using the function control.mcmc.Bayes through the arguments h.theta1, h.theta2 and h.theta3. To define values for c₁ and c₂, see the documentation of control.mcmc.Bayes.
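As a toy illustration of this adaptive scheme (not the package's internal code), the sketch below applies the random-walk update to a single transformed parameter with a standard Gaussian log-posterior; logpost and all numerical values are placeholders.
logpost <- function(theta) dnorm(theta, log = TRUE)  # toy log-posterior
set.seed(1)
n.sim <- 5000; c1 <- 0.01; c2 <- 1e-04
h <- 0.01       # starting value, playing the role of h.theta1
theta <- 0
for (i in 1:n.sim) {
  theta.prop <- rnorm(1, mean = theta, sd = h)
  alpha <- min(1, exp(logpost(theta.prop) - logpost(theta)))
  if (runif(1) < alpha) theta <- theta.prop
  # adaptive tuning towards the 0.45 target acceptance rate; the max()
  # guards against a negative h in this toy version
  h <- max(h + c1 * i^(-c2) * (alpha - 0.45), 1e-10)
}
h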
Hamiltonian Monte Carlo. The MCMC algorithm in binomial.logistic.Bayes uses a Hamiltonian Monte Carlo (HMC) procedure to update the random effect T = d′β + S(x) + Z; see Neal (2011) for an introduction to HMC. HMC makes use of a position vector, say t, representing the random effect T, and a momentum vector, say q, of the same length, say n, as the position vector. Hamiltonian dynamics also have a physical interpretation where the states of the system are described by the position of a puck and its momentum (its mass times its velocity). The Hamiltonian function is then defined as a function of t and q, having the form H(t, q) = −log{f(t|y, β, θ)} + q′q/2, where f(t|y, β, θ) is the conditional distribution of T given the data y, the regression parameters β and covariance parameters θ. The system of Hamiltonian equations then defines the evolution of the system in time, which can be used to implement an algorithm for simulation from the posterior distribution of T. In order to implement the Hamiltonian dynamics on a computer, the Hamiltonian equations must be discretised. The leapfrog method is then used for this purpose, where two tuning parameters should be defined: the stepsize ε and the number of steps L. These respectively correspond to epsilon.S.lim and L.S.lim in the control.mcmc.Bayes function. However, it is advisable to let ε and L take different random values at each iteration of the HMC algorithm so as to account for the different variances amongst the components of the posterior of T. This can be done in control.mcmc.Bayes by defining epsilon.S.lim and L.S.lim as vectors of two elements, each of which represents the lower and upper limit of a uniform distribution used to generate values for epsilon.S.lim and L.S.lim, respectively.
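For instance, a call along the following lines (argument values purely illustrative) draws the stepsize from U(0.005, 0.05) and the number of leapfrog steps from the integers 10 to 20 at each HMC iteration.
mcmc.Bayes <- control.mcmc.Bayes(n.sim = 6000, burnin = 1000, thin = 1,
                                 epsilon.S.lim = c(0.005, 0.05),
                                 L.S.lim = c(10, 20))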
Using a two-level model to include household-level and individual-level information. When analysing data from household surveys, some of the available information might be at household-level (e.g. material of house, temperature) and some at individual-level (e.g. age, gender). In this case, the Gaussian spatial process S(x) and the nugget effect Z are defined at household-level in order to account for extra-binomial variation between and within households, respectively.
Low-rank approximation. In the case of very large spatial data-sets, a low-rank approximation of the Gaussian spatial process S(x) might be computationally beneficial. Let (x₁, . . . , xₘ) and (t₁, . . . , tₘ) denote the set of sampling locations and a grid of spatial knots covering the area of interest, respectively. Then S(x) is approximated as Σᵢ₌₁ᵐ K(‖x − tᵢ‖; φ, κ)Uᵢ, where Uᵢ are zero-mean mutually independent Gaussian variables with variance sigma2 and K(·; φ, κ) is the isotropic Matern kernel (see matern.kernel). Since the resulting approximation is no longer a stationary process (but only approximately so), sigma2 may take very different values from the actual variance of the Gaussian process to approximate. The function adjust.sigma2 can then be used to (approximately) explore the range for sigma2. For example, if the variance of the Gaussian process is 0.5, then an approximate value for sigma2 is 0.5/const.sigma2, where const.sigma2 is the value obtained with adjust.sigma2.
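The construction can be mimicked in a few lines of base R; the Gaussian kernel below is only a stand-in for the package's Matern kernel (matern.kernel), so the numbers are purely illustrative.
K.gauss <- function(u, phi) exp(-(u / phi)^2)  # stand-in kernel
set.seed(1)
knots <- as.matrix(expand.grid(seq(0, 1, length = 10),
                               seq(0, 1, length = 10)))  # m = 100 knots
x <- cbind(runif(5), runif(5))                 # 5 locations of interest
U <- rnorm(nrow(knots))                        # U_i ~ N(0, sigma2 = 1)
D <- as.matrix(dist(rbind(x, knots)))[1:5, -(1:5)]   # cross-distances
S.approx <- as.numeric(K.gauss(D, phi = 0.15) %*% U) # sum_i K(||x - t_i||) U_i
S.approx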
Value
An object of class "Bayes.PrevMap". The function summary.Bayes.PrevMap is used to print a
summary of the fitted model. The object is a list with the following components:
estimate: matrix of the posterior samples of the model parameters.
S: matrix of the posterior samples for each component of the random effect.
const.sigma2: vector of the values of the multiplicative factor used to adjust the values of sigma2
in the low-rank approximation.
y: binomial observations.
units.m: binomial denominators.
D: matrix of covariates.
coords: matrix of the observed sampling locations.
kappa: shape parameter of the Matern function.
ID.coords: set of ID values defined through the argument ID.coords.
knots: matrix of spatial knots used in the low-rank approximation.
h1: vector of values taken by the tuning parameter h.theta1 at each iteration.
h2: vector of values taken by the tuning parameter h.theta2 at each iteration.
h3: vector of values taken by the tuning parameter h.theta3 at each iteration.
acc.beta.S: empirical acceptance rate for the regression coefficients and random effects (only if
SPDE=TRUE).
mesh: the mesh used in the SPDE approximation.
call: the matched call.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2019). Model-based Geostatistics for Global Public Health. CRC/Chapman
& Hall.
<NAME>., <NAME>. (2017). PrevMap: an R package for prevalence mapping. Journal of Statis-
tical Software. 78(8), 1-29. doi: 10.18637/jss.v078.i08
<NAME>. (2011) MCMC using Hamiltonian Dynamics, In: Handbook of Markov Chain Monte Carlo (Chapter 5), Edited by <NAME>, <NAME>, <NAME>, and <NAME>. Chapman & Hall / CRC Press.
<NAME>. (1998). A process-convolution approach to modeling temperatures in the North Atlantic
Ocean. Environmental and Ecological Statistics 5, 173-190.
See Also
control.mcmc.Bayes, control.prior, summary.Bayes.PrevMap, matern, matern.kernel, create.ID.coords.
binomial.logistic.MCML Monte Carlo Maximum Likelihood estimation for the binomial logistic model
Description
This function performs Monte Carlo maximum likelihood (MCML) estimation for the geostatistical
binomial logistic model.
Usage
binomial.logistic.MCML(
formula,
units.m,
coords,
times = NULL,
data,
ID.coords = NULL,
par0,
control.mcmc,
kappa,
kappa.t = NULL,
sst.model = NULL,
fixed.rel.nugget = NULL,
start.cov.pars,
method = "BFGS",
low.rank = FALSE,
SPDE = FALSE,
knots = NULL,
mesh = NULL,
messages = TRUE,
plot.correlogram = TRUE
)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
units.m an object of class formula indicating the binomial denominators in the data.
coords an object of class formula indicating the spatial coordinates in the data.
times an object of class formula indicating the times in the data, used in the spatio-
temporal model.
data a data frame containing the variables in the model.
ID.coords vector of ID values for the unique set of spatial coordinates obtained from
create.ID.coords. These must be provided if, for example, spatial random
effects are defined at household level but some of the covariates are at individ-
ual level. Warning: the household coordinates must all be distinct otherwise
see jitterDupCoords. Default is NULL.
par0 parameters of the importance sampling distribution: these should be given in
the following order c(beta,sigma2,phi,tau2), where beta are the regression
coefficients, sigma2 is the variance of the Gaussian process, phi is the scale
parameter of the spatial correlation and tau2 is the variance of the nugget effect
(if included in the model).
control.mcmc output from control.mcmc.MCML.
kappa fixed value for the shape parameter of the Matern covariance function.
kappa.t fixed value for the shape parameter of the Matern covariance function in the
separable double-Matern spatio-temporal model.
sst.model a character value that specifies the spatio-temporal correlation function.
• sst.model="DM" separable double-Matern.
• sst.model="GN1" separable correlation functions. Temporal correlation: f(x) = 1/(1 + x/ψ); spatial correlation: Matern function.
Default is sst.model=NULL, which is used when a purely spatial model is fitted.
fixed.rel.nugget
fixed value for the relative variance of the nugget effect; fixed.rel.nugget=NULL
if this should be included in the estimation. Default is fixed.rel.nugget=NULL.
start.cov.pars a vector of length two with elements corresponding to the starting values of phi
and the relative variance of the nugget effect nu2, respectively, that are used in
the optimization algorithm. If nu2 is fixed through fixed.rel.nugget, then
start.cov.pars represents the starting value for phi only.
method method of optimization. If method="BFGS" then the maxBFGS function is used;
otherwise method="nlminb" to use the nlminb function. Default is method="BFGS".
low.rank logical; if low.rank=TRUE a low-rank approximation of the Gaussian spatial
process is used when fitting the model. Default is low.rank=FALSE.
SPDE logical; if SPDE=TRUE the SPDE approximation for the Gaussian spatial model
is used. Default is SPDE=FALSE.
knots if low.rank=TRUE, knots is a matrix of spatial knots that are used in the low-
rank approximation. Default is knots=NULL.
mesh an object obtained as result of a call to the function inla.mesh.2d.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
plot.correlogram
logical; if plot.correlogram=TRUE the autocorrelation plot of the samples of
the random effect is displayed after completion of conditional simulation. De-
fault is plot.correlogram=TRUE.
Details
This function performs parameter estimation for a geostatistical binomial logistic model. Condi-
tionally on a zero-mean stationary Gaussian process S(x) and mutually independent zero-mean
Gaussian variables Z with variance tau2, the observations y are generated from a binomial distri-
bution with probability p and binomial denominators units.m. A canonical logistic link is used,
thus the linear predictor assumes the form
log(p/(1 − p)) = d′β + S(x) + Z,
where d is a vector of covariates with associated regression coefficients β. The Gaussian process
S(x) has isotropic Matern covariance function (see matern) with variance sigma2, scale parameter
phi and shape parameter kappa. In the binomial.logistic.MCML function, the shape parameter
is treated as fixed. The relative variance of the nugget effect, nu2=tau2/sigma2, can also be fixed
through the argument fixed.rel.nugget; if fixed.rel.nugget=NULL, then the relative variance
of the nugget effect is also included in the estimation.
Monte Carlo Maximum likelihood. The Monte Carlo maximum likelihood method uses conditional simulation from the distribution of the random effect T(x) = d(x)′β + S(x) + Z given the data y, in order to approximate the high-dimensional intractable integral given by the likelihood function. The resulting approximation of the likelihood is then maximized by a numerical optimization algorithm which uses analytic expressions for computation of the gradient vector and Hessian matrix. The functions used for numerical optimization are maxBFGS (method="BFGS"), from the maxLik package, and nlminb (method="nlminb").
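A sketch of such a fit, using the bundled data_sim data-set and the argument layout documented above, might look as follows; kappa = 2 matches the value used to simulate data_sim, while the remaining values are illustrative starting points rather than recommendations.
data(data_sim)
mcmc.mcml <- control.mcmc.MCML(n.sim = 1000, burnin = 100, thin = 1)
fit <- binomial.logistic.MCML(y ~ 1, units.m = ~units.m,
                              coords = ~x1 + x2, data = data_sim,
                              par0 = c(0, 1, 0.15),   # beta, sigma2, phi
                              control.mcmc = mcmc.mcml,
                              kappa = 2, fixed.rel.nugget = 0,
                              start.cov.pars = 0.15)  # phi only, nu2 fixed
summary(fit)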
Using a two-level model to include household-level and individual-level information. When analysing data from household surveys, some of the available information might be at household-level (e.g. material of house, temperature) and some at individual-level (e.g. age, gender). In this case, the Gaussian spatial process S(x) and the nugget effect Z are defined at household-level in order to account for extra-binomial variation between and within households, respectively.
Low-rank approximation. In the case of very large spatial data-sets, a low-rank approximation of the Gaussian spatial process S(x) might be computationally beneficial. Let (x₁, . . . , xₘ) and (t₁, . . . , tₘ) denote the set of sampling locations and a grid of spatial knots covering the area of interest, respectively. Then S(x) is approximated as Σᵢ₌₁ᵐ K(‖x − tᵢ‖; φ, κ)Uᵢ, where Uᵢ are zero-mean mutually independent Gaussian variables with variance sigma2 and K(·; φ, κ) is the isotropic Matern kernel (see matern.kernel). Since the resulting approximation is no longer a stationary process (but only approximately so), the parameter sigma2 is then multiplied by a factor constant.sigma2 so as to obtain a value that is closer to the actual variance of S(x).
Value
An object of class "PrevMap". The function summary.PrevMap is used to print a summary of the
fitted model. The object is a list with the following components:
estimate: estimates of the model parameters; use the function coef.PrevMap to obtain estimates
of covariance parameters on the original scale.
covariance: covariance matrix of the MCML estimates.
log.lik: maximum value of the log-likelihood.
y: binomial observations.
units.m: binomial denominators.
D: matrix of covariates.
coords: matrix of the observed sampling locations.
method: method of optimization used.
ID.coords: set of ID values defined through the argument ID.coords.
kappa: fixed value of the shape parameter of the Matern function.
kappa.t: fixed value for the shape parameter of the Matern covariance function in the separable
double-Matern spatio-temporal model.
knots: matrix of the spatial knots used in the low-rank approximation.
mesh: the mesh used in the SPDE approximation.
const.sigma2: adjustment factor for sigma2 in the low-rank approximation.
h: vector of the values of the tuning parameter at each iteration of the Langevin-Hastings MCMC
algorithm; see Laplace.sampling, or Laplace.sampling.lr if a low-rank approximation is used.
samples: matrix of the random effects samples from the importance sampling distribution used to
approximate the likelihood function.
fixed.rel.nugget: fixed value for the relative variance of the nugget effect.
call: the matched call.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2019). Model-based Geostatistics for Global Public Health. CRC/Chapman
& Hall.
<NAME>., <NAME>. (2017). PrevMap: an R package for prevalence mapping. Journal of Statis-
tical Software. 78(8), 1-29. doi: 10.18637/jss.v078.i08
<NAME>. (2004). Monte Carlo maximum likelihood in model-based geostatistics. Journal of Computational and Graphical Statistics 13, 702-718.
<NAME>. (1998). A process-convolution approach to modeling temperatures in the North Atlantic
Ocean. Environmental and Ecological Statistics 5, 173-190.
See Also
Laplace.sampling, Laplace.sampling.lr, summary.PrevMap, coef.PrevMap, matern, matern.kernel,
control.mcmc.MCML, create.ID.coords.
coef.PrevMap Extract model coefficients
Description
coef extracts parameters estimates from models fitted with the functions linear.model.MLE and
binomial.logistic.MCML.
Usage
## S3 method for class 'PrevMap'
coef(object, ...)
Arguments
object an object of class "PrevMap".
... other arguments.
Value
coefficients extracted from the model object object.
Author(s)
<NAME> <<EMAIL>>
<NAME>. Diggle <<EMAIL>>
coef.PrevMap.ps Extract model coefficients from geostatistical linear model with preferentially sampled locations
Description
coef extracts parameters estimates from models fitted with the functions lm.ps.MCML.
Usage
## S3 method for class 'PrevMap.ps'
coef(object, ...)
Arguments
object an object of class "PrevMap.ps".
... other arguments.
Value
a list of coefficients extracted from the model in object.
Author(s)
<NAME> <<EMAIL>>
continuous.sample Spatially continuous sampling
Description
Draws a sample of spatial locations within a spatially continuous polygonal sampling region.
Usage
continuous.sample(poly, n, delta, k = 0, rho = NULL)
Arguments
poly boundary of a polygon.
n number of events.
delta minimum permissible distance between any two events in preliminary sample.
k number of locations in preliminary sample to be replaced by near neighbours of
other preliminary sample locations in final sample (must be between 0 and n/2)
rho maximum distance between close pairs of locations in final sample.
Details
To draw a sample of size n from a spatially continuous region A, with the property that the distance
between any two sampled locations is at least delta, the following algorithm is used.
• Step 1. Set i = 1 and generate a point x1 uniformly distributed on A.
• Step 2. Increase i by 1, generate a point xi uniformly distributed on A and calculate the
minimum, dmin , of the distances from xi to all xj : j < i.
• Step 3. If dmin ≥ δ, increase i by 1 and return to step 2 if i ≤ n, otherwise stop;
• Step 4. If dmin < δ, return to step 2 without increasing i.
Sampling close pairs of points. For some purposes, it is desirable that a spatial sampling scheme
include pairs of closely spaced points. In this case, the above algorithm requires the following
additional steps to be taken. Let k be the required number of close pairs. Choose a value rho such
that a close pair of points will be a pair of points separated by a distance of at most rho.
• Step 5. Set j = 1 and draw a random sample of size 2 from the integers 1, 2, . . . , n, say (i₁, i₂);
• Step 6. Replace x_{i₁} by x_{i₂} + u, where u is uniformly distributed on the disc with centre x_{i₂} and radius rho, increase j by 1 and return to step 5 if j ≤ k, otherwise stop.
A base-R sketch of steps 1 to 4 is given below.
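The following re-implements steps 1 to 4 on the unit square (the function itself accepts an arbitrary polygon poly); it is illustrative only and will loop indefinitely if delta is too large for the requested n.
inhibit.unit.square <- function(n, delta) {
  xy <- matrix(runif(2), ncol = 2)                  # step 1
  while (nrow(xy) < n) {
    cand <- runif(2)                                # step 2
    dmin <- min(sqrt((xy[, 1] - cand[1])^2 +
                     (xy[, 2] - cand[2])^2))
    if (dmin >= delta) xy <- rbind(xy, cand)        # steps 3-4
  }
  xy
}
set.seed(123)
xy.sample <- inhibit.unit.square(50, 0.08)
plot(xy.sample, pch = 19, xlab = "x", ylab = "y")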
Value
A matrix of dimension n by 2 containing event locations.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
contour.pred.PrevMap Contour plot of a predicted surface
Description
contour.pred.PrevMap displays contours of predictions obtained from spatial.pred.linear.MLE, spatial.pred.linear.Bayes, spatial.pred.binomial.MCML and spatial.pred.binomial.Bayes.
Usage
## S3 method for class 'pred.PrevMap'
contour(x, type = NULL, summary = "predictions", ...)
Arguments
x an object of class "pred.PrevMap".
type a character indicating the type of prediction to display: ’prevalence’, ’odds’,
’logit’ or ’probit’.
summary character indicating which summary to display: ’predictions’,’quantiles’, ’stan-
dard.errors’ or ’exceedance.prob’; default is ’predictions’. If summary="exceedance.prob",
the argument type is ignored.
... further arguments passed to contour.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
control.mcmc.Bayes Control settings for the MCMC algorithm used for Bayesian inference
Description
This function defines the different tuning parameters that are used in the MCMC algorithm for Bayesian inference.
Usage
control.mcmc.Bayes(
n.sim,
burnin,
thin,
h.theta1 = 0.01,
h.theta2 = 0.01,
h.theta3 = 0.01,
L.S.lim = NULL,
epsilon.S.lim = NULL,
start.beta = "prior mean",
start.sigma2 = "prior mean",
start.phi = "prior mean",
start.S = "prior mean",
start.nugget = "prior mean",
c1.h.theta1 = 0.01,
c2.h.theta1 = 1e-04,
c1.h.theta2 = 0.01,
c2.h.theta2 = 1e-04,
c1.h.theta3 = 0.01,
c2.h.theta3 = 1e-04,
linear.model = FALSE,
binary = FALSE
)
Arguments
n.sim total number of simulations.
burnin initial number of samples to be discarded.
thin value used to retain only every thin-th sampled value.
h.theta1 starting value of the tuning parameter of the proposal distribution for θ1 =
log(σ 2 )/2. See ’Details’ in binomial.logistic.Bayes or linear.model.Bayes.
h.theta2 starting value of the tuning parameter of the proposal distribution for θ2 =
log(σ 2 /φ2κ ). See ’Details’ in binomial.logistic.Bayes or linear.model.Bayes.
h.theta3 starting value of the tuning parameter of the proposal distribution for θ3 =
log(τ 2 ). See ’Details’ in binomial.logistic.Bayes or linear.model.Bayes.
L.S.lim an atomic value or a vector of length 2 that is used to define the number of steps used at each iteration in the Hamiltonian Monte Carlo algorithm to update the spatial random effect; if a single value is provided then the number of steps is kept fixed, otherwise if a vector of length 2 is provided the number of steps is simulated at each iteration as floor(runif(1,L.S.lim[1],L.S.lim[2]+1)).
epsilon.S.lim an atomic value or a vector of length 2 that is used to define the stepsize used at each iteration in the Hamiltonian Monte Carlo algorithm to update the spatial random effect; if a single value is provided then the stepsize is kept fixed, otherwise if a vector of length 2 is provided the stepsize is simulated at each iteration as runif(1,epsilon.S.lim[1],epsilon.S.lim[2]).
start.beta starting value for the regression coefficients beta.
start.sigma2 starting value for sigma2.
start.phi starting value for phi.
start.S starting value for the spatial random effect.
start.nugget starting value for the variance of the nugget effect; default is NULL if the nugget
effect is not present.
c1.h.theta1 value of c1 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(sigma2)/2; see ’Details’ in binomial.logistic.Bayes
or linear.model.Bayes.
c2.h.theta1 value of c2 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(sigma2)/2; see ’Details’ in binomial.logistic.Bayes
or linear.model.Bayes.
c1.h.theta2 value of c1 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(sigma2.curr/(phi.curr^(2*kappa))); see ’De-
tails’ in binomial.logistic.Bayes or linear.model.Bayes.
c2.h.theta2 value of c2 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(sigma2.curr/(phi.curr^(2*kappa))); see ’De-
tails’ in binomial.logistic.Bayes or linear.model.Bayes.
c1.h.theta3 value of c1 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(tau2); see ’Details’ in binomial.logistic.Bayes
or linear.model.Bayes.
c2.h.theta3 value of c2 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(tau2); see ’Details’ in binomial.logistic.Bayes
or linear.model.Bayes.
linear.model logical; if linear.model=TRUE, the control parameters are set for the geostatis-
tical linear model. Default is linear.model=FALSE.
binary logical; if binary=TRUE, the control parameters are set for the binary geostatistical model. Default is binary=FALSE.
Value
an object of class "mcmc.Bayes.PrevMap".
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
control.mcmc.Bayes.SPDE Control settings for the MCMC algorithm used for Bayesian inference using SPDE
Description
This function defines the different tuning parameters that are used in the MCMC algorithm for Bayesian inference using a SPDE approximation for the spatial Gaussian process.
Usage
control.mcmc.Bayes.SPDE(
n.sim,
burnin,
thin,
h.theta1 = 0.01,
h.theta2 = 0.01,
start.beta = "prior mean",
start.sigma2 = "prior mean",
start.phi = "prior mean",
start.S = "prior mean",
n.iter = 1,
h = 1,
c1.h.theta1 = 0.01,
c2.h.theta1 = 1e-04,
c1.h.theta2 = 0.01,
c2.h.theta2 = 1e-04
)
Arguments
n.sim total number of simulations.
burnin initial number of samples to be discarded.
thin value used to retain only every thin-th sampled value.
h.theta1 starting value of the tuning parameter of the proposal distribution for θ1 =
log(σ 2 )/2. See ’Details’ in binomial.logistic.Bayes or linear.model.Bayes.
h.theta2 starting value of the tuning parameter of the proposal distribution for θ2 =
log(σ 2 /φ2κ ). See ’Details’ in binomial.logistic.Bayes or linear.model.Bayes.
start.beta starting value for the regression coefficients beta. If not provided the prior mean
is used.
start.sigma2 starting value for sigma2. If not provided the prior mean is used.
start.phi starting value for phi. If not provided the prior mean is used.
start.S starting value for the spatial random effect. If not provided the prior mean is
used.
n.iter number of iterations of the Newton-Raphson procedure used to compute the mean and covariance matrix of the Gaussian proposal in the MCMC; default is n.iter=1.
h tuning parameter for the covariance matrix of the Gaussian proposal. Default is
h=1.
c1.h.theta1 value of c1 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(sigma2)/2; see ’Details’ in binomial.logistic.Bayes
or linear.model.Bayes.
c2.h.theta1 value of c2 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(sigma2)/2; see ’Details’ in binomial.logistic.Bayes
or linear.model.Bayes.
c1.h.theta2 value of c1 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(sigma2.curr/(phi.curr^(2*kappa))); see ’De-
tails’ in binomial.logistic.Bayes or linear.model.Bayes.
c2.h.theta2 value of c2 used to adaptively tune the variance of the Gaussian proposal for the
transformed parameter log(sigma2.curr/(phi.curr^(2*kappa))); see ’De-
tails’ in binomial.logistic.Bayes or linear.model.Bayes.
Value
an object of class "mcmc.Bayes.PrevMap".
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
control.mcmc.MCML Control settings for the MCMC algorithm used for classical inference on a binomial logistic model
Description
This function defines the options for the MCMC algorithm used in the Monte Carlo maximum
likelihood method.
Usage
control.mcmc.MCML(n.sim, burnin, thin = 1, h = NULL, c1.h = 0.01, c2.h = 1e-04)
Arguments
n.sim number of simulations.
burnin length of the burn-in period.
thin a sample is stored only every thin iterations; default is thin=1.
h tuning parameter of the proposal distribution used in the Langevin-Hastings MCMC algorithm (see Laplace.sampling and Laplace.sampling.lr); default is h=NULL and then set internally as 1.65/n^(1/6), where n is the dimension of the random effect.
c1.h value of c1 used in the adaptive scheme for h; default is c1.h=0.01. See also 'Details' in binomial.logistic.MCML.
c2.h value of c2 used in the adaptive scheme for h; default is c2.h=1e-04. See also 'Details' in binomial.logistic.MCML.
Value
A list with processed arguments to be passed to the main function.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
Examples
control.mcmc <- control.mcmc.MCML(n.sim=1000,burnin=100,thin=1,h=0.05)
str(control.mcmc)
control.prior Priors specification
Description
This function is used to define priors for the model parameters of a Bayesian geostatistical model.
Usage
control.prior(
beta.mean,
beta.covar,
log.prior.sigma2 = NULL,
log.prior.phi = NULL,
log.prior.nugget = NULL,
uniform.sigma2 = NULL,
log.normal.sigma2 = NULL,
uniform.phi = NULL,
log.normal.phi = NULL,
uniform.nugget = NULL,
log.normal.nugget = NULL
)
Arguments
beta.mean mean vector of the Gaussian prior for the regression coefficients.
beta.covar covariance matrix of the Gaussian prior for the regression coefficients.
log.prior.sigma2
a function corresponding to the log-density of the prior distribution for the vari-
ance sigma2 of the Gaussian process. Warning: if a low-rank approximation
is used, then sigma2 corresponds to the variance of the iid zero-mean Gaussian
variables. Default is NULL.
log.prior.phi a function corresponding to the log-density of the prior distribution for the scale
parameter of the Matern correlation function; default is NULL.
log.prior.nugget
optional: a function corresponding to the log-density of the prior distribution for the variance of the nugget effect; default is NULL, with no nugget incorporated in the model.
uniform.sigma2 a vector of length two, corresponding to the lower and upper limit of the uniform
prior on sigma2. Default is NULL.
log.normal.sigma2
a vector of length two, corresponding to the mean and standard deviation of the
distribution on the log scale for the log-normal prior on sigma2. Default is NULL.
uniform.phi a vector of length two, corresponding to the lower and upper limit of the uniform
prior on phi. Default is NULL.
log.normal.phi a vector of length two, corresponding to the mean and standard deviation of the
distribution on the log scale for the log-normal prior on phi. Default is NULL.
uniform.nugget a vector of length two, corresponding to the lower and upper limit of the uniform
prior on tau2. Default is NULL.
log.normal.nugget
a vector of length two, corresponding to the mean and standard deviation of the
distribution on the log scale for the log-normal prior on tau2. Default is NULL.
Value
a list corresponding to the prior distributions for each model parameter.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
See Also
See "Priors definition" in the Details section of the binomial.logistic.Bayes function.
control.profile Auxiliary function for controlling the profile log-likelihood in the linear Gaussian model
Description
Auxiliary function used by loglik.linear.model. This function defines whether the profile log-likelihood should be computed or whether the likelihood should be evaluated while keeping the other parameters fixed.
Usage
control.profile(
phi = NULL,
rel.nugget = NULL,
fixed.beta = NULL,
fixed.sigma2 = NULL,
fixed.phi = NULL,
fixed.rel.nugget = NULL
)
Arguments
phi a vector of the different values that should be used in the likelihood evaluation for the scale parameter phi, or NULL if a single value is provided either as first argument in start.par (for profile likelihood maximization) or as fixed value in fixed.phi; default is NULL.
rel.nugget a vector of the different values that should be used in the likelihood evaluation for the relative variance of the nugget effect nu2, or NULL if a single value is provided either in start.par (for profile likelihood maximization) or as fixed value in fixed.nu2; default is NULL.
fixed.beta a vector for the fixed values of the regression coefficients beta, or NULL if profile
log-likelihood is to be performed; default is NULL.
fixed.sigma2 value for the fixed variance of the Gaussian process sigma2, or NULL if profile
log-likelihood is to be performed; default is NULL.
fixed.phi value for the fixed scale parameter phi in the Matern function, or NULL if profile
log-likelihood is to be performed; default is NULL.
fixed.rel.nugget
value for the fixed relative variance of the nugget effect; fixed.rel.nugget=NULL
if profile log-likelihood is to be performed; default is NULL.
Value
A list with components named as the arguments.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
See Also
loglik.linear.model
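For example, the profile log-likelihood for phi can be requested over a grid of values (the grid is illustrative) and the result passed to loglik.linear.model:
cp.phi <- control.profile(phi = seq(0.05, 0.5, length = 20))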
create.ID.coords ID spatial coordinates
Description
Creates ID values for the unique set of coordinates.
Usage
create.ID.coords(data, coords)
Arguments
data a data frame containing the spatial coordinates.
coords an object of class formula indicating the geographic coordinates.
Value
a vector of integers indicating the corresponding rows in data for each distinct coordinate obtained
with the unique function.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
Examples
x1 <- runif(5)
x2 <- runif(5)
data <- data.frame(x1=rep(x1,each=3),x2=rep(x2,each=3))
ID.coords <- create.ID.coords(data,coords=~x1+x2)
data[,c("x1","x2")]==unique(data[,c("x1","x2")])[ID.coords,]
data_sim Simulated binomial data-set over the unit square
Description
This binomial data-set was simulated by generating a zero-mean Gaussian process over a 30 by 30
grid covering the unit square. The parameters used in the simulation are sigma2=1, phi=0.15 and
kappa=2. The nugget effect was not included, hence tau2=0. The variables are as follows:
• y binomial observations.
• units.m binomial denominators.
• x1 horizontal coordinates.
• x2 vertical coordinates.
• S simulated values of the Gaussian process.
Usage
data(data_sim)
Format
A data frame with 900 rows and 5 variables
dens.plot Density plot for posterior samples
Description
Plots the density of the posterior samples of the model parameters and spatial random effects.
Usage
dens.plot(
object,
param,
component.beta = NULL,
component.S = NULL,
hist = TRUE,
...
)
Arguments
object an object of class ’Bayes.PrevMap’.
param a character indicating for which component of the model the density plot is
required: param="beta" for the regression coefficients; param="sigma2" for
the variance of the spatial random effect; param="phi" for the scale parameter
of the Matern correlation function; param="tau2" for the variance of the nugget
effect; param="S" for the spatial random effect.
component.beta if param="beta", component.beta is a numeric value indicating the component
of the regression coefficients; default is NULL.
component.S if param="S", component.S can be a numeric value indicating the component
of the spatial random effect. Default is NULL.
hist logical; if TRUE a histogram is added to the density plot.
... additional parameters to pass to density.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
discrete.sample Spatially discrete sampling
Description
Draws a sub-sample from a set of units spatially located irregularly over some defined geographical
region by imposing a minimum distance between any two sampled units.
Usage
discrete.sample(xy.all, n, delta, k = 0)
Arguments
xy.all set of locations from which the sample will be drawn.
n size of required sample.
delta minimum distance between any two locations in preliminary sample.
k number of locations in preliminary sample to be replaced by nearest neighbours
of other preliminary sample locations in final sample (must be between 0 and
n/2).
Details
To draw a sample of size n from a population of spatial locations Xi : i = 1, . . . , N , with the prop-
erty that the distance between any two sampled locations is at least delta, the function implements
the following algorithm.
• Step 1. Draw an initial sample of size n completely at random and call this xi : i = 1, . . . , n.
• Step 2. Set i = 1 and calculate the minimum, dmin , of the distances from xi to all other xj in
the initial sample.
• Step 3. If dmin ≥ δ, increase i by 1 and return to step 2 if i ≤ n, otherwise stop.
• Step 4. If dmin < δ, draw an integer j at random from 1, 2, . . . , N , set xi = Xj and return to
step 3.
Samples generated in this way will exhibit a more regular spatial arrangement than would a random
sample of the same size. The degree of regularity achievable will be influenced by the spatial
arrangement of the population Xi : i = 1, . . . , N , the specified value of delta and the sample size
n. For any given population, if n and/or delta are too large, a sample of the required size with the
distance between any two sampled locations at least delta will not be achievable; the suggested
solution is then to run the algorithm with a smaller value of delta.
Sampling close pairs of points. For some purposes, it is desirable that a spatial sampling scheme
include pairs of closely spaced points. In this case, the above algorithm requires the following
additional steps to be taken. Let k be the required number of close pairs.
• Step 5. Set j = 1 and draw a random sample of size 2 from the integers 1, 2, . . . , n, say (i₁, i₂).
• Step 6. Find the integer r such that the distance from x_{i₁} to X_r is the minimum of all N − 1 distances from x_{i₁} to the X_j.
• Step 7. Replace x_{i₂} by X_r, increase j by 1 and return to step 5 if j ≤ k, otherwise stop.
Value
A matrix of dimension n by 2 containing the final sampled locations.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
Examples
x<-0.015+0.03*(1:33)
xall<-rep(x,33)
yall<-c(t(matrix(xall,33,33)))
xy<-cbind(xall,yall)+matrix(-0.0075+0.015*runif(33*33*2),33*33,2)
par(pty="s",mfrow=c(1,2))
plot(xy[,1],xy[,2],pch=19,cex=0.25,xlab="Easting",ylab="Northing",
cex.lab=1,cex.axis=1,cex.main=1)
set.seed(15892)
# Generate spatially random sample
xy.sample<-xy[sample(1:dim(xy)[1],50,replace=FALSE),]
points(xy.sample[,1],xy.sample[,2],pch=19,col="red")
points(xy[,1],xy[,2],pch=19,cex=0.25)
plot(xy[,1],xy[,2],pch=19,cex=0.25,xlab="Easting",ylab="Northing",
cex.lab=1,cex.axis=1,cex.main=1)
set.seed(15892)
# Generate spatially regular sample
xy.sample<-discrete.sample(xy,50,0.08)
points(xy.sample[,1],xy.sample[,2],pch=19,col="red")
points(xy[,1],xy[,2],pch=19,cex=0.25)
galicia Heavy metal biomonitoring in Galicia
Description
This data-set relates to two studies on lead concentration in moss samples, in micrograms per gram dry weight, collected in Galicia, northern Spain. The data are from two surveys, one conducted in October 1997 and one in July 2000. The variables are as follows:
• x x-coordinate of the spatial locations.
• y y-coordinate of the spatial locations.
• lead lead concentration.
• survey year of the survey (either 1997 or 2000).
Usage
data(galicia)
Format
A data frame with 195 rows and 4 variables
Source
<NAME>., <NAME>. and <NAME>. (2010). Geostatistical analysis under preferential sampling
(with Discussion). Applied Statistics, 59, 191-232.
galicia.boundary Boundary of Galicia
Description
This data-set contains the geographical coordinates of the boundary of the Galicia region in northern
Spain.
The variables are as follows:
• x x-coordinate of the spatial locations.
• y y-coordinate of the spatial locations.
Usage
data(galicia.boundary)
Format
A data frame with 42315 rows and 2 variables
glgm.LA Maximum Likelihood estimation for generalised linear geostatistical models via the Laplace approximation
Description
This function performs the Laplace method for maximum likelihood estimation of a generalised
linear geostatistical model.
Usage
glgm.LA(
formula,
units.m = NULL,
coords,
times = NULL,
data,
ID.coords = NULL,
kappa,
kappa.t = 0.5,
fixed.rel.nugget = NULL,
start.cov.pars,
method = "nlminb",
messages = TRUE,
family,
return.covariance = TRUE
)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
units.m an object of class formula indicating the binomial denominators in the data.
coords an object of class formula indicating the spatial coordinates in the data.
times an object of class formula indicating the times in the data, used in the spatio-
temporal model.
data a data frame containing the variables in the model.
ID.coords vector of ID values for the unique set of spatial coordinates obtained from
create.ID.coords. These must be provided if, for example, spatial random
effects are defined at household level but some of the covariates are at individ-
ual level. Warning: the household coordinates must all be distinct otherwise
see jitterDupCoords. Default is NULL.
kappa fixed value for the shape parameter of the Matern covariance function.
kappa.t fixed value for the shape parameter of the Matern covariance function in the
separable double-Matern spatio-temporal model.
fixed.rel.nugget
fixed value for the relative variance of the nugget effect; fixed.rel.nugget=NULL
if this should be included in the estimation. Default is fixed.rel.nugget=NULL.
start.cov.pars a vector of length two with elements corresponding to the starting values of phi
and the relative variance of the nugget effect nu2, respectively, that are used in
the optimization algorithm. If nu2 is fixed through fixed.rel.nugget, then
start.cov.pars represents the starting value for phi only.
method method of optimization. If method="BFGS" then the maxBFGS function is used;
otherwise method="nlminb" to use the nlminb function. Default is method="BFGS".
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
family character, indicating the conditional distribution of the outcome. This should be
"Gaussian", "Binomial" or "Poisson".
return.covariance
logical; if return.covariance=TRUE then a numerical estimation of the covari-
ance function for the model parameters is returned. Default is return.covariance=TRUE.
Details
This function performs parameter estimation for a generalized linear geostatistical model. Con-
ditionally on a zero-mean stationary Gaussian process S(x) and mutually independent zero-mean
Gaussian variables Z with variance tau2, the observations y are generated from a GLM with link
function g(.) and linear predictor
η = d′β + S(x) + Z,
where d is a vector of covariates with associated regression coefficients β. The Gaussian process
S(x) has isotropic Matern covariance function (see matern) with variance sigma2, scale parameter
phi and shape parameter kappa. The shape parameter is treated as fixed. The relative variance of
the nugget effect, nu2=tau2/sigma2, can also be fixed through the argument fixed.rel.nugget;
if fixed.rel.nugget=NULL, then the relative variance of the nugget effect is also included in the
estimation.
Laplace Approximation. The Laplace approximation (LA) method uses a second-order Taylor expansion of the integrand expressing the likelihood function. The resulting approximation of the likelihood is then maximized by a numerical optimization as defined through the argument method.
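The idea behind the LA method can be seen in one dimension: the integral of exp(g(u)) is replaced by the value implied by a second-order Taylor expansion of g at its mode. The toy sketch below (not the package's internal code) applies this to a Gamma log-density, whose integral is exactly 1.
g <- function(u) dgamma(u, shape = 3, rate = 2, log = TRUE)  # toy integrand
opt <- optimize(g, c(0.01, 10), maximum = TRUE)
u.hat <- opt$maximum                        # mode of the integrand
eps <- 1e-4                                 # numerical second derivative
g2 <- (g(u.hat + eps) - 2 * g(u.hat) + g(u.hat - eps)) / eps^2
# Laplace approximation: exp(g(u.hat)) * sqrt(2 * pi / (-g''(u.hat)))
exp(opt$objective) * sqrt(2 * pi / (-g2))   # approx 0.96 vs exact value 1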
Using a two-level model to include household-level and individual-level information. When analysing data from household surveys, some of the available information might be at household-level (e.g. material of house, temperature) and some at individual-level (e.g. age, gender). In this case, the Gaussian spatial process S(x) and the nugget effect Z are defined at household-level in order to account for extra-binomial variation between and within households, respectively.
Value
An object of class "PrevMap". The function summary.PrevMap is used to print a summary of the
fitted model. The object is a list with the following components:
estimate: estimates of the model parameters; use the function coef.PrevMap to obtain estimates
of covariance parameters on the original scale.
covariance: covariance matrix of the MCML estimates.
log.lik: maximum value of the log-likelihood.
y: binomial observations.
units.m: binomial denominators.
D: matrix of covariates.
coords: matrix of the observed sampling locations.
times: vector of the time points used in a spatio-temporal model.
method: method of optimization used.
ID.coords: set of ID values defined through the argument ID.coords.
kappa: fixed value of the shape parameter of the Matern function.
kappa.t: fixed value for the shape parameter of the Matern covariance function in the separable
double-Matern spatio-temporal model.
fixed.rel.nugget: fixed value for the relative variance of the nugget effect.
call: the matched call.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2019). Model-based Geostatistics for Global Public Health. CRC/Chapman
& Hall.
<NAME>., <NAME>. (2017). PrevMap: an R package for prevalence mapping. Journal of Statis-
tical Software. 78(8), 1-29. doi: 10.18637/jss.v078.i08
<NAME>. (2004). Monte Carlo maximum likelihood in model-based geostatistics. Journal of Computational and Graphical Statistics 13, 702-718.
<NAME>. (1998). A process-convolution approach to modeling temperatures in the North Atlantic
Ocean. Environmental and Ecological Statistics 5, 173-190.
See Also
Laplace.sampling, Laplace.sampling.lr, summary.PrevMap, coef.PrevMap, matern, matern.kernel,
control.mcmc.MCML, create.ID.coords.
Laplace.sampling Langevin-Hastings MCMC for conditional simulation
Description
This function simulates from the conditional distribution of a Gaussian random effect, given bino-
mial or Poisson observations y.
Usage
Laplace.sampling(
mu,
Sigma,
y,
units.m,
control.mcmc,
ID.coords = NULL,
messages = TRUE,
plot.correlogram = TRUE,
poisson.llik = FALSE
)
Arguments
mu mean vector of the marginal distribution of the random effect.
Sigma covariance matrix of the marginal distribution of the random effect.
y vector of binomial/Poisson observations.
units.m vector of binomial denominators, or offset if the Poisson model is used.
control.mcmc output from control.mcmc.MCML.
ID.coords vector of ID values for the unique set of spatial coordinates obtained from
create.ID.coords. These must be provided if, for example, spatial random
effects are defined at household level but some of the covariates are at individ-
ual level. Warning: the household coordinates must all be distinct otherwise
see jitterDupCoords. Default is NULL.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
plot.correlogram
logical; if plot.correlogram=TRUE the autocorrelation plot of the conditional
simulations is displayed.
poisson.llik logical; if poisson.llik=TRUE a Poisson model is used or, if poisson.llik=FALSE,
a binomial model is used.
Details
Binomial model. Conditionally on the random effect S, the data y follow a binomial distribution
with probability p and binomial denominators units.m. The logistic link function is used for the
linear predictor, which assumes the form
log(p/(1 − p)) = S.
Poisson model. Conditionally on the random effect S, the data y follow a Poisson distribution with
mean mλ, where m is an offset set through the argument units.m. The log link function is used
for the linear predictor, which assumes the form
log(λ) = S.
The random effect S has a multivariate Gaussian distribution with mean mu and covariance matrix
Sigma.
Laplace sampling. This function generates samples from the distribution of S given the data y. Specifically, a Langevin-Hastings algorithm is used to update S̃ = Σ̃^(−1/2)(S − s̃), where Σ̃ and s̃ are the inverse of the negative Hessian and the mode of the distribution of S given y, respectively. At each iteration a new value s̃_prop for S̃ is proposed from a multivariate Gaussian distribution with mean
s̃_curr + (h/2)∇ log f(S̃|y),
where s̃_curr is the current value for S̃, h is a tuning parameter and ∇ log f(S̃|y) is the gradient of the log-density of the distribution of S̃ given y. The tuning parameter h is updated according to the following adaptive scheme: the value of h at the i-th iteration, say h_i, is given by
h_i = h_{i−1} + c₁ i^(−c₂) (α_i − 0.547),
where c₁ > 0 and 0 < c₂ < 1 are pre-defined constants, and α_i is the acceptance rate at the i-th iteration (0.547 is the optimal acceptance rate for a multivariate standard Gaussian distribution). The starting value for h, and the values for c₁ and c₂, can be set through the function control.mcmc.MCML.
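A toy one-dimensional version of this Langevin-Hastings update with the adaptive tuning of h (here targeting a standard Gaussian, so ∇ log f(s) = −s; not the package's internal code) is sketched below.
set.seed(1)
logf <- function(s) -s^2 / 2                # standard Gaussian target
grad <- function(s) -s
n.sim <- 5000; c1 <- 0.01; c2 <- 1e-04
h <- 0.5; s <- 0; out <- numeric(n.sim)
for (i in 1:n.sim) {
  mu.prop <- s + (h / 2) * grad(s)          # Langevin proposal mean
  s.prop <- rnorm(1, mu.prop, sqrt(h))
  mu.back <- s.prop + (h / 2) * grad(s.prop)
  log.alpha <- logf(s.prop) - logf(s) +
    dnorm(s, mu.back, sqrt(h), log = TRUE) -
    dnorm(s.prop, mu.prop, sqrt(h), log = TRUE)
  alpha <- min(1, exp(log.alpha))
  if (runif(1) < alpha) s <- s.prop
  h <- max(h + c1 * i^(-c2) * (alpha - 0.547), 1e-10)  # adapt towards 0.547
  out[i] <- s
}
c(mean(out), var(out))                      # should be near 0 and 1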
Random effects at household-level. When the data consist of two nested levels, such as households
and individuals within households, the argument ID.coords must be used to define the household
IDs for each individual. Let i and j denote the i-th household and the j-th person within that
household; the logistic link function then assumes the form
log(p_ij/(1 − p_ij)) = µ_ij + S_i
where the random effects S_i are now defined at household level and have mean zero. Warning:
this modelling option is available only for the binomial model.
Value
A list with the following components
samples: a matrix, each row of which corresponds to a sample from the predictive distribution.
h: vector of the values of the tuning parameter at each iteration of the Langevin-Hastings MCMC
algorithm.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
See Also
control.mcmc.MCML, create.ID.coords.
Laplace.sampling.lr Langevin-Hastings MCMC for conditional simulation (low-rank approximation)
Description
This function simulates from the conditional distribution of the random effects of binomial and
Poisson models.
Usage
Laplace.sampling.lr(
mu,
sigma2,
K,
y,
units.m,
control.mcmc,
messages = TRUE,
plot.correlogram = TRUE,
poisson.llik = FALSE
)
Arguments
mu mean vector of the linear predictor.
sigma2 variance of the random effect.
K random effect design matrix, or kernel matrix for the low-rank approximation.
y vector of binomial/Poisson observations.
units.m vector of binomial denominators, or offset if the Poisson model is used.
control.mcmc output from control.mcmc.MCML.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
plot.correlogram
logical; if plot.correlogram=TRUE the autocorrelation plot of the conditional
simulations is displayed.
poisson.llik logical; if poisson.llik=TRUE a Poisson model is used or, if poisson.llik=FALSE,
a binomial model is used.
Details
Binomial model. Conditionally on Z, the data y follow a binomial distribution with probability
p and binomial denominators units.m. Let K denote the random effects design matrix; a logistic
link function is used, thus the linear predictor assumes the form
log(p/(1 − p)) = µ + KZ
where µ is the mean vector component defined through mu.
Poisson model. Conditionally on Z, the
data y follow a Poisson distribution with mean mλ, where m is an offset set through the argument
units.m. Let K denote the random effects design matrix; a log link function is used, thus the linear
predictor assumes the form
log(λ) = µ + KZ
where µ is the mean vector component defined through mu. The random effect Z has iid components
distributed as zero-mean Gaussian variables with variance sigma2.
Laplace sampling. This function generates samples from the distribution of Z given the data y. Specifically, a Langevin-Hastings algorithm is used to update Z̃ = Σ̃^(−1/2)(Z − z̃), where Σ̃ and z̃ are the inverse of the negative Hessian and the mode of the distribution of Z given y, respectively. At each iteration a new value z̃_prop for Z̃ is proposed from a multivariate Gaussian distribution with mean
z̃_curr + (h/2)∇ log f(Z̃|y),
where z̃_curr is the current value for Z̃, h is a tuning parameter and ∇ log f(Z̃|y) is the gradient of the log-density of the distribution of Z̃ given y. The tuning parameter h is updated according to the following adaptive scheme: the value of h at the i-th iteration, say h_i, is given by
h_i = h_{i−1} + c₁ i^(−c₂) (α_i − 0.547),
where c₁ > 0 and 0 < c₂ < 1 are pre-defined constants, and α_i is the acceptance rate at the i-th iteration (0.547 is the optimal acceptance rate for a multivariate standard Gaussian distribution). The starting value for h, and the values for c₁ and c₂, can be set through the function control.mcmc.MCML.
Value
A list with the following components
samples: a matrix, each row of which corresponds to a sample from the predictive distribution.
h: vector of the values of the tuning parameter at each iteration of the Langevin-Hastings MCMC
algorithm.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
See Also
control.mcmc.MCML.
Laplace.sampling.SPDE Independence sampler for conditional simulation of a Gaussian process using SPDE
Description
This function simulates from the conditional distribution of a Gaussian process given binomial observations y. The Gaussian process is approximated using SPDE.
Usage
Laplace.sampling.SPDE(
mu,
sigma2,
phi,
kappa,
y,
units.m,
coords,
mesh,
control.mcmc,
messages = TRUE,
plot.correlogram = TRUE,
poisson.llik
)
Arguments
mu mean vector of the Gaussian process to approximate.
sigma2 variance of the Gaussian process to approximate.
phi scale parameter of the Matern function for the Gaussian process to approximate.
kappa smoothness parameter of the Matern function for the Gaussian process to approximate.
y vector of binomial observations.
units.m vector of binomial denominators.
coords matrix of two columns corresponding to the spatial coordinates.
mesh mesh object set through inla.mesh.2d.
control.mcmc control parameters of the Independence sampler set through control.mcmc.MCML.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
plot.correlogram
logical; if plot.correlogram=TRUE the autocorrelation plot of the conditional
simulations is displayed.
poisson.llik logical; if poisson.llik=TRUE the conditional distribution of the data is Poisson; if poisson.llik=FALSE the conditional distribution of the data is binomial.
Details
Binomial model. Conditionally on the random effect S, the data y follow a binomial distribution
with probability p and binomial denominators units.m. The logistic link function is used for the
linear predictor, which assumes the form
log(p/(1 − p)) = S.
The random effect S has a multivariate Gaussian distribution with mean mu and covariance matrix
Sigma.
Value
A list with the following components
samples: a matrix, each row of which corresponds to a sample from the predictive distribution.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
See Also
control.mcmc.MCML.
linear.model.Bayes Bayesian estimation for the geostatistical linear Gaussian model
Description
This function performs Bayesian estimation for the geostatistical linear Gaussian model.
Usage
linear.model.Bayes(
formula,
coords,
data,
kappa,
control.mcmc,
control.prior,
low.rank = FALSE,
knots = NULL,
messages = TRUE
)
Arguments
formula an object of class "formula" (or one that can be coerced to that class): a sym-
bolic description of the model to be fitted.
coords an object of class formula indicating the geographic coordinates.
data a data frame containing the variables in the model.
kappa shape parameter of the Matern covariance function.
control.mcmc output from control.mcmc.Bayes.
control.prior output from control.prior.
low.rank logical; if low.rank=TRUE a low-rank approximation is fitted.
knots if low.rank=TRUE, knots is a matrix of spatial knots used in the low-rank ap-
proximation. Default is knots=NULL.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
Details
This function performs Bayesian estimation for the geostatistical linear Gaussian model, specified
as
Y = d′β + S(x) + Z,
where Y is the measured outcome, d is a vector of covariates, β is a vector of regression coefficients, S(x) is a stationary Gaussian spatial process and Z are independent zero-mean Gaussian
variables with variance tau2. More specifically, S(x) has an isotropic Matern covariance function
with variance sigma2, scale parameter phi and shape parameter kappa. The shape parameter kappa
is treated as fixed.
Priors definition. Priors can be defined through the function control.prior. The hierarchical
structure of the priors is the following. Let θ be the vector of the covariance parameters (σ², φ, τ²);
then each component of θ can have independent priors freely defined by the user. However, uniform
and log-normal priors are also available as default priors for each of the covariance parameters. To
remove the nugget effect Z, no prior should be defined for tau2. Conditionally on sigma2, the
vector of regression coefficients beta has a multivariate Gaussian prior with mean beta.mean and
covariance matrix sigma2*beta.covar, while in the low-rank approximation the covariance matrix
is simply beta.covar.
Updating the covariance parameters using a Metropolis-Hastings algorithm. In the MCMC algorithm implemented in linear.model.Bayes, the transformed parameters
(θ₁, θ₂, θ₃) = (log(σ²)/2, log(σ²/φ^(2κ)), log(τ²))
are independently updated using a Metropolis-Hastings algorithm. At the i-th iteration, a new value is proposed for each from a univariate Gaussian distribution with variance, say h_i², tuned according to the following adaptive scheme
h_i = h_{i−1} + c₁ i^(−c₂) (α_i − 0.45),
where α_i is the acceptance rate at the i-th iteration (0.45 is the optimal acceptance rate for a univariate Gaussian distribution) whilst c₁ > 0 and 0 < c₂ < 1 are pre-defined constants. The starting values h₁ for each of the parameters θ₁, θ₂ and θ₃ can be set using the function control.mcmc.Bayes through the arguments h.theta1, h.theta2 and h.theta3. To define values for c₁ and c₂, see the documentation of control.mcmc.Bayes.
Low-rank approximation. In the case of very large spatial data-sets, a low-rank approximation of the Gaussian spatial process S(x) might be computationally beneficial. Let (x₁, . . . , xₘ) and (t₁, . . . , tₘ) denote the set of sampling locations and a grid of spatial knots covering the area of interest, respectively. Then S(x) is approximated as Σᵢ₌₁ᵐ K(‖x − tᵢ‖; φ, κ)Uᵢ, where Uᵢ are zero-mean mutually independent Gaussian variables with variance sigma2 and K(·; φ, κ) is the isotropic Matern kernel (see matern.kernel). Since the resulting approximation is no longer a stationary process (but only approximately so), sigma2 may take very different values from the actual variance of the Gaussian process to approximate. The function adjust.sigma2 can then be used to (approximately) explore the range for sigma2. For example, if the variance of the Gaussian process is 0.5, then an approximate value for sigma2 is 0.5/const.sigma2, where const.sigma2 is the value obtained with adjust.sigma2.
Value
An object of class "Bayes.PrevMap". The function summary.Bayes.PrevMap is used to print a
summary of the fitted model. The object is a list with the following components:
estimate: matrix of the posterior samples for each of the model parameters.
S: matrix of the posterior samples for each component of the random effect. This is only returned
for the low-rank approximation.
y: response variable.
D: matrix of covariates.
coords: matrix of the observed sampling locations.
kappa: values of the shape parameter of the Matern function.
knots: matrix of spatial knots used in the low-rank approximation.
const.sigma2: vector of the values of the multiplicative factor used to adjust the sigma2 in the
low-rank approximation.
h1: vector of values taken by the tuning parameter h.theta1 at each iteration.
h2: vector of values taken by the tuning parameter h.theta2 at each iteration.
h3: vector of values taken by the tuning parameter h.theta3 at each iteration.
call: the matched call.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2019). Model-based Geostatistics for Global Public Health. CRC/Chapman
& Hall.
<NAME>., <NAME>. (2017). PrevMap: an R package for prevalence mapping. Journal of Statis-
tical Software. 78(8), 1-29. doi: 10.18637/jss.v078.i08
<NAME>. (1998). A process-convolution approach to modeling temperatures in the North Atlantic
Ocean. Environmental and Ecological Statistics 5, 173-190.
See Also
control.prior, control.mcmc.Bayes, shape.matern, summary.Bayes.PrevMap, autocor.plot,
trace.plot, dens.plot, matern, matern.kernel, adjust.sigma2.
linear.model.MLE Maximum Likelihood estimation for the geostatistical linear Gaussian
model
Description
This function performs maximum likelihood estimation for the geostatistical linear Gaussian model.
Usage
linear.model.MLE(
formula,
coords = NULL,
data,
ID.coords = NULL,
kappa,
fixed.rel.nugget = NULL,
start.cov.pars,
method = "BFGS",
low.rank = FALSE,
knots = NULL,
messages = TRUE,
profile.llik = FALSE,
SPDE = FALSE,
mesh = NULL,
SPDE.analytic.hessian = FALSE
)
Arguments
formula an object of class "formula" (or one that can be coerced to that class): a sym-
bolic description of the model to be fitted.
coords an object of class formula indicating the geographic coordinates.
data a data frame containing the variables in the model.
ID.coords vector of ID values for the unique set of spatial coordinates obtained from
create.ID.coords. These must be provided in order to define a geostatistical
model where locations have multiple observations. Default is ID.coords=NULL.
See the Details section for more information.
kappa shape parameter of the Matern covariance function.
fixed.rel.nugget
fixed value for the relative variance of the nugget effect; default is fixed.rel.nugget=NULL
if this should be included in the estimation.
start.cov.pars if ID.coords=NULL, a vector of length two with elements corresponding to the
starting values of phi and the relative variance of the nugget effect nu2, re-
spectively, that are used in the optimization algorithm; if ID.coords is pro-
vided, a third starting value for the relative variance of the individual unex-
plained variation nu2.star = omega2/sigma2 must be provided. If nu2 is fixed
through fixed.rel.nugget, then start.cov.pars represents the starting value for
phi only, if ID.coords=NULL, or for phi and nu2.star, otherwise.
method method of optimization. If method="BFGS" then the maxBFGS function is used;
otherwise method="nlminb" to use the nlminb function. Default is method="BFGS".
low.rank logical; if low.rank=TRUE a low-rank approximation of the Gaussian spatial
process is used when fitting the model. Default is low.rank=FALSE.
knots if low.rank=TRUE, knots is a matrix of spatial knots that are used in the low-
rank approximation. Default is knots=NULL.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
profile.llik logical; if profile.llik=TRUE the maximization of the profile likelihood is
carried out. If profile.llik=FALSE the full-likelihood is used. Default is
profile.llik=FALSE.
SPDE logical; if SPDE=TRUE the SPDE approximation for the Gaussian spatial model
is used. Default is SPDE=FALSE.
mesh an object obtained as result of a call to the function inla.mesh.2d.
SPDE.analytic.hessian
logical; if SPDE.analytic.hessian=TRUE computation of the hessian matrix
using the SPDE approximation is carried out using analytical expressions; other-
wise a numerical approximation is used. Default is SPDE.analytic.hessian=FALSE.
Details
This function estimates the parameters of a geostatistical linear Gaussian model, specified as
Y = d′β + S(x) + Z,
where Y is the measured outcome, d is a vector of covariates, β is a vector of regression coeffi-
cients, S(x) is a stationary Gaussian spatial process and Z are independent zero-mean Gaussian
variables with variance tau2. More specifically, S(x) has an isotropic Matern covariance function
with variance sigma2, scale parameter phi and shape parameter kappa. In the estimation, the shape
parameter kappa is treated as fixed. The relative variance of the nugget effect, nu2=tau2/sigma2,
can be fixed though the argument fixed.rel.nugget; if fixed.rel.nugget=NULL, then the vari-
ance of the nugget effect is also included in the estimation.
Locations with multiple observations. If multiple observations are available at any of the sampled
locations the above model is modified as follows. Let Yij denote the random variable associated with
the measured outcome for the j-th individual at location xi. The linear geostatistical model assumes
the form
Yij = dij′β + S(xi) + Zi + Uij,
where S(xi) and Zi are specified as mentioned above, and Uij are i.i.d. zero-mean Gaussian vari-
ables with variance ω². This model can be fitted by specifying a vector of ID values for the unique
set of locations through the argument ID.coords (see also create.ID.coords).
Low-rank approximation. In the case of very large spatial data-sets, a low-rank approximation
of the Gaussian spatial process S(x) can be computationally beneficial. Let (x1, . . . , xm) and
(t1, . . . , tm) denote the set of sampling locations and a grid of spatial knots covering the area of
interest, respectively. Then S(x) is approximated as Σ_{i=1}^m K(||x − ti||; φ, κ)Ui, where Ui are zero-
mean mutually independent Gaussian variables with variance sigma2 and K(·; φ, κ) is the isotropic
Matern kernel (see matern.kernel). Since the resulting approximation is no longer a stationary
process, the parameter sigma2 is adjusted by a factor constant.sigma2. See adjust.sigma2 for
more details on the computation of the adjustment factor constant.sigma2 in the low-rank
approximation.
Value
An object of class "PrevMap". The function summary.PrevMap is used to print a summary of the
fitted model. The object is a list with the following components:
estimate: estimates of the model parameters; use the function coef.PrevMap to obtain estimates
of covariance parameters on the original scale.
covariance: covariance matrix of the ML estimates.
log.lik: maximum value of the log-likelihood.
y: response variable.
D: matrix of covariates.
coords: matrix of the observed sampling locations.
ID.coords: set of ID values defined through the argument ID.coords.
method: method of optimization used.
kappa: fixed value of the shape parameter of the Matern function.
knots: matrix of the spatial knots used in the low-rank approximation.
const.sigma2: adjustment factor for sigma2 in the low-rank approximation.
fixed.rel.nugget: fixed value for the relative variance of the nugget effect.
mesh: the mesh used in the SPDE approximation.
call: the matched call.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2019). Model-based Geostatistics for Global Public Health. CRC/Chapman
& Hall.
<NAME>., <NAME>. (2017). PrevMap: an R package for prevalence mapping. Journal of Statis-
tical Software. 78(8), 1-29. doi: 10.18637/jss.v078.i08
<NAME>. (1998). A process-convolution approach to modeling temperatures in the North Atlantic
Ocean. Environmental and Ecological Statistics 5, 173-190.
See Also
shape.matern, summary.PrevMap, coef.PrevMap, matern, matern.kernel, maxBFGS, nlminb.
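Examples
A minimal illustrative sketch, not taken from the package manual: df is a hypothetical data frame with outcome y, covariate x and coordinates long/lat; kappa is fixed at 0.5 and start.cov.pars gives starting values for phi and the relative nugget variance nu2.
# df is hypothetical; starting values are illustrative only
fit <- linear.model.MLE(y ~ x, coords = ~ long + lat, data = df,
                        kappa = 0.5, start.cov.pars = c(0.2, 0.15))
summary(fit)  # summary.PrevMap
coef(fit)     # coef.PrevMap: covariance parameters on the original scale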
lm.ps.MCML Monte Carlo Maximum Likelihood estimation of the geostatistical lin-
ear model with preferentially sampled locations
Description
This function performs Monte Carlo maximum likelihood (MCML) estimation for a geostatistical
linear model with preferentially sampled locations. For more details on the model, see below.
Usage
lm.ps.MCML(
formula.response,
formula.log.intensity = ~1,
coords,
which.is.preferential = NULL,
data.response,
data.intensity = NULL,
par0,
control.mcmc,
kappa1,
kappa2,
mesh,
grid.intensity,
start.par = NULL,
method = "nlminb",
messages = TRUE,
plot.correlogram = TRUE
)
Arguments
formula.response
an object of class formula (or one that can be coerced to that class): a symbolic
description of the sub-model for the response variable.
formula.log.intensity
an object of class formula (or one that can be coerced to that class): a symbolic
description of the log-Gaussian Cox process sub-model.
coords an object of class formula indicating the spatial coordinates in the data.
which.is.preferential
a vector of 0 and 1, where 1 indicates a location in the data from a preferential
sampling scheme and 0 from a non-preferential one. This option is used to fit a
model with a mix of preferentially and non-preferentially sampled locations. For
more details on the model structure see the 'Details' section.
data.response a data frame containing the variables in the sub-model of the response variable.
data.intensity a data frame containing the variables in the log-Gaussian Cox process sub-
model. This data frame must be provided only when explanatory variables are
used in the log-Gaussian Cox process model. Each row in the data frame must
correspond to a point in the grid provided through the argument 'grid.intensity'.
Default is data.intensity=NULL, which corresponds to a model with only the
intercept.
par0 an object of class ’coef.PrevMap.ps’. This argument is used to define the pa-
rameters of the importance sampling distribution used in the MCML algorithm.
The input of this argument must be defined using the set.par.ps function.
control.mcmc output from control.mcmc.MCML which defines the control parameters of the
Markov chain Monte Carlo algorithm.
kappa1 fixed value for the shape parameter of the Matern covariance function of the spa-
tial process of the sampling intensity (currently only kappa1=1 is implemented).
kappa2 fixed value for the shape parameter of the Matern covariance function of the
spatial process of the response variable.
mesh an object obtained as result of a call to the function inla.mesh.2d.
grid.intensity a regular grid covering the geographical region of interest, used to approximate
the density function of the log-Gaussian Cox process.
start.par starting value of the optimization algorithm. This is an object of class 'coef.PrevMap.ps'
and must be defined using the function set.par.ps. Default is start.par=NULL,
so that the starting values are set automatically.
method method of optimization. If method="BFGS" then the maxBFGS function is used;
otherwise method="nlminb" to use the nlminb function. Default is method="nlminb".
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
plot.correlogram
logical; if plot.correlogram=TRUE the autocorrelation plot of the samples of
the random effect is displayed after completion of conditional simulation. De-
fault is plot.correlogram=TRUE.
Details
This function performs parameter estimation for a geostatistical linear model with preferentially
sampled locations. Let S1 and S2 denote two independent, stationary and isotropic Gaussian pro-
cesses. The overall model consists of two sub-models: the log-Gaussian Cox process model for the
preferentially sampled locations, say X; the model for the response variable, say Y . The model
assumes that
[X, Y, S1, S2] = [S1][S2][X|S1][Y|X, S1, S2],
where [·] denotes 'the distribution of ·'. Each of the two sub-models has an associated linear predictor.
Let Λ(x) denote the intensity of the Poisson process X, conditionally on S1. Then
log{Λ(x)} = d(x)′α + S1(x),
where d(x) is a vector of explanatory variables with regression coefficients α. This linear predictor
is defined through the argument formula.log.intensity. The density of [X|S1] is given by
Λ(x) / ∫_A Λ(u) du,
where A is the region of interest. The integral in the denominator is intractable and is then approx-
imated using a quadrature procedure. The regular grid covering A, used for the quadrature, must
be provided through the argument grid.intensity. Conditionally on X, S1 and S2, the response
variable model is given by
Y = d′β + S2(x) + γS1(x) + Z,
where β is another vector of regression coefficients and γ is the preferentiality parameter. If γ = 0
then we recover the standard geostatistical model. More details on the fitting procedure can be
found in Diggle and Giorgi (2016).
When the data have a mix of preferentially and non-preferentially sampled locations. In some
cases the set of locations may consist of a sub-set which is preferentially sampled, X, and a standard
non-preferential sample, X*. Let Y and Y* denote the measurements at locations X and X*. In the
current implementation, the model has the following form
[X, X*, Y, Y*, S1, S2, S2*] = [S1][S2][S2*][X|S1][Y|X, S1, S2][X*][Y*|X*, S2*],
where S2 and S2* are two independent Gaussian processes but with shared parameters, associated with
Y and Y*, respectively. The linear predictor for Y is the same as above. The measurements Y*,
instead, have linear predictor
Y* = d*′β* + S2*(x) + Z*,
where β* is a vector of regression coefficients, different from β. The linear predictors for Y and Y* are
specified through formula.response. For example, response ~ x | x + z defines a linear predictor
for Y with one explanatory variable x and a linear predictor for Y* with two explanatory variables
x and z. An example on the application of this model is given in Diggle and Giorgi (2016).
Value
An object of class "PrevMap.ps". The function summary.PrevMap.ps is used to print a summary
of the fitted model. The object is a list with the following components:
estimate: estimates of the model parameters; use the function coef.PrevMap.ps to obtain esti-
mates of covariance parameters on the original scale.
covariance: covariance matrix of the MCML estimates.
log.lik: maximum value of the approximated log-likelihood.
y: observed values of the response variable. If which.is.preferential has been provided, then
y is a list with components y$preferential, for the data with preferentially sampled locations, and
y$non.preferential, for the remaining.
D.response: matrix of covariates used to model the mean component of the response variable. If
which.is.preferential has been provided, then D.response is a list with components D.response$preferential,
for the data with preferentially sampled locations, and D.response$non.preferential, for the re-
maining.
D.intensity: matrix of covariates used to model the mean component of log-intensity of the log-
Gaussian Cox process.
grid.intensity: grid of locations used to approximate the intractable integral of the log-Gaussian
Cox process model.
coords: matrix of the observed sampling locations. If which.is.preferential has been pro-
vided, then coords is a list with components coords$preferential, for the data with preferentially
sampled locations, and coords$non.preferential, for the remaining.
method: method of optimization used.
ID.coords: set of ID values defined through the argument ID.coords.
kappa.response: fixed value of the shape parameter of the Matern covariance function used to
model the spatial process associated with the response variable.
mesh: the mesh used in the SPDE approximation.
samples: matrix of the random effects samples from the importance sampling distribution used to
approximate the likelihood function.
call: the matched call.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2019). Model-based Geostatistics for Global Public Health. CRC/Chapman
& Hall.
<NAME>., <NAME>. (2017). PrevMap: an R package for prevalence mapping. Journal of Statis-
tical Software. 78(8), 1-29. doi: 10.18637/jss.v078.i08
<NAME>., <NAME>. (2017). Preferential sampling of exposures levels. In: Handbook of Envi-
ronmental and Ecological Statistics. Chapman & Hall.
<NAME>., <NAME>. and <NAME>. (2010). Geostatistical analysis under preferential sampling
(with Discussion). Applied Statistics, 59, 191-232.
<NAME>., <NAME>., <NAME>. (2011). An explicit link between Gaussian fields and Gaus-
sian Markov random fields: the stochastic partial differential equation approach (with discussion).
Journal of the Royal Statistical Society, Series B, 73, 423–498.
<NAME>., <NAME>., and <NAME>. (2011). Bayesian geostatistical modelling with informative
sampling locations. Biometrika, 98, 35-48.
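Examples
A high-level illustrative sketch, not from the package manual: df is a hypothetical data frame with response y, covariate x and preferentially sampled coordinates long/lat; mesh must be built with inla.mesh.2d (INLA package) and grid is a regular grid of points covering the study region. All parameter values are placeholders.
# par0 defines the importance sampling distribution (see set.par.ps)
par0 <- set.par.ps(p = 2, q = 1,
                   intensity = c(3, 1, 0.1),          # alpha, sigma2 and phi of S1
                   response = c(0, 1, 1, 0.2, 0.05),  # beta, sigma2, phi, tau2
                   preferentiality.par = 0.5)
cm <- control.mcmc.MCML(n.sim = 5000, burnin = 1000, thin = 4)
fit <- lm.ps.MCML(formula.response = y ~ x, coords = ~ long + lat,
                  data.response = df, par0 = par0, control.mcmc = cm,
                  kappa1 = 1, kappa2 = 0.5, mesh = mesh,
                  grid.intensity = grid)
summary(fit)  # summary.PrevMap.ps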
loaloa Loa loa prevalence data from 197 village surveys
Description
This data-set relates to a study of the prevalence of Loa loa (eyeworm) in a series of surveys un-
dertaken in 197 villages in west Africa (Cameroon and southern Nigeria). The variables are as
follows:
• ROW row id: 1 to 197.
• VILLCODE village id.
• LONGITUDE Longitude in degrees.
• LATITUDE Latitude in degrees.
• NO_EXAM Number of people tested.
• NO_INF Number of positive test results.
• ELEVATION Height above sea-level in metres.
• MEAN9901 Mean of all NDVI values recorded at village location, 1999-2001
• MAX9901 Maximum of all NDVI values recorded at village location, 1999-2001
• MIN9901 Minimum of all NDVI values recorded at village location, 1999-2001
• STDEV9901 standard deviation of all NDVI values recorded at village location, 1999-2001
Usage
data(loaloa)
Format
A data frame with 197 rows and 11 variables
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>.,
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2007).
Spatial modelling and prediction of Loa loa risk: decision making under uncertainty. Annals of
Tropical Medicine and Parasitology, 101, 499-509.
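Examples
A short usage sketch (not part of the original manual) computing the empirical prevalence from the counts in the data set.
data(loaloa)
str(loaloa)
# empirical prevalence at each village
loaloa$prev <- loaloa$NO_INF / loaloa$NO_EXAM
plot(loaloa$ELEVATION, loaloa$prev,
     xlab = "Elevation (m)", ylab = "Observed prevalence")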
loglik.ci Profile likelihood confidence intervals
Description
Computes confidence intervals based on the interpolated profile likelihood computed for a single
covariance parameter.
Usage
loglik.ci(object, coverage = 0.95, plot.spline.profile = TRUE)
Arguments
object object of class "profile.PrevMap" obtained from loglik.linear.model.
coverage a value between 0 and 1 indicating the coverage of the confidence interval based
on the interpolated profile likelihood. Default is coverage=0.95.
plot.spline.profile
logical; if TRUE an interpolating spline of the profile likelihood for a univariate
parameter is plotted. Default is plot.spline.profile=TRUE.
Value
A list with elements lower and upper for the lower and upper limits of the confidence interval,
respectively.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
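Examples
An illustrative sketch of the intended workflow, not from the manual: prof is a hypothetical "profile.PrevMap" object returned by loglik.linear.model when profiling a single covariance parameter.
# prof is hypothetical output from loglik.linear.model
ci95 <- loglik.ci(prof, coverage = 0.95, plot.spline.profile = TRUE)
ci95$lower
ci95$upper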
loglik.linear.model Profile log-likelihood or fixed parameters likelihood evaluation for the
covariance parameters in the geostatistical linear model
Description
Computes the profile log-likelihood, or evaluates the likelihood keeping the other parameters fixed,
for the scale parameter phi of the Matern function and the relative variance of the nugget effect nu2
in the linear Gaussian model.
Usage
loglik.linear.model(
object,
control.profile,
plot.profile = TRUE,
messages = TRUE
)
Arguments
object an object of class ’PrevMap’, which is the fitted linear model obtained with the
function linear.model.MLE.
control.profile
control parameters obtained with control.profile.
plot.profile logical; if TRUE a plot of the computed profile likelihood is displayed.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
Value
an object of class "profile.PrevMap" which is a list with the following values
eval.points.phi: vector of the values used for phi in the evaluation of the likelihood.
eval.points.rel.nugget: vector of the values used for nu2 in the evaluation of the likelihood.
profile.phi: vector of the values of the likelihood function evaluated at eval.points.phi.
profile.rel.nugget: vector of the values of the likelihood function evaluated at eval.points.rel.nugget.
profile.phi.rel.nugget: matrix of the values of the likelihood function evaluated at eval.points.phi
and eval.points.rel.nugget.
fixed.par: logical value; TRUE if the evaluation of the likelihood was carried out by fixing the other
parameters, and FALSE if the computation of the profile likelihood was performed instead.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
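Examples
An illustrative sketch, not from the package manual: fit is a hypothetical "PrevMap" object from linear.model.MLE; the argument names phi and rel.nugget used here for the evaluation grids of control.profile are assumed and should be checked against its own help page.
# evaluation grids for phi and nu2 (argument names assumed)
cp <- control.profile(phi = seq(0.05, 0.5, length = 20),
                      rel.nugget = seq(0.01, 0.5, length = 20))
prof <- loglik.linear.model(fit, control.profile = cp, plot.profile = TRUE)
plot(prof, log.scale = TRUE)  # plot.profile.PrevMap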
matern.kernel Matern kernel
Description
This function computes values of the Matern kernel for given distances and parameters.
Usage
matern.kernel(u, rho, kappa)
Arguments
u a vector, matrix or array with values of the distances between pairs of data loca-
tions.
rho value of the (re-parametrized) scale parameter; this corresponds to the re-parametrization
rho = 2*sqrt(kappa)*phi.
kappa value of the shape parameter.
Details
The Matern kernel is defined as:
K(u; φ, κ) = [Γ(κ + 1)^(1/2) κ^((κ+1)/4) u^((κ−1)/2)] / [π^(1/2) Γ((κ + 1)/2) Γ(κ)^(1/2) (2κ^(1/2) φ)^((κ+1)/2)] · Kκ(u/φ), u > 0,
where φ and κ are the scale and shape parameters, respectively, and Kκ (.) is the modified Bessel
function of the third kind of order κ. The family is valid for φ > 0 and κ > 0.
Value
A vector, matrix or array, according to the argument u, with the values of the Matern kernel function
for the given distances.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
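Examples
A small numeric illustration (not from the manual): kernel values at a few distances, with shape kappa = 2 and scale phi = 0.1, so that rho = 2*sqrt(kappa)*phi.
u <- seq(0.01, 0.5, length = 5)
matern.kernel(u, rho = 2 * sqrt(2) * 0.1, kappa = 2)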
plot.pred.PrevMap Plot of a predicted surface
Description
plot.pred.PrevMap displays predictions obtained from spatial.pred.linear.MLE, spatial.pred.linear.Bayes,
spatial.pred.binomial.MCML, spatial.pred.binomial.Bayes and spatial.pred.poisson.MCML.
Usage
## S3 method for class 'pred.PrevMap'
plot(x, type = NULL, summary = "predictions", ...)
Arguments
x an object of class "pred.PrevMap".
type a character indicating the type of prediction to display: ’prevalence’,’odds’,
’logit’ or ’probit’ for binomial models; "log" or "exponential" for Poisson mod-
els. Default is NULL.
summary character indicating which summary to display: ’predictions’,’quantiles’, ’stan-
dard.errors’ or ’exceedance.prob’; default is ’predictions’. If summary="exceedance.prob",
the argument type is ignored.
... further arguments passed to plot of the ’raster’ package.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
plot.pred.PrevMap.ps Plot of a predicted surface of geostatistical linear fits with preferen-
tially sampled locations
Description
plot.pred.PrevMap.ps displays predictions obtained from lm.ps.MCML.
Usage
## S3 method for class 'pred.PrevMap.ps'
plot(x, target = NULL, summary = "predictions", ...)
Arguments
x an object of class "pred.PrevMap.ps".
target an integer value indicating the predictive target: target=1 to visualize summaries
of the surface associated with the response variable; target=2 to visualize sum-
maries of the surface associated with the sampling intensity. If only one target
has been predicted, this argument is ignored.
summary character indicating which summary to display: ’predictions’,’quantiles’ or ’stan-
dard.errors’. Default is summary='predictions'. If summary="exceedance.prob",
the argument type is ignored.
... further arguments passed to plot of the ’raster’ package.
Author(s)
<NAME> <<EMAIL>>
plot.PrevMap.diagnostic
Plot of the variogram-based diagnostics
Description
Displays the results from a call to variog.diagnostic.lm and variog.diagnostic.glgm.
Usage
## S3 method for class 'PrevMap.diagnostic'
plot(x, ...)
Arguments
x an object of class "PrevMap.diagnostic".
... further arguments passed to plot of the ’raster’ package.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
See Also
variog.diagnostic.lm, variog.diagnostic.glgm
plot.profile.PrevMap Plot of the profile log-likelihood for the covariance parameters of the
Matern function
Description
This function displays a plot of the profile log-likelihood that is computed by the function loglik.linear.model.
Usage
## S3 method for class 'profile.PrevMap'
plot(x, log.scale = FALSE, plot.spline.profile = FALSE, ...)
Arguments
x object of class "profile.PrevMap" obtained as output from loglik.linear.model.
log.scale logical; if log.scale=TRUE, the profile likelihood is plotted on the log-scale of
the parameter values.
plot.spline.profile
logical; if TRUE an interpolating spline of the profile likelihood for a univariate
parameter is plotted. Default is FALSE.
... further arguments passed to plot if the profile log-likelihood is for only one
parameter, or to contour for the bi-variate profile likelihood.
Value
A plot is displayed. No value is returned.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
plot.shape.matern Plot of the profile likelihood for the shape parameter of the Matern
covariance function
Description
This function plots the profile likelihood for the shape parameter of the Matern covariance function
using the output from shape.matern function.
Usage
## S3 method for class 'shape.matern'
plot(x, plot.spline = TRUE, ...)
Arguments
x an object of class ’shape.matern’ obtained as result of a call to shape.matern
plot.spline logical; if TRUE an interpolating spline of the profile likelihood is added to the
plot.
... further arguments passed to plot.
Value
The function does not return any value.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
See Also
shape.matern
point.map Point map
Description
This function produces a plot with points indicating the data locations. Arguments can control
the points sizes, patterns and colors. These can be set to be proportional to data values, ranks or
quantiles. Alternatively, points can be added to the current plot.
Usage
point.map(data, var.name, coords, ...)
Arguments
data an object of class "data.frame" containing the data.
var.name a formula object indicating the variable to display.
coords a formula object indicating the geographical coordinates.
... additional arguments to be passed to points.geodata.
poisson.log.MCML Monte Carlo Maximum Likelihood estimation for the Poisson model
Description
This function performs Monte Carlo maximum likelihood (MCML) estimation for the geostatistical
Poisson model with log link function.
Usage
poisson.log.MCML(
formula,
units.m = NULL,
coords,
data,
ID.coords = NULL,
par0,
control.mcmc,
kappa,
fixed.rel.nugget = NULL,
start.cov.pars,
method = "BFGS",
low.rank = FALSE,
knots = NULL,
messages = TRUE,
plot.correlogram = TRUE
)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
units.m an object of class formula indicating the multiplicative offset for the mean of
the Poisson model; if not specified this is then internally set as 1.
coords an object of class formula indicating the geographic coordinates.
data a data frame containing the variables in the model.
ID.coords vector of ID values for the unique set of spatial coordinates obtained from
create.ID.coords. These must be provided if, for example, spatial random
effects are defined at location-level but some of the covariates are at individ-
ual level. Warning: the spatial coordinates must all be distinct otherwise see
jitterDupCoords. Default is NULL.
par0 parameters of the importance sampling distribution: these should be given in
the following order c(beta,sigma2,phi,tau2), where beta are the regression
coefficients, sigma2 is the variance of the Gaussian process, phi is the scale
parameter of the spatial correlation and tau2 is the variance of the nugget effect
(if included in the model).
control.mcmc output from control.mcmc.MCML.
kappa fixed value for the shape parameter of the Matern covariance function.
fixed.rel.nugget
fixed value for the relative variance of the nugget effect; fixed.rel.nugget=NULL
if this should be included in the estimation. Default is fixed.rel.nugget=NULL.
start.cov.pars a vector of length two with elements corresponding to the starting values of phi
and the relative variance of the nugget effect nu2, respectively, that are used in
the optimization algorithm. If nu2 is fixed through fixed.rel.nugget, then
start.cov.pars represents the starting value for phi only.
method method of optimization. If method="BFGS" then the maxBFGS function is used;
otherwise method="nlminb" to use the nlminb function. Default is method="BFGS".
low.rank logical; if low.rank=TRUE a low-rank approximation of the Gaussian spatial
process is used when fitting the model. Default is low.rank=FALSE.
knots if low.rank=TRUE, knots is a matrix of spatial knots that are used in the low-
rank approximation. Default is knots=NULL.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
plot.correlogram
logical; if plot.correlogram=TRUE the autocorrelation plot of the samples of
the random effect is displayed after completion of conditional simulation. De-
fault is plot.correlogram=TRUE.
Details
This function performs parameter estimation for a geostatistical Poisson model with log link func-
tion. Conditionally on a zero-mean stationary Gaussian process S(x) and mutually independent
zero-mean Gaussian variables Z with variance tau2, the observations y are generated from a Pois-
son distribution with mean mλ, where m is an offset defined through the argument units.m. A
canonical log link is used, thus the linear predictor assumes the form
log(λ) = d′β + S(x) + Z,
where d is a vector of covariates with associated regression coefficients β. The Gaussian process
S(x) has isotropic Matern covariance function (see matern) with variance sigma2, scale parameter
phi and shape parameter kappa. In the poisson.log.MCML function, the shape parameter is treated
as fixed. The relative variance of the nugget effect, nu2=tau2/sigma2, can also be fixed through
the argument fixed.rel.nugget; if fixed.rel.nugget=NULL, then the relative variance of the
nugget effect is also included in the estimation.
Monte Carlo Maximum likelihood. The Monte Carlo maximum likelihood method uses condi-
tional simulation from the distribution of the random effect T(x) = d(x)′β + S(x) + Z given
the data y, in order to approximate the high-dimensional intractable integral given by the likeli-
hood function. The resulting approximation of the likelihood is then maximized by a numerical
optimization algorithm which uses analytic expressions for computation of the gradient vector and
Hessian matrix. The functions used for numerical optimization are maxBFGS (method="BFGS"),
from the maxLik package, and nlminb (method="nlminb").
Low-rank approximation. In the case of very large spatial data-sets, a low-rank approximation
of the Gaussian spatial process S(x) might be computationally beneficial. Let (x1, . . . , xm) and
(t1, . . . , tm) denote the set of sampling locations and a grid of spatial knots covering the area of
interest, respectively. Then S(x) is approximated as Σ_{i=1}^m K(||x − ti||; φ, κ)Ui, where Ui are
zero-mean mutually independent Gaussian variables with variance sigma2 and K(·; φ, κ) is the
isotropic Matern kernel (see matern.kernel). Since the resulting approximation is no longer a
stationary process (but only approximately), the parameter sigma2 is then multiplied by a factor
constant.sigma2 so as to obtain a value that is closer to the actual variance of S(x).
Value
An object of class "PrevMap". The function summary.PrevMap is used to print a summary of the
fitted model. The object is a list with the following components:
estimate: estimates of the model parameters; use the function coef.PrevMap to obtain estimates
of covariance parameters on the original scale.
covariance: covariance matrix of the MCML estimates.
log.lik: maximum value of the log-likelihood.
y: observations.
units.m: offset.
D: matrix of covariates.
ID.coords: set of ID values defined through the argument ID.coords.
coords: matrix of the observed sampling locations.
method: method of optimization used.
kappa: fixed value of the shape parameter of the Matern function.
knots: matrix of the spatial knots used in the low-rank approximation.
const.sigma2: adjustment factor for sigma2 in the low-rank approximation.
h: vector of the values of the tuning parameter at each iteration of the Langevin-Hastings MCMC
algorithm; see Laplace.sampling, or Laplace.sampling.lr if a low-rank approximation is used.
samples: matrix of the random effects samples from the importance sampling distribution used to
approximate the likelihood function.
fixed.rel.nugget: fixed value for the relative variance of the nugget effect.
call: the matched call.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2019). Model-based Geostatistics for Global Public Health. CRC/Chapman
& Hall.
<NAME>., <NAME>. (2017). PrevMap: an R package for prevalence mapping. Journal of Statis-
tical Software. 78(8), 1-29. doi: 10.18637/jss.v078.i08
<NAME>. (2004). Monte carlo maximum likelihood in model-based geostatistics. Journal
of Computational and Graphical Statistics 13, 702-718.
<NAME>. (1998). A process-convolution approach to modeling temperatures in the North Atlantic
Ocean. Environmental and Ecological Statistics 5, 173-190.
See Also
Laplace.sampling, Laplace.sampling.lr, summary.PrevMap, coef.PrevMap, matern, matern.kernel,
control.mcmc.MCML.
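Examples
An illustrative sketch, not from the package manual: df is a hypothetical data frame with counts y, offset m, covariate x and coordinates long/lat; par0 follows the order c(beta, sigma2, phi, tau2) described above, and all numeric values are placeholders.
cm <- control.mcmc.MCML(n.sim = 5000, burnin = 1000, thin = 4)
fit <- poisson.log.MCML(y ~ x, units.m = ~ m, coords = ~ long + lat,
                        data = df, par0 = c(0, 1, 1, 0.2, 0.1),
                        control.mcmc = cm, kappa = 0.5,
                        start.cov.pars = c(0.2, 0.1))  # phi and nu2
summary(fit)  # summary.PrevMap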
set.par.ps Define the model coefficients of a geostatistical linear model with pref-
erentially sampled locations
Description
set.par.ps defines the model coefficients of a geostatistical linear model with preferentially sam-
pled locations. The output of this function can be used to: 1) define the parameters of the impor-
tance sampling distribution in lm.ps.MCML; 2) the starting values of the optimization algorithm in
lm.ps.MCML.
Usage
set.par.ps(p = 1, q = 1, intensity, response, preferentiality.par)
Arguments
p number of covariates used in the response variable model, including the inter-
cept. Default is p=1.
q number of covariates used in the log-Gaussian Cox process model, including the
intercept. Default is q=1.
intensity a vector of parameters of the log-Gaussian Cox process model. These must be
provided in the following order: regression coefficients of the explanatory vari-
ables; variance and scale of the spatial correlation for the isotropic Gaussian pro-
cess. In the case of a model with a mix of preferentially and non-preferentially
sampled locations, the order of the regression coefficients should be the follow-
ing: regression coefficients for the linear predictor with preferential sampling;
regression coefficients for the linear predictor with non-preferential samples.
response a vector of parameters of the response variable model. These must be provided
in the following order: regression coefficients of the explanatory variables; vari-
ance and scale of the spatial correlation for the isotropic Gaussian process; and
variance of the nugget effect.
preferentiality.par
value of the preferentiality parameter.
Value
a list of coefficients of class coef.PrevMap.ps.
Author(s)
<NAME> <<EMAIL>>
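Examples
An illustrative call (all numeric values are placeholders): two covariates (intercept plus one explanatory variable) in the response model and an intercept-only log-Gaussian Cox process model.
par0 <- set.par.ps(p = 2, q = 1,
                   intensity = c(3, 1, 0.1),          # intercept, sigma2, phi of S1
                   response = c(0, 1, 1, 0.2, 0.05),  # beta, sigma2, phi, tau2
                   preferentiality.par = 0.5)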
shape.matern Profile likelihood for the shape parameter of the Matern covariance
function
Description
This function plots the profile likelihood for the shape parameter of the Matern covariance function
used in the linear Gaussian model. It also computes confidence intervals of coverage coverage
by interpolating the profile likelihood with a spline and using the asymptotic distribution of a chi-
squared with one degree of freedom.
Usage
shape.matern(
formula,
coords,
data,
set.kappa,
fixed.rel.nugget = NULL,
start.par,
coverage = NULL,
plot.profile = TRUE,
messages = TRUE
)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
coords an object of class formula indicating the geographic coordinates.
data a data frame containing the variables in the model.
set.kappa a vector indicating the set values for evaluation of the profile likelihood.
fixed.rel.nugget
a value for the relative variance nu2 of the nugget effect, that is then treated as
fixed. Default is NULL.
start.par starting values for the scale parameter phi and the relative variance of the nugget
effect nu2; if fixed.rel.nugget is provided, then a starting value for phi only
should be provided.
coverage a value between 0 and 1 indicating the coverage of the confidence interval
based on the interpolated profile likelihood for the shape parameter. Default
is coverage=NULL and no confidence interval is then computed.
plot.profile logical; if TRUE the computed profile-likelihood is plotted together with the in-
terpolating spline.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
Value
The function returns an object of class ’shape.matern’ that is a list with the following components
set.kappa set of values of the shape parameter used to evaluate the profile-likelihood.
val.kappa values of the profile likelihood.
If a value for coverage is specified, the list also contains lower, upper and kappa.hat that corre-
spond to the lower and upper limits of the confidence interval, and the maximum likelihood estimate
for the shape parameter, respectively.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
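Examples
An illustrative sketch, not from the manual: profile the likelihood for kappa over a small grid, using a hypothetical data frame df with outcome y and coordinates long/lat.
sm <- shape.matern(y ~ 1, coords = ~ long + lat, data = df,
                   set.kappa = seq(0.2, 2, length = 10),
                   start.par = c(0.2, 0.1),  # starting values for phi and nu2
                   coverage = 0.95)
plot(sm)  # plot.shape.matern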
spat.corr.diagnostic Diagnostics for residual spatial correlation
Description
This function performs two variogram-based tests for residual spatial correlation in real-valued and
count (Binomial and Poisson) data.
Usage
spat.corr.diagnostic(
formula,
units.m = NULL,
coords,
data,
likelihood,
ID.coords = NULL,
n.sim = 200,
nAGQ = 1,
uvec = NULL,
plot.results = TRUE,
lse.variogram = FALSE,
kappa = 0.5,
which.test = "both"
)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
units.m vector of binomial denominators, or offset if the Poisson model is used.
coords an object of class formula indicating the geographic coordinates.
data an object of class "data.frame" containing the data.
likelihood a character that can be set to "Gaussian","Binomial" or "Poisson"
ID.coords vector of ID values for the unique set of spatial coordinates obtained from
create.ID.coords. These must be provided if, for example, spatial random
effects are defined at household level but some of the covariates are at individ-
ual level. Warning: the household coordinates must all be distinct otherwise
see jitterDupCoords. Default is NULL.
n.sim number of simulations used to perform the selected test(s) for spatial correlation.
nAGQ integer scalar (passed to glmer) - the number of points per axis for evaluating
the adaptive Gauss-Hermite approximation to the log-likelihood. Defaults to
1, corresponding to the Laplace approximation. Values greater than 1 produce
greater accuracy in the evaluation of the log-likelihood at the expense of speed.
A value of zero uses a faster but less exact form of parameter estimation for
GLMMs by optimizing the random effects and the fixed-effects coefficients in
the penalized iteratively reweighted least squares step.
uvec a vector with values used to define the variogram binning. If uvec=NULL, then
uvec is set to seq(MIN_DIST,(MAX_DIST-MIN_DIST)/2,length=15), where
MIN_DIST and MAX_DIST are the minimum and maximum observed distances.
plot.results if plot.results=TRUE, a plot is returned showing the results for the selected
test(s) for spatial correlation. By default plot.results=TRUE.
lse.variogram if lse.variogram=TRUE, a weighted least square fit of a Matern function (with
fixed kappa) to the empirical variogram is performed. If plot.results=TRUE
and lse.variogram=TRUE, the fitted weighted least square fit is displayed as a
dashed line in the returned plot.
kappa smoothness parameter of the Matern function for the Gaussian process to approx-
imate. The default is kappa=0.5.
which.test a character specifying which test for residual spatial correlation is to be per-
formed: "variogram", "test statistic" or "both". The default is which.test="both".
See ’Details’.
Details
The function first fits a generalized linear mixed model for an outcome Yi which, condi-
tionally on i.i.d. random effects Zi, are mutually independent GLMs with linear predictor
g⁻¹(ηi) = di′β + Zi,
where di is a vector of covariates which are specified through formula. Finally, the Zi are assumed
to be zero-mean Gaussian variables with variance σ².
Variogram-based graphical diagnostic
This graphical diagnostic is performed by setting which.test="both" or which.test="variogram".
The output are 95% confidence intervals (see below lower.lim and upper.lim) that are generated
under the assumption of spatial independence through the following steps
1. Fit a generalized linear mixed model as indicated by the equation above.
2. Obtain the mode, say Ẑi, of the Zi conditioned on the data Yi.
3. Compute the empirical variogram using Ẑi.
4. Permute the locations specified in coords, n.sim times, while holding the Ẑi fixed.
5. For each of the permuted data-sets compute the empirical variogram based on the Ẑi.
6. From the n.sim variograms obtained in the previous step, compute the 95% confidence intervals.
If the observed variogram (obs.variogram below), based on the un-permuted Ẑi, falls within the
95% confidence intervals, then this is evidence of absence of residual spatial correlation; if, instead,
that partly falls outside the 95% confidence intervals, then this is evidence of residual spatial
correlation.
Test for spatial independence
This diagnostic test is performed if which.test="both" or which.test="test statistic". Let
v̂(B) denote the empirical variogram based on Ẑi for the distance bin B. The test statistic used for
testing residual spatial correlation is
T = Σ_B N(B){v̂(B) − σ̂²},
where N(B) is the number of pairs of data-points falling within the distance bin B (n.bins below)
and σ̂² is the estimate of σ².
To obtain the distribution of the test statistic T under the null hypothesis of spatial independence,
we use the simulated empirical variograms as obtained in step 5 of the iterative procedure described
in "Variogram-based graphical diagnostic". The p-value for the test of spatial independence is then
computed by taking the proportion of simulated values for T under the null hypothesis that are
larger than the value of T based on the original (un-permuted) Ẑi.
Value
An object of class "PrevMap.diagnostic" which is a list containing the following components:
obs.variogram: a vector of length length(uvec)-1 containing the values of the variogram for
each of the distance bins defined through uvec.
distance.bins: a vector of length length(uvec)-1 containing the average distance within each
of the distance bins defined through uvec.
n.bins: a vector of length length(uvec)-1 containing the number of pairs of data-points falling
within each distance bin.
lower.lim: (available only if which.test="both" or which.test="variogram") a vector of
length length(uvec)-1 containing the lower limits of the 95% confidence intervals generated under
the assumption of absence of spatial correlation at each of the distance bins defined through uvec.
upper.lim: (available only if which.test="both" or which.test="variogram") a vector of
length length(uvec)-1 containing the upper limits of the 95% confidence intervals generated under
the assumption of absence of spatial correlation at each of the distance bins defined through uvec.
mode.rand.effects: the predictive mode of the random effects from the fitted non-spatial gener-
alized linear mixed model.
p.value: (available only if which.test="both" or which.test="test statistic") p-value of
the test for residual spatial correlation.
lse.variogram: (available only if lse.variogram=TRUE) a vector of length length(uvec)-1 con-
taining the values of the estimated Matern variogram via a weighted least square fit.
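Examples
An illustrative sketch, not from the manual: a variogram-based check for residual spatial correlation in binomial data, with a hypothetical data frame df holding counts y, denominators m, covariate x and coordinates long/lat. Whether units.m expects a vector (as described above) or a formula should be checked against the installed version.
diag.check <- spat.corr.diagnostic(y ~ x, units.m = df$m,  # described above as a vector
                                   coords = ~ long + lat, data = df,
                                   likelihood = "Binomial",
                                   n.sim = 200, which.test = "both")
diag.check$p.value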
spatial.pred.binomial.Bayes
Bayesian spatial prediction for the binomial logistic and binary probit
models
Description
This function performs Bayesian spatial prediction for the binomial logistic and binary probit mod-
els.
Usage
spatial.pred.binomial.Bayes(
object,
grid.pred,
predictors = NULL,
type = "marginal",
scale.predictions = "prevalence",
quantiles = c(0.025, 0.975),
standard.errors = FALSE,
thresholds = NULL,
scale.thresholds = NULL,
messages = TRUE
)
Arguments
object an object of class "Bayes.PrevMap" obtained as result of a call to binomial.logistic.Bayes
or binary.probit.Bayes.
grid.pred a matrix of prediction locations.
predictors a data frame of the values of the explanatory variables at each of the locations
in grid.pred; each column correspond to a variable and each row to a location.
Warning: the names of the columns in the data frame must match those in the
data used to fit the model. Default is predictors=NULL for models with only an
intercept.
type a character indicating the type of spatial predictions: type="marginal" for
marginal predictions or type="joint" for joint predictions. Default is type="marginal".
In the case of a low-rank approximation only joint predictions are available.
scale.predictions
a character vector of maximum length 3, indicating the required scale on which
spatial prediction is carried out: "logit", "prevalence", "odds" and "probit". De-
fault is scale.predictions="prevalence".
quantiles a vector of quantiles used to summarise the spatial predictions.
standard.errors
logical; if standard.errors=TRUE, then standard errors for each scale.predictions
are returned. Default is standard.errors=FALSE.
thresholds a vector of exceedance thresholds; default is NULL.
scale.thresholds
a character value ("logit", "prevalence", "odds" or "probit") indicating the scale
on which exceedance thresholds are provided.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
Value
A "pred.PrevMap" object list with the following components: logit; prevalence; odds; probit;exceedance.prob,
corresponding to a matrix of the exceedance probabilities where each column corresponds to a spec-
ified value in thresholds; samples, corresponding to a matrix of the posterior samples at each
prediction locations for the linear predictor; grid.pred prediction locations. Each of the three
components logit, prevalence, odds and probit is also a list with the following components:
predictions: a vector of the predictive mean for the associated quantity (logit, odds or prevalence).
standard.errors: a vector of prediction standard errors (if standard.errors=TRUE).
quantiles: a matrix of quantiles of the resulting predictions with each column corresponding to a
quantile specified through the argument quantiles.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
spatial.pred.binomial.MCML
Spatial predictions for the binomial logistic model using plug-in of
MCML estimates
Description
This function performs spatial prediction, fixing the model parameters at the Monte Carlo maximum
likelihood estimates of a geostatistical binomial logistic model.
Usage
spatial.pred.binomial.MCML(
object,
grid.pred,
predictors = NULL,
control.mcmc,
type = "marginal",
scale.predictions = c("logit", "prevalence", "odds"),
quantiles = c(0.025, 0.975),
standard.errors = FALSE,
thresholds = NULL,
scale.thresholds = NULL,
plot.correlogram = FALSE,
messages = TRUE
)
Arguments
object an object of class "PrevMap" obtained as result of a call to binomial.logistic.MCML.
grid.pred a matrix of prediction locations.
predictors a data frame of the values of the explanatory variables at each of the locations
in grid.pred; each column correspond to a variable and each row to a location.
Warning: the names of the columns in the data frame must match those in the
data used to fit the model. Default is predictors=NULL for models with only an
intercept.
control.mcmc output from control.mcmc.MCML.
type a character indicating the type of spatial predictions: type="marginal" for
marginal predictions or type="joint" for joint predictions. Default is type="marginal".
In the case of a low-rank approximation only joint predictions are available.
scale.predictions
a character vector of maximum length 3, indicating the required scale on which
spatial prediction is carried out: "logit", "prevalence" and "odds". Default is
scale.predictions=c("logit","prevalence","odds").
quantiles a vector of quantiles used to summarise the spatial predictions.
standard.errors
logical; if standard.errors=TRUE, then standard errors for each scale.predictions
are returned. Default is standard.errors=FALSE.
thresholds a vector of exceedance thresholds; default is thresholds=NULL.
scale.thresholds
a character value indicating the scale on which exceedance thresholds are pro-
vided; "logit", "prevalence" or "odds". Default is scale.thresholds=NULL.
plot.correlogram
logical; if plot.correlogram=TRUE the autocorrelation plot of the conditional
simulations is displayed.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
Value
A "pred.PrevMap" object list with the following components: logit; prevalence; odds; exceedance.prob,
corresponding to a matrix of the exceedance probabilities where each column corresponds to a spec-
ified value in thresholds; samples, corresponding to a matrix of the predictive samples at each
prediction location for the linear predictor of the binomial logistic model (if scale.predictions="logit"
and neither the SPDE nor the low-rank approximations have been used, this component is NULL);
grid.pred prediction locations. Each of the three components logit, prevalence and odds is
also a list with the following components:
predictions: a vector of the predictive mean for the associated quantity (logit, odds or prevalence).
standard.errors: a vector of prediction standard errors (if standard.errors=TRUE).
quantiles: a matrix of quantiles of the resulting predictions with each column corresponding to a
quantile specified through the argument quantiles.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
spatial.pred.linear.Bayes
Bayesian spatial predictions for the geostatistical Linear Gaussian
model
Description
This function performs Bayesian prediction for a geostatistical linear Gaussian model.
Usage
spatial.pred.linear.Bayes(
object,
grid.pred,
predictors = NULL,
type = "marginal",
scale.predictions = c("logit", "prevalence", "odds"),
quantiles = c(0.025, 0.975),
standard.errors = FALSE,
thresholds = NULL,
scale.thresholds = NULL,
messages = TRUE
)
Arguments
object an object of class "Bayes.PrevMap" obtained as result of a call to linear.model.Bayes.
grid.pred a matrix of prediction locations.
predictors a data frame of the values of the explanatory variables at each of the locations
in grid.pred; each column correspond to a variable and each row to a location.
Warning: the names of the columns in the data frame must match those in the
data used to fit the model. Default is predictors=NULL for models with only an
intercept.
type a character indicating the type of spatial predictions: type="marginal" for
marginal predictions or type="joint" for joint predictions. Default is type="marginal".
In the case of a low-rank approximation only joint predictions are available.
scale.predictions
a character vector of maximum length 3, indicating the required scale on which
spatial prediction is carried out: "logit", "prevalence" and "odds". Default is
scale.predictions=c("logit","prevalence","odds").
quantiles a vector of quantiles used to summarise the spatial predictions.
standard.errors
logical; if standard.errors=TRUE, then standard errors for each scale.predictions
are returned. Default is standard.errors=FALSE.
thresholds a vector of exceedance thresholds; default is thresholds=NULL.
scale.thresholds
a character value indicating the scale on which exceedance thresholds are pro-
vided: "logit", "prevalence" or "odds". Default is scale.thresholds=NULL.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
Value
A "pred.PrevMap" object list with the following components: logit; prevalence; odds; exceedance.prob,
corresponding to a matrix of the exceedance probabilities where each column corresponds to a spec-
ified value in thresholds; grid.pred prediction locations. Each of the three components logit,
prevalence and odds is also a list with the following components:
predictions: a vector of the predictive mean for the associated quantity (logit, odds or prevalence).
standard.errors: a vector of prediction standard errors (if standard.errors=TRUE).
quantiles: a matrix of quantiles of the resulting predictions with each column corresponding to a
quantile specified through the argument quantiles.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
spatial.pred.linear.MLE
Spatial predictions for the geostatistical Linear Gaussian model using
plug-in of ML estimates
Description
This function performs spatial prediction, fixing the model parameters at the maximum likelihood
estimates of a linear geostatistical model.
Usage
spatial.pred.linear.MLE(
object,
grid.pred,
predictors = NULL,
predictors.samples = NULL,
type = "marginal",
scale.predictions = c("logit", "prevalence", "odds"),
quantiles = c(0.025, 0.975),
n.sim.prev = 0,
standard.errors = FALSE,
thresholds = NULL,
scale.thresholds = NULL,
messages = TRUE,
include.nugget = FALSE
)
Arguments
object an object of class "PrevMap" obtained as result of a call to linear.model.MLE.
grid.pred a matrix of prediction locations.
predictors a data frame of the values of the explanatory variables at each of the locations
in grid.pred; each column correspond to a variable and each row to a location.
Warning: the names of the columns in the data frame must match those in the
data used to fit the model. Default is predictors=NULL for models with only an
intercept.
predictors.samples
a list of data frame objects. This argument is used to average over repeated simu-
lations of the predictor variables in order to obtain an "average" map over the dis-
tribution of the explanatory variables in the model. Each component of the list is
a simulation. The number of simulations passed through predictors.samples
must be the same as n.sim.prev. NOTE: This argument can currently be
used only for a linear regression model that does not use any approximation of
the spatial Gaussian process.
type a character indicating the type of spatial predictions: type="marginal" for
marginal predictions or type="joint" for joint predictions. Default is type="marginal".
In the case of a low-rank approximation only marginal predictions are available.
scale.predictions
a character vector of maximum length 3, indicating the required scale on which
spatial prediction is carried out: "logit", "prevalence" and "odds". Default is
scale.predictions=c("logit","prevalence","odds").
quantiles a vector of quantiles used to summarise the spatial predictions.
n.sim.prev number of simulations for non-linear predictive targets. Default is n.sim.prev=0.
standard.errors
logical; if standard.errors=TRUE, then standard errors for each scale.predictions
are returned. Default is standard.errors=FALSE.
thresholds a vector of exceedance thresholds; default is thresholds=NULL.
scale.thresholds
a character value indicating the scale on which exceedance thresholds are pro-
vided; "logit", "prevalence" or "odds". Default is scale.thresholds=NULL.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
include.nugget logical; if include.nugget=TRUE then the nugget effect is included in the pre-
dictions. This option is available only for fitted linear models with locations
having multiple observations. Default is include.nugget=FALSE.
Value
A "pred.PrevMap" object list with the following components: logit; prevalence; odds; exceedance.prob,
corresponding to a matrix of the exceedance probabilities where each column corresponds to a spec-
ified value in thresholds; grid.pred prediction locations; samples, corresponding to the predic-
tive samples of the linear predictor (only if any(scale.predictions=="prevalence")). Each of
the three components logit, prevalence and odds is also a list with the following components:
predictions: a vector of the predictive mean for the associated quantity (logit, odds or prevalence).
standard.errors: a vector of prediction standard errors (if standard.errors=TRUE).
quantiles: a matrix of quantiles of the resulting predictions with each column corresponding to a
quantile specified through the argument quantiles.
samples: If n.sim.prev > 0, the function returns n.sim.prev samples of the linear predictor at
each of the prediction locations.
Author(s)
<NAME> <<EMAIL>>
<NAME> <p.<EMAIL>>
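Example (an illustrative sketch, not taken from the package manual; it assumes that fit is the
output of a call to linear.model.MLE and that grid is a two-column matrix of prediction locations):

pred <- spatial.pred.linear.MLE(fit, grid.pred = grid,
                                scale.predictions = c("logit", "prevalence"),
                                n.sim.prev = 100, standard.errors = TRUE)
pred$prevalence$predictions      # predictive means on the prevalence scale
pred$prevalence$standard.errors  # prediction standard errors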
spatial.pred.lm.ps Spatial predictions for the geostatistical linear Gaussian model with
preferentially sampled locations
Description
This function performs spatial prediction, fixing the model parameters at the Monte Carlo maximum
likelihood estimates of a geostatistical linear model with preferentially sampled locations.
Usage
spatial.pred.lm.ps(
object,
grid.pred = NULL,
predictors = NULL,
predictors.intensity = NULL,
control.mcmc = NULL,
target = 3,
type = "marginal",
quantiles = NULL,
standard.errors = FALSE,
messages = TRUE,
return.samples = FALSE
)
Arguments
object an object of class "PrevMap.ps" obtained as result of a call to lm.ps.MCML.
grid.pred a matrix of prediction locations. Default is grid.pred=NULL, in which case the
grid used to approximate the intractable integral in the log-Gaussian Cox process
model is used for prediction.
predictors a data frame of the values of the explanatory variables at each of the locations
in grid.pred, for the response variable model; each column corresponds to a
variable and each row to a location. Warning: the names of the columns in
the data frame must match those in the data used to fit the model. Default is
predictors=NULL for models with only an intercept.
predictors.intensity
a data frame of the values of the explanatory variables at each of the locations in
grid.pred, for the log-Gaussian Cox process model; each column corresponds
to a variable and each row to a location. Warning: the names of the columns
in the data frame must match those in the data used to fit the model. Default is
predictors.intensity=NULL for models with only an intercept.
control.mcmc output from control.mcmc.MCML which defines the control parameters of the
Markov chain Monte Carlo algorithm.
target an integer indicating the predictive target: target=1 if the predictive target
is the linear predictor of the response; target=2 if the predictive target is the
sampling intensity of the preferentially sampled data; target=3 if both of the
above are the predictive targets. Default is target=3.
type a character indicating the type of spatial predictions for target=1: type="marginal"
for marginal predictions or type="joint" for joint predictions. Default is type="marginal".
Note that predictions for the sampling intensity (target=2) are always joint.
quantiles a vector of quantiles used to summarise the spatial predictions.
standard.errors
logical; if standard.errors=TRUE, then standard errors for each scale.predictions
are returned. Default is standard.errors=FALSE.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
return.samples logical; if return.samples=TRUE a matrix of the predictive samples for the
prediction target (as specified in target) are returned in the output.
Value
A "pred.PrevMap.ps" object list with the following components: response (if target=1 or target=3)
and intensity (if target=2 pr target=3). grid.pred prediction locations. Each of the compo-
nents intensity and response is a list with the following components:
predictions: a vector of the predictive mean for the corresponding target.
standard.errors: a vector of prediction standard errors (if standard.errors=TRUE).
quantiles: a matrix of quantiles of the resulting predictions with each column corresponding to a
quantile specified through the argument quantiles.
samples: a matrix corresponding to the predictive samples of the predictive target (only if return.samples=TRUE),
with each row corresponding to a samples and column to a prediction location. In the case of a
model with a mix of preferential and non-preferential data, if target=1 or target=3, each of the
above components will be a list with two components, namely preferential and non.preferential,
associated with response.
Author(s)
<NAME> <<EMAIL>>
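Example (an illustrative sketch, not taken from the package manual; it assumes that fit is the
output of a call to lm.ps.MCML):

mcmc <- control.mcmc.MCML(n.sim = 11000, burnin = 1000, thin = 10)
pred <- spatial.pred.lm.ps(fit, control.mcmc = mcmc, target = 3,
                           quantiles = c(0.025, 0.975), standard.errors = TRUE)
pred$response$predictions    # predictive means of the linear predictor
pred$intensity$predictions   # predictive means of the sampling intensity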
spatial.pred.poisson.MCML
Spatial predictions for the Poisson model with log link function, using
plug-in of MCML estimates
Description
This function performs spatial prediction, fixing the model parameters at the Monte Carlo maximum
likelihood estimates of a geostatistical Poisson model with log link function.
Usage
spatial.pred.poisson.MCML(
object,
grid.pred,
predictors = NULL,
control.mcmc,
type = "marginal",
scale.predictions = c("log", "exponential"),
quantiles = c(0.025, 0.975),
standard.errors = FALSE,
thresholds = NULL,
scale.thresholds = NULL,
plot.correlogram = FALSE,
messages = TRUE
)
Arguments
object an object of class "PrevMap" obtained as result of a call to poisson.log.MCML.
grid.pred a matrix of prediction locations.
predictors a data frame of the values of the explanatory variables at each of the locations
in grid.pred; each column corresponds to a variable and each row to a location.
Warning: the names of the columns in the data frame must match those in the
data used to fit the model. Default is predictors=NULL for models with only an
intercept.
control.mcmc output from control.mcmc.MCML.
type a character indicating the type of spatial predictions: type="marginal" for
marginal predictions or type="joint" for joint predictions. Default is type="marginal".
In the case of a low-rank approximation only joint predictions are available.
scale.predictions
a character vector of maximum length 2, indicating the required scale on which
spatial prediction is carried out: "log" and "exponential". Default is scale.predictions=c("log","exponential").
quantiles a vector of quantiles used to summarise the spatial predictions.
standard.errors
logical; if standard.errors=TRUE, then standard errors for each scale.predictions
are returned. Default is standard.errors=FALSE.
thresholds a vector of exceedance thresholds; default is thresholds=NULL.
scale.thresholds
a character value indicating the scale on which exceedance thresholds are pro-
vided; "log" or "exponential". Default is scale.thresholds=NULL.
plot.correlogram
logical; if plot.correlogram=TRUE the autocorrelation plot of the conditional
simulations is displayed.
messages logical; if messages=TRUE then status messages are printed on the screen (or
output device) while the function is running. Default is messages=TRUE.
Value
A "pred.PrevMap" object list with the following components: log; exponential; exceedance.prob,
corresponding to a matrix of the exceedance probabilities where each column corresponds to a spec-
ified value in thresholds; samples, corresponding to a matrix of the predictive samples at each
prediction locations for the linear predictor of the Poisson model (if scale.predictions="log"
this component is NULL); grid.pred prediction locations. Each of the three components log and
exponential is also a list with the following components:
predictions: a vector of the predictive mean for the associated quantity (log or exponential).
standard.errors: a vector of prediction standard errors (if standard.errors=TRUE).
quantiles: a matrix of quantiles of the resulting predictions with each column corresponding to a
quantile specified through the argument quantiles.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
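Example (an illustrative sketch, not taken from the package manual; it assumes that fit is the
output of a call to poisson.log.MCML, mcmc is the output of control.mcmc.MCML and grid is a
matrix of prediction locations):

pred <- spatial.pred.poisson.MCML(fit, grid.pred = grid, control.mcmc = mcmc,
                                  scale.predictions = "exponential",
                                  thresholds = 10, scale.thresholds = "exponential")
pred$exponential$predictions  # predictive means on the natural scale
pred$exceedance.prob          # probability that the target exceeds 10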
summary.Bayes.PrevMap Summarizing Bayesian model fits
Description
summary method for the class "Bayes.PrevMap" that computes the posterior mean, median, mode
and high posterior density intervals using samples from Bayesian fits.
Usage
## S3 method for class 'Bayes.PrevMap'
summary(object, hpd.coverage = 0.95, ...)
Arguments
object an object of class "Bayes.PrevMap" obtained as result of a call to binomial.logistic.Bayes
or linear.model.Bayes.
hpd.coverage value of the coverage of the high posterior density intervals; default is 0.95.
... further arguments passed to or from other methods.
Value
A list with the following values
linear: logical value that is TRUE if a linear model was fitted and FALSE otherwise.
binary: logical value that is TRUE if a binary model was fitted and FALSE otherwise.
probit: logical value that is TRUE if a binary model with probit link function was fitted and FALSE
if with logistic link function.
ck: logical value that is TRUE if a low-rank approximation was fitted and FALSE otherwise.
beta: matrix of the posterior summaries for the regression coefficients.
sigma2: vector of the posterior summaries for sigma2.
phi: vector of the posterior summaries for phi.
tau2: vector of the posterior summaries for tau2.
call: matched call.
kappa: fixed value of the shape parameter of the Matern covariance function.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
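Example (an illustrative sketch; fit.bayes stands for the output of binomial.logistic.Bayes or
linear.model.Bayes):

summary(fit.bayes, hpd.coverage = 0.95)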
summary.PrevMap Summarizing likelihood-based model fits
Description
summary method for the class "PrevMap" that computes the standard errors and p-values of likelihood-
based model fits.
Usage
## S3 method for class 'PrevMap'
summary(object, log.cov.pars = TRUE, ...)
Arguments
object an object of class "PrevMap" obtained as result of a call to binomial.logistic.MCML
or linear.model.MLE.
log.cov.pars logical; if log.cov.pars=TRUE the estimates of the covariance parameters are
given on the log-scale. Note that standard errors are also adjusted accordingly.
Default is log.cov.pars=TRUE.
... further arguments passed to or from other methods.
Value
A list with the following components
linear: logical value; linear=TRUE if a linear model was fitted and linear=FALSE otherwise.
poisson: logical value; poisson=TRUE if a Poisson model was fitted and poisson=FALSE other-
wise.
ck: logical value; ck=TRUE if a low-rank approximation was used and ck=FALSE otherwise.
spde: logical value; spde=TRUE if the SPDE approximation was used and spde=FALSE otherwise.
coefficients: matrix of the estimates, standard errors and p-values of the estimates of the regres-
sion coefficients.
cov.pars: matrix of the estimates and standard errors of the covariance parameters.
log.lik: value of likelihood function at the maximum likelihood estimates.
kappa: fixed value of the shape parameter of the Matern covariance function.
kappa.t: fixed value of the shape parameter of the Matern covariance function for the temporal
covariance matrix, if a spatio-temporal model has been fitted.
fixed.rel.nugget: fixed value for the relative variance of the nugget effect.
call: matched call.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
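Example (an illustrative sketch; fit stands for the output of binomial.logistic.MCML or
linear.model.MLE):

summary(fit, log.cov.pars = FALSE)  # covariance parameters on their natural scale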
summary.PrevMap.ps Summarizing fits of geostatistical linear models with preferentially
sampled locations
Description
summary method for the class "PrevMap.ps" that computes the standard errors and p-values of
likelihood-based model fits.
Usage
## S3 method for class 'PrevMap.ps'
summary(object, log.cov.pars = TRUE, ...)
Arguments
object an object of class "PrevMap.ps" obtained as result of a call to lm.ps.MCML.
log.cov.pars logical; if log.cov.pars=TRUE the estimates of the covariance parameters are
given on the log-scale. Note that standard errors are also adjusted accordingly.
Default is log.cov.pars=TRUE.
... further arguments passed to or from other methods.
Value
A list with the following components
coefficients.response: matrix of the estimates, standard errors and p-values of the estimates of
the regression coefficients for the response variable.
coefficients.intensity: matrix of the estimates, standard errors and p-values of the estimates
of the regression coefficients for the sampling intensity of the log-Gaussian process.
cov.pars.response: matrix of the estimates and standard errors of the covariance parameters for
the Gaussian process associated with the response.
cov.pars.intensity: matrix of the estimates and standard errors of the covariance parameters for
the Gaussian process associated with the log-Gaussian process.
log.lik: value of likelihood function at the maximum likelihood estimates.
kappa.response: fixed value of the shape parameter of the Matern covariance function.
call: matched call.
Author(s)
<NAME> <<EMAIL>>
trace.plot Trace-plots for posterior samples
Description
Displays the trace-plots for the posterior samples of the model parameters and spatial random ef-
fects.
Usage
trace.plot(object, param, component.beta = NULL, component.S = NULL)
Arguments
object an object of class ’Bayes.PrevMap’.
param a character indicating for which component of the model the density plot is
required: param="beta" for the regression coefficients; param="sigma2" for
the variance of the spatial random effect; param="phi" for the scale parameter
of the Matern correlation function; param="tau2" for the variance of the nugget
effect; param="S" for the spatial random effect.
component.beta if param="beta", component.beta is a numeric value indicating the component
of the regression coefficients; default is NULL.
component.S if param="S", component.S can be a numeric value indicating the component
of the spatial random effect. Default is NULL.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
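Example (an illustrative sketch; fit.bayes stands for the output of a Bayesian model fit):

trace.plot(fit.bayes, param = "sigma2")
trace.plot(fit.bayes, param = "beta", component.beta = 1)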
trace.plot.MCML Trace-plots of the importance sampling distribution samples from the
MCML method
Description
Trace-plots of the MCMC samples from the importance sampling distribution used in binomial.logistic.MCML.
Usage
trace.plot.MCML(object, component = NULL, ...)
Arguments
object an object of class "PrevMap" obtained as result of a call to binomial.logistic.MCML.
component a positive integer indicating the number of the random effect component for
which a trace-plot is required. If component=NULL, then a component is selected
at random. Default is component=NULL.
... further arguments passed to plot.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
trend.plot Plot of trends
Description
This function produces a plot of the variable of interest against each of the two geographical coor-
dinates.
Usage
trend.plot(data, var.name, coords, ...)
Arguments
data an object of class "data.frame" containing the data.
var.name a formula object indicating the variable to display.
coords a formula object indicating the geographical coordinates.
... additional arguments to be passed to plot.
variog.diagnostic.glgm
Variogram-based validation for generalized linear geostatistical
model fits (Binomial and Poisson)
Description
This function performs model validation for generalized linear geostatistical models (Binomial and
Poisson) using Monte Carlo methods based on the variogram.
Usage
variog.diagnostic.glgm(
object,
n.sim = 200,
uvec = NULL,
plot.results = TRUE,
which.test = "both"
)
Arguments
object an object of class "PrevMap" obtained as an output from binomial.logistic.MCML
or poisson.log.MCML.
n.sim integer indicating the number of simulations used for the variogram-based diag-
nostics. Default is n.sim=200.
uvec a vector with values used to define the variogram binning. If uvec=NULL, then
uvec is set to seq(MIN_DIST,(MAX_DIST-MIN_DIST)/2,length=15)
plot.results if plot.results=TRUE, a plot is returned showing the results for the selected
test(s) for spatial correlation. By default plot.results=TRUE.
which.test a character specifying which test for residual spatial correlation is to be per-
formed: "variogram", "test statistic" or "both". The default is which.test="both".
See ’Details.’
Details
The function takes as an input through the argument object a fitted generalized linear geostatistical
model for an outcome Y_i, with linear predictor
η_i = d_i'β + S(x_i) + Z_i
where d_i is a vector of covariates which are specified through formula, S(x_i) is a spatial Gaussian
process and the Z_i are assumed to be zero-mean Gaussian. The model validation is performed on
the adopted stationary and isotropic Matern covariance function used for S(x_i). More specifically,
the function allows the user to select either of the following validation procedures.
Variogram-based graphical validation
This graphical diagnostic is performed by setting which.test="both" or which.test="variogram".
The output are 95% confidence intervals (see lower.lim and upper.lim below) that are generated
under the assumption that the fitted model did generate the analysed data-set. This validation
procedure proceeds through the following steps.
1. Obtain the mean, say Ẑ_i, of the Z_i conditioned on the data Y_i and by setting S(x_i) = 0 in the
equation above.
2. Compute the empirical variogram using Ẑ_i.
3. Simulate n.sim data-sets under the fitted geostatistical model.
4. For each of the simulated data-sets, obtain Ẑ_i as in Step 1. Finally, compute the empirical
variogram based on the resulting Ẑ_i.
5. From the n.sim variograms obtained in the previous step, compute the 95% confidence intervals.
If the observed variogram (obs.variogram below), based on the Ẑ_i from Step 2, falls within the
95% confidence intervals, then there is no evidence against the fitted spatial correlation model; if,
instead, it partly falls outside the 95% confidence intervals, then there is evidence of residual spatial
correlation in the data.
Test for suitability of the adopted correlation function
This diagnostic test is performed if which.test="both" or which.test="test statistic". Let
v_E(B) and v_T(B) denote the empirical and theoretical variograms based on Ẑ_i for the distance bin
B. The test statistic used for testing residual spatial correlation is
T = Σ_B N(B) {v_E(B) − v_T(B)}
where N(B) is the number of pairs of data-points falling within the distance bin B (n.bins below).
To obtain the distribution of the test statistic T under the null hypothesis that the fitted model did
generate the analysed data-set, we use the simulated empirical variograms as obtained in step 5 of
the iterative procedure described in "Variogram-based graphical validation." The p-value for the test
of suitability of the fitted spatial correlation function is then computed by taking the proportion of
simulated values for T that are larger than the value of T based on the original Ẑ_i in Step 1.
Value
An object of class "PrevMap.diagnostic" which is a list containing the following components:
obs.variogram: a vector of length length(uvec)-1 containing the values of the variogram for
each of the distance bins defined through uvec.
distance.bins: a vector of length length(uvec)-1 containing the average distance within each
of the distance bins defined through uvec.
n.bins: a vector of length length(uvec)-1 containing the number of pairs of data-points falling
within each distance bin.
lower.lim: (available only if which.test="both" or which.test="variogram") a vector of
length length(uvec)-1 containing the lower limits of the 95% confidence intervals, generated under
the fitted model, at each of the distance bins defined through uvec.
upper.lim: (available only if which.test="both" or which.test="variogram") a vector of
length length(uvec)-1 containing the upper limits of the 95% confidence intervals, generated under
the fitted model, at each of the distance bins defined through uvec.
mode.rand.effects: the predictive mode of the random effects from the fitted non-spatial gener-
alized linear mixed model.
p.value: (available only if which.test="both" or which.test="test statistic") p-value of
the test for residual spatial correlation.
lse.variogram: (available only if lse.variogram=TRUE) a vector of length length(uvec)-1 con-
taining the values of the estimated Matern variogram via a weighted least square fit.
variog.diagnostic.lm Variogram-based validation for linear geostatistical model fits
Description
This function performs model validation for linear geostatistical model using Monte Carlo methods
based on the variogram.
Usage
variog.diagnostic.lm(
object,
n.sim = 1000,
uvec = NULL,
plot.results = TRUE,
range.fact = 1,
which.test = "both",
param.uncertainty = FALSE
)
Arguments
object an object of class "PrevMap" obtained as an output from linear.model.MLE.
n.sim integer indicating the number of simulations used for the variogram-based diag-
nostics. Default is n.sim=1000.
uvec a vector with values used to define the variogram binning. If uvec=NULL, then
uvec is set to seq(MIN_DIST,(MAX_DIST-MIN_DIST)/2,length=15)
plot.results if plot.results=TRUE, a plot is returned showing the results for the selected
test(s) for spatial correlation. By default plot.results=TRUE.
range.fact a value between 0 and 1 used to disregard all distance bins provided through
uvec that are larger than pr × range.fact, where pr is the practical range,
defined as the distance at which the fitted spatial correlation is no less than 0.05.
Default is range.fact=1
which.test a character specifying which test for residual spatial correlation is to be per-
formed: "variogram", "test statistic" or "both". The default is which.test="both".
See ’Details.’
param.uncertainty
a logical indicating whether uncertainty in the model parameters should be in-
corporated in the selected diagnostic tests. Default is param.uncertainty=FALSE.
See ’Details.’
Details
The function takes as an input through the argument object a fitted linear geostaistical model for
an outcome Yi , which is expressed as
Yi = d0i β + S(xi ) + Zi
where di is a vector of covariates which are specified through formula, S(xi ) is a spatial Gaussian
process and the Zi are assumed to be zero-mean Gaussian. The model validation is performed on
the adopted satationary and isotropic Matern covariance function used for S(xi ). More specifically,
the function allows the users to select either of the following validation procedures.
Variogram-based graphical validation
This graphical diagnostic is performed by setting which.test="both" or which.test="variogram".
The output are 95% confidence intervals (see lower.lim and upper.lim below) that are generated
under the assumption that the fitted model did generate the analysed data-set. This validation
procedure proceeds through the following steps.
1. Obtain the mean, say Ẑ_i, of the Z_i conditioned on the data Y_i.
2. Compute the empirical variogram using Ẑ_i.
3. Simulate n.sim data-sets under the fitted geostatistical model.
4. For each of the simulated data-sets, obtain Ẑ_i as in Step 1. Finally, compute the empirical
variogram based on the resulting Ẑ_i.
5. From the n.sim variograms obtained in the previous step, compute the 95% confidence intervals.
If the observed variogram (obs.variogram below), based on the Ẑ_i from Step 2, falls within the
95% confidence intervals, then there is no evidence against the fitted spatial correlation model; if,
instead, it partly falls outside the 95% confidence intervals, then there is evidence of residual spatial
correlation in the data.
Test for suitability of the adopted correlation function
This diagnostic test is performed if which.test="both" or which.test="test statistic". Let
v_E(B) and v_T(B) denote the empirical and theoretical variograms based on Ẑ_i for the distance bin
B. The test statistic used for testing residual spatial correlation is
T = Σ_B N(B) {v_E(B) − v_T(B)}
where N(B) is the number of pairs of data-points falling within the distance bin B (n.bins below).
To obtain the distribution of the test statistic T under the null hypothesis that the fitted model did
generate the analysed data-set, we use the simulated empirical variograms as obtained in step 5 of
the iterative procedure described in "Variogram-based graphical validation." The p-value for the test
of suitability of the fitted spatial correlation function is then computed by taking the proportion of
simulated values for T that are larger than the value of T based on the original Ẑ_i in Step 1.
Value
An object of class "PrevMap.diagnostic" which is a list containing the following components:
obs.variogram: a vector of length length(uvec)-1 containing the values of the variogram for
each of the distance bins defined through uvec.
distance.bins: a vector of length length(uvec)-1 containing the average distance within each
of the distance bins defined through uvec.
n.bins: a vector of length length(uvec)-1 containing the number of pairs of data-points falling
within each distance bin.
lower.lim: (available only if which.test="both" or which.test="variogram") a vector of
length length(uvec)-1 containing the lower limits of the 95% confidence intervals, generated under
the fitted model, at each of the distance bins defined through uvec.
upper.lim: (available only if which.test="both" or which.test="variogram") a vector of
length length(uvec)-1 containing the upper limits of the 95% confidence intervals, generated under
the fitted model, at each of the distance bins defined through uvec.
mode.rand.effects: the predictive mode of the random effects from the fitted non-spatial gener-
alized linear mixed model.
p.value: (available only if which.test="both" or which.test="test statistic") p-value of
the test for residual spatial correlation.
lse.variogram: (available only if lse.variogram=TRUE) a vector of length length(uvec)-1 con-
taining the values of the estimated Matern variogram via a weighted least square fit.
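Example (an illustrative sketch, not taken from the package manual; it assumes that fit is the
output of a call to linear.model.MLE):

check <- variog.diagnostic.lm(fit, n.sim = 1000, which.test = "both")
check$p.value  # p-value of the test for residual spatial correlation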
variogram The empirical variogram
Description
This function computes sample (empirical) variograms with options for the classical or robust esti-
mators. Output can be returned as a binned variogram, a variogram cloud or a smoothed variogram.
Data transformation (Box-Cox) is allowed. “Trends” can be specified and are fitted by ordinary
least squares in which case the variograms are computed using the residuals.
Usage
variogram(data, var.name, coords, ...)
Arguments
data an object of class "data.frame" containing the data.
var.name a formula object indicating the variable to display.
coords a formula object indicating the geographical coordinates.
... additional arguments to be passed to variog.
Value
An object of the class "variogram" which is a list containing components as detailed in variog.
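Example (an illustrative sketch; the data frame df and its columns lead, utm.x and utm.y are
hypothetical placeholders):

variogram(data = df, var.name = ~ lead, coords = ~ utm.x + utm.y)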
ally.js is a JavaScript library simplifying certain accessibility features, functions and behaviors. However, simply loading ally.js will not automagically make a web application accessible. The library provides certain standard functions the "web platform" should've provided itself, so JavaScript applications be made accessible more easily. This document covers how to import ally.js in your project - see the API documentation to learn what the library actually provides.
## # Requirements
In order to load successfully in IE8, the es5-shim has to be loaded. Please also see Does ally.js support Internet Explorer 8 and below?.
The UMD bundle contains the following dependencies:
* platform.js because parsing the userAgent string yourself is ludicrous.
* CSSOM CSS.escape polyfill for properly constructing CSS query selectors.
## # Downloading the UMD bundle
If you're not comfortable with package mangers, simply download the production ready UMD bundle and drop it in your project.
* ally.min.js UMD bundle, ready for production use
* ally.min.js.map for SourceMap support
* ally.js.zip archive containing CommonJS, AMD and ES6 modules, as well as the UMD bundle (including SourceMap files)
* ally.js.tar.gz archive containing CommonJS, AMD and ES6 modules, as well as the UMD bundle (including SourceMap files)
All downloads are hosted on the github release page.
## # Loading the UMD bundle from CDN
ally.js is made available for production use by jsDelivr:
ally.js is also available for production use by cdnjs:
ally.js is also available via unpkg.com:
## # Installing via package manager
```
npm install --save ally.js
```
Although bower can download archives, it won't be able to inform you of updates:
```
bower install --save https://github.com/medialize/ally.js/releases/download/1.4.1/ally.js.zip
```
You can use system-npm to consume ally.js from npm in SystemJS:
```
System.import('ally.js!npm').then(function(ally) {
console.log('loaded ally.js in version', ally.version);
});
```
## Using the UMD bundle via `<script>`
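The UMD bundle registers the global variable `ally` when loaded via a plain script element. A minimal sketch (the file path is a placeholder for wherever you dropped the bundle):

```html
<script src="path/to/ally.min.js"></script>
<script>
  // the UMD bundle exposed the global variable `ally`
  console.log('loaded ally.js in version', ally.version);
</script>
```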
## # Using CommonJS modules
```
var ally = require('ally.js');
console.log('loaded ally.js in version', ally.version);
console.log('focusable elements', ally.query.focusable());
```
Alternatively you can use only specific modules provided by ally.js:
```
var version = require('ally.js/version');
console.log('loaded version of ally.js', version);
var queryFocusable = require('ally.js/query/focusable');
console.log('focusable elements', queryFocusable());
```
ally.js is authored in ES6 and its modules are accessible in the `src` directory:
The ES6 source modules are available from the github repository through npm and `ally.js.zip` .
## # Using ES5 code contained in ES6 modules
ally.js also ships a version of the source code as ES6 modules but with the contents of each module compiled to ES5 in the `esm` directory. It is recommeneded that you use these modules with a build tool such as webpack 2 or Rollup which understand how to parse ES6 modules but generally recommened ignoring the `node_modules` folder.
The ES5 compiled ES6 modules with are available from the github repository through npm and `ally.js.zip` .
```
require.config({
paths: {
'ally.js': 'node_modules/ally.js/ally.min',
},
});
require(['ally.js'], function(ally) {
console.log('loaded ally.js in version', ally.version);
console.log('focusable elements', ally.query.focusable());
});
```
Alternatively you can use only specific modules provided by ally.js, but need to take care of mapping dependencies first:
```
require.config({
paths: {
// map to AMD files
'ally.js': 'node_modules/ally.js/amd',
// provide paths to dependencies
'css.escape': 'node_modules/css.escape/css.escape',
'platform': 'node_modules/platform/platform',
},
});
```
Now you can import specific modules using
```
require(['ally.js/version'], function(version) {
console.log('loaded version of ally.js', version);
});
require(['ally.js/query/focusable'], function(queryFocusable) {
console.log('focusable elements', queryFocusable());
});
```
## # Using with TypeScript
ally.js does not have a dediated set of TypeScript definitions. However you can still use ally.js in TypeScript by declaring a TypeScript module and using the ES5 compiled ES6 modules in the `esm` folder.
```
// in a .d.ts file, usually next to your applications entry point
declare module 'ally.js/esm/version';
declare module 'ally.js/esm/query/focusable';
```
```
// in your application code
import version from 'ally.js/esm/version';
console.log('loaded version of ally.js', version);
You will also need to set `allowJs` in your `tsconfig.json` file to be `true` .
This approach allows TypeScript to build and compile ally.js it does not provide any type checking. Only a properly authored definition file can provide type checking. If you want to contribute TypeScript definitions for ally.js the TypeScript documentation has an excellent section on declartion files.
## # Integrations
* ember-cli-ally exposes ally.js to ember-cli apps as 'ally'
Continue with checking out one of the Tutorials or head on to the API documentation
# # API index
When creating web applications or UI widgets these modules may come in handy.
## # Countering browser bugs
Every software has its problems - so do browsers. These utilities combat things browsers get wrong.
```
ally.fix.pointerFocusChildren
```
(Internet Explorer 10 - 11) *
```
ally.fix.pointerFocusInput
```
(Safari and Firefox on Mac OS X) *
```
ally.fix.pointerFocusParent
```
(WebKit and old Blink)
## # Extended
`:focus` Styling Sometimes `:focus` is not enough for communicating your application's intentions properly.
```
ally.style.focusSource
```
provides
```
html[focus-source="pointer|key|script"]
```
```
ally.style.focusWithin
```
polyfills `:focus-within` with `.ally-focus-within`
## # Altering browser focus behavior
While it's best to use standardized features and leave browsers to figure things out, specifications sometimes leave us hanging in limbo.
```
ally.maintain.disabled
```
renders elements inert to prevent any user interaction *
`ally.maintain.hidden` sets `aria-hidden="true"` on insignificant branches *
```
ally.maintain.tabFocus
```
traps TAB focus in the tabsequence
In order to work with focusable elements, we must first know which elements we're supposed to work with. See what does "focusable" mean? for a differentiation.
```
ally.query.firstTabbable
```
finds the first keyboard focusable element *
`ally.query.focusable` finds all focusable elements *
```
ally.query.shadowHosts
```
finds all elements hosting a `ShadowRoot` *
`ally.query.tabbable` finds all keyboard focusable elements in DOM order *
```
ally.query.tabsequence
```
finds all keyboard focusable elements in Sequential Navigation Focus Order
## # Element state
Unlike any other ally modules, these components do not take take `options.context` argument, but expect the `element` as first argument, allowing easy use in `.filter()` . See what does "focusable" mean? for a differentiation.
```
ally.is.activeElement
```
returns true if the element is the activeElement of its host context, i.e. its document, iFrame or ShadowHost *
`ally.is.disabled` returns true if the element is `:disabled` *
returns true if the element is considered theoretically focusable *
`ally.is.focusable` returns true if the element is considered focusable by script *
`ally.is.onlyTabbable` returns true if the element is tabbable but not focusable *
`ally.is.shadowed` returns true if the element is the descendant of a `ShadowRoot` *
`ally.is.tabbable` returns true if the element is considered keyboard focusable ("tabbable") *
`ally.is.validArea` returns true if the `<area>` element is properly used via `<map>` by an `<img>` *
```
ally.is.validTabindex
```
returns true if the element's `tabindex` attribute value is sound *
`ally.is.visible` returns true if the element is rendered (but not necessarily visible in the viewport)
## # Manipulating element state
Making up for missing or lacking DOM mutation APIs.
*
`ally.element.blur` shifts focus away from an element *
```
ally.element.disabled
```
disables all elements, not only form controls *
`ally.element.focus` shifts focus to an element
## # Reacting to element state
Especially when dealing with transitional user interfaces we need to know when an element can be safely focused.
*
`ally.when.focusable` executes a callback once an element fulfills `ally.is.focusable` and is visible in the viewport *
`ally.when.key` executes a callback when a given key has been pressed *
```
ally.when.visibleArea
```
executes a callback once an element is visible in the viewport
## # DOM traversal
returns an array containing the branches of the DOM that do contain any of the target elements *
```
ally.get.activeElement
```
identifies the element that has focus
## # Values
*
`ally.map.attribute` maps WAI-ARIA states and properties *
`ally.map.keycode` maps control keys to readable names
## # Developer modules
When creating libraries these modules may come in handy.
When you find yourself using one of these in your application code, we should talk about what you're trying to achieve and how we could do that as part of the library instead. Get in touch, file an issue explaining what you're trying to achieve!
### # DOM traversal (extended)
```
ally.get.activeElements
```
identifies the `ShadowHost` ancestry of the active element *
```
ally.get.focusRedirectTarget
```
*
`ally.get.focusTarget` *
`ally.get.parents` *
```
ally.get.shadowHostParents
```
*
`ally.get.shadowHost`
### # Event dispatchers
Emitting events when there's no standardized equivalent
### # Event listeners
Translate volatile events to stateful interfaces
```
ally.observe.interactionType
```
observes user interaction method to distinguish pointer and keyboard actions *
```
ally.observe.shadowMutations
```
registers `MutationObserver` s across nested `ShadowRoot` s
## # Contributor modules
When working on ally.js these modules may come in handy.
When you find yourself using one of these in your application or library code, we should talk about what you're trying to achieve and how we could do that as part of the library instead. Get in touch, file an issue explaining what you're trying to achieve!
::note These modules are only available to be consumed via ES6, AMD or CommonJS directly, they are not exposed in the production bundle `dist/ally.min.js` .
:::
The internal tools are documented in a less accessible way to make it just a tiny bit harder for someone not working on ally to use them. This is intentional. The stability of these APIs is not guaranteed.
Skip to content # Tutorials Accessible dialog Hiding DOM elements # Managing Focus Managing focus in animated UI Mutating the active element Managing focus in SVG
# # Contributing
While up to version 1.0.0 ally.js has been developed primarily by only one person, the intention is to get more people aboard. If there is anything about ally.js you'd like to improve, open an issue so we can discuss how you might approach that goal. We've created some docs and rules to give contributors some guidance. Don't worry if you don't understand every single piece of our fuzzy puzzle, we'll help you in any way we can.
ally.js strives to be a general purpose helper library for accessibility concerns. As such, contributions should be applicable to virtually any project and refrain from being overly specific. GOALS.md describes the possible future of the project. If you have any expertise or experience to share for those topics, please open an issue to discuss.
ally.js is not only about the library code. All the concepts covered by ally.js need to be explained in such a way that people new to accessibility understand what is going on. To make that happen, you can help improve our documentation or write a tutorial.
## # Who can contribute?
Anyone can join. Everything is done on github in the open. Everything is up for discussion.
Issues tagged with good first contribution have been analyzed and explained. They should provide all details required for your first contribution to ally.js. Feel free to pick one, work on it and send a pull request. If you have questions, please post them to the issue and we'll get back to you.
* You specialize in accessibility, but aren't an expert with the JavaScripts? No worries, be the brains, we'll be your code monkeys. See issues tagged with question or discussion.
* You specialize in JavaScript, but aren't experienced with that accessibility thing? No worries, I'm sure there's plenty to optimize and refactor and test and so on. You'll probably find something to improve in
`./src` or `./test` or issues tagged with improve. * You specialize in writing things but your accessibility is about as rusty as your JavaScript? Have a look at the docs, I'm sure they can be translated to actual, proper English. You'll likely find something in
`./docs` or issues tagged with website or documentation. * You specialize in the build pipeline? Issues tagged with build might be your thing.
* You specialize in a framework or library and would like to integrate ally.js? See the issues tagged with integration or open a new issue to discuss your approach.
## # Documentation for contributors
## # Acknowledgements
While the project and most of its resources were created by <NAME>, the following people had substantial impact on ally.js:
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* … and probably a few more
# Focusable Elements - Browser Compatibility Table
Date: 2000-02-16
Categories:
Tags:
The following tables show which elements individual browsers consider focusable or tabbable (keyboard focusable). The tables are based on the focusable test document.
Note that touch devices (without a physical keyboard) only show elements as tabbable (keyboard focusable), that can be navigated to through the on-screen keyboard (or "virtual keyboard").
[The compatibility tables in this section were exported as unstructured single-line strings and could not be reliably reconstructed cell by cell. Each table lists, per element, the expected behavior followed by the observed behavior — inert, focusable, tabbable, only tabbable or redirecting, with the effective tabindex where applicable — for the following browser columns: Chrome 55.0 and 57.0; Microsoft Edge 12.10240, 13.10586, 14.14393 and 15.14951; Firefox 50.0 and 53.0; Internet Explorer 9.0, 10.0 and 11.0; Opera 42.0; Safari 8.0, 9.1 and 10.0; WebKit Nightly 604.1; Chrome Mobile (Android) 55.0; Safari (iOS) 10.0. The tables covered the following groups of elements:

* document: `<html>` and `<body>`
* form controls: `<button>`, `<input>` (checkbox, password, radio, submit, text, reset), `<select>` and `<textarea>`, with and without explicit `tabindex` values
* forms: `<form>` and `<input>` within `<form>`, including disabled forms
* fieldsets: `<fieldset>`, `<fieldset disabled>` and `<legend>` within `<fieldset>`
* labels: `<label for="…">`, `<label tabindex="…">` and `<label>` with a nested `<input>`
* editability: `<div contenteditable>` and `user-modify: read-write`
* `tabindex` values: negative, zero, positive, padded and invalid values on `<div>` and `<input>`
* links: `<a href>` with and without `tabindex`
* image maps: `<img usemap>`, `<area>` and `<object usemap>`
* media: `<audio>` and `<video>` with and without `controls` and `tabindex`
* Shadow DOM: `::shadow` hosts and inputs within (nested) shadow roots, with various `tabindex` values

Refer to the focusable test document referenced above for the complete, rendered tables.]
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<iframe src="…"> without focusable contentfocusable0tabbabletabbablefocusable0focusable0focusable0focusable0tabbabletabbablefocusablefocusablefocusabletabbabletabbabletabbabletabbabletabbablefocusablefocusable<iframe src="…" tabindex="-1"> without focusable contentfocusable-1focusablefocusablefocusable-1focusable-1focusable-1focusable-1focusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<iframe src="…"> with SVG documentfocusable0focusablefocusablefocusablefocusablefocusablefocusabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<iframe src="…"> with focusable contentfocusable0focusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<html> within <iframe src="…">inert-1inertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusableinertinertinertinertinertinertinert<body> within <iframe src="…">inert-1focusablefocusablefocusablefocusablefocusablefocusableinertinerttabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<html> within <iframe src="…"> with focusable contentinert-1inertinertinertinertinertinerttabbabletabbablefocusablefocusablefocusableinertinertinertinertinertinertinert<html> within <iframe src="…" tabindex="-1">inert-1inertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusableinertinertinertinertinertinertinert<body> within <iframe src="…" tabindex="-1">inert-1focusablefocusablefocusablefocusablefocusablefocusableinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<html> within <iframe src="…" tabindex="-1">inert-1inertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusableinertinertinertinertinertinertinert<input> within <iframe src="…">tabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusable<input> within <iframe src="…" tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<input tabindex="1"> within <iframe src="…">tabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusable<input tabindex="1"> within <iframe src="…" tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<iframe src="…" style="visibility: hidden">inert0inert hostinert hostinert0inert0inert0inert0inert0inert0inert0inert0inert0inert hostinert hostinert hostinert hostinert hostinert hostinert host<html> within <iframe src="…" style="visibility: hidden">inert-1inertinertinertinertinertinertfocusablefocusableinertinertinertinertinertinertinertinertinertinert<body> within <iframe src="…" style="visibility: hidden">inert-1focusablefocusablefocusablefocusablefocusablefocusableinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<input> within <iframe src="…" style="visibility: 
hidden">inert0focusablefocusableinertinertinertinertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable<input tabindex="-1"> within <iframe src="…" style="visibility: hidden">inert1focusablefocusableinertinertinertinertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable<iframe src="…" style="display: none">inert0inert hostinert hostinert0inert0inert0inert0inert0inert0inert0inert0inert0inert hostinert hostinert hostinert hostinert hostinert hostinert host<body> within <iframe src="…" style="display: none">inert-1focusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<iframe src="…"> within <details>inert0inert hostinert hostfocusablefocusablefocusablefocusableinert0inert0focusablefocusablefocusableinert hostinert hostinert hostinert hostinert hostinert hostinert host<html> within <iframe src="…"> within <details>inert-1inertinertinertinertinertinertinertinertfocusablefocusablefocusableinertinertinertinertinertinertinert<body> within <iframe src="…"> within <details>inert-1focusablefocusablefocusablefocusablefocusablefocusablefocusablefocusabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<input> within <iframe src="…"> within <details>inert0focusablefocusabletabbabletabbabletabbabletabbableinertinerttabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<input tabindex="-1"> within <iframe src="…"> within <details>inert1focusablefocusabletabbabletabbabletabbabletabbableinertinerttabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<embed type="video/quicktime" src="…">focusable0focusable0focusable0inert0inert0inertinertfocusablefocusabletabbable0tabbable0tabbable0focusable0focusable0inert-1inert-1inert-1focusable0inert-1<embed type="video/quicktime" src="…" tabindex="-1">focusable-1focusable-1focusable-1inert-1inert-1inertinertfocusablefocusablefocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<embed type="video/mp4" src="…">focusable0tabbable0tabbable0inert0inert0only tabbableonly tabbablefocusablefocusabletabbable0tabbable0tabbable0tabbable0focusable0inert-1inert-1inert-1focusable0inert-1<embed type="video/mp4" src="…" tabindex="-1">focusable-1focusable-1focusable-1inert-1inert-1inert-1inert-1focusablefocusablefocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<embed type="video/ogv" src="…">focusable0focusable0focusable0inert0inert0inert0inert0focusablefocusableinert0inert0inert0focusable0inert-1inert-1inert-1inert-1focusable0focusable0<embed type="video/ogv" src="…" tabindex="-1">focusable-1focusable-1focusable-1inert-1inert-1inert-1inert-1focusablefocusableinert-1inert-1inert-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<embed type="image/svg+xml" src="…">focusable0focusablefocusableinert hostinert hostinert hostinert hostfocusablefocusableinert hostinert hostinert hostfocusablefocusablefocusablefocusablefocusablefocusablefocusable<embed type="image/svg+xml" src="…" tabindex="-1">focusable-1focusablefocusableinert hostinert hostinert hostinert hostfocusablefocusableinert hostinert hostinert hostfocusablefocusablefocusablefocusablefocusablefocusablefocusable<embed type="image/svg+xml" src="…" tabindex="0">focusable0focusablefocusableinert hostinert hostinert hostinert hosttabbablefocusableinert hostinert hostinert hostfocusablefocusablefocusablefocusablefocusablefocusablefocusable'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<object type="application/x-shockwave-flash" data="…">focusable-1focusable0focusable0tabbable0tabbable0tabbable0tabbable0focusable-1focusable-1inert0focusable0 33tabbable0focusable0focusable0focusable0focusable0focusable0focusable0focusable0<object type="application/x-shockwave-flash" data="…" tabindex="-1">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1inert-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<object type="application/x-shockwave-flash" data="…" tabindex="0">tabbable0focusable0focusable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0inert0focusable0 33tabbable0focusable0focusable0focusable0focusable0focusable0focusable0focusable0<object type="image/svg+xml" data="…">focusable-1focusablefocusableinert hostinert hostinert hostinert hosttabbablefocusableinert hostinert hostinert hostfocusablefocusablefocusablefocusablefocusablefocusablefocusable<object type="image/svg+xml" data="…" tabindex="-1">focusable-1focusablefocusableinert hostinert hostinert hostinert hostfocusablefocusableinert hostinert hostinert hostfocusablefocusablefocusablefocusablefocusablefocusablefocusable<object type="image/svg+xml" data="…" tabindex="0">tabbable0focusablefocusableinert hostinert hostinert hostinert hosttabbablefocusableinert hostinert hostinert hostfocusablefocusablefocusablefocusablefocusablefocusablefocusable<object type="image/svg+xml" data="…" style="visibility: hidden">inert0inert hostinert hostinert0inert0inert0inert0inert0inert0inert0inert0inert0inert hostinert hostinert hostinert hostinert hostinert hostinert host<object type="image/svg+xml" data="…"> within <details>focusable-1inert0inert0inert hostinert hostinert hostinert hostinert0inert0inert hostinert hostinert hostinert0inert-1inert-1inert-1inert-1inert0inert-1'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<svg>inert-1inert-1inert-1tabbabletabbableinertinertinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<svg tabindex="-1">focusable-1focusable-1focusable-1tabbabletabbablefocusablefocusableinertinert-1tabbabletabbabletabbablefocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<svg focusable="false" tabindex="-1">focusable-1focusable-1focusable-1inertinertinertinertinertinert-1inertinertinertfocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<svg focusable="true" tabindex="-1">focusable-1focusable-1focusable-1tabbabletabbabletabbabletabbableinertinert-1tabbabletabbabletabbablefocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<svg> containing <a xlink:href="…" tabindex="-1">inert-1inert-1inert-1tabbabletabbableinertinertinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<svg> containing <a xlink:href="…" tabindex="0">inert-1inert-1inert-1tabbabletabbableinertinertinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<a xlink:href="…"> within <svg>tabbable0tabbable0tabbable0tabbabletabbabletabbabletabbableonly tabbabletabbable-1tabbabletabbabletabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a xlink:href="…"> within <svg tabindex="-1">tabbable0tabbable0tabbable0tabbabletabbabletabbabletabbableonly tabbabletabbable-1tabbabletabbabletabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a xlink:href="…" focusable="false"> within <svg>tabbable0tabbable0tabbable0inertinertinertinertonly tabbabletabbable-1inertinertinerttabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a xlink:href="…" tabindex="-1"> within <svg>focusable-1focusable-1focusable-1tabbabletabbablefocusablefocusableonly tabbabletabbable-1tabbabletabbabletabbablefocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<a xlink:href="…" tabindex="0"> within <svg>tabbable0tabbable0tabbable0tabbabletabbabletabbabletabbableonly tabbabletabbable0tabbabletabbabletabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a xlink:href="…" tabindex="1"> within <svg>tabbable1tabbable1tabbable1tabbabletabbabletabbabletabbableonly tabbabletabbable1tabbabletabbabletabbabletabbable1tabbable1tabbable1tabbable1tabbable1focusable1focusable1<rect tabindex="0">tabbable0tabbable0tabbable0inertinerttabbabletabbableinerttabbable0inertinertinerttabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<rect tabindex="-1">focusable-1focusable-1focusable-1inertinertfocusablefocusableinertinert-1inertinertinertfocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<rect focusable="true">inert-1inert-1inert-1tabbabletabbabletabbabletabbableinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<a xlink:href="…"> within <svg focusable="false">tabbable0tabbable0tabbable0tabbabletabbabletabbabletabbableonly tabbabletabbable-1tabbabletabbabletabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<svg viewBox="…">inert-1inert-1inert-1tabbabletabbableinertinertinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<rect tabindex="0"> within <svg viewBox="…"> with position outside of 
boxtabbable0tabbable0tabbable0inertinerttabbabletabbableinerttabbable0inertinertinerttabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<rect tabindex="-1"> within <svg viewBox="…"> with position outside of boxfocusable-1focusable-1focusable-1inertinertfocusablefocusableinertinert-1inertinertinertfocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<rect focusable="true"> within <svg viewBox="…"> with position outside of boxinert-1inert-1inert-1tabbabletabbabletabbabletabbableinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<svg baseProfile="tiny">inert-1inert-1inert-1tabbabletabbableinertinertinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<rect focusable="true"> within <svg baseProfile="tiny">inert-1inert-1inert-1tabbabletabbabletabbabletabbableinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<rect tabindex="0"> within <svg baseProfile="tiny">tabbable0tabbable0tabbable0inertinerttabbabletabbableinerttabbable0inertinertinerttabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a xlink:href="…" focusable="true"> within <svg baseProfile="tiny">tabbable0tabbable0tabbable0inertinertinertinertonly tabbabletabbable-1inertinertinerttabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a xlink:href="…" tabindex="-1"> within <svg baseProfile="tiny">focusable-1focusable-1focusable-1tabbabletabbablefocusablefocusableonly tabbabletabbable-1tabbabletabbabletabbablefocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<rect focusable="true" tabindex="0"> within <svg baseProfile="tiny">tabbable0tabbable0tabbable0tabbabletabbabletabbabletabbableinerttabbable0tabbabletabbabletabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<rect focusable="true" tabindex="-1"> within <svg baseProfile="tiny">focusable-1focusable-1focusable-1tabbabletabbabletabbabletabbableinertinert-1tabbabletabbabletabbablefocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<rect focusable="false" tabindex="0"> within <svg baseProfile="tiny">tabbable0tabbable0tabbable0inertinertinertinertinerttabbable0inertinertinerttabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<rect focusable="false" tabindex="-1"> within <svg baseProfile="tiny">focusable-1focusable-1focusable-1inertinertinertinertinertinert-1inertinertinertfocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<a xlink:href="…"> within <svg baseProfile="tiny" focusable="false">tabbable0tabbable0tabbable0tabbabletabbabletabbabletabbableonly tabbabletabbable-1tabbabletabbabletabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<rect onfocus="">inert-1tabbabletabbableinertinertinertinertinertinert-1inertinertinerttabbabletabbabletabbabletabbable0tabbablefocusablefocusable0'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<input> within <foreignObject>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0<input tabindex="-1"> within <foreignObject>focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<input tabindex="0"> within <foreignObject>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0<input tabindex="1"> within <foreignObject>tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1focusable1tabbable1<foreignObject tabindex="-1">focusable-1focusable-1focusable-1inertinertinertinertinertinert-1inertinertinertfocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<input> within <foreignObject tabindex="-1">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0<input tabindex="-1"> within <foreignObject tabindex="-1">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<input tabindex="0"> within <foreignObject tabindex="-1">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0<input tabindex="1"> within <foreignObject tabindex="-1">tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1focusable1tabbable1<input> within <foreignObject> within <switch>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0<input tabindex="-1"> within <foreignObject> within <switch>focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<input tabindex="0"> within <foreignObject> within <switch>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0<input tabindex="1"> within <foreignObject> within <switch>tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1tabbable1focusable1tabbable1'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<svg> with <use> as contentinert-1inert-1inert-1inerttabbableinertinertinertinert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<a xlink:href="…"> within <defs>inert0tabbable0tabbable0inerttabbabletabbabletabbableonly tabbabletabbable-1tabbabletabbabletabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a xlink:href="…"> within <g>tabbable0tabbable0tabbable0inerttabbabletabbabletabbableonly tabbabletabbable-1tabbabletabbabletabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<use> referencing focusable contentinert-1only tabbableonly tabbableinertinertinertinertinertinert-1inertinertinertonly tabbableinert-1 38only tabbableonly tabbableonly tabbableinert-1inert-1<use tabindex="-1"> referencing focusable contentfocusable-1tabbable-1 37tabbable-1 37inertinertinertinertinertinert-1inertinertinerttabbable-1focusable-1 38tabbable-1focusable-1focusable-1focusable-1focusable-1'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<svg> within <iframe>inert-1inertinerttabbabletabbableonly tabbableonly tabbableinerttabbabletabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…"> within <iframe>tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusable<a xlink:href="…" tabindex="1"> within <iframe>tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <iframe>focusablefocusablefocusabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <iframe>tabbabletabbabletabbableinertinertinertinertonly tabbabletabbableinertinertinerttabbabletabbabletabbabletabbabletabbablefocusablefocusable<svg> within <iframe tabindex="-1">inertinertinertfocusablefocusableinertinertinertfocusablefocusablefocusablefocusableinertinertinertinertinertinertinert<a xlink:href="…"> within <iframe tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="1"> within <iframe tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <iframe tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <iframe tabindex="-1">focusablefocusablefocusableinertinertinertinertinertfocusableinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<svg> within <embed>inert-1inertinerttabbabletabbableinertinertinertfocusabletabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…"> within <embed>tabbabletabbabletabbabletabbabletabbabletabbabletabbableinertfocusabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="1"> within <embed>tabbabletabbabletabbabletabbabletabbabletabbabletabbableinertfocusabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <embed>focusablefocusablefocusabletabbabletabbabletabbabletabbableinertfocusabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <embed>tabbabletabbabletabbableinertinertinertinertinertfocusableinertinertinerttabbablefocusablefocusablefocusablefocusablefocusablefocusable<svg> within <embed tabindex="0">inert-1inertinerttabbabletabbableinertinertinerttabbabletabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…"> within <embed tabindex="0">tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="1"> within <embed tabindex="0">tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <embed tabindex="0">focusablefocusablefocusabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <embed tabindex="0">tabbabletabbabletabbableinertinertinertinertonly tabbabletabbableinertinertinerttabbablefocusablefocusablefocusablefocusablefocusablefocusable<svg> within <embed tabindex="-1">inert-1inertinertfocusablefocusableinertinertinertfocusablefocusablefocusablefocusableinertinertinertinertinertinertinert<a xlink:href="…"> within <embed tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="1"> within <embed tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <embed tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <embed tabindex="-1">focusablefocusablefocusableinertinertinertinertinertfocusableinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<svg> within <object>inert-1inertinerttabbabletabbableinertinertinerttabbabletabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…"> within <object>tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="1"> within <object>tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <object>focusablefocusablefocusabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <object>tabbabletabbabletabbableinertinertinertinertonly tabbabletabbableinertinertinerttabbablefocusablefocusablefocusablefocusablefocusablefocusable<svg> within <object tabindex="0">inert-1inertinerttabbabletabbableinertinertinerttabbabletabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…"> within <object tabindex="0">tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="1"> within <object tabindex="0">tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <object tabindex="0">focusablefocusablefocusabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <object tabindex="0">tabbable0tabbabletabbableinertinertinertinertonly tabbabletabbableinertinertinerttabbablefocusablefocusablefocusablefocusablefocusablefocusable<svg> within <object tabindex="-1">inert-1inertinertfocusablefocusableinertinertinertfocusablefocusablefocusablefocusableinertinertinertinertinertinertinert<a xlink:href="…"> within <object tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="1"> within <object tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <object tabindex="-1">focusablefocusablefocusablefocusablefocusablefocusablefocusableinertfocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <object tabindex="-1">focusablefocusablefocusableinertinertinertinertinertfocusableinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable<svg> within <object height="0">inert-1inertinerttabbabletabbableinertinertinerttabbabletabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…"> within <object height="0">tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbableinertinertinertinertfocusableinert<a xlink:href="…" 
tabindex="1"> within <object height="0">tabbabletabbabletabbabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbabletabbableinertinertinertinertfocusableinert<a xlink:href="…" tabindex="-1"> within <object height="0">focusablefocusablefocusabletabbabletabbabletabbabletabbableonly tabbabletabbabletabbabletabbabletabbablefocusableinertinertinertinertfocusableinert<a xlink:href="…" focusable="false"> within <object height="0">tabbabletabbabletabbableinertinertinertinertonly tabbabletabbableinertinertinerttabbableinertinertinertinertfocusableinert<svg> within <object style="display: none">inertnullinertinertinertinertinertinertinertfocusableinertinertinertinertinertinertinertinertinertinert<svg> within <object style="visibility: hidden">inert-1inertinertinertinertinertinertinertfocusableinertinertinertinertinertinertinertinertinertinert<a xlink:href="…"> within <object style="visibility: hidden">inert0focusablefocusableinertinertinertinertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="1"> within <object style="visibility: hidden">inert1focusablefocusableinertinertinertinertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" tabindex="-1"> within <object style="visibility: hidden">inert-1focusablefocusableinertinertinertinertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable<a xlink:href="…" focusable="false"> within <object style="visibility: hidden">inert0focusablefocusableinertinertinertinertinertinertinertinertinertfocusablefocusablefocusablefocusablefocusablefocusablefocusable<svg> within <object> within <details>inert-1inertinerttabbabletabbableinertinertinertfocusabletabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…"> within <object> within <details>inert0inertinerttabbabletabbabletabbabletabbableinertinerttabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…" tabindex="1"> within <object> within <details>inert1inertinerttabbabletabbabletabbabletabbableinertinerttabbabletabbabletabbableinertinertinertinertinertinertinert<a xlink:href="…" tabindex="-1"> within <object> within <details>inert-1inertinerttabbabletabbabletabbabletabbableinertinerttabbabletabbabletabbableinertinertinertinertinertinertinert'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<a href="…" style="visibility: visible"> within <div style="visibility: hidden">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a href="…"> within <div style="visibility: visible"> within <div style="visibility: hidden">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a href="…"> within <td> within <tr>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a href="…" style="visibility: visible"> within <td> within <tr style="visibility: collapse">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a href="…"> within <td style="visibility: visible"> within <tr style="visibility: collapse">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<span tabindex="-1"> child of <canvas>focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusablefocusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<span tabindex="0"> child of <canvas>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a href="…"> child of <canvas>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a href="…" tabindex="-1"> child of <canvas>focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<input> child of <canvas>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0<input tabindex="-1"> child of <canvas>focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<details tabindex="-1">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<a href="…"> within <details>inert0inert0inert0tabbable0tabbable0tabbable0tabbable0inert0inert0tabbable0tabbable0tabbableinert0inert0inert0inert0inert0inert0inert0<summary tabindex="-1"> within <details>focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<a href="…"> within <details tabindex="-1">inert0inert0inert0tabbable0tabbable0tabbable0tabbable0inert0inert0tabbable0tabbable0tabbable0inert0inert0inert0inert0inert0inert0inert0<a href="…"> within <details> that has <summary tabindex="-1">inert0inert0inert0tabbable0tabbable0tabbable0tabbable0inert0inert0tabbable0tabbable0tabbable0inert0inert0inert0inert0inert0inert0inert0<summary> within <details>tabbable0tabbable0tabbable0inert0inert0inert0inert0tabbable0tabbable0inert0inert0inert0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<summary> within <details tabindex="-1">tabbable0tabbable0tabbable0inert0inert0inert0inert0tabbable0tabbable0inert0inert0inert0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a href="…"> within <details open>tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbabletabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<summary> within <details open>tabbable0tabbable0tabbable0inert0inert0inert0inert0tabbable0tabbable0inert0inert0inert0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<div> child of horizontally overflowing <div style="overflow: auto">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1<div> child of overflowing <div style="overflow: hidden">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1<div> child of overflowing <div style="overflow: scroll">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1<div> child of overflowing <div style="overflow: visible">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1<div> child of not overflowing <div style="overflow: scroll">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1<div> child of not overflowing <div style="overflow: scroll">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1horizontally overflowing <div style="overflow: auto">inert-1inert-1inert-1inert0inert0inert0inert0tabbable-1tabbable-1focusable0focusable0focusableinert-1inert-1inert-1inert-1inert-1inert-1inert-1overflowing <div style="overflow: hidden">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1overflowing <div style="overflow: scroll">inert-1inert-1inert-1inert0inert0inert0inert0tabbable-1tabbable-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1overflowing <div style="overflow: visible">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1not overflowing <div style="overflow: auto">inert-1inert-1inert-1inert0inert0inert0inert0tabbable-1tabbable-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1overflowing <section style="overflow: scroll">inert-1inert-1inert-1inert0inert0inert0inert0tabbable-1tabbable-1inert0inert0 46inert0 46inert-1inert-1inert-1inert-1inert-1inert-1inert-1<div style="overflow: scroll" tabindex="-1">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<div> child of <div style="overflow: scroll" tabindex="-1">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<a href="…"> containing <img ismap src="…">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<a href="…" tabindex="-1"> containing <img ismap src="…">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<a href="…"> containing <img ismap src="…" tabindex="-1">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<img ismap src="…" tabindex="-1"> child of <a href="…">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<img ismap src="…"> child of <a href="…">inert-1inert-1inert-1tabbable0tabbable0tabbable0tabbable0inert-1inert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<img ismap src="…"> child of <a href="…" tabindex="-1">inert-1inert-1inert-1tabbable0tabbable0tabbable0tabbable0inert-1inert-1tabbabletabbabletabbableinert-1inert-1inert-1inert-1inert-1inert-1inert-1'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<span> child of <a href="…" style="display: flex">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1inert0focusablefocusableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<span> child of <div tabindex="-1" style="display: flex">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1inert0focusablefocusableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<span tabindex="-1"> child of <div tabindex="-1" style="display: flex">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<span tabindex="0"> child of <div tabindex="-1" style="display: flex">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<span style="display: flex">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1inert0focusablefocusableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<span> child of <span style="display: flex">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1inert0focusablefocusableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<span tabindex="-1"> child of <span style="display: flex">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1<span tabindex="0"> child of <span style="display: flex">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<span style="order: 1"> with focusable childinert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1inert0focusablefocusableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<input type="text"> within a <span> within <div style="display: flex">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0<input type="text"> within a <span> within <div style="display: flex">tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0tabbable0'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<table>inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1<td>inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusableinert-1inert-1inert-1inert-1inert-1inert-1inert-1<td style="visibility: visible"> within <tr style="visibility: collapse">inert-1inert-1inert-1inert0inert0inert0inert0inert-1inert-1focusable0focusable0focusable0inert-1inert-1inert-1inert-1inert-1inert-1inert-1'
b'ElementExpectedChromeMicrosoft EdgeFirefoxIEOperaSafariWebKit NightlyChrome Mobile (Android)Safari (iOS)55.057.012.1024013.1058614.1439315.1495150.053.09.010.011.042.08.09.110.0604.155.010.0<keygen …>tabbable0tabbable0inert-1inert0inert0inert0inert0tabbable0 52tabbable0 52inert0inert0inert0tabbable0tabbable0tabbable0tabbable0tabbable0focusable0focusable0<keygen … tabindex="-1">focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1 52focusable-1 52inert-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1focusable-1'
`focus` event is not emitted upon the element becoming the active element. The `:focus` CSS pseudo class is not set on the element when it is the active element.
The element is reported as the `activeElement` when an element of its `contentDocument` or `shadowRoot` has focus. The `<html>` element itself is not considered focusable, but some browsers may give it focus when focus is passed from browser UI to the document. The `<body>` element itself is not considered focusable, but it has focus (i.e. is the `activeElement`) if no other element has focus. HTML5 does not specify that the `<form>` element knows the `disabled` attribute.
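A quick way to observe the `<body>` fallback behavior is to inspect `document.activeElement` directly; a minimal sketch, assuming a page that contains at least one `<input>`:

```
// With nothing focused, document.activeElement reports the <body> element.
console.log(document.activeElement); // <body>

const input = document.querySelector('input'); // assumes an <input> exists
input.focus();
console.log(document.activeElement === input); // true

input.blur();
console.log(document.activeElement); // back to <body>
```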
This element should not be focusable, as per disabled elements. Blink 453847, WebKit 141086
The activation behavior of the `<label>` element is not defined beyond »… should match the platform's label behavior.« Internet Explorer redirects focus from `<label>` to the referenced form control element upon mouse click, but not on script focus via `element.focus()`. The CSS property `user-modify` was proposed for and then dropped from CSS UI Level 3 and has thus not become a standard yet.
The value is invalid according to the rules for parsing integers required by HTML5 `tabindex`.
`tabindex=""` is parsed to the value `-32768`. Trident 1072965.
The value's trailing spaces are considered invalid according to the rules for parsing integers required by HTML5 `tabindex`.
The value's trailing non-numeric characters are considered invalid according to the rules for parsing integers required by HTML5 `tabindex`.
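To make these parsing notes concrete, here is a minimal sketch, not the exact spec algorithm (the `parseTabindex` helper name is made up), of how the HTML rules for parsing integers treat the values from the table above:

```
// Simplified sketch of HTML's "rules for parsing integers" applied to
// tabindex: skip leading whitespace, accept an optional sign and digits,
// ignore trailing characters; no digits at all means the value is invalid.
function parseTabindex(value) {
  const match = /^[ \t\n\f\r]*([+-]?\d+)/.exec(value);
  return match ? parseInt(match[1], 10) : null; // null: invalid value
}

parseTabindex('3x');    // 3    - trailing characters are ignored
parseTabindex(' +2');   // 2    - leading whitespace and sign are allowed
parseTabindex('3 ');    // 3    - trailing spaces are ignored
parseTabindex('hello'); // null - no digits, value is invalid
```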
The `<map>`'s `<area>`s are inert (not focusable) as long as the image is still loading. If an `<area>` element doesn't have an `href` attribute, it's not a link (and should therefore not be interactive). Some browsers will not make an image map focusable if it is associated with an `<img>` that does have a proper image loaded. `<audio>` is considered interactive content only with the `controls` attribute present. `<video>` is considered interactive content only with the `controls` attribute present.
Shadow DOM is currently only "properly" supported in Blink-based browsers (Chrome, Opera). Firefox exposes the (considerably buggy) development state behind flags. See Can I Use.
Regardless of its own focusable state, an element hosting a Shadow DOM can become the `activeElement` if an element inside the `ShadowRoot` has focus. See Shadow DOM - Active Element. The `activeElement` is scoped within the Shadow DOM, meaning the master document does not know which shadowed content currently has focus. To indicate that a shadowed element has focus, the element hosting the shadowed content is made the `activeElement`, regardless of its ability to receive focus otherwise.
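Because `activeElement` is scoped per document and per shadow tree, finding the element that actually has focus means descending through shadow roots. A minimal sketch (the `deepActiveElement` helper name is made up; it only works for open shadow roots, since closed ones don't expose `shadowRoot`):

```
// Walk activeElement chains through nested (open) shadow trees to find
// the element that actually has focus.
function deepActiveElement(root = document) {
  let active = root.activeElement;
  while (active && active.shadowRoot && active.shadowRoot.activeElement) {
    active = active.shadowRoot.activeElement;
  }
  return active;
}

deepActiveElement(); // the focused element, even inside Shadow DOM
```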
Technically the content of an `<iframe>` can be accessible to script.
There is no API to interact with the content document of an `<embed>` element. `<object>` with `visibility: hidden` should not be focusable - Blink 586191. Technically the content of an `<object>` element can be accessible to script. The behavior of the `<embed>` element depends on the content type and browser plugin. The `<video>` element should be used for embedding video content instead of the `<embed>` element; the HTML5 `<video>` element is supported virtually everywhere. It is highly recommended to embed `<svg>` directly into the document or use the `<object>` element instead. The behavior of the `<object>` element depends on the content it presents. The most common content types used with `<object>` are `SVG` and `SWF`.
This element is actually tabbable (keyboard focusable). But when tabbing to it, the `Tab` behavior for the entire document breaks, as focus remains stuck on the browser UI. This might be related to Trident 1109008.
SVG 1.1 does not specify much in respect to accessibility. SVG 2 will bring the `tabindex` attribute. SVG Tiny 1.2 knows the `focusable` attribute. The `<foreignObject>` element knows the attributes `requiredExtensions` and `requiredFeatures`, which can prevent the rendering of the element's content, but SVG 1.1 never defined how exactly these properties should work and browser implementations vary. There is no DOM API to reliably determine if either of these properties prevented the rendering of the element's content. The `<use>` element is keyboard focusable if it references content that contains focusable elements. Blink 665121
This element is actually tabbable (keyboard focusable). However, as soon as a `<use>` element becomes the active element, the `Tab` key effectively becomes useless, because the tabbing order cannot be navigated anymore. See this demo. By registering a `focus` event listener the element becomes focusable. Blink 445798, WebKit 140024.
This is undetectable because elements don't provide a list of their registered event handlers.
SVG 2: Focus says:

> In particular, user agents may support using keyboard focus to reveal ‘title’ element text as tooltips, and may allow focus to reach elements which have been assigned listeners for mouse, pointer, or focus events.
IE9 and IE10 do not support the `hidden` attribute. The `hidden` attribute itself has no effect on whether an element is focusable or not; it's the CSS style `display: none` set by the `hidden` attribute that counts. The `<details>` element is specified in HTML 5.1, but not implemented everywhere. Internet Explorer turns `<div>` and `<span>` elements focusable when they're scrollable, but does not do the same for other sectioning or block-level elements. The `ismap` attribute makes the `<img>` focusable (in addition to the parent `<a>`).
There is no indication that the focusability of an element can be inherited by its children, let alone triggered by flexbox layout.
Firefox may hide elements from the document's tabbing sequence if they're enclosed by two images referencing the same image map - Gecko 1116126.
This test is not about an element's focusable state, but about content that's potentially hidden from the tabbing sequence.
Firefox transforms `<keygen>` to
```
<select _moz-type="-mozilla-keygen">
```
while parsing HTML. The `<keygen>` element is poorly supported, practically never used, and has seen intent to deprecate. Keyboard focusable (tabbable) content in nested browsing contexts (`<iframe>`, `<object>`, `<embed>`) is demoted to script and mouse focusable if the browsing context container has `tabindex="-1"`. The focusable state of descendant elements of an `<svg>` element is not affected by `tabindex="-1"` on the `<svg>` element, contrary to the behavior of browsing contexts (`<iframe>`, `<object>`, `<embed>`). The focusable state of content elements in Shadow DOM is not affected by `tabindex="-1"` on the shadow host, contrary to the behavior of browsing contexts (`<iframe>`, `<object>`, `<embed>`).
Focus is redirected to the labeled control.
Focus is redirected to the nested labeled control.
Focus is redirected to the next keyboard focusable (tabbable) element after the `<legend>` in DOM order (not in order of the document's tab sequence). Note that this does not necessarily have to be a descendant of the same `<fieldset>` element. Focus is redirected to the first focusable form control element (`<input>`, `<select>`, `<textarea>`, `<button>`) of the `<fieldset>` the `<legend>` is a child of. Focus is redirected to the first `<area>` element of the referenced image map. The `<iframe>`'s document manages its own focus. Any time the `<iframe>` or its content has focus, the master document's `activeElement` points to the `<iframe>`. Note that `<iframe>`s are only accessible to script when they share the same origin. Browser plugins running the `<embed>` can manage their own focus. Any time the `<embed>` or its content has focus, the master document's `activeElement` points to the `<embed>`. Note that the content of `<embed>` elements is not accessible to scripting from the outside, but from within the `<embed>`'s document JavaScript can interact with `window.parent`. The `<object>`'s document manages its own focus. Any time the `<object>` or its content has focus, the master document's `activeElement` points to the `<object>`. Note that `<object>`s are only accessible to script via `element.contentWindow` when they share the same origin. Whenever an element within a ShadowRoot has focus, the element hosting the `ShadowRoot` is considered the `activeElement` of the document, as per the active element adjustment algorithm.
Firefox' Shadow DOM implementation still has a few problems: Gecko 1117535, Gecko 1117544, Gecko 1117552.
This element could not be tested in this browser.
When this element is the activeElement, the reference element (e.g. `<img usemap="…">` or `<iframe>` ) has the following state: `activeElement` in its context, `:focus` CSS pseudo class applied, `focus` event.
# # What does "focusable" mean?
An HTML element can be a member of exactly one of the following five categories:
* Inert
* The element is not interactive and thus not focusable.
* Focusable
* The element can be focused by script ( `element.focus()` ) and possibly the mouse (or pointer), but not the keyboard.
* Tabbable
* The element is keyboard focusable ("tabbable"), as it is part of the document's sequential focus navigation order. The element is also focusable by script and possibly the mouse (or pointer).
* Only Tabbable
* The element is only keyboard focusable, possibly by the mouse (or pointer), but it cannot be focused by script.
* Forwards Focus
* The element will forward focus to another element instead of receiving focus itself.
Which of these buckets an element is sorted into depends on the browser. Have a look at the following comparison tables:
* What browsers consider focusable: details which category the various HTML elements fall into per browser
* Differences between browsers and ally.js: shows the few situations that ally.js cannot identify properly
The functions pertaining to "focus" are grouped in two categories: `ally.is.*` represents the filters and `ally.query.*` the crawlers.
## # Filtering elements
A filter takes an element ( `Element` ) for input and returns a boolean result. Filters can be used to verify the state of a given element.
* `ally.is.focusable` returns true for any element that passes `ally.is.focusRelevant`, is not disabled, is rendered (i.e. not visually hidden, e.g. by `display: none` or `visibility: hidden` ) and does not pass `ally.is.onlyTabbable`.
* `ally.is.focusRelevant` returns true for any potentially focusable element.
* `ally.is.tabbable` expects `ally.is.focusable` has already passed for the given element and returns true in case the element is keyboard focusable.
* `ally.is.onlyTabbable` returns true for any element that can only be focused by keyboard, but not by script.
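For instance, a filter can guard a programmatic focus change (a minimal sketch; the element ID is made up):

```
// check an element's state before focusing it
var element = document.getElementById('my-button');
if (ally.is.focusable(element)) {
  // the element can receive focus via script, and possibly mouse
  element.focus();
}
```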
A crawler (or "DOM walker") traverses the DOM in order to find elements matching the desired focusable state.
* `ally.query.focusable` finds all the elements that are script focusable or keyboard focusable, but not only tabbable. By providing the strategies `"quick"` and `"strict"` the user can choose to trade performance for accuracy. The difference in accuracy is detailed by the compatibility tables for quick and strict. Internally the `ally.is.focusable` filter is used to verify each element's state. Internally a third strategy called "all" is available to find elements that are either focus relevant (regardless of disabled and visual state) or only tabbable.
* `ally.query.tabbable` finds all the elements that are keyboard focusable, but not only tabbable.
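A short usage sketch of the crawlers (the `#main` container selector is made up):

```
// collect focusable and tabbable elements inside a container
var focusable = ally.query.focusable({
  context: '#main',    // where to search
  strategy: 'strict',  // trade performance for accuracy
});
var tabbable = ally.query.tabbable({ context: '#main' });
```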
# # Legal
This document explains which third party works are part of ally.js.
## # ally.js Logo
The logo is a composition of the United Nations Accessibility Logo and the JavaScript Logo by The Community.
## # Bundled works
In its UMD bundle ally.js contains resources of the following projects:
* domtokenlist-shim, also MIT license, removed in `v1.2.0`
* array.prototype.findindex, also MIT license, removed in `v1.4.1`
* css.escape, also MIT license
* platform.js, also MIT License
## # MIT License
The MIT License (MIT)
Copyright (c) 2015 <NAME>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
fiware-iotagent-ul | readthedoc | CSS | This Internet of Things Agent is a bridge that can be used to communicate devices using the Ultralight 2.0 protocol and NGSI Context Brokers (like Orion). Ultralight 2.0 is a lightweight text based protocol aimed to constrained devices and communications where the bandwidth and device memory may be limited resources.
GitHub's README.md provides a good documentation summary. The User Manual and the Admin Guide cover more advanced topics.
* API Overview
* Developing new transports
* Development documentation
## API Overview
This section describes the specific South-bound API implemented by this IoTAgent. For the Configuration API and other APIs concerning general IoTAgents, check the API Reference section.
### Ultralight 2.0 Protocol
# Description
Ultralight 2.0 is a lightweight text based protocol aimed to constrained devices and communications where the bandwidth and device memory may be limited resources.
# Measure Payload Syntax
The payload for information update requests is composed of a list of key-value pairs separated by the `|` character.
E.g.: `t|15|k|abc`
In this example, two attributes, one named "t" with value "15" and another named "k" with value "abc" are transmitted. Values in Ultralight 2.0 are not typed (everything is treated as a string).
Multiple groups of measures can be combined into a single request, using the `#` character. In that case, a different
NGSI request will be generated for each group of measures. E.g.: `gps|1.2/3.4#t|10`
This will generate two NGSI requests for the same entity, one for each one of the values. Each one of those requests can contain any number of attributes.
Measure groups can additionally have an optional timestamp, with the following syntax:
```
2016-06-13T00:35:30Z|lle|100
```
The timestamp will be added as a prefix of the measures themselves, separated by a '|'. The attribute will be translated to a `TimeInstant` attribute in the final entity.
# Active versus passive attributes
The current version of the agent only supports active attributes, i.e. those attributes actively reported by the device to the agent. Passive or lazy attributes, i.e. those attributes whose value is only given upon explicit request from the agent, are not implemented. Please check the issue #23 for more details and updates regarding its implementation.
# Commands Syntax
Commands are messages sent to the device from the IoT Agent. A command has the following format:
```
<device name>@<command name>|<command value>
```
This indicates that the device (named 'device_name' in the Context Broker) has to execute the command 'command_name', with the given value. E.g.:
`Robot1@turn|left`
This example will tell Robot 1 to turn left.
In the case of complex commands requiring parameters, the `command_value` could be used to implement parameter passing.
E.g:
```
weatherStation167@ping|param1=1|param2=2
```
This example will tell the Weather Station 167 to reply to a ping message with the provided params.
Once the command has finished its execution in the device, the reply to the server must adhere to the following format:
```
<device name>@<command name>|result
```
Where `device_name` and `command_name` must be the same ones used in the command execution, and the result is the final
result of the command. E.g.:
```
weatherStation167@ping|Ping ok
```
In this case, the Weather station replies with a String value indicating everything has worked fine.
# Bidirectionality Syntax
The latest versions of the Provisioning API allow for the definition of reverse expressions to keep data shared between the Context Broker and the device in sync (regardless of whether the data originated in plain data from the device or in a transformation expression in the IoTAgent). In these cases, when a reverse expression is defined, whenever the bidirectional attribute is modified, the IoTAgent sends a command to the original device, with the name defined in the reverse expression attribute and the ID of the device (see Commands Syntax, just above).
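As a rough illustration, a bidirectional attribute in a device provisioning payload could look like the following (a hedged sketch based on the library's bidirectionality plugin; the attribute names and the exact `reverse` field layout are assumptions, so check the Provisioning API reference):

```
{
    "object_id": "loc",
    "name": "location",
    "type": "geo:point",
    "expression": "${latitude}, ${longitude}",
    "reverse": [
        { "object_id": "latitude", "expression": "${location}" }
    ]
}
```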
# Commands transformations
It is possible to use expressions to transform commands, in the same way that other attributes could do it, that is adding `expression` to command definition. This way a command could be defined like:
```
{
"name": "reset",
"type": "command",
"expression": "{ set: 0}"
}
```
and when the command is executed, the command value will be the result of applying the value to the defined expression. Following the example, the command will be:
`set|0`
Additionally, a command could define a `payloadType` in its definition in order to transform the command payload, with the following meanings (a provisioning sketch follows the list):
* binaryfromstring: Payload will be transformed into a Buffer after reading it from a string.
* binaryfromhex: Payload will be transformed into a Buffer after reading it from a hex string.
* binaryfromjson: Payload will be transformed into a Buffer after reading it from a JSON string.
* json: Payload will be stringified from JSON.
* `<empty>`: This is the default case. Payload will not be transformed.
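For instance, a command shipping a binary payload could be provisioned like this (a minimal sketch; the command name is made up):

```
{
    "name": "firmware",
    "type": "command",
    "payloadType": "binaryfromhex"
}
```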
# Casting to JSON native format
FIXME: this needs to be tested once IOTA Lib 3.0.0 gets released and IOTA UL 2.0.0 (using it) gets released.
Ultralight 2.0 defines a method that allows to use native JSON types in the NGSI v2. For example: The IotAgent receives this UL measure:
`t|10|s|true|l|78.8` then the NGSI v2 update uses `10` (number), `true` (boolean) and `78.8` (number) instead of "10" (string), "true" (string) and "78.8" (string). This functionality relies on the string measure casting feature implemented in the iotagent library, which uses the native JavaScript `JSON.parse()` function to cast data coming from measures (as text) to JSON native types. This functionality does not change the attribute type: the type specified in the config group or device provision is used, even if it is not consistent with the incoming measures.
As an example, for a given measure:
```
a|1|b|1.01|c|true|d|null|e|[1,2,3]|f|['a','b','c']|g|{a:1,b:2,c:3}|h|I'm a string
```
The resulting entity would be something like:
```
{
"id": "entityid:001",
"type": "entitytype",
"a": {
"type": "provisionedType",
"value": 1
},
"b": {
"type": "provisionedType",
"value": 1.01
},
"c": {
"type": "provisionedType",
"value": true
},
"d": {
"type": "provisionedType",
"value": null
},
"e": {
"type": "provisionedType",
"value": [1,2,3]
},
"f": {
"type": "provisionedType",
"value": ["a","b","c"]
},
"g": {
"type": "provisionedType",
"value": {"a":1,"b":2,"c":3}
},
"h": {
"type": "provisionedType",
"value": "I'm a string"
}
}
```
Note that `provisionedType` is the type included in the device provision or config group, and it is not changed.
### Transport Protocol
Ultralight 2.0 defines a payload describing measures and commands to share between devices and servers, but does not specify a single transport protocol. Instead, different transport protocol bindings can be established for different scenarios.
The following sections describe the bindings currently supported: HTTP, MQTT and AMQP.
# HTTP binding
There are three possible interactions defined in the HTTP binding: requests with GET, requests with POST and commands.
# Requests with GET requests
A device can report new measures to the IoT Platform using an HTTP GET request to the `/iot/d` path with the following query parameters: `k` (the API key), `i` (the device ID) and `d` (the Ultralight 2.0 measure payload).
Payloads for GET requests should not contain multiple measure groups.
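For example, assuming the HTTP binding listens on its default port 7896 (the API key and device ID are made up):

```
curl -X GET 'http://localhost:7896/iot/d?k=ABCDEF&i=id_sen1&d=t|15' -i
```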
# Requests with POST requests
Another way of reporting measures is to do it using a POST request. In this case, the payload is passed along as the request payload. Two query parameters are still mandatory: `k` (the API key) and `i` (the device ID).
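A sketch of such a request (again with made-up values):

```
curl -X POST 'http://localhost:7896/iot/d?k=ABCDEF&i=id_sen1' -H 'Content-Type: text/plain' -d 'h|70|t|15' -i
```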
# Sending commands
All the interactions between the IotAgent and the Context Broker related to commands are described in Theory: Scenario 3: commands and Practice: Scenario 3: commands - happy path and Practice: Scenario 3: commands - error.
MQTT device commands are always push. For HTTP devices, commands to be push must be provisioned with the `endpoint` attribute, which will contain the URL where the IoT Agent will send the received commands. Otherwise the command will be polling. When using the HTTP transport, command handling comes in two flavours:
* Push commands: The request payload format will be the one described in the UL Protocol description. The device will reply with a 200OK response containing the result of the command in the UL2.0 result format. Example of the HTTP request sent by IOTA in the case of push command:
```
POST http://[DEVICE_IP]:[PORT]
fiware-service: smart
fiware-servicepath: /streetlights
content-type: text/plain
```
* Polling commands: in this case, the Agent does not send any messages to the device, the latter being responsible for retrieving them from the IoTAgent whenever the device is ready to get commands. In order to retrieve commands from the IoT Agent, the device will send the query parameter 'getCmd' with value '1' as part of a normal measure. As a result of this action, the IoTAgent, instead of returning an empty body (the typical response to a measurement report), will return a list of all the commands available for the device, separated by the character '#'. The command payload is described in the commands syntax section (and is shared with the push commands). Whenever the device has completed the execution of the command, it will send the response in the same way measurements are reported, but using the command result format as exposed in the commands syntax section.
Some additional remarks regarding polling commands:
* Commands can also be retrieved without the need of sending a measure. In other words, the device is not forced to send a measure in order to get the accumulated commands. However, in this case note that the `GET` method is used to carry the `getCmd=1` query parameter (as there is no actual payload for measures, `POST` wouldn't make much sense). Example to retrieve commands from the IoT Agent:
```
curl -X GET 'http://localhost:7896/iot/d?i=motion001&k=4jggokgpepnvsb2uv4s40d59ov&getCmd=1' -i
```
* Example of the HTTP response sent by IOTA in the case of polling commands (and only one command is stored for that device):
```
200 OK
Content-type: text/plain
```
# MQTT binding
MQTT is a machine-to-machine (M2M)/IoT connectivity protocol, focused on a lightweight interaction between peers. MQTT is based on publish-subscribe mechanisms over a hierarchical set of topics defined by the user.
This section specifies the topics and messages allowed when using MQTT as the transport protocol for Ultralight 2.0. All the topics subscribed by the agent (to send measures, for configuration command retrieval or to get the result of a command) are prefixed with the agent protocol:
```
/ul/<apiKey>/<deviceId>
```
where `<apiKey>` is the API Key assigned to the service and `<deviceId>` is the ID of the device. All topics published by the agent (to send a command or to send configuration information) to a device are not prefixed by the protocol, in this case '/ul'; they just include the apikey and deviceid (e.g.:
```
/FF957A98/MydeviceId/cmd
```
and
```
/FF957A98/MyDeviceId/configuration/values
```
). Note that measures and commands are sent over different MQTT topics:
* Measures are sent on the `/<protocol>/<api-key>/<device-id>/attrs` topic,
* Commands are sent on the `/<api-key>/<device-id>/cmd` topic,
The reasoning behind this is that when sending measures northbound from device to IoT Agent, it is necessary to explicitly identify which IoT Agent is needed to parse the data. This is done by prefixing the relevant MQTT topic with a protocol, otherwise there is no way to define which agent is processing the measure. This mechanism allows smart systems to connect different devices to different IoT Agents according to need.
For southbound commands, this distinction is unnecessary since the correct IoT Agent has already registered itself for the command during the device provisioning step and the device will always receive commands in an appropriate format.
This transport protocol binding is still under development.
# Sending a single measure in one message
In order to send a single measure value to the server, the device must publish the plain value to the following topic:
```
/ul/<apiKey>/<deviceId>/attrs/<attrName>
```
Where `<apiKey>` and `<deviceId>` have the typical meaning and `<attrName>` is the name of the measure the device is sending. For instance, if using Mosquitto with a device with ID `id_sen1` , API Key `ABCDEF` and attribute IDs `h` and `t` , then humidity measures are reported this way:
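A sketch mirroring the multiple-measure example below:

```
$ mosquitto_pub -t /ul/ABCDEF/id_sen1/attrs/h -m 70 -h <mosquitto_broker> -p <mosquitto_port> -u <user> -P <password>
```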
# Sending multiple measures in one message
In order to send multiple measures in a single message, a device must publish a message in the following topic:
```
/ul/<apiKey>/<deviceId>/attrs
```
Where `<apiKey>` and `<deviceId>` have the typical meaning. The payload of such message should be a legal Ultralight 2.0
payload (with or without measure groups). For instance, if using Mosquitto with a device with ID `id_sen1` , API Key `ABCDEF` and
attribute IDs `h` and `t` , then all measures (humidity and temperature) are reported this way:
```
$ mosquitto_pub -t /ul/ABCDEF/id_sen1/attrs -m 'h|70|t|15' -h <mosquitto_broker> -p <mosquitto_port> -u <user> -P <password>
```
# Configuration retrieval
The protocol offers a mechanism for the devices to retrieve its configuration (or any other value it needs from those stored in the Context Broker). Two topics are created in order to support this feature: a topic for configuration commands and a topic to receive configuration information. This mechanism can be enabled or disabled using a configuration flag, `configRetrieval` .
In the case of MQTT, to retrieve configuration parameters from the Context Broker, the device must be provisioned using "MQTT" as the transport key. By default, "HTTP" is assumed as the transport.
The parameter will be given as follows:
`"transport": "MQTT"`
This mechanism and the bidirectionality plugin cannot be simultaneously activated.
# Configuration command topic
```
/ul/{{apikey}}/{{deviceid}}/configuration/commands
```
The IoT Agent listens in this topic for requests coming from the device. The messages must contain an Ultralight 2.0 payload with the following format:
`{{type}}|{{fields}}`
* type: indicates the type of command the device is sending. See below for accepted values.
* fields: array with the names of the values to be retrieved from the Context Broker entity representing the device, separated by the
`|` character.
This command will trigger a query to the CB that will, as a result, end up with a new message posted to the Configuration information topic (described below).
E.g.:
```
configuration|pollingInterval|publishInterval
```
There are two accepted values for the configuration command types:
* `subscription`: this command will generate a subscription in the Context Broker that will be triggered whenever any of the selected values change. In case the value has changed, all the attributes will be retrieved.
* `configuration`: this command will generate a single request to the Context Broker from the IoTAgent, which will trigger a single publish message in the values topic.
# Configuration information topic
```
/{{apikey}}/{{deviceid}}/configuration/values
```
Every device must subscribe to this topic, so it can receive configuration information. Whenever the device requests any information from the IoTA, the information will be posted in this topic. The information is published in the same format used in multiple command reporting: a plain Ultralight 2.0 text with:
* the `device id` and `command type` separated by the `@` character;
* a `|` character;
* a list of `attribute=value` requested pairs separated by the `|` character. An additional parameter called `dt` is added with the system current time.
E.g.:
```
device_1@configuration|pollingInterval=200|publishInterval=80|dt=20190626T154200Z
```
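Putting it together, a device could request its configuration like this (a sketch reusing the made-up API key and device ID from the other examples):

```
$ mosquitto_pub -t /ul/ABCDEF/id_sen1/configuration/commands -m 'configuration|pollingInterval|publishInterval' -h <mosquitto_broker> -p <mosquitto_port> -u <user> -P <password>
```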
# Commands
All the interactions between the IotAgent and the Context Broker related to commands are described in Theory: Scenario 3: commands and Practice: Scenario 3: commands - happy path and Practice: Scenario 3: commands - error.
Commands using the MQTT transport protocol binding always work in PUSH mode: the server publishes a message in a topic where the device is subscribed: the commands topic. Once the device has finished with the command, it publishes its result to another topic.
The commands topic, where the client will be subscribed has the following format:
```
/<apiKey>/<deviceId>/cmd
```
The result of the command must be reported in the following topic:
```
/ul/<apiKey>/<deviceId>/cmdexe
```
The command execution and command reporting payload format is specified under the Ultralight 2.0 Commands Syntax, above.
For instance, if a user wants to send a command `ping` with parameters `data = 22` , he will send the following request
to the Context Broker regarding an entity called `sen1` of type `sensor` :
```
{
"updateAction": "UPDATE",
"contextElements": [
{
"id": "sen1",
"type": "sensor",
"isPattern": "false",
"attributes": [
{
"name": "ping",
"type": "command",
"value": "22"
}
]
}
]
}
```
If the API key associated to the device is `ABCDEF` , and the device ID related to the `sen1` entity is `id_sen1` , this will generate a message in the `/ABCDEF/id_sen1/cmd` topic with the following payload: `id_sen1@ping|22` . If using Mosquitto, such a command is received by running the `mosquitto_sub` script:
```
$ mosquitto_sub -v -t /# -h <mosquitto_broker> -p <mosquitto_port> -u <user> -P <password>
/ABCDEF/id_sen1/cmd id_sen1@ping|22
```
At this point, Context Broker will have updated the value of `ping_status` to `PENDING` for `sen1` entity. Neither `ping_info` nor `ping` are updated. Once the device has executed the command, it can publish its results in the
```
/ul/ABCDEF/id_sen1/cmdexe
```
topic with a
payload with the following format:
```
id_sen1@ping|1234567890
```
If using Mosquitto, such a command result is sent by running the `mosquitto_pub` script:
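A sketch of that publication:

```
$ mosquitto_pub -t /ul/ABCDEF/id_sen1/cmdexe -m 'id_sen1@ping|1234567890' -h <mosquitto_broker> -p <mosquitto_port> -u <user> -P <password>
```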
In the end, Context Broker will have updated the values of `ping_info` and `ping_status` to `1234567890` and `OK` ,
respectively. `ping` attribute is never updated.
Some additional remarks regarding MQTT commands:
* MQTT devices can configure (at provisioning and updating time) each command with different values for the MQTT QoS and MQTT retain flags, which will be used only by that command. Moreover, in the same MQTT device different commands can be configured to use different MQTT options related to QoS level and retain message policy. E.g.:
```
{
"commands": [
{
"type": "command",
"name": "a_command_name_A",
"mqtt": { "qos": 2, "retain": true }
},
{
"type": "command",
"name": "a_command_name_B",
"mqtt": { "qos": 1, "retain": false }
}
]
}
```
# AMQP binding
AMQP stands for Advanced Message Queuing Protocol, and is one of the most popular protocols for message-queue systems. Although the protocol itself is software independent and allows for great architectural flexibility, this transport binding has been designed to work with the RabbitMQ broker, in a way that closely resembles the MQTT binding (in the previous section). In fact, for IoT Platform deployments in need of a scalable MQTT Broker, RabbitMQ with the MQTT plugin will be used, connecting the IoT Agent to RabbitMQ through AMQP and the clients to RabbitMQ through MQTT.
The binding connects the IoT Agent to an exchange (usually `amq.topic` ) and creates two queues (to share between all the instances of the IoTAgents in a cluster environment): one for the incoming measures, and another for command result update messages (named as the measure one, adding the `_commands` suffix).
For both measure reporting and command update status the mechanism is much the same as in the case of the MQTT binding: all the messages must be published to the selected exchange, using the following routing keys:
Key pattern | Meaning |
| --- | --- |
`.<apiKey>.<deviceId>.attrs` | Multiple measure reporting |
`.<apiKey>.<deviceId>.attrs.<attributeName>` | Single measure reporting |
`.<apiKey>.<deviceId>.cmd` | Command reception |
`.<apiKey>.<deviceId>.cmdexe` | Command update message |
The payload is the same as for the other bindings.
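By way of illustration, a measure could be published to the exchange with a few lines of Node.js (a minimal sketch using the amqplib package; the broker URL, exchange name, API key and device ID are assumptions):

```
// minimal sketch: publish a multi-measure UL payload over AMQP
// (assumptions: local RabbitMQ, default amq.topic exchange,
// made-up API key ABCDEF and device ID id_sen1)
const amqp = require('amqplib');

async function publishMeasure() {
    const conn = await amqp.connect('amqp://localhost');
    const channel = await conn.createChannel();
    // routing key pattern: .<apiKey>.<deviceId>.attrs
    channel.publish('amq.topic', '.ABCDEF.id_sen1.attrs', Buffer.from('h|70|t|15'));
    await channel.close();
    await conn.close();
}

publishMeasure().catch(console.error);
```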
## Developing new transports
The Ultralight 2.0 IoT Agent can work with multiple different transports for the same Ultralight 2.0 payload. Those transports are dynamically loaded when the Agent starts, by looking in the `lib/bindings` folder for Node.js modules.
Those modules must export the following fields:
* deviceProvisioningHandler(device, callback): this handler will be called each time a new device is provisioned in the IoT Agent. The device object contains all the information provided in the device registration.
* configurationHandler(configuration, callback): handler for changes (provisioning or updates) in device groups. This handler should be used when configuration groups require any initialization or registration in the protocol binding.
* start(newConfig, callback): starts the binding module with the provided configuration. The `newConfig` object contains the global Agent configuration; the module should use a specific attribute inside the global scope to hold all its configuration values instead of using the global configuration scope itself.
* stop(callback): stops the binding module.
* protocol: this field must contain a string key identifying the protocol. Requests coming from the server (commands and passive attributes) will use the `protocol` field of the devices and the corresponding `protocol` attribute in the modules to identify which module should attend the request.
All the methods must call the callback before exiting (with or without error). Bindings will use methods in the IoT Agent Node.js library to process incoming requests.
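A bare-bones skeleton of such a module might look like this (a hedged sketch; the file name, protocol key and callback argument conventions are assumptions, so follow the patterns of the existing modules in `lib/bindings`):

```
'use strict';
// hypothetical lib/bindings/ExampleBinding.js

function deviceProvisioningHandler(device, callback) {
    // set up any per-device resources the transport needs
    callback(null, device);
}

function configurationHandler(configuration, callback) {
    // react to config group provisioning or updates
    callback(null, configuration);
}

function start(newConfig, callback) {
    // read binding-specific settings, e.g. newConfig.example
    callback();
}

function stop(callback) {
    // release sockets, timers, connections...
    callback();
}

exports.deviceProvisioningHandler = deviceProvisioningHandler;
exports.configurationHandler = configurationHandler;
exports.start = start;
exports.stop = stop;
exports.protocol = 'EXAMPLE';
```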
## Development documentation
### Project build
The project is managed using npm.
For a list of available tasks, type
`npm run`
The following sections show the available options in detail.
### Start
Runs a local version of the IoT Agent
### Testing
Mocha Test Runner + Should.js Assertion Library.
The test environment is preconfigured to run BDD testing style.
Module mocking during testing can be done with proxyquire
To run tests, type
```
docker run -d -p 27017:27017 mongo:4.2
docker run -d -p 5672:5672 rabbitmq:3.8.9
docker run -d -p 1883:1883 eclipse-mosquitto:1.6.7
npm test
```
### Coding guidelines
ESLint
Uses the provided `.eslintrc.json` flag file. To check source code style, type `npm run lint`
### Continuous testing
Support for continuous testing by modifying a src file or a test. For continuous testing, type
`npm run test:watch`
If you want to continuously check also source code style, use instead:
`npm run watch`
### Code Coverage
Istanbul
Analyzes the code coverage of your tests.
To generate an HTML coverage report under `site/coverage/` and to print out a summary, type
```
# Use git-bash on Windows
npm run test:coverage
```
### Documentation guidelines
remark
To check consistency of the Markdown markup, type
`npm run lint:md`
textlint
Uses the provided `.textlintrc` flag file. To check for spelling and grammar errors, dead links and keyword consistency,
type `npm run lint:text`
### Clean
Removes `node_modules` and `coverage` folders, and `package-lock.json` file so that a fresh copy of the project is
restored.
### Prettify Code
Runs the prettier code formatter to ensure consistent code style (whitespacing, parameter placement and breakup of long lines etc.) within the codebase.
To ensure consistent Markdown formatting run the following:
```
# Use git-bash on Windows
npm run prettier:text
```
## Installation
There are three ways of installing the Ultralight 2.0 Agent: cloning the GitHub repository, using the RPM or using Docker. Regardless of the installation method, there are some middlewares that must be present, as a prerequisite for the component installation (no installation instructions are provided for these middlewares):
* A MQTT v3.1 Broker is needed for the MQTT Binding to work. Both Mosquitto and RabbitMQ (with the MQTT plugin activated) have been tested for this purpose.
* A MongoDB instance (v3.2+) is required for those IoT Agents configured to have persistent storage. An in-memory storage repository is also provided for testing purposes.
* The IoT Agent's purpose is to connect devices (using a native device protocol on the South Port of the IoT Agent) and NGSI endpoints on the North Port of the IoT Agent (typically a NGSI Context Broker, like Orion), so an accessible Context Broker is also required. IoT Agents were tested with v0.26.0 (higher versions should also work).
Please follow the links to the official Web Pages to find out how you can install each of the middlewares in your environment.
The following sections describe each installation method in detail.
# Cloning the GitHub repository
Clone the repository with the following command:
```
git clone https://github.com/telefonicaid/iotagent-ul.git
```
Once the repository is cloned, from the root folder of the project execute:
`npm install`
This will download the dependencies for the project, and let it ready to the execution.
When the component is executed from a cloned GitHub repository, it takes the default config file that can be found in the root of the repository.
# Using the RPM
To see how to generate the RPM, follow the instructions in Packaging.
To install the RPM, use the YUM tool:
```
yum localinstall --nogpg <rpm-file_name>
```
Be aware that the RPM installs linux services that can be used to start the application, instead of directly calling the executable (as explained in the section Usage).
When this option is used, all the files are installed under the `/opt/iotaul` folder. There you can find the `config.js` file to configure the service. Remember to restart the service each time the config file has changed.
# Using Docker
There are automatic builds of the development version of the IOTAgent published in Docker hub. In order to install using the docker version, just execute the following:
```
docker run -d --link orion:orion --link mosquitto:mosquitto --link mongo:mongo -p 7896:7896 -p 4061:4061 telefonicaiot/iotagent-ul
```
As you can see, the Ultralight 2.0 (as any other IOTA) requires some docker dependencies to work:
* mongo: Mongo database instance (to store provisioning data).
* orion: Orion Context Broker.
* mosquitto: Mosquitto MQTT broker, to deal with MQTT based requests.
In order to link them, deploy them using docker and use the option `--link` as shown in the example. You may also want
to map the external IoT Agent North and South ports, for external calls: 4061 (NGSI Interactions for traffic north of
the IoT Agent) and 7896 (HTTP binding for traffic south of the IoT Agent).
# Build your own Docker image
There is also the possibility to build your own local Docker image of the IOTAUL component.
To do it, follow the next steps once you have installed Docker in your machine:
* Navigate to the path where the component repository was cloned.
* Launch a Docker build
```
sudo docker build -f Dockerfile .
```
* Using an alternative NodeJS version:
```
sudo docker build --build-arg NODEJS_VERSION=0.10.46 -f Dockerfile .
```
## Usage
# GitHub installation
In order to execute the IOTAgent, just issue the following command from the root folder of the cloned project:
```
bin/iotagent-ul [config file]
```
The name of a config file is optional and is described in the following section.
# RPM installation
The RPM installs a linux service that can be managed with the typical instructions:
```
service iotaul start
service iotaul status
service iotaul stop
```
In this mode, the log file is written in
```
/var/log/iotaul/iotaul.log
```
.
# Docker installation
The Docker container automatically starts listening on the API ports, so there is no need to execute any process in order to have the application running.
## Packaging
The only package type allowed is RPM. In order to execute the packaging scripts, the RPM Build Tools must be available in the system.
From the root folder of the project, create the RPM with the following commands:
```
cd rpm
./create-rpm.sh -v <version-number> -r <release-number>
```
Where `<version-number>` is the version (x.y.z) you want the package to have and `<release-number>` is an increasing number dependent on previous installations.
## Configuration
All the configuration for the IoT Agent resides in the `config.js` file, in the root of the application. This file is a
JavaScript file, that contains the following sections:
* config.iota: general IoT Agent configuration. This group of attributes is common to all types of IoT Agents, and is described in the global IoT Agent Library Documentation.
* config.mqtt: configuration for the MQTT transport protocol binding of the IoT Agent (described in the following subsections).
* config.http: configuration for the HTTP transport protocol binding of the IoT Agent (described in the following subsections).
* config.defaultKey: default API Key, for devices lacking a provided Configuration.
* config.defaultTransport: code of the MQTT transport that will be used to resolve incoming commands and lazy attributes in case a transport protocol could not be inferred for the device.
# MQTT Binding configuration
* protocol: protocol to use for connecting with the MQTT broker ( `mqtt` , `mqtts` , `tcp` , `tls` , `ws` , `wss` ). The default is `mqtt`.
* host: Host where the MQTT Broker is located.
* port: Port where the MQTT Broker is listening.
* username: Username for the IoT Agent in the MQTT broker, if authentication is activated.
* password: Password for the IoT Agent in the MQTT broker, if authentication is activated.
* ca: CA certificates to use for validating server certificates (optional). Default is to trust the well-known CAs curated by Mozilla. Mozilla's CAs are completely replaced when CAs are explicitly specified using this option.
* cert: cert chains in PEM format to use for authenticating into the MQTT broker (optional). Only used when using `mqtts` , `tls` or `wss` as the connection protocol.
* key: optional private keys in PEM format to use on the client side for connecting with the MQTT broker (optional). Only used when using `mqtts` , `tls` or `wss` as the connection protocol. The included CA list will be used to determine if the server is authorized.
* qos: QoS level: at most once ( `0` ), at least once ( `1` ), exactly once ( `2` ). The default is `0`.
* retain: retain flag (default is `false` ).
* retries: Number of MQTT connection error retries (default is 5).
* retryTime: Time between MQTT connection retries (default is 5 seconds).
* keepalive: Time to keep the connection open between client and MQTT broker (default is 60 seconds). If you experience disconnection problems using 0, a value greater than 0 is recommended.
* rejectUnauthorized: whether to reject any connection which is not authorized by the list of supplied CAs. This option only has an effect when using the `mqtts` , `tls` or `wss` protocols (default is `true` ). Set to `false` if using a self-signed certificate, but beware that you are exposing yourself to man-in-the-middle attacks, so this configuration is not recommended for production environments.
* avoidLeadingSlash: this flag sets whether the agent publishes commands to topics starting with a slash (the default in older versions) or without the slash. See discussion.
* clean: this flag is `true` by default; set it to `false` to receive QoS 1 and 2 messages while offline.
* clientId: string ID which identifies the client in the MQTT broker. By default it is a string composed of a fixed prefix `iotaul_` and a random suffix, e.g. `iotaul_43bf8a3a`.
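A minimal `config.mqtt` block could look like this (a sketch; the host and values are illustrative):

```
config.mqtt = {
    host: 'localhost',
    port: 1883,
    protocol: 'mqtt',
    qos: 0,
    retain: false,
    retries: 5,
    retryTime: 5,
    keepalive: 60
};
```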
# AMQP Binding configuration
* host: Host where the AMQP Broker is located.
* port: Port where the AMQP Broker is listening
* username: Username for the IoT Agent in the AMQP broker
* password: Password for the IoT Agent in the AMQP broker
* exchange: Exchange in the AMQP broker
* queue: Queue in the AMQP broker
* durable: durable queue flag (default is `false` ).
* retries: Number of AMQP connection error retries (default is 5).
* retryTime: Time between AMQP connection retries (default is 5 seconds).
The `config.http` section of the config file contains all the information needed to start the HTTP server for the HTTP
transport protocol binding. The following options are accepted:
* port: South Port where the HTTP listener will be listening for information from the devices.
* timeout: HTTP Timeout for the HTTP endpoint (in milliseconds).
* key: Path to your private key for HTTPS binding
* cert: Path to your certificate for HTTPS binding
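And a minimal `config.http` block (a sketch; the certificate paths are made up and only needed for HTTPS):

```
config.http = {
    port: 7896,
    timeout: 1000
    // key: './certs/server-key.pem',
    // cert: './certs/server-cert.pem'
};
```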
# Configuration with environment variables
Some of the more common variables can be configured using environment variables. The ones overriding general parameters in the `config.iota` set are described in the
IoT Agent Library Configuration manual.
The ones relating to global configuration are described in the following table.
Environment variable | Configuration attribute |
| --- | --- |
IOTA_CONFIG_RETRIEVAL | configRetrieval |
IOTA_DEFAULT_KEY | defaultKey |
IOTA_DEFAULT_TRANSPORT | defaultTransport |
The ones relating to specific Ultralight 2.0 bindings are described in the following table.
Environment variable | Configuration attribute |
| --- | --- |
IOTA_MQTT_PROTOCOL | mqtt.protocol |
IOTA_MQTT_HOST | mqtt.host |
IOTA_MQTT_PORT | mqtt.port |
IOTA_MQTT_CA | mqtt.ca |
IOTA_MQTT_CERT | mqtt.cert |
IOTA_MQTT_KEY | mqtt.key |
IOTA_MQTT_REJECT_UNAUTHORIZED | mqtt.rejectUnauthorized |
IOTA_MQTT_USERNAME | mqtt.username |
IOTA_MQTT_PASSWORD | mqtt.password |
IOTA_MQTT_QOS | mqtt.qos |
IOTA_MQTT_RETAIN | mqtt.retain |
IOTA_MQTT_RETRIES | mqtt.retries |
IOTA_MQTT_RETRY_TIME | mqtt.retryTime |
IOTA_MQTT_KEEPALIVE | mqtt.keepalive |
IOTA_MQTT_AVOID_LEADING_SLASH | mqtt.avoidLeadingSlash |
IOTA_MQTT_CLEAN | mqtt.clean |
IOTA_MQTT_CLIENT_ID | mqtt.clientId |
IOTA_MQTT_DISABLED | mqtt.disabled |
IOTA_AMQP_HOST | amqp.host |
IOTA_AMQP_PORT | amqp.port |
IOTA_AMQP_USERNAME | amqp.username |
IOTA_AMQP_PASSWORD | amqp.password |
IOTA_AMQP_EXCHANGE | amqp.exchange |
IOTA_AMQP_QUEUE | amqp.queue |
IOTA_AMQP_DURABLE | amqp.durable |
IOTA_AMQP_RETRIES | amqp.retries |
IOTA_AMQP_RETRY_TIME | amqp.retryTime |
IOTA_AMQP_DISABLED | amqp.disabled |
IOTA_HTTP_HOST | http.host |
IOTA_HTTP_PORT | http.port |
IOTA_HTTP_TIMEOUT | http.timeout |
IOTA_HTTP_KEY | http.key |
IOTA_HTTP_CERT | http.cert |
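For example, the MQTT broker location could be overridden at container startup (a sketch; the container names are made up):

```
docker run -d -e IOTA_MQTT_HOST=mosquitto -e IOTA_MQTT_PORT=1883 -p 7896:7896 -p 4061:4061 telefonicaiot/iotagent-ul
```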
# High performance configuration
Node.js is single-threaded and uses nonblocking I/O, allowing it to scale up to tens of thousands of concurrent operations. Nevertheless, Node.js has a few weak points and vulnerabilities that can make Node.js-based systems underperform, especially when a Node.js web application experiences rapid traffic growth.
Additionally, it is important to know the environment in which the Node.js server is running, because it has limitations. There are two types of limits on the host: hardware and software. Hardware limits can be easy to spot: your application might be consuming all of the memory and needing to consume disk to continue working. Adding more memory by upgrading your host, whether physical or virtual, seems to be the right choice.
Moreover, Node.js applications also have a software memory limit (imposed by V8), therefore we cannot forget about these limitations when we execute a service. In the case of a 64-bit environment, your application would be running by default with a 1 GB V8 limit. If your application is running in high traffic scenarios, you will need a higher limit. The same applies to other parameters.
It means that we need to make some changes in the execution of Node.js and in the configuration of the system (an invocation sketch follows this list):
* Node.js flags
* --use-idle-notification
Turns on idle notification to reduce the memory footprint.
* --expose-gc
Use the expose-gc command to enable manual control of the garbage collector from the Node.js server code itself. In the case of the IoTAgent it is not implemented, because the calls to the garbage collector would need to be implemented inside the server; nevertheless, the recommended value is every 30 seconds.
* --max-old-space-size=xxxx
In that case, we want to increase the limit for the heap memory of each V8 node process in order to use the maximum possible capacity instead of the 1.4 GB default on 64-bit machines (512 MB on a 32-bit machine). The recommendation is to use at least half of the total memory of the physical or virtual instance.
* User software limits
The Linux kernel provides some configuration for system-related limits and maximums. In a distributed environment with multiple users, you usually need to control the resources available to each of the users. Nevertheless, when there is only one available user but that user requests a lot of resources due to a high performance application, the default limits are not properly configured and need to be changed to meet the high performance requirements. These are things like the maximum file handle count, maximum file locks, maximum process count, etc.
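As an illustration, the flags above could be combined when launching the agent (a sketch; the heap size is an assumption to be tuned per host, and some flags may not exist in newer Node.js versions):

```
node --use-idle-notification --expose-gc --max-old-space-size=4096 bin/iotagent-ul
```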
You can see the limits of your system by executing the command:
`ulimit -a`
You can define the corresponding limits inside the file limits.conf. This description of the configuration file syntax applies to the
```
/etc/security/limits.conf
```
file and *.conf files in the
```
/etc/security/limits.d
```
directory. You can get more information about limits.conf in the limits.conf - linux man pages. The recommended values to be changed are the following:
core
Limits of the core file size in KB, we recommend to change to
`unlimited` both hard and soft types.
```
* soft core unlimited
* hard core unlimited
```
data
Maximum data size in KB, we recommend to change to `unlimited` both hard and soft types.
```
* soft data unlimited
* hard data unlimited
```
fsize
Maximum filesize in KB, we recommend to change to
`unlimited` both hard and soft types.
```
* soft fsize unlimited
* hard fsize unlimited
```
memlock
Maximum locked-in-memory address space in KB, we recommend to change to
`unlimited` both hard and soft types.
```
* soft memlock unlimited
* hard memlock unlimited
```
nofile
Maximum number of open file descriptors, we recommend to change to
`65535` both hard and soft types.
```
* soft nofile 65535
* hard nofile 65535
```
rss
Maximum resident set size in KB (ignored in Linux 2.4.30 and higher), we recommend to change to
`unlimited` both hard and soft types.
```
* soft rss unlimited
* hard rss unlimited
```
stack
Maximum stack size in KB, we recommend to change to `unlimited` both hard and soft types.
nproc
Maximum number of processes, we recommend to change to `unlimited` both hard and soft types.
You can take a look at the limits.conf file provided in this folder with all the values provided.
*
Configure kernel parameters
sysctl is used to modify kernel parameters at runtime. We plan to modify the corresponding
`/etc/sysctl.conf` file. You can get more information in the corresponding man pages of sysctl and sysctl.conf. You can search all the kernel parameters by using the command `sysctl -a`
fs.file-max
The maximum file handles that can be allocated, the recommended value is
`1000000` .
```
fs.file-max = 1000000
```
fs.nr_open
Max amount of file handles that can be opened, the recommended value is `1000000`.
`fs.nr_open = 1000000`
net.netfilter.nf_conntrack_max
Size of connection tracking table. Default value is nf_conntrack_buckets value * 4.
```
net.nf_conntrack_max = 1048576
```
For more details about any other kernel parameters, take a look at the example sysctl.conf file.
sshex | hex | Erlang | sshex v2.2.1
API Reference
===
Modules
===
[SSHEx](SSHEx.html)
Module to deal with SSH connections. It uses low level erlang
[ssh library](http://www.erlang.org/doc/man/ssh.html)
[SSHEx.ConfigurableClientKeys](SSHEx.ConfigurableClientKeys.html)
Provides public key behavior for SSH clients
[SSHEx.Helpers](SSHEx.Helpers.html)
require SSHEx.Helpers, as: H # the cool way
sshex v2.2.1
SSHEx
===
Module to deal with SSH connections. It uses low level erlang
[ssh library](http://www.erlang.org/doc/man/ssh.html).
:ssh.start # just in case
{:ok, conn} = SSHEx.connect ip: '123.123.123.123', user: 'myuser'
Summary
===
[Functions](#functions)
---
[cmd!(conn, cmd, opts \\ [])](#cmd!/3)
Convenience function to run [`run/3`](#run/3) and get output string straight from it,
like `:os.cmd/1`
[connect(opts)](#connect/1)
Establish a connection with given options. Uses `:ssh.connect/4` for that
[run(conn, cmd, opts \\ [])](#run/3)
Gets an open SSH connection reference (as returned by `:ssh.connect/4`),
and a command to execute
[stream(conn, cmd, opts \\ [])](#stream/3)
Gets an open SSH connection reference (as returned by `:ssh.connect/4`),
and a command to execute
Functions
===
cmd!(conn, cmd, opts \\ [])
Convenience function to run [`run/3`](#run/3) and get output string straight from it,
like `:os.cmd/1`.
See [`run/3`](#run/3) for options.
Returns `response` only if [`run/3`](#run/3) return value matches `{:ok, response, _}`,
or returns `{stdout, stderr}` if [`run/3`](#run/3) returns `{:ok, stdout, stderr, _}`.
Raises any `{:error, details}` returned by [`run/3`](#run/3). Note that the return status from
`cmd` is also ignored.
Ex:
```
SSHEx.cmd! conn, 'mkdir -p /path/to/newdir'
res = SSHEx.cmd! conn, 'ls /some/path'
```
connect(opts)
Establish a connection with given options. Uses `:ssh.connect/4` for that.
Recognised options are `ip` (mandatory), `port` and `negotiation_timeout`.
Any other option is passed to `:ssh.connect/4` as is
(so be careful if you use binaries and `:ssh` expects char lists…).
See [its reference](http://erlang.org/doc/man/ssh.html#connect-4) for available options.
Default values exist for some options, which are:
* `port`: 22
* `negotiation_timeout`: 5000
* `silently_accept_hosts`: `true`
Returns `{:ok, connection}`, or `{:error, reason}`.
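A quick sketch (the host and credentials are made up; note that `:ssh` expects charlists, and the `password` option is simply passed through to `:ssh.connect/4`):

```
{:ok, conn} = SSHEx.connect(ip: '10.0.0.1', user: 'myuser', password: 'mypassword')
```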
run(conn, cmd, opts \\ [])
Gets an open SSH connection reference (as returned by `:ssh.connect/4`),
and a command to execute.
Optionally it gets a `channel_timeout` for the underlying SSH channel opening,
and an `exec_timeout` for the execution itself. Both default to 5000ms.
Returns `{:ok,data,status}` on success. Otherwise `{:error, details}`.
If `:separate_streams` is `true` then the response on success looks like `{:ok,stdout,stderr,status}`.
Ex:
```
{:ok, _, 0} = SSHEx.run conn, 'rm -fr /something/to/delete'
{:ok, res, 0} = SSHEx.run conn, 'ls /some/path'
{:error, reason} = SSHEx.run failing_conn, 'ls /some/path'
{:ok, stdout, stderr, 2} = SSHEx.run conn, 'ls /nonexisting/path', separate_streams: true
```
stream(conn, cmd, opts \\ [])
Gets an open SSH connection reference (as returned by `:ssh.connect/4`),
and a command to execute.
See [`run/3`](#run/3) for options.
Returns a [`Stream`](http://elixir-lang.org/docs/stable/elixir/Stream.html) that you can use to lazily retrieve each line of output
for the given command.
Each iteration of the stream will read from the underlying connection and
return one of these:
* `{:stdout,row}`
* `{:stderr,row}`
* `{:status,status}`
* `{:error,reason}`
Keep in mind that rows may not be received in order.
Ex:
```
{:ok, conn} = :ssh.connect('123.123.123.123', 22,
[ {:user,'myuser'}, {:silently_accept_hosts, true} ], 5000)
str = SSHEx.stream conn, 'somecommand'
Stream.each(str, fn(x)->
case x do
{:stdout,row} -> process_stdout(row)
{:stderr,row} -> process_stderr(row)
{:status,status} -> process_exit_status(status)
{:error,reason} -> process_error(reason)
end
end)
```
sshex v2.2.1
SSHEx.ConfigurableClientKeys
===
Provides public key behavior for SSH clients.
valid options:
* `key`: `IO.device` providing the ssh key (required)
* `known_hosts`: `IO.device` providing the known hosts list (required)
* `accept_hosts`: `boolean` silently accept and add new hosts to the known hosts. By default only known hosts will be accepted.
```
SSHEx.connect(
  ip: to_charlist(hostname),
  user: to_charlist(username),
  key_cb: {SSHEx.ConfigurableClientKeys, [
    key: <IO.device>,
    known_hosts: <IO.device> ]}
)
```
A convenience method is provided that can take filenames instead of IO devices:
```
cb_module = SSHEx.ConfigurableClientKeys.get_cb_module(key_file: "path/to/keyfile", known_hosts_file: "path_to_known_hostsFile", accept_hosts: false)
SSHEx.connect(
  ip: to_charlist(hostname),
  user: to_charlist(username),
  key_cb: cb_module
)
```
Summary
===
[Functions](#functions)
---
[add_host_key(hostname, key, opts)](#add_host_key/3)
Callback implementation for `c::ssh_client_key_api.add_host_key/3`
[get_cb_module(opts)](#get_cb_module/1)
[is_host_key(key, hostname, alg, opts)](#is_host_key/4)
Callback implementation for `c::ssh_client_key_api.is_host_key/4`
[user_key(alg, opts)](#user_key/2)
Callback implementation for `c::ssh_client_key_api.user_key/2`
Functions
===
add_host_key(hostname, key, opts)
```
add_host_key(hostname :: charlist, key :: :public_key.public_key, opts :: list) ::
:ok |
{:error, term}
```
Callback implementation for `c::ssh_client_key_api.add_host_key/3`.
get_cb_module(opts)
```
get_cb_module(opts :: list) :: {atom, list}
```
is_host_key(key, hostname, alg, opts)
```
is_host_key(key :: :public_key.public_key, hostname :: charlist, alg :: :ssh_client_key_api.algorithm, opts :: list) :: boolean
```
Callback implementation for `c::ssh_client_key_api.is_host_key/4`.
user_key(alg, opts)
```
user_key(alg :: :ssh_client_key_api.algorithm, opts :: list) ::
{:error, term} |
{:ok, :public_key.private_key}
```
Callback implementation for `c::ssh_client_key_api.user_key/2`.
sshex v2.2.1
SSHEx.Helpers
===
require SSHEx.Helpers, as: H # the cool way
Summary
===
[Functions](#functions)
---
[convert_value(v)](#convert_value/1)
[convert_values(args)](#convert_values/1)
[defaults(args, defs)](#defaults/2)
Apply given defaults to given Keyword. Returns merged Keyword
[env(key, default \\ nil)](#env/2)
Convenience to get environment bits. Avoid all that repetitive
`Application.get_env( :myapp, :blah, :blah)` noise
[env(app, key, default)](#env/3)
[Macros](#macros)
---
[spit(obj \\ "", inspect_opts \\ [])](#spit/2)
Spit to output any passed variable, with location information
[todo(msg \\ "")](#todo/1)
Print to stdout a *TODO* message, with location information
Functions
===
convert_value(v)
convert_values(args)
defaults(args, defs)
Apply given defaults to given Keyword. Returns merged Keyword.
The inverse of `Keyword.merge`, best suited to apply some defaults in a
chainable way.
Ex:
```
kw = gather_data
|> transform_data
|> H.defaults(k1: 1234, k2: 5768)
|> here_i_need_defaults
```
Instead of:
```
kw1 = gather_data
|> transform_data
kw = [k1: 1234, k2: 5768]
|> Keyword.merge(kw1)
|> here_i_need_defaults
```
env(key, default \\ nil)
Convenience to get environment bits. Avoid all that repetitive
`Application.get_env( :myapp, :blah, :blah)` noise.
env(app, key, default)
Macros
===
spit(obj \\ "", inspect_opts \\ [])
Spit to output any passed variable, with location information.
todo(msg \\ "")
Print to stdout a *TODO* message, with location information.
capnpy | readthedoc | Python | capnpy 0.0 documentation
[capnpy](index.html#document-index)
---
capnpy documentation[¶](#capnpy-documentation)
===
`capnpy` is an implementation of Cap’n Proto for Python. Its primary goal is to provide a library which is fast, both on CPython and PyPy, and which offers a pythonic API and feeling whenever possible.
Usage[¶](#usage)
---
### Installation and requirements[¶](#installation-and-requirements)
To install `capnpy`, just type:
```
$ pip install capnpy
```
`capnpy` relies on the official capnproto implementation to parse the schema files, so it needs to be able to find the `capnp` executable to compile a schema. It requires `capnp 0.5.0` or later.
### Quick example[¶](#quick-example)
Suppose to have a capnp schema called `example.capnp`:
```
@0xe62e66ea90a396da;
struct Point {
x @0 :Int64;
y @1 :Int64;
}
```
You can use `capnpy` to read and write messages of type `Point`:
```
import capnpy
# load the schema using dynamic loading
example = capnpy.load_schema('example')
# create a new Point object
p = example.Point(x=1, y=2)
# serialize the message and load it back
message = p.dumps()
p2 = example.Point.loads(message)
print('p2.x ==', p2.x)
print('p2.y ==', p2.y)
```
```
p2.x == 1
p2.y == 2
```
### Compiling schemas[¶](#compiling-schemas)
`capnpy` supports different ways of compiling schemas:
`setuptools` integration to compile and distribute schemas using `setup.py`.
Dynamic loading to compile and load capnproto schemas on the fly.
Manual compilation to compile schemas manually.
If you use `setup.py` or [manual compilation](#manual-compilation), you need `capnp` to compile the schema, but not to load it later; this means that you can distribute the precompiled schemas, and the client machines will be able to load it without having to install the official capnproto distribution.
If you use [dynamic loading](#dynamic-loading), you always need the `capnp` executable whenever you want to load a schema.
#### Integration with `setuptools`[¶](#integration-with-setuptools)
If you use `setuptools`, you can use the `capnpy_schema` keyword to automatically compile your schemas from `setup.py`:
```
from setuptools import setup
setup(name='foo',
version='0.1',
packages=['mypkg'],
capnpy_schemas=['mypkg/example.capnp'],
)
```
You can specify additional [compilation options](#compilation-options) by using `capnpy_options`:
```
from setuptools import setup
setup(name='foo',
version='0.1',
packages=['mypkg'],
capnpy_options={
'pyx': False, # do NOT use Cython (default is 'auto')
'convert_case': False, # do NOT convert camelCase to camel_case
# (default is True)
}
capnpy_schemas=['mypkg/example.capnp'],
)
```
#### Manual compilation[¶](#manual-compilation)
You can manually compile a capnproto schema by using `python -m capnpy compile`:
```
$ python -m capnpy compile example.capnp
```
This will produce `example.py` (if you are using py mode) or `example.so`
(if you are using pyx mode). Run `python -m capnpy --help` for additional options.
#### Dynamic loading[¶](#dynamic-loading)
To dynamically load a capnproto schema, use `capnpy.load_schema`; its full signature is:
```
def load_schema(modname=None, importname=None, filename=None,
pyx='auto', options=None):
...
```
`modname`, `importname` and `filename` corresponds to three different ways to specify and locate the schema file to load. You need to pass exactly one of them.
`modname` (the default) is interpreted as if it were the name of a Python module with the `.capnp` extension. This means that it is searched in all the directories listed in `sys.path` and that you can use dotted names to load a schema inside packages or subpackages:
```
>>> import capnpy
>>> import mypackage
>>> mypackage
<module 'mypackage' from '/tmp/mypackage/__init__.pyc'>
>>> example = capnpy.load_schema('mypackage.mysub.example')
>>> example
<module 'example' from '/tmp/mypackage/mysub/example.capnp'>
```
This is handy because it allows you to distribute the capnproto schemas along the Python packages, and to load them with no need to care where they are on the filesystem, as long as the package is importable by Python.
`importname` is similar to `modname`, with the difference that it uses the same syntax you would use in capnproto’s *import expressions*. In particular,
if you use an absolute path, `load_schema` searches for the file in each of the search path directories, which by default correspond to the ones listed in
`sys.path`. Thus, the example above is completely equivalent to this:
```
>>> example = capnpy.load_schema(importname='/mypackage/mysub/example.capnp')
>>> example
<module 'example' from '/tmp/mypackage/mysub/example.capnp'>
```
Finally, `filename` specifies the exact file name of the schema file. No search will be performed.
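For example (the path below is illustrative):
```
>>> example = capnpy.load_schema(filename='/tmp/mypackage/mysub/example.capnp')
```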
`pyx` specifies whether to use pyx or py mode. `options` can be used to change the default [compilation options](#compilation-options):
```
>>> from capnpy.annotate import Options
>>> example = capnpy.load_schema('example', options=Options(convert_case=False))
```
#### Compilation options[¶](#compilation-options)
The `capnpy` schema compiler has two modes of compilation:
py mode
Generate pure Python modules, which can be used either on CPython or PyPy: it is optimized to be super fast on PyPy. It produces slow code on CPython, but it has the advantage of not requiring `cython`. This is the default on PyPy.
pyx mode
Generate pyx modules, which are then compiled into native extension modules by `cython` and `gcc`. It is optimized for speed on CPython. This is the default on CPython, if `cython` is available.
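When loading schemas dynamically, the mode can also be forced through the `pyx` argument of `load_schema` (its default is `'auto'`, see [Dynamic loading](#dynamic-loading)); a sketch, assuming `pyx=False` selects py mode just as it does in `capnpy_options`:
```
>>> example = capnpy.load_schema('example', pyx=False)  # force py mode
```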
Moreover, it supports the following options:
`version_check`
If enabled, the compiled schema contains a check which is run at import time to ensure that the current version of capnpy matches to the one we compiled the schema with. See note below for more details. The default is
**True**.
`convert_case`
If enabled, `capnpy` will automatically convert field names from camelCase to underscore_delimiter: i.e., `fooBar` will become
`foo_bar`. The default is **True**.
`text_type`
Can be `bytes` or `unicode`. Determines the default Python type for
[Text](#text) fields. The default is `bytes`.
`include_reflection_data`
If enabled, `capnpy` will embed [Reflection data](#reflection-data) into the compiled schemas.
Note
**Version checking** is needed in particular if you are using pyx mode,
which is the default on CPython. Capnproto structs are represented by Python classes which inherit from
`capnpy.struct_.Struct`: in pyx mode, this is a Cython `cdef class`, and it has a certain C layout which depends on the number and type of its fields. If the C layouts at compilation time and at import time don't match, you risk segfaults and/or misbehavior. Since the internal layout of classes might change between capnpy versions, the version check prevents this risk.
#### Options annotation[¶](#options-annotation)
`capnpy` options can also be configured by using the `$Py.options` annotation,
which can be applied to `file`, `struct` and `field` nodes. The annotation recursively applies to all the children nodes as well, and can be used to override the options used by the parents.
This can be used to have more granular control over how certain capnproto types are translated into Python. For example, you could use it to apply the
`convert_case` option only to certain structs or fields:
```
@0x97a960ad8d4cf616;
using Py = import "/capnpy/annotate.capnp";
# don't convert the case by default
$Py.options(convertCase=false);
struct A {
fieldOne @0 :Int64;
}
struct B $Py.options(convertCase=true) {
fieldOne @0 :Int64;
fieldTwo @1 :Int64;
fieldThree @2 :Int64 $Py.options(convertCase=false);
}
```
```
>>> mod = capnpy.load_schema('example_options')
>>> mod.A.fieldOne
<property object at ...>
>>> mod.B.field_one
<property object at ...>
>>> mod.B.field_two
<property object at ...>
>>> mod.B.fieldThree
<property object at ...>
```
In the example above, `A.fieldOne` is not converted because of the file-level annotation. `B.field_one` and `B.field_two` are converted because the annotation on the struct overrides it. Finally, `B.fieldThree`
overrides it again.
Note
Note the different spelling of options names: when you specify them in `setup.py`, they follow Python’s `naming_convention` and thus are spelled e.g. `convert_case` and `text_type`. However, when you specify them as annotation, the capnproto schema language mandates `camelCase`.
### Loading and dumping messages[¶](#loading-and-dumping-messages)
The API to read and write capnproto messages is inspired by the ones offered by `pickle` and `json`:
> * `capnpy.load(f, payload_type)`: load a message from a file-like object
> * `capnpy.loads(s, payload_type)`: load a message from a string
> * `capnpy.load_all(f, payload_type)`: return a generator which yields all
> the messages from the given file-like object
> * `capnpy.dump(obj)`: write a message to a file-like object
> * `capnpy.dumps(obj)`: write a message to a string
For example:
```
>>> import capnpy
>>> example = capnpy.load_schema('example')
>>> p = example.Point(x=100, y=200)
>>> mybuf = capnpy.dumps(p)
>>> mybuf
'\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00d\x00\x00\x00\x00\x00\x00\x00\xc8\x00\x00\x00\x00\x00\x00\x00'
>>> p2 = capnpy.loads(mybuf, example.Point)
>>> print(p2.x, p2.y)
100 200
```
Alternatively, you can call `load`/`loads` directly on the class, and
`dump`/`dumps` directly on the objects:
```
>>> p = example.Point(x=100, y=200)
>>> mybuf = p.dumps()
>>> p2 = example.Point.loads(mybuf)
>>> print(p2.x, p2.y)
100 200
```
By default, `dump` and `dumps` try to use a fast path, which applies if the object you pass is [compact](#compact). If the fast path can be taken, it is approximately 5x faster on CPython and 10x faster on PyPy. However, if the object is **not** compact, the fast path check makes it ~2x slower. If you are sure that the object is not compact, you can disable the check by passing
`fastpath=False`:
```
>>> mybuf = p.dumps(fastpath=False)
```
### Loading from sockets[¶](#loading-from-sockets)
In case you want to load your messages from a `socket`, you can use
`capnpy.buffered.BufferedSocket` to wrap it into a file-like object:
```
>>> from capnpy.buffered import BufferedSocket
>>> sock = socket.create_connection(('localhost', 5000))
>>> buf = BufferedSocket(sock)
>>> example.Point.load(buf)
```
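To consume a whole stream of messages from the same connection, you can combine the wrapped socket with `capnpy.load_all()`, which yields the messages one by one:
```
>>> for p in capnpy.load_all(buf, example.Point):
...     print(p.x, p.y)
```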
Warning
The obvious solution to wrap a socket into a file-like object would be to use `socket.makefile()`. However, because of [this bug](https://bitbucket.org/pypy/pypy/issues/2272/socket_fileobjectread-horribly-slow) it is horribly slow. **Don’t use it**. See also the
[benchmarks](index.html#buffered-streams).
### Raw dumps[¶](#raw-dumps)
Raw dumps are intended primarily for debugging and should **never** be used as a general transmission mechanism. They dump the internal state of the segments and the offsets used to identify a given capnproto object.
In particular, they dump the whole buffer in which the object is contained,
which might be much larger than the object itself.
If you encounter a capnpy bug, you can use `_raw_dumps` and `_raw_loads` to save the offending object to make it easier to reproduce the bug:
```
>>> p = example.Point(x=100, y=200)
>>> mydump = p._raw_dumps()
>>> p2 = example.Point._raw_loads(mydump)
>>> print(p2.x, p2.y)
100 200
```
### capnproto types[¶](#capnproto-types)
#### Text[¶](#text)
Capnproto defines `Text` fields as “always UTF-8 encoded and NUL-terminated”. There are at least two reasonable ways to represent this in Python:
> * as `bytes`: this will contain the undecoded UTF-8 string.
> * as `unicode`: this will automatically do `.decode('utf-8')` for
> you. However, it is potentially less efficient because capnpy needs to
> re-decode the string again and again any time you read the field.
By default, `Text` fields are represented as `bytes`. You can change the default behavior by setting the appropriate [Compilation options](#compilation-options). In case you are using [Integration with setuptools](#integration-with-setuptools), you need to pass
`capnpy_options={'text_type': 'unicode'}` in your `setup.py`.
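A minimal sketch of such a `setup.py`, reusing the hypothetical `mypkg` layout from the setuptools section:
```
from setuptools import setup

setup(name='foo',
      version='0.1',
      packages=['mypkg'],
      capnpy_options={'text_type': 'unicode'},  # Text fields decoded to unicode
      capnpy_schemas=['mypkg/example.capnp'],
)
```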
If you want more granular control, you can annotate single files/struct/fields by using the [Options annotation](#options-annotation).
#### Struct[¶](#struct)
`capnpy` turns each capnproto struct into a Python class. The API is inspired by `namedtuples`:
> * the fields of the struct are exposed as plain attributes
> * objects are **immutable**; it is not possible to change the value of a
> field once the object has been instantiated. If you need to change the
> value of a field, you can instantiate a new object, as you would do with
> namedtuples
> * objects can be made [comparable and hashable](#equality-and-hashing) by specifying the
> `$Py.key` annotation
Moreover, in case the type of a field is a pointer (e.g. `Text`, `Data`,
structs and lists), `capnpy` generates two different accessors. For a field named `foo`:
> * `has_foo()`: return `True` if `foo` is not `NULL`, `False`
> otherwise
> * `get_foo()`: if `has_foo()` is `True`, it is equivalent to
> `foo`. Else, it returns the default value for that field
Note that in case of a `struct` field, the default value is a struct whose fields have all the default value, recursively:
```
@0xe62e66ea90a396da;
struct Point {
x @0 :Int64;
y @1 :Int64;
name @2 :Text;
}
struct Rectangle {
a @0 :Point;
b @1 :Point;
}
```
```
>>> mod = capnpy.load_schema('example_struct')
>>> p = mod.Point()
>>> p
<Point: (x = 0, y = 0)>
>>> print(p.name)
None
>>> p.has_name()
False
>>> p.get_name()
''
>>> rect = mod.Rectangle()
>>> print(rect.a)
None
>>> print(rect.has_a())
False
>>> print(rect.get_a())
<Point: (x = 0, y = 0)>
>>> rect.get_a().get_name()
''
```
The rationale is that `get_foo()` and `has_foo()` are modeled after the semantics of the original C++ implementation of capnproto, while `.foo` is modeled after the Pythonic `namedtuple` API. In particular `.foo` returns
`None` instead of the default value, to avoid unpythonic and surprising cases such as `Point(name=None).name == ''`.
#### Enum[¶](#enum)
capnproto enums are represented as subclasses of `int`, so that we can easily use both the numeric and the symbolic values:
```
@0x8eecd1de76ded4c4;
enum Color {
red @0;
green @1;
blue @2;
yellow @3;
}
```
```
>>> mod = capnpy.load_schema('example_enum')
>>> Color = mod.Color
>>> Color.green
<Color.green: 1>
>>> int(Color.green)
1
>>> str(Color.green)
'green'
>>> Color.green + 2
3
>>> Color(2)
<Color.blue: 2>
>>> Color.__members__
('red', 'green', 'blue', 'yellow')
```
#### Union[¶](#union)
capnproto uses a special enum value, called *tag*, to identify the field which is currently set inside a union; `capnpy` follows this semantics by automatically creating an [Enum](#enum) whose members correspond to the fields of the union.
```
@0x8ced518a09aa7ce3;
struct Shape {
area @0 :Float64;
union {
circle @1 :Float64; # radius
square @2 :Float64; # width
}
}
struct Type {
union {
void @0 :Void;
bool @1 :Void;
int64 @2 :Void;
float64 @3 :Void;
text @4 :Void;
}
}
```
```
>>> mod = capnpy.load_schema('example_union')
>>> Shape, Type = mod.Shape, mod.Type
>>> Shape.__tag__
<class 'example_union.Shape__tag__'>
>>> Shape.__tag__.__members__
('circle', 'square')
>>> Type.__tag__.__members__
('void', 'bool', 'int64', 'float64', 'text')
```
You can query which field is set by calling `which()`, or by calling one of the `is_*()` methods which are automatically generated:
```
>>> s = Shape(area=16, square=4)
>>> s.which()
<Shape__tag__.square: 1>
>>> s.__which__()
1
>>> s.is_circle()
False
>>> s.is_square()
True
```
The difference between `which()` and `__which__()` is that the former returns an `Enum` value, while the latter returns a raw integer: on CPython,
`which()` is approximately [4x slower](index.html#special-union-attributes), so you might consider using the raw form in performance-critical parts of your code. On PyPy, the two forms have the very same performance.
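A sketch of what the raw form can look like in a hot loop (the `shapes` list here is hypothetical):
```
SQUARE = int(Shape.__tag__.square)   # hoist the Enum -> int conversion out of the loop
total = 0.0
for s in shapes:                     # `shapes`: hypothetical list of Shape objects
    if s.__which__() == SQUARE:      # raw int comparison, no Enum object created
        total += s.area
```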
Since `capnpy` objects are immutable, union fields must be set when instantiating the object. The first way is to call the default constructor and set the field as usual:
```
>>> s = Shape(area=3*3*math.pi, circle=3)
>>> s.is_circle()
True
```
If you try to specify two conflicting fields, you get an error:
```
>>> Shape(area=16, square=4, circle=42)
Traceback (most recent call last):
...
TypeError: got multiple values for the union tag: circle, square
```
The second way is to use one of the special `new_*()` alternate constructors:
```
>>> s = Shape.new_square(area=16, square=4)
>>> s.is_square()
True
```
```
>>> s = Shape.new_square(area=16, square=4, circle=42)
Traceback (most recent call last):
...
TypeError: new_square() got an unexpected keyword argument 'circle'
```
The alternate constructors are especially handy in case of `Void` union fields, because in that case you don’t need to specify the (void) value of the field:
```
>>> t = Type.new_int64()
>>> t.which()
<Type__tag__.int64: 2>
>>> t.is_int64()
True
```
#### Groups[¶](#groups)
```
@0x97a960ad8d4cf616;
struct Point {
position :group {
x @0 :Int64;
y @1 :Int64;
}
color @2 :Text;
}
```
Group fields are initialized using a tuple, and accessed using the usual dot notation:
```
>>> mod = capnpy.load_schema('example_group')
>>> Point = mod.Point
>>> p = Point(position=(3, 4), color='red')
>>> p.position.x
3
>>> p.position.y
4
```
`capnpy` also generates a **group constructor**, which is a `staticmethod`
named as the capitalized group name. It is useful because you can use keyword arguments and get the desired tuple in the right order:
```
>>> Point.Position(y=6, x=5)
(5, 6)
>>> p2 = Point(position=Point.Position(x=5, y=6), color='red')
>>> p2.position.x
5
>>> p2.position.y
6
```
By using the group constructor, you can also **omit** some parameters; in this case, they will get the default value, as usual:
```
>>> Point.Position(y=7)
(0, 7)
```
Note
Make sure to notice the difference between the lowercase
`Point.position` which is a property used to read the field, and the capitalized `Point.Position` which is the group constructor:
```
>>> Point.position
<property object at ...>
>>> Point.Position
<function Position at ...>
```
#### Virtual groups[¶](#virtual-groups)
You can use the `$Py.group` annotation on a `Void` field to generate a virtual group, which fishes the data from normal “flat” fields.
```
@0x97a960ad8d4cf616;
using Py = import "/capnpy/annotate.capnp";
struct Point {
x @0 :Int64;
y @1 :Int64;
color @2 :Text;
position @3 :Void $Py.group("x, y") $Py.key("*");
}
```
This becomes particularly handy in conjunction with `$Py.key` (see [Equality and hashing](#equality-and-hashing)), because it allows you to get a hashable/comparable subset of the fields without affecting other parts of the code which want to access the flat fields:
```
>>> mod = capnpy.load_schema('example_py_group')
>>> p = mod.Point(x=1, y=2, color='red')
>>> p.x
1
>>> p.position.x
1
>>> p.position == (1, 2)
True
```
#### Named unions[¶](#named-unions)
Named unions are a special case of groups.
```
@0xe1f94ddddf8858c4;
struct Person {
name @0 :Text;
job :union {
unemployed @1 :Void;
employer @2 :Text; # this is the company name
selfEmployed @3 :Void;
}
}
```
You can instantiate new objects as you would do with a normal group, by using the group constructor. If you want to specify a `Void` union field, you can use `None`:
```
>>> mod = capnpy.load_schema('example_named_union')
>>> Person = mod.Person
>>> p1 = Person(name='Alice', job=Person.Job(unemployed=None))
>>> p2 = Person(name='Bob', job=Person.Job(employer='Capnpy corporation'))
```
Reading named unions is the same as anonymous ones:
```
>>> p1.job.which()
<Person_job__tag__.unemployed: 0>
>>> p1.job.is_unemployed()
True
>>> p2.job.employer
'Capnpy corporation'
```
Note
The reason why you have to use the group constructor is that it automatically inserts the special `undefined` value in the right positions:
```
>>> from capnpy.struct_ import undefined
>>> undefined
<undefined>
>>> Person.Job(unemployed=None)
(None, <undefined>, <undefined>)
>>> Person.Job(employer='Capnpy corporation')
(<undefined>, 'Capnpy corporation', <undefined>)
```
### “Compact” structs[¶](#compact-structs)
A struct object is said to be “compact” if:
> 1. there is no gap between the data and pointers sections
> 2. there is no gap between the children
> 3. the pointers to the children are ordered
> 4. the children are recursively compact
The compactness of a message depends on the implementation which generates it.
The most natural way to generate Cap’n Proto messages is to write them in pre-order (i.e., you write first the root, then its children in order,
recursively). If a message is generated this way, without introducing gaps, it is automatically compact.
Messages created by `capnpy` are always compact.
You can check for compactness by calling the `_is_compact` method:
```
>>> mod = capnpy.load_schema('example_compact')
>>> p = mod.Point(1, 2)
>>> p._is_compact()
True
```
#### List items[¶](#list-items)
Cap’n Proto lists are implemented in such a way that items are placed one next to the other, and the children of the items are placed at the end of the list body. This means that, if the items have children, surely there will be a gap.
Hence, as soon as you have a Cap’n Proto list whose items have pointers, the items are **not** compact, even if the list as a whole is.
```
>>> mod = capnpy.load_schema('example_compact')
>>> p0 = mod.Point(1, 2, name='p0')
>>> p1 = mod.Point(3, 4, name='p1')
>>> poly = mod.Polygon(points=[p0, p1])
>>> poly._is_compact()
True
>>> poly.points[0]._is_compact()
False
```
#### The `compact()` method[¶](#the-compact-method)
Cap’n Proto messages can be arbitrarily large and occupy a large amount of memory;
moreover, when you access a struct field or a list item, the resulting object keeps the whole message alive.
However, sometimes you are interested in keeping alive only a smaller part of it:
you can accomplish this by calling the `compact()` method, which creates a new, smaller message containing only the desired subset. Also, as the name suggests, the newly created message is guaranteed to be compact:
```
>>> mod = capnpy.load_schema('example_compact')
>>> poly = mod.Polygon([mod.Point(1, 2, 'p0'), mod.Point(3, 4, 'p1')])
>>> len(poly._seg.buf)
80
>>> p0 = poly.points[0]
>>> len(p0._seg.buf)    # p0 keeps the whole segment alive
80
>>> p0._is_compact()
False
>>> pnew = p0.compact()
>>> len(pnew._seg.buf)  # pnew keeps only a subset alive
40
>>> pnew._is_compact()
True
```
### Equality and hashing[¶](#equality-and-hashing)
By default, structs are not hashable and cannot be compared:
```
>>> p1 = example.Point(x=1, y=2)
>>> p2 = example.Point(x=1, y=2)
>>> p1 == p2
Traceback (most recent call last):
...
TypeError: Cannot hash or compare capnpy structs. Use the $Py.key annotation to enable it
```
By specifying the `$Py.key` annotation, you explicitly tell `capnpy` which fields to consider when doing equality testing and hashing:
```
@0xaff59c0b39ac4242;
using Py = import "/capnpy/annotate.capnp";
# the name will be ignored in comparisons, as it is NOT in the key
struct Point $Py.key("x, y") {
x @0 :Int64;
y @1 :Int64;
name @2 :Text;
}
```
```
>>> mod = capnpy.load_schema('example_key')
>>> Point = mod.Point
>>> p1 = Point(1, 2, "p1")
>>> p2 = Point(1, 2, "p2")
>>> p3 = Point(3, 4, "p3")
>>>
>>> p1 == p2
True
>>> p1 == p3
False
```
You can also use them as dictionary keys:
```
>>> d = {}
>>> d[p1] = 'hello'
>>> d[p2]
'hello'
```
Tip
If you have many fields, you can use `$Py.key("*")` to include all of them in the comparison key: this is equivalent to explicitly listing all the fields which are present in the schema **at the moment of compilation**. In particular, be aware that if you later get objects which come from a *newer* schema, the additional fields will
**not** be considered in the comparisons.
Moreover, the structs are guaranteed to hash and compare equal to the corresponding tuples:
```
>>> p1 == (1, 2)
True
>>> p3 == (3, 4)
True
>>> d[(1, 2)]
'hello'
```
#### Rationale[¶](#rationale)
Why aren't structs comparable by default, forcing you to manually specify
`$Py.key`? Couldn't `capnpy` be smart enough to figure it out by itself?
We chose to use `$Py.key` because it is not obvious what the right thing to do is in the presence of schema evolution. For example, suppose you start with a previous version of `struct Point` which contains only `x` and `y`:
```
struct OlderPoint {
x @0 :Int64;
y @1 :Int64;
}
```
```
>>> OlderPoint = mod.OlderPoint
>>> p1 = OlderPoint(1, 2) # there is no "name" yet
```
Then, you receive some other object created with a newer schema which contains an additional field, such as our `Point`. Since `Point` is an evolution of
`OlderPoint`, it is perfectly legit to load it:
```
>>> p_with_name = Point(1, 2, 'this is my name')
>>> message_from_the_future = p_with_name.dumps()
>>> p2 = OlderPoint.loads(message_from_the_future)
>>> p2.x, p2.y
(1, 2)
```
Now, note that the underlying data contains the name, although we don’t have
```
>>> hasattr(p2, 'name')
False
>>> 'this is my name' in p2._seg.buf
True
```
So, what should `p1 == p2` return? We might choose to simply ignore the
`name` and return `True`. Or choose to consider `p1.name` equal to the empty string, or to `None`, and thus return `False`. Or we could declare that two objects are equal when their canonical representation is the same,
which introduces even more subtle consequences.
According to the Zen of Python:
> *Explicit is better than implicit.*
> *In the face of ambiguity, refuse the temptation to guess.*
Hence, we require you to explicitly specify which fields to consider.
### Extending `capnpy` structs[¶](#extending-capnpy-structs)
As described above, each capnproto `struct` is converted into a Python class. With `capnpy` you can easily add methods by using the `__extend__`
class decorator:
```
>>> import math
>>> import capnpy
>>> Point = example.Point
>>>
>>> @Point.__extend__
... class Point:
... def distance(self):
... return math.sqrt(self.x**2 + self.y**2)
...
>>>
>>> p = Point(x=3, y=4)
>>> p.distance()
5.0
```
Although it seems magical, `__extend__` is much simpler than it looks: what it does is simply to copy the content of the new class body `Point` into the body of the automatically-generated `example.Point`; the result is that
`example.Point` contains both the original fields and the new methods.
When loading a schema, e.g. `example.capnp`, `capnpy` also searches for a file named `example_extended.py` in the same directory. If it exists, the code is executed in the same namespace as the schema being loaded, meaning that it is the perfect place to put the `__extend__` code to be sure that it will be immediately available. For example, suppose you have the following `example_extended.py` in the same directory as `example.capnp`:
```
# example_extended.py
import math

# Note that the Point class is already available, as this code is executed
# inside the namespace of the module loaded from example.capnp

@Point.__extend__
class Point:
def distance(self):
return math.sqrt(self.x**2 + self.y**2)
```
Then, the `distance` method will be immediately available as soon as we load the schema:
```
>>> import capnpy
>>> example = capnpy.load_schema('example')
>>> p = example.Point(3, 4)
>>> print(p.distance())
5.0
```
### Reflection API[¶](#reflection-api)
Using the reflection API, it is possible to programmatically query information about a schema, for example which fields are inside a struct.
The main entry point is the function
`capnpy.get_reflection_data()`, which returns the metadata for a given module as an instance of `ReflectionData`.
```
>>> mod = capnpy.load_schema('example')
>>> reflection = capnpy.get_reflection_data(mod)
```
Under the hood, the `capnp` compiler produces a [capnproto representation](https://github.com/antocuni/capnpy/blob/master/capnpy/schema.capnp)
of the parsed schema, where most capnproto entities are represented by
[nodes](https://github.com/antocuni/capnpy/blob/master/capnpy/schema.capnp#L30). You can use `get_node` to get the capnproto node corresponding to a given Python-level entity:
```
>>> # get the node for the Point struct
>>> node = reflection.get_node(mod.Point)
>>> type(node)
<class 'capnpy.schema.Node__Struct'>
>>> node.displayName[-19:]
'example.capnp:Point'
>>> node.which()
<Node__tag__.struct: 1>
>>> node.is_struct()
True
>>> for f in node.struct.fields:
... print(f)
...
<Field 'x': int64>
<Field 'y': int64>
```
Note
By default, reflection data is included into all compiled schemas. You can change the behavior by setting the [option](#option)
`include_reflection_data` to `False`.
#### Nodes vs Python entities[¶](#nodes-vs-python-entities)
When compiling a schema, `capnpy` generates Python entities from nodes: for example, structs are compiled as Python classes, and fields as Python properties. Although closely related, they are not always equivalent: for example, `Field.name` is always `camelCase`, but the Python property might be called differently, depending on the [compilation options](#compilation-options).
For example, consider the following schema:
```
@0xe62e66ea90a396da;
struct Foo {
myField @0 :Int64;
}
enum Color {
lightRed @0;
darkGreen @1;
}
```
To get the correct Python-level name, you can call `reflection.field_name()`:
```
>>> mod = capnpy.load_schema('example_reflection')
>>> reflection = capnpy.get_reflection_data(mod)
>>> node = reflection.get_node(mod.Foo)
>>> f = node.get_struct_fields()[0]
>>> f
<Field 'myField': int64>
>>> reflection.field_name(f)
'my_field'
```
This works also for enums:
```
>>> node = reflection.get_node(mod.Color)
>>> node.is_enum()
True
>>> enumerants = node.get_enum_enumerants()
>>> enumerants[0].name
'lightRed'
>>> reflection.field_name(enumerants[0])
'light_red'
```
#### Inspecting annotations[¶](#inspecting-annotations)
The Reflection API provides methods to inspect capnproto annotations. Consider the following schema, in which we use custom annotations to map structs to database tables:
```
@0x801e5c7f340eaf8f;
annotation dbTable(struct) :Text;
annotation dbPrimaryKey(field) :Void;
struct Person $dbTable("Persons") {
id @0 :UInt64 $dbPrimaryKey;
firstName @1 :Text;
lastName @2 :Text;
school @3 :UInt64;
}
struct School $dbTable("Schools") {
id @0 :UInt64 $dbPrimaryKey;
name @1 :Text;
city @2 :Text;
}
```
You can use `has_annotation()` and `get_annotation()` to query about them:
```
>>> mod = capnpy.load_schema('example_reflection_db')
>>> reflection = capnpy.get_reflection_data(mod)
>>> reflection.has_annotation(mod.Person, mod.dbTable)
True
>>> reflection.get_annotation(mod.Person, mod.dbTable)
'Persons'
```
The following shows a complete example of how to use annotations to create a simple dump of the DB structure. It is also worth noticing the usage of
`reflection.field_name()` to convert from e.g. `firstName` to
`first_name`:
```
>>> def print_table(node):
... table = reflection.get_annotation(node, mod.dbTable)
... print('DB Table:', table)
... for f in node.get_struct_fields():
... print(' ', reflection.field_name(f), end='')
... if reflection.has_annotation(f, mod.dbPrimaryKey):
... print(' PRIMARY KEY', end='')
... print()
>>>
>>> for node in reflection.allnodes.values():
... if reflection.has_annotation(node, mod.dbTable):
... print_table(node)
...
DB Table: Persons
id PRIMARY KEY
first_name
last_name
    school
DB Table: Schools
id PRIMARY KEY
name
city
```
### `capnpy` vs `pycapnp`[¶](#capnpy-vs-pycapnp)
To be written
Changelog[¶](#changelog)
---
### 0.8.1[¶](#id1)
* Fix the Reflection API in the presence of large schemas, which `capnp`
compiles using multiple segments and far pointers.
### 0.8.0[¶](#id2)
* Improve the `shortrepr()` method and consequently the `__repr__` of capnpy structs: the goal is to make the output of `shortrepr()` fully compatible with the standard `capnp encode` tool, so that it is possible to reconstruct the original binary message from a capnpy textual dump.
* Fix a corner case when reading far pointers: this bug prevented capnpy from parsing large schemas under some conditions.
* Add a new compilation option to control whether to include the Reflection data: see [Compilation options](index.html#option).
* Improve support for `const` inside capnproto schemas: it is now possible to declare struct and list constants.
### 0.7.0[¶](#id3)
* Add the [Reflection API](index.html#reflection-api), which makes it possible to programmatically query information about a schema, for example which fields are inside a struct.
### 0.6.4[¶](#id4)
* Fix `$Py.groups` collisions ([PR #45](https://github.com/antocuni/capnpy/pull/45)).
### 0.6.3[¶](#id5)
* Fix the repr of text fields when `textType=unicode`.
### 0.6.2[¶](#id6)
* Don’t crash if we can’t determine the version of `capnp` ([PR #43](https://github.com/antocuni/capnpy/pull/43)).
### 0.6.1[¶](#id7)
* Improve `load()` and `load_all()`. Try harder to distinguish between a clean close of the connection and an unclean one: now we raise EOFError
*only* if we read an empty string at the very beginning of the message.
* Fix constructors when using a `$Py.nullable` on a group value.
### 0.6[¶](#id8)
* Add the new `text_type` option (see [Compilation options](index.html#option)). It is now possible to choose whether `Text` fields are represented as bytes or unicode.
Benchmarks[¶](#benchmarks)
---
Every time we push new code to github, our [Continuous Integration System](https://travis-ci.org/antocuni/capnpy/)
re-runs all the benchmarks and [regenerates](https://readthedocs.org/projects/capnpy/builds/) these charts.
This section shows the current benchmark results and compares `capnpy`
to various alternative implementations. [Evolution over time](#evolution-over-time) shows how
`capnpy` performance has evolved.
### How to read the charts[¶](#how-to-read-the-charts)
For each benchmark we show two charts, one for CPython and one for PyPy. Make sure to notice the different scale on the Y axis: PyPy is often an order of magnitude faster than CPython, so it does not make sense to directly compare them, but inside each chart it is useful to compare the performance of `capnpy` to the other reference points.
Moreover, all benchmarks are written so that they repeat the same operation for a certain number of iterations inside a loop. The charts show the total time spent in the loop, not the time per iteration. Again, it is most useful to just compare `capnpy` to the other reference points.
Most benchmarks compare the performance of `capnpy` objects against alternative implementations. In particular:
* **instance**: objects are instances of plain Python classes. This is a useful reference point because often it represents the best we can potentially do. The goal of `capnpy` is to be as close as possible to instances.
* **namedtuple**: same as above, but using `collections.namedtuple` instead of Python classes.
* **[pycapnp](http://jparyani.github.io/pycapnp/)**: the default Cap’n Proto implementation for Python. It does not work on PyPy.
### Get Attribute[¶](#get-attribute)
This benchmark measures how fast it is to read an attribute out of an object, for different types of attributes.
The benchmarks for `group`, `struct` and `list` are expected to take a bit longer than the others, because after getting the attribute, they “do something” with the result, i.e. reading another attribute in case of
`group` and `struct`, and getting the length of a `list`.
The PyPy charts show that `uint64` fields are much slower than the others:
this is because the benchmarks are run on PyPy 5.4, which misses an optimization in that area. With PyPy 5.6, `uint64` is as fast as `int64`.
### Special union attributes[¶](#special-union-attributes)
If you have a [Union](index.html#union), you can inspect its tag value by calling
`which()`, `__which__()` or one of the `is_*()` methods. Ultimately, all of them boil down to reading an `int16` field, so the corresponding benchmark is included as a reference.
Note that on CPython, `which()` is slower than `__which__()`: this is because the former returns an [Enum](index.html#enum), while the latter returns a raw integer. On the other hand, PyPy is correctly able to optimize away all the abstraction overhead.
### Lists[¶](#lists)
These benchmarks measure the time taken to perform various operations on lists. The difference from the `list` benchmark of the previous section is that here we do not take into account the time taken to **read** the list itself out of its containing struct, but only the time taken to perform the operations after we got it.
The `iter` benchmark iterates over a list of 4 elements.
### Hashing[¶](#hashing)
If you use `$Py.key` (see [Equality and hashing](index.html#equality-and-hashing)), you can `hash`
your objects, and the return value is guaranteed to be the same as the corresponding tuple.
The simplest implementation would be to create the tuple and call `hash()` on it. However, `capnpy` uses an ad-hoc implementation so that it can compute the hash value **without** creating the tuple. This is especially useful if you have `text` fields, as you completely avoid the expensive creation of the string.
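For example, reusing the `Point` with `$Py.key("x, y")` from the [Equality and hashing](index.html#equality-and-hashing) section, the hash matches the corresponding tuple's hash without the tuple ever being built:
```
>>> p1 = Point(1, 2, "p1")
>>> hash(p1) == hash((1, 2))
True
```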
### Constructors[¶](#constructors)
This benchmark measures the time needed to create new objects. Because of the Cap’n Proto specs, this **has** to be more expensive than creating e.g. a new instance, as we need to do extra checks and pack all the objects inside a buffer. However, as the following charts show, creating new `capnpy` objects is almost as fast as creating instances. The charts also show that performance differs depending on the type of the fields of the target struct.
List fields are special: normally, if you pass a list object to an instance or namedtuple, you store only a reference to it. However, if you need to construct a new Cap’n Proto object, you need to copy the whole content of the list into the new buffer. In particular, if it is a list of structs, you need to deep-copy each item of the list, separately. This explains why
`test_list` looks slower than the rest.
### Deep copy[¶](#deep-copy)
Sometimes we need to perform a deep-copy of a Cap’n Proto object. In particular, this is needed:
> * if you construct a new object having a struct field
> * if you construct a new object having a list of structs field
> * if you `dump()` an object which is not “compact”
`capnpy` includes a generic, schema-less implementation which can recursively copy an arbitrary Cap’n Proto pointer into a new buffer. It is written in pure Python but compiled with Cython, and heavily optimized for speed. `pycapnp` relies on the official capnproto implementation written in C++.
The `copy_pointer` benchmark repeatedly copies a big recursive tree, so that the majority of the time is spent inside the deep-copy function and we can ignore the small amount of time spent outside. Thus, we are effectively benchmarking our Cython-based function against the heavily optimized C++
one. The resulting speed is very good. On some machines, it has been measured to be even **faster** than the C++ version.
### Loading messages[¶](#loading-messages)
These benchmarks measure the performance of reading a stream of Cap’n Proto messages, either from a file or from a TCP socket.
Note
`pycapnp` delegates the reading to the underlying C++ library, so you can pass anything with a `fileno()` method: we pass a
`socket` object directly. On the other hand, `capnpy` needs a file-like object, so we pass a [BufferedSocket](usage.html#loading-from-sockets).
### Buffered streams[¶](#buffered-streams)
As explained in the section [Loading from sockets](index.html#loading-from-sockets), `capnpy` provides its own buffered wrapper around `socket`, which is immensely faster than
`socket.makefile()`.
### Dumping messages[¶](#dumping-messages)
These benchmarks measure the performance of dumping an existing `capnpy`
object into a message to be sent over the wire. At minimum, to dump a message you need to copy all the bytes which belong to the object: this is measured by `test_copy_buffer`, which blindly copies the entire buffer and is used as a baseline.
The actual implementation of `dumps()` needs to do more: in particular, it needs to compute the exact range of bytes to copy. Thus, the goal is that
`dumps()` should be as close as possible to `copy_buffer`.
If the structure was inside a `capnpy` list, it will be “non compact”: in other words, it is not represented by a contiguous amount of bytes in memory. In that case, `dumps()` needs to do even more work to produce the message. At the moment of writing, the implementation of `.compact()` is known to be slow and non-optimized.
### Evolution over time[¶](#evolution-over-time) |
mintscan | rust | Rust | Crate mintscan
===
mintscan.rs iqlusion
---
![Crate](https://img.shields.io/crates/v/mintscan.svg)
![Docs](https://docs.rs/mintscan/badge.svg)
![Apache 2.0 Licensed](https://img.shields.io/badge/license-Apache2.0/MIT-blue.svg)
![MSRV](https://img.shields.io/badge/rustc-1.56+-blue.svg)
![Build Status](https://github.com/iqlusioninc/crates/actions/workflows/mintscan.yml/badge.svg)
API client for the Mintscan Cosmos explorer by Cosmostation.
Documentation
### Minimum Supported Rust Version
Rust **1.56**
### License
Copyright © 2021-2022 iqlusion
**mintscan.rs** is distributed under the terms of either the MIT license or the Apache License (Version 2.0), at your option.
See LICENSE (Apache License, Version 2.0) file in the `iqlusioninc/crates`
toplevel directory of this repository or LICENSE-MIT for details.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you shall be dual licensed as above,
without any additional terms or conditions.
Re-exports
---
`pub use self::coin::Coin;`
`pub use tendermint;`

Modules
---
coin: Coin types.
v1: `/v1` API endpoints.

Structs
---
Mintscan: Mintscan API client.

Type Definitions
---
Address: Bech32-encoded address.
Rate: Validator rates.
Struct mintscan::coin::Coin
===
```
pub struct Coin {
pub denom: Denom,
pub amount: Amount,
}
```
Coin defines a token with a denomination and an amount.
Fields
---
`denom: Denom`: Denomination.
`amount: Amount`: Amount.
Trait Implementations
---
### impl Clone for Coin
#### fn clone(&self) -> Coin
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Coin
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'de> Deserialize<'de> for Coin
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for Coin
### impl Send for Coin
### impl Sync for Coin
### impl Unpin for Coin
### impl UnwindSafe for Coin
Module mintscan::coin
===
Coin types.
Structs
---
AmountAmount.
CoinCoin defines a token with a denomination and an amount.
DenomDenomination.
Module mintscan::v1
===
`/v1` API endpoints.
Modules
---
staking`/v1/staking` endpoints.
Structs
---
Status`/v1/status` endpoint.
Struct mintscan::Mintscan
===
```
pub struct Mintscan { /* private fields */ }
```
Mintscan API client.
Implementations
---
### impl Mintscan
#### pub fn new(hostname: impl Into<String>) -> Self
Create a new Mintscan client for the given API hostname
(e.g. `api.cosmostation.io`).
#### pub async fn status(&self) -> Result<Status>
Get `/v1/status` endpoint.
#### pub async fn validator(&self, addr: impl Into<Address>) -> Result<Validator>
Get `/v1/staking/validator` endpoint.
Accepts a Bech32-encoded account address for the validator.
#### pub async fn validator_uptime(&self, addr: impl Into<Address>) -> Result<Uptime>
Get `/v1/staking/validator/uptime` endpoint.
Accepts a Bech32-encoded account address for the validator.
Trait Implementations
---
### impl From<HttpsClient> for Mintscan
#### fn from(client: HttpsClient) -> Mintscan
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Mintscan
### impl Send for Mintscan
### impl Sync for Mintscan
### impl Unpin for Mintscan
### impl !UnwindSafe for Mintscan
Type Definition mintscan::Address
===
```
pub type Address = String;
```
Bech32-encoded address.
Type Definition mintscan::Rate
===
```
pub type Rate = String;
```
Validator rates. |
@thi.ng/geom-clip-line | npm | JavaScript | This project is part of the
[@thi.ng/umbrella](https://github.com/thi-ng/umbrella/) monorepo and anti-framework.
* [About](#about)
* [Status](#status)
* [Related packages](#related-packages)
* [Installation](#installation)
* [Dependencies](#dependencies)
* [API](#api)
* [Authors](#authors)
* [License](#license)
[About](#about)
---
2D line clipping (Liang-Barsky). This is a support package for [@thi.ng/geom](https://github.com/thi-ng/umbrella/tree/develop/packages/geom).
Current implementation is partially based on [toxiclibs](http://toxiclibs.org)
(Java) and Clojure version [thi.ng/geom-clj](http://thi.ng/geom-clj). Also see
[@thi.ng/geom-clip-poly](https://github.com/thi-ng/umbrella/blob/develop/packages/geom-clip-poly)
sister package.
The following main functions are provided:
* [`clipLinePoly()`](https://docs.thi.ng/umbrella/geom-clip-line/functions/clipLinePoly.html)
* [`clipLineSegmentPoly()`](https://docs.thi.ng/umbrella/geom-clip-line/functions/clipLineSegmentPoly.html)
* [`clipPolylinePoly()`](https://docs.thi.ng/umbrella/geom-clip-line/functions/clipPolylinePoly.html)
* [`liangBarsky2()`](https://docs.thi.ng/umbrella/geom-clip-line/functions/liangBarsky2.html)
[Status](#status)
---
**STABLE** - used in production
[Search or submit any issues for this package](https://github.com/thi-ng/umbrella/issues?q=%5Bgeom-clip-line%5D+in%3Atitle)
[Related packages](#related-packages)
---
* [@thi.ng/geom-clip-poly](https://github.com/thi-ng/umbrella/tree/develop/packages/geom-clip-poly) - 2D polygon clipping / offsetting (Sutherland-Hodgman, Greiner-Hormann)
[Installation](#installation)
---
```
yarn add @thi.ng/geom-clip-line
```
ES module import:
```
<script type="module" src="https://cdn.skypack.dev/@thi.ng/geom-clip-line"></script>
```
[Skypack documentation](https://docs.skypack.dev/)
For Node.js REPL:
```
const geomClipLine = await import("@thi.ng/geom-clip-line");
```
Package sizes (brotli'd, pre-treeshake): ESM: 755 bytes
[Dependencies](#dependencies)
---
* [@thi.ng/api](https://github.com/thi-ng/umbrella/tree/develop/packages/api)
* [@thi.ng/geom-isec](https://github.com/thi-ng/umbrella/tree/develop/packages/geom-isec)
* [@thi.ng/vectors](https://github.com/thi-ng/umbrella/tree/develop/packages/vectors)
[API](#api)
---
[Generated API docs](https://docs.thi.ng/umbrella/geom-clip-line/)
```
import { clipPolylinePoly, liangBarsky2 } from "@thi.ng/geom-clip-line";
clipPolylinePoly(
// polyline vertices
[[10, -50], [30, 30], [-50, 50], [150, 50], [70, 70], [90, 150]],
// boundary polygon vertices
[[0, 0], [100, 0], [100, 100], [0, 100]]
);
// result is 3 polylines:
// (since the original is temporarily leaving the poly)
// [
// [ [ 22.5, 0 ], [ 30, 30 ], [ 0, 37.5 ] ],
// [ [ 0, 50 ], [ 100, 50 ] ],
// [ [ 100, 62.5 ], [ 70, 70 ], [ 77.5, 100 ] ]
// ]
// Liang-Barsky is optimized for rectangular clipping regions
liangBarsky2(
// line end points
[-10, -20], [30, 400],
// min/max clip rect
[0, 0], [100, 200]
)
// [ [ 0, 85 ], [ 10.952380952380953, 200 ], 0.25, 0.5238095238095238 ]
// returns undefined if line is completely outside the clip rect
liangBarsky2(
// line end points
[-10, -20], [-30, 400],
// min/max bbox
[0, 0], [100, 200]
)
// undefined
```
[Authors](#authors)
---
* [<NAME>](https://thi.ng)
If this project contributes to an academic publication, please cite it as:
```
@misc{thing-geom-clip-line,
title = "@thi.ng/geom-clip-line",
author = "<NAME>",
note = "https://thi.ng/geom-clip-line",
year = 2013
}
```
[License](#license)
---
© 2013 - 2023 <NAME> // Apache License 2.0
Readme
---
### Keywords
* 2d
* bbox
* clipping
* geometry
* graphics
* liang-barsky
* line
* typescript |
openprovider | rust | Rust | Struct openprovider::Builder
===
```
pub struct Builder { /* private fields */ }
```
Constructs an API client.
Right now, this builder does not accept any options, but more may be added in the future.
```
let mut client = openprovider::Builder::new().build();
// use the client to make requests
```
Implementations
---
### impl Builder
#### pub fn new() -> Self
Create a new API client builder object.
#### pub fn token(self, token: Option<String>) -> Self
Make sure the client to be built is configured to use this token.
#### pub fn max_retries(self, max_retries: u32) -> Self
Limit the amount of HTTP request retries to the given number.
#### pub fn no_max_retries(self) -> Self
Allow as many HTTP request retries as needed in the API client.
#### pub fn build(self) -> Client
Build the actual API client. This is a destructive operation.
Auto Trait Implementations
---
### impl RefUnwindSafe for Builder
### impl Send for Builder
### impl Sync for Builder
### impl Unpin for Builder
### impl UnwindSafe for Builder
Struct openprovider::Client
===
```
pub struct Client { /* private fields */ }
```
Communicates with the OpenProvider.nl API.
```
let mut client = openprovider::Client::default();
let token = client.login("bob", "123456789").await?;
client.set_token(token);
```
Implementations
---
### impl Client
#### pub async fn login<S1: AsRef<str>, S2: AsRef<str>>(
&mut self,
username: S1,
password: S2
) -> Result<String>
Authenticate with the OpenProvider API and receive a fresh token.
Use `set_token` to assign the token to the client that should use it.
```
let mut client = openprovider::Client::default();
let token = client.login("bob", "123456789").await?;
client.set_token(token);
```
#### pub fn get_token(&self) -> Option<&String>
Get the current token used for authorization, if any.
#### pub fn has_token(&self) -> bool
Return `true` if a token is present and ready to be used for authorization; `false`
otherwise.
#### pub fn set_token<S: Into<String>>(&mut self, token: S)
Use `login` to obtain a token from a combination of a username and password.
```
let mut client = openprovider::Client::default();
match std::env::var("OPENPROVIDER_TOKEN") {
Ok(token) => client.set_token(token),
Err(_) => {},
}
```
#### pub async fn list_zones(&mut self) -> Result<Vec<Zone>>
List all known DNS zones for this particular authenticated user.
```
let mut client = openprovider::Client::default();
// ...
let zones = client
.list_zones()
.await?
.iter()
.filter(|z| !z.is_deleted);
```
#### pub async fn get_zone<S: AsRef<str>>(&mut self, name: S) -> Result<Zone>
Get more information about a specific DNS zone.
```
let client = openprovider::Client::default();
let info = client.get_zone("example.com").await?;
eprintln!("Zone created on {}", info.creation_date);
eprintln!("Zone modified on {}", info.modification_date);
```
#### pub async fn list_records<S: AsRef<str>>(
&mut self,
name: S
) -> Result<Vec<Record>>
List all records that belong to the provided DNS zone.
```
use openprovider::RecordType;
let client = openprovider::Client::default();
let records = client.list_records("example.com").await?;
for record in records {
if record.name == "wiki" && record.ty == RecordType::A {
eprintln!("Found our wiki A-record pointing to {}", record.value);
}
}
```
#### pub async fn set_record<S: AsRef<str>>(
&mut self,
name: S,
orig_record: &Record,
new_record: &Record
) -> Result<()>
Update a given DNS record with new attributes.
Due to the way the OpenProvider API works, you must supply the old DNS record as well.
You can do this by using `list_records` and filtering on the DNS record that you want to change.
```
use openprovider::RecordType;
let records = client.list_records("example.com").await?;
let record = records
    .iter()
    .find(|r| r.name == "wiki" && r.ty == RecordType::A)
    .expect("A record for wiki.example.com not found");
let mut new_record = record.clone();
new_record.value = "93.184.216.34".to_string();
client.set_record("example.com", record, &new_record).await?;
```
Trait Implementations
---
### impl Default for Client
#### fn default() -> Self
Returns the “default value” for a type.

Auto Trait Implementations
---
### impl !RefUnwindSafe for Client
### impl Send for Client
### impl Sync for Client
### impl Unpin for Client
### impl !UnwindSafe for Client
Struct openprovider::SectigoData
===
```
pub struct SectigoData {
pub autorenew: bool,
pub order_date: String,
pub renewal_date: String,
pub securd: bool,
pub website_id: u64,
}
```
Represents additional data about premium Sectigo DNS services for a DNS zone.
Fields
---
`autorenew: bool`
`order_date: String`
`renewal_date: String`
`securd: bool`
`website_id: u64`

Trait Implementations
---
### impl Clone for SectigoData
#### fn clone(&self) -> SectigoData
Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for SectigoData

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl<'de> Deserialize<'de> for SectigoData

#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>

Deserialize this value from the given Serde deserializer.

### impl Serialize for SectigoData

#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer

Serialize this value into the given Serde serializer.

Auto Trait Implementations
---
### impl RefUnwindSafe for SectigoData
### impl Send for SectigoData
### impl Sync for SectigoData
### impl Unpin for SectigoData
### impl UnwindSafe for SectigoData
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
### impl<T> Borrow<T> for T where T: ?Sized
### impl<T> BorrowMut<T> for T where T: ?Sized
### impl<T> From<T> for T
### impl<T> Instrument for T
### impl<T, U> Into<U> for T where U: From<T>
### impl<T> ToOwned for T where T: Clone
### impl<T, U> TryFrom<U> for T where U: Into<T>
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
### impl<T> WithSubscriber for T
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Struct openprovider::Zone
===
```
pub struct Zone {
pub active: bool,
pub creation_date: String,
pub dnskey: Option<String>,
pub id: u64,
pub ip: String,
pub is_deleted: bool,
pub is_shadow: bool,
pub is_spamexperts_enabled: bool,
pub modification_date: String,
pub name: String,
pub premium_dns: Option<PremiumDnsData>,
pub provider: String,
pub records: Option<Vec<Record>>,
pub reseller_id: u64,
pub ty: String,
}
```
Represents the DNS configuration of a single domain.
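Once a `Zone` has been fetched, its optional `records` field can be inspected directly. A small sketch (assuming `zone` came from `get_zone`; `records` is not populated in every API response):
```
if let Some(records) = &zone.records {
    for record in records {
        eprintln!("{} -> {}", record.name, record.value);
    }
}
```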
Fields
---
`active: bool`
`creation_date: String`
`dnskey: Option<String>`
`id: u64`
`ip: String`
`is_deleted: bool`
`is_shadow: bool`
`is_spamexperts_enabled: bool`
`modification_date: String`
`name: String`
`premium_dns: Option<PremiumDnsData>`
`provider: String`
`records: Option<Vec<Record>>`
`reseller_id: u64`
`ty: String`

Trait Implementations
---
### impl Clone for Zone
#### fn clone(&self) -> Zone
Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for Zone

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl<'de> Deserialize<'de> for Zone

#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>

Deserialize this value from the given Serde deserializer.

### impl Serialize for Zone

#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer

Serialize this value into the given Serde serializer.

Auto Trait Implementations
---
### impl RefUnwindSafe for Zone
### impl Send for Zone
### impl Sync for Zone
### impl Unpin for Zone
### impl UnwindSafe for Zone
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
### impl<T> Borrow<T> for T where T: ?Sized
### impl<T> BorrowMut<T> for T where T: ?Sized
### impl<T> From<T> for T
### impl<T> Instrument for T
### impl<T, U> Into<U> for T where U: From<T>
### impl<T> ToOwned for T where T: Clone
### impl<T, U> TryFrom<U> for T where U: Into<T>
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
### impl<T> WithSubscriber for T
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Enum openprovider::PremiumDnsData
===
```
pub enum PremiumDnsData {
Sectigo(SectigoData),
}
```
Represents additional data about premium DNS services for a DNS zone.
Variants
---
### Sectigo(SectigoData)
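Callers typically pattern-match on this enum to reach the provider-specific data. A minimal sketch (assuming `zone` is a `Zone` returned by `get_zone`):
```
if let Some(openprovider::PremiumDnsData::Sectigo(data)) = &zone.premium_dns {
    eprintln!("Sectigo order placed on {}", data.order_date);
    eprintln!("Auto-renew enabled: {}", data.autorenew);
}
```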
Trait Implementations
---
### impl Clone for PremiumDnsData
#### fn clone(&self) -> PremiumDnsData
Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for PremiumDnsData

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl<'de> Deserialize<'de> for PremiumDnsData

#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>

Deserialize this value from the given Serde deserializer.

### impl Serialize for PremiumDnsData

#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer

Serialize this value into the given Serde serializer.

Auto Trait Implementations
---
### impl RefUnwindSafe for PremiumDnsData
### impl Send for PremiumDnsData
### impl Sync for PremiumDnsData
### impl Unpin for PremiumDnsData
### impl UnwindSafe for PremiumDnsData
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
### impl<T> Borrow<T> for T where T: ?Sized
### impl<T> BorrowMut<T> for T where T: ?Sized
### impl<T> From<T> for T
### impl<T> Instrument for T
### impl<T, U> Into<U> for T where U: From<T>
### impl<T> ToOwned for T where T: Clone
### impl<T, U> TryFrom<U> for T where U: Into<T>
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
### impl<T> WithSubscriber for T
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Package ‘turboEM’
October 14, 2022
Title A Suite of Convergence Acceleration Schemes for EM, MM and Other
Fixed-Point Algorithms
Description Algorithms for accelerating the convergence of slow, monotone sequences from
smooth, contraction mappings such as the EM and MM algorithms. It can be used to
accelerate any smooth, linearly convergent fixed-point iteration. A tutorial-style
introduction to this package is available in a vignette on the CRAN download page
or, when the package is loaded in an R session, with vignette("turboEM").
Depends R (>= 2.12.0), doParallel, foreach, numDeriv, quantreg
Imports iterators
Suggests setRNG
Version 2021.1
LazyLoad yes
License GPL-2
Author <NAME> [aut],
<NAME> [aut, cre],
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
URL https://coah.jhu.edu/people/Faculty_personal_Pages/Varadhan.html
Repository CRAN
NeedsCompilation no
Date/Publication 2021-08-05 04:20:02 UTC
R topics documented:
implants
parties
psychfactors
rats
turbo
turboem
turboSim
turbosim
votes
implants Fetal Death in Mice
Description
Data on the number of fetal deaths arising from dominant lethal testing in mice.
Usage
data(implants)
Format
A data frame containing the number of dead and survived implants from 523 mice.
Source
<NAME> and <NAME> (2010). MM Algorithms for Some Discrete Multivariate Distributions. Jour-
nal of Computational and Graphical Statistics. 19 (3) 645-665. Supplementary material.
References
Haseman JK and Soares ER (1976). The Distribution of Fetal Death in Control Mice and Its Im-
plications on Statistical Tests for Dominant Lethal Effects, Mutation Research/Fundamental and
Molecular Mechanisms of Mutagenesis. 41, 277-288.
parties Political Parties
Description
Political parties of members of the U.S. House of Representatives, 2005.
Usage
data(votes)
Format
A vector of integers representing the political parties, with 0, 1, and 2 corresponding to Republicans,
Democrats, and Independents, respectively.
Source
Diaconis PD, <NAME>, and <NAME> (2008). Horseshoes in Multidimensional Scaling and Local
Kernel Methods. Annals of Applied Statistics. 2 (3) 777-807. Supplementary material.
psychfactors Psychiatric Test Correlations
Description
Intercorrelations of outcomes from a set of psychiatric tests for 148 children.
Usage
data(psychfactors)
Format
A 10-by-10 correlation matrix.
Source
Maxwell AE (1961). Recent Trends in Factor Analysis. Journal of the Royal Statistical Society.
Series A (General). 124 (1) 49-59.
rats Population Growth of Rats
Description
Longitudinal measurements of the weights of rats in control and treatment groups.
Usage
data(rats)
Format
A data frame containing weights from 60 rats divided into two groups, with each rat measured at 5
time points.
Source
Gelfand AE, Hills SE, Racine-Poon A, and Smith AFM (1990). Illustration of Bayesian inference
in normal data models using Gibbs sampling. Journal of the American Statistical Association. 85,
972-985.
turbo Methods for objects of class "turbo"
Description
The turbo class represents results from parameter estimation in fixed-point mapping problems. The
turboem function outputs objects of class turbo.
Usage
## S3 method for class 'turbo'
print(x, ...)
## S3 method for class 'turbo'
pars(x, ...)
## S3 method for class 'turbo'
error(x, ...)
## S3 method for class 'turbo'
plot(x, which.methods = seq_along(x$method),
method.names = x$method[which.methods], xlim, ylim, ...)
## S3 method for class 'turbo'
grad(x, objfn=x$objfn, which.methods = seq_along(x$method),
method.names = x$method[which.methods], ...)
## S3 method for class 'turbo'
hessian(x, objfn=x$objfn, which.methods = seq_along(x$method),
method.names = x$method[which.methods], ...)
## S3 method for class 'turbo'
stderror(x, objfn=x$objfn, which.methods = seq_along(x$method),
method.names = x$method[which.methods], ...)
Arguments
x An object of class turbo, typically the output of a call to turboem.
which.methods A vector identifying for which subset of algorithms results are desired.
method.names A vector of unique identifiers for the algorithms for which results are being
provided.
xlim Optional range for the x-axis of the trace plot.
ylim Optional range for the y-axis of the trace plot.
objfn Objective function. Usually this is taken to be the appropriate component of a
turbo object.
... Additional arguments.
Value
print Shows a brief summary of the results from fitting the acceleration schemes.
pars Prints the fixed-point values across acceleration schemes at termination of the
algorithms.
error Prints any error messages from running the acceleration schemes
plot Shows a trace plot of the objective function value over iterations. This method is
only available if the call to turboem had the argument control.run[["keep.objfval"]]=TRUE.
grad Calculates the gradient of the objective function evaluated at the fixed-point
across acceleration schemes. Uses numerical methods from the package numDeriv.
hessian Calculates the Hessian of the objective function evaluated at the fixed-point
across acceleration schemes. Uses numerical methods from the package numDeriv.
stderror Provides estimates of the standard error of the fixed-point across acceleration
schemes.
See Also
turboem
Examples
###########################################################################
# Also see the vignette by typing:
# vignette("turboEM")
#
# EM algorithm for Poisson mixture estimation
fixptfn <- function(p,y) {
# The fixed point mapping giving a single E and M step of the EM algorithm
#
pnew <- rep(NA,3)
i <- 0:(length(y)-1)
zi <- p[1]*exp(-p[2])*p[2]^i / (p[1]*exp(-p[2])*p[2]^i + (1 - p[1])*exp(-p[3])*p[3]^i)
pnew[1] <- sum(y*zi)/sum(y)
pnew[2] <- sum(y*i*zi)/sum(y*zi)
pnew[3] <- sum(y*i*(1-zi))/sum(y*(1-zi))
p <- pnew
return(pnew)
}
objfn <- function(p,y) {
# Objective function whose local minimum is a fixed point
# negative log-likelihood of binary poisson mixture
i <- 0:(length(y)-1)
loglik <- y*log(p[1]*exp(-p[2])*p[2]^i/exp(lgamma(i+1)) +
(1 - p[1])*exp(-p[3])*p[3]^i/exp(lgamma(i+1)))
return ( -sum(loglik) )
}
# Real data from Hasselblad (JASA 1969)
poissmix.dat <- data.frame(death=0:9, freq=c(162,267,271,185,111,61,27,8,3,1))
y <- poissmix.dat$freq
# Use a preset seed so the example is reproducible.
require("setRNG")
old.seed <- setRNG(list(kind="Mersenne-Twister", normal.kind="Inversion",
seed=1))
p0 <- c(runif(1),runif(2,0,4)) # random starting value
# Basic EM algorithm, SQUAREM, and parabolic EM, with default settings
res1 <- turboem(par=p0, y=y, fixptfn=fixptfn, objfn=objfn, method=c("EM", "squarem", "pem"))
# Apply methods for class "turbo"
res1
pars(res1)
grad(res1)
hessian(res1)
stderror(res1)
error(res1)
# We get an error for Dynamic ECME (decme) if we do not specify the boundary function
res2 <- turboem(par=p0, y=y, fixptfn=fixptfn, objfn=objfn,
method=c("EM", "squarem", "pem", "decme"))
res2
error(res2)
# we can't plot the results, because we did not store the objective function value at each iteration
# Changing the options to store the objective function values, we can:
res1keep <- turboem(par=p0, y=y, fixptfn=fixptfn, objfn=objfn, method=c("EM", "squarem", "pem"),
control.run=list(keep.objfval=TRUE))
plot(res1keep, xlim=c(0.001, 0.02))
turboem A suite of acceleration schemes for fixed-point iterations
Description
Globally-convergent, partially monotone, acceleration schemes for accelerating the convergence
of any smooth, monotone, slowly-converging contraction mapping. It can be used to accelerate
the convergence of a wide variety of iterations including the expectation-maximization (EM) algo-
rithms and its variants, majorization-minimization (MM) algorithm, power method for dominant
eigenvalue-eigenvector, Google’s page-rank algorithm, and multi-dimensional scaling.
Usage
turboem(par, fixptfn, objfn, method = c("em","squarem","pem","decme","qn"),
boundary, pconstr = NULL, project = NULL, parallel = FALSE, ...,
control.method = replicate(length(method),list()), control.run = list())
Arguments
par A vector of parameters denoting the initial guess for the fixed point.
fixptfn A vector function, F that denotes the fixed-point mapping. This function is the
most essential input in the package. It should accept a parameter vector as input
and should return a parameter vector of same length. This function defines the
fixed-point iteration: xk+1 = F (xk ). In the case of EM algorithm, F defines a
single E and M step.
objfn This is a scalar function, L, that denotes a “merit” function which attains its local
minimum at the fixed-point of F . This function should accept a parameter vector
as input and should return a scalar value. In the EM algorithm, the merit function
L is the negative log-likelihood. In some problems, a natural merit function may
not exist. However, this argument is required for all of the algorithms *except*
Squarem (which defaults to Squarem-2 if objfn not provided) and EM.
method Specifies which algorithm(s) will be applied. Must be a vector containing one
or more of c("em", "squarem", "pem", "decme", "qn").
boundary Argument required for Dynamic ECME (decme) only. Function to define the
subspaces over which the line search is conducted.
pconstr Optional function for defining boundary constraints on parameter values. Function
maps a vector of parameter values to TRUE if constraints are satisfied. Note
that this argument is only used for the Squarem (squarem), Parabolic EM (pem),
and quasi-Newton (qn) algorithms, and it has no effect on the other algorithms.
A small sketch of pconstr and project is given after this argument list.
project Optional function for defining a projection that maps an out-of-bound parameter
value into the constrained parameter space. Requires the pconstr argument to
be specified in order for the project to be applied.
parallel Logical indicating whether the acceleration schemes will be run in parallel. Note
that the parallel implementation is based on the foreach package, which de-
pends on a parallel backend being registered prior to running turboem(). See
*Details* of foreach.
control.method If method = c(method1, method2, ...), then control.method = list(list1,
list2, ...) where list1 is the list of control parameters for method1, list2
is the list of control parameters for method2, and so on. If length(method)
== 1, then control.method is the list of control parameters for the acceleration
scheme. See *Details*.
control.run List of control parameters for convergence and stopping the algorithms. See
*Details*.
... Arguments passed to fixptfn and objfn.
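As an illustration of the pconstr and project arguments, the following sketch constrains the
Poisson-mixture example from the Examples section below (the box bounds, the clamping constant
1e-8, and the assumption that project maps a parameter vector to a corrected vector are
hypothetical choices for illustration, not package defaults):
pconstr <- function(par) all(par > 0) & par[1] < 1 # mixing weight in (0,1), rates positive
project <- function(par) pmax(pmin(par, c(1 - 1e-8, Inf, Inf)), 1e-8) # clamp into the box
res <- turboem(par = p0, y = y, fixptfn = fixptfn, objfn = objfn,
method = "squarem", pconstr = pconstr, project = project)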
Details
The function turboem is a general-purpose algorithm for accelerating the convergence of any
slowly-convergent (smooth) fixed-point iteration.
The component lists of the control.method are used to specify any changes to default values
of algorithm control parameters. Full names of control list elements must be specified, other-
wise, user specifications are ignored. Default control parameters for method="squarem" are K=1,
square=TRUE, version=3, step.min0=1, step.max0=1, mstep=4, kr=1, objfn.inc=1. Default
control parameters for method="pem" are l=10, h=0.1, a=1.5, and version="geometric". De-
fault control parameters for method="decme" are version="v2" and tol_op=0.01. Default control
parameters for method="qn" are qn=5.
Default values of control.run are: convtype = "parameter", tol = 1.0e-07, stoptype = "maxiter",
maxiter = 1500, maxtime = 60, convfn.user = NULL, stopfn.user = NULL, trace = FALSE, keep.objfval
= FALSE, keep.paramval = FALSE.
There are two ways the algorithm will terminate. Either the algorithm will terminate if conver-
gence has been achieved, or the algorithm will terminate if convergence has not been achieved
within a pre-specified maximum number of iterations or maximum running time. The arguments
convtype, tol, and convfn.user control the convergence criterion. The arguments stoptype,
maxiter, maxtime, and stopfn.user control the alternative stopping criterion.
Two types of convergence criteria have been implemented, with an option for the user to define
his/her own convergence criterion. If convtype = "parameter", then the default convergence cri-
terion is to terminate if sqrt(crossprod(new - old)) < tol, where new denotes the current value
of the fixed point and old denotes the previous fixed-point value. If convtype = "objfn", then
the default convergence criterion is to terminate if abs(new - old) < tol, where new denotes the
current value of the objective function and old denotes the previous value of the objective function.
If the user desires alternate convergence criteria, convfn.user may be specified as a function with
inputs new and old that maps to a logical taking the value TRUE if convergence is achieved and the
value FALSE if convergence is not achieved.
Two types of alternative stopping criteria have been implemented, with the option for the user to
define his/her own stopping criterion. If stoptype = "maxiter", then the algorithm will termi-
nate if convergence has not been achieved within maxiter iterations of the acceleration scheme.
If stoptype = "maxtime", then the algorithm will terminate if convergence has not been achieved
within maxtime seconds of running the acceleration scheme. Note: the running time of the accel-
eration scheme is calculated once every iteration. If the user desires different alternate stopping
criteria than those implemented, stopfn.user may be specified as a function with no inputs that
maps to a logical taking the value TRUE which leads to the algorithm being terminated or the value
FALSE which leads to the algorithm proceeding as usual.
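For illustration, a user-supplied convergence test might look as follows (a sketch; the
relative-change criterion and the tolerances are hypothetical choices):
convfn.rel <- function(new, old) {
sqrt(crossprod(new - old)) / (sqrt(crossprod(old)) + 1e-10) < 1e-8
}
res <- turboem(par = p0, y = y, fixptfn = fixptfn, objfn = objfn,
method = "squarem", control.run = list(convfn.user = convfn.rel))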
convtype A character string equal to "parameter" or "objfn". "parameter" indicates that the
convergence criterion is a function of the current and previous value of the fixed point. objfn
indicates that the convergence criterion is a function of the current and previous value of the
objective function.
tol A small, positive scalar that determines when convergence is achieved. See details above for
convergence criteria currently implemented. Default is 1.e-07.
stoptype A character string equal to "maxiter" or "maxtime" that determines an alternative stop-
ping rule for the algorithm. See details above for stopping rules currently implemented. De-
fault is "maxiter".
maxiter If stoptype = "maxiter", specifies the number of iterations after which the algorithm
will be terminated if convergence has not been achieved. Default is 1500.
maxtime If stoptype = "maxtime", specifies the running time (in seconds) after which the algo-
rithm will be terminated if convergence has not been achieved. Default is 60.
convfn.user Optional, user-specified function for determining whether convergence has been
achieved. Function should take as inputs new and old, where new is the current value (of
the fixed point if convtype = "parameter" and of the objective function value if convtype
= "objfn") and old is the previous value. Function should map to a logical taking the value
TRUE if convergence is achieved (and hence the algorithm is terminated) and the value FALSE
if convergence is not achieved. Default is NULL.
stopfn.user Optional, user-specified function for determining whether to terminate the algorithm
if convergence has not been achieved. See details above for how to specify. Default is NULL.
trace A logical variable denoting whether some of the intermediate results of iterations should be
displayed to the user. Default is FALSE.
keep.objfval A logical variable denoting whether the objective function value at each iteration
should be stored. Default is FALSE.
keep.paramval A logical variable denoting whether the parameter estimates at each iteration
should be stored. Default is FALSE.
Value
turboem returns an object of class turbo. An object of class turbo is a list containing at least the
following components:
fail Vector of logical values whose jth element indicates whether algorithm j failed
(produced an error)
value.objfn Vector of the value of the objective function L at termination for each algorithm.
itr Vector of the number of iterations completed for each algorithm.
fpeval Vector of the number of fixed-point evaluations completed for each algorithm.
objfeval Vector of the number of objective function evaluations completed for each algo-
rithm.
convergence Vector of logical values whose jth element indicates whether algorithm j satis-
fied the convergence criterion before termination
runtime Matrix whose jth row contains the “user”, “system”, and “elapsed” time for
running the jth algorithm.
errors Vector whose jth element is either NA or contains the error message from run-
ning the jth algorithm
pars Matrix whose jth row contains the fixed-point parameter values at termination
for the jth algorithm.
trace.objfval If control.run[["keep.objfval"]]=TRUE, contains a list whose jth compo-
nent is a vector of objective function values across iterations for the jth algo-
rithm.
trace.paramval If control.run[["keep.paramval"]]=TRUE, contains a list whose jth compo-
nent is a matrix of parameter estimates across iterations for the jth algorithm.
References
<NAME> and <NAME> (2008). Simple and globally convergent numerical schemes for acceler-
ating the convergence of any EM algorithm. Scandinavian Journal of Statistics, 35:335-353.
<NAME> and <NAME> (2009). Parabolic acceleration of the EM algorithm. Stat Comput. 19 (1)
35-47.
<NAME> and <NAME> (2010) The Dynamic ECME Algorithm. Technical Report. arXiv:1004.0524v1.
<NAME>, <NAME>, and <NAME> (2011). A quasi-Newton acceleration for high-dimensional
optimization algorithms. Stat Comput. 21 (2) 261-273.
See Also
turbo
Examples
###########################################################################
# Also see the vignette by typing:
# vignette("turboEM")
#
# EM algorithm for Poisson mixture estimation
fixptfn <- function(p,y) {
# The fixed point mapping giving a single E and M step of the EM algorithm
#
pnew <- rep(NA,3)
i <- 0:(length(y)-1)
zi <- p[1]*exp(-p[2])*p[2]^i / (p[1]*exp(-p[2])*p[2]^i + (1 - p[1])*exp(-p[3])*p[3]^i)
pnew[1] <- sum(y*zi)/sum(y)
pnew[2] <- sum(y*i*zi)/sum(y*zi)
pnew[3] <- sum(y*i*(1-zi))/sum(y*(1-zi))
p <- pnew
return(pnew)
}
objfn <- function(p,y) {
# Objective function whose local minimum is a fixed point
# negative log-likelihood of binary poisson mixture
i <- 0:(length(y)-1)
loglik <- y*log(p[1]*exp(-p[2])*p[2]^i/exp(lgamma(i+1)) +
(1 - p[1])*exp(-p[3])*p[3]^i/exp(lgamma(i+1)))
return ( -sum(loglik) )
}
# Real data from Hasselblad (JASA 1969)
poissmix.dat <- data.frame(death = 0:9,
freq = c(162,267,271,185,111,61,27,8,3,1))
y <- poissmix.dat$freq
# Use a preset seed so the example is reproducible.
require("setRNG")
old.seed <- setRNG(list(kind = "Mersenne-Twister", normal.kind = "Inversion",
seed = 54321))
p0 <- c(runif(1),runif(2,0,4)) # random starting value
# Basic EM algorithm, SQUAREM, and parabolic EM, with default settings
res1 <- turboem(par = p0, y = y, fixptfn = fixptfn, objfn = objfn,
method = c("EM", "squarem", "pem"))
# To apply the dynamic ECME (decme) acceleration scheme,
# we need to include a boundary function
boundary <- function(par, dr) {
lower <- c(0, 0, 0)
upper <- c(1, 10000, 10000)
low1 <- max(pmin((lower-par)/dr, (upper-par)/dr))
upp1 <- min(pmax((lower-par)/dr, (upper-par)/dr))
return(c(low1, upp1))
}
res2 <- turboem(par = p0, y = y, fixptfn = fixptfn, objfn = objfn,
boundary = boundary, method = c("EM", "squarem", "pem", "decme"))
# change some of the algorithm-specific default specifications (control.method),
# as well as the global control parameters (control.run)
res3 <- turboem(par = p0, y = y, fixptfn = fixptfn, objfn = objfn,
boundary = boundary, method = c("em", "squarem", "squarem", "decme", "qn", "qn"),
control.method = list(list(), list(K = 2), list(K = 3),
list(version = "v3"), list(qn = 1), list(qn = 2)),
control.run = list(tol = 1e-12, stoptype = "maxtime", maxtime = 1))
# Only the standard EM algorithm and SQUAREM *do not* require
# providing the objective function.
res4 <- turboem(par = p0, y = y, fixptfn = fixptfn,
method = c("em", "squarem", "squarem"),
control.method = list(list(), list(K = 1), list(K = 2)))
# If no objective function is provided, the "squarem" method defaults to Squarem-2
# Or, if control parameter K > 1, it defaults to Cyclem-2.
# Compare Squarem with and without objective function provided:
res5 <- turboem(par = p0, y = y, fixptfn = fixptfn, method = "squarem")
res5
res6 <- turboem(par = p0, y = y, fixptfn = fixptfn, objfn = objfn, method = "squarem")
res6
turboSim Conduct benchmark studies of EM accelerator
Description
The turboSim function conducts benchmark studies to compare performance of multiple acceler-
ation schemes over a large number of repetitions. The turboSim function outputs objects of class
turbosim.
Usage
turboSim(parmat, fixptfn, objfn, method = c("em","squarem","pem","decme","qn"),
boundary, pconstr = NULL, project = NULL, parallel = FALSE, method.names,
keep.pars = FALSE, ..., control.method = replicate(length(method),list()),
control.run = list())
Arguments
parmat A matrix of starting parameter values, where each row corresponds to a single
benchmark study repetition.
fixptfn A vector function, F that denotes the fixed-point mapping. This function is the
most essential input in the package. It should accept a parameter vector as input
and should return a parameter vector of same length. This function defines the
fixed-point iteration: xk+1 = F (xk ). In the case of EM algorithm, F defines a
single E and M step.
objfn This is a scalar function, L, that denotes a “merit” function which attains its local
minimum at the fixed-point of F . This function should accept a parameter vector
as input and should return a scalar value. In the EM algorithm, the merit function
L is the negative log-likelihood. In some problems, a natural merit function may
not exist. However, this argument is required for all of the algorithms *except*
Squarem (which defaults to Squarem-2 if objfn not provided) and EM.
method Specifies which algorithm(s) will be applied. Must be a vector containing one
or more of c("em", "squarem", "pem", "decme", "qn").
boundary Argument required for Dynamic ECME (decme) only. Function to define the
subspaces over which the line search is conducted.
pconstr Optional function for defining boundary constraints on parameter values. Func-
tion maps a vector of parameter values to TRUE if constraints are satisfied. Note
that this argument is only used for the Squarem (squarem), Parabolic EM (pem),
and quasi-Newton (qn) algorithms, and it has no effect on the other algorithms.
project Optional function for defining a projection that maps an out-of-bound parameter
value into the constrained parameter space. Requires the pconstr argument to
be specified in order for the project to be applied.
parallel Logical indicating whether the repetitions of the benchmark study will be run in
parallel. Note that the parallel implementation is based on the foreach pack-
age, which depends on a parallel backend being registered prior to running
turboSim(). See *Details* of foreach.
method.names Vector of unique names that identify the algorithms being compared.
keep.pars Logical indicating whether the parameter values at termination should be kept.
Defaults to FALSE.
control.method If method = c(method1, method2, ...), then control.method = list(list1,
list2, ...) where list1 is the list of control parameters for method1, list2
is the list of control parameters for method2, and so on. If length(method)
== 1, then control.method is the list of control parameters for the acceleration
scheme. See *Details* of turboem.
control.run List of control parameters for convergence and stopping the algorithms. See
*Details* of turboem.
... Arguments passed to fixptfn and objfn.
Value
turboSim returns an object of class turbosim.
See Also
turbosim, turboem
Examples
###########################################################################
# Examples provided in the vignette, which can be seen by typing
# vignette("turboEM")
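# A minimal sketch of a benchmark call (assuming fixptfn, objfn, and y as
# defined in the turboem examples; the 100 starting values are arbitrary):
parmat <- cbind(runif(100), runif(100, 0, 4), runif(100, 0, 4))
sim <- turboSim(parmat = parmat, fixptfn = fixptfn, objfn = objfn, y = y,
method = c("em", "squarem", "pem"),
method.names = c("EM", "SQUAREM", "PEM"))
summary(sim)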
turbosim Methods for objects of class "turbosim"
Description
The turbosim class represents results from benchmark studies of algorithms to acceleration param-
eter estimation in fixed-point mapping problems.
Usage
## S3 method for class 'turbosim'
print(x, ...)
## S3 method for class 'turbosim'
summary(object, which.methods = seq_along(object$method),
method.names = object$method.names[which.methods], eps = 0.1, sol = NULL, ...)
## S3 method for class 'turbosim'
boxplot(x, which.methods = seq_along(x$method),
method.names = x$method.names[which.methods],
whichfail = (x$fail | !x$conv)[,which.methods], xunit="sec", log=FALSE, ...)
## S3 method for class 'turbosim'
dataprof(x, which.methods = seq_along(x$method),
method.names = x$method.names[which.methods],
whichfail = (x$fail | !x$conv)[,which.methods], col, lty, nout = 50, xlim, ...)
## S3 method for class 'turbosim'
pairs(x, which.methods=seq_along(x$method),
method.names = x$method.names[which.methods],
whichfail = (x$fail | !x$conv)[,which.methods], ...)
Arguments
object An object of class turbosim, the structure of which is described in *Details*.
x An object of class turbosim, the structure of which is described in *Details*.
which.methods A vector identifying for which subset of algorithms results are desired.
method.names A vector of unique identifiers for the algorithms for which results are being
provided.
eps Used to define a tolerance between the objective function value attained by a
particular acceleration scheme and the best achievable objective function value
(either across schemes or as defined by the user). See *Details*.
sol Optional argument defining the best achievable objective function value for a
given fixed-point mapping problem. Defaults to NULL. See *Details*.
xunit Units for running time to be used in the boxplots. Argument takes the value
"sec" or "min".
log Logical indicating whether the log of the running time will be plotted. Defaults
to FALSE.
whichfail A matrix of logical values where the (i,j)-entry indicates whether algorithm j
of simulation iteration i failed (however the user wishes to define a failure for
visualization purposes). If argument is not provided by user, then by default a
failure is defined to be the event where the algorithm produces an error *or*
does not converge.
col Optional argument: A vector where each component defines the color for the
line corresponding to each algorithm being compared.
lty Optional argument: A vector where each component defines the line-type for
the line corresponding to each algorithm being compared.
nout Number of values at which the empirical distribution function is estimated.
Should be less than the number of simulation iterations.
xlim Optional argument: Defines the x-axis limits for the data profile. Defaults to the
full range of the running times over all algorithms being plotted.
... Additional arguments.
Details
An object of class turbosim is typically the product of the function turboSim. It is a list containing
at least the following components:
method.names Vector of unique identifiers for the algorithms being compared
fail Matrix whose (i,j)-element is a logical (TRUE/FALSE) for whether the jth algorithm at the
ith benchmark study repetition failed (produced an error).
convergence Matrix whose (i,j)-element is a logical (TRUE/FALSE) for whether the jth algo-
rithm at the ith benchmark study repetition satisfied the convergence criterion before termina-
tion.
value.objfn Matrix whose (i,j)-element is the value of the objective function of the jth algorithm
at the ith benchmark study repetition.
runtime Matrix whose (i,j)-element is the running time of the jth algorithm at the ith benchmark
study repetition.
itr Matrix whose (i,j)-element is the number of completed iterations of the jth algorithm at the
ith benchmark study repetition.
fpeval Matrix whose (i,j)-element is the number of fixed-point function evaluations of the jth
algorithm at the ith benchmark study repetition.
objfeval Matrix whose (i,j)-element is the number of objective function evaluations of the jth
algorithm at the ith benchmark study repetition.
errors Matrix whose (i,j)-element contains the error message produced by the jth algorithm at
the ith benchmark study repetition (if there was an error).
This list usually will also contain the components fixptfn, objfn, method, pconstr, project,
control.method, and control.run, which were provided as arguments for turboSim.
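These components can be inspected directly; a small sketch (assuming sim is a turbosim object
returned by turboSim):
colMeans(sim$runtime) # average running time per scheme
colSums(sim$fail) # number of error failures per scheme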
The summary function shows a table of the number of failures across acceleration schemes. There
are three types of failures. The first occurs when the algorithm produces an error message. The
second is if the algorithm does not converge before the alternative stopping rule is achieved (e.g.
the maximum number of iterations or maximum pre-specified runtime is reached). The third is
if the algorithm claims convergence but the value of the objective function is "far" from the best
achievable value. To assess this third type of failure, we determine whether the objective function
value achieved by the algorithm is close (within eps) to the smallest value achieved across all
algorithms at that simulation iteration. Alternatively, if the user knows a priori the true objective
function value, he/she may specify the argument sol, in which case, the third type of failure occurs
when the objective function value achieved by the algorithm is within eps of sol.
Further details for each of the methods are provided in the vignette, which can be seen by typing
vignette("turboEM").
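For example (a sketch; sim is again a turbosim object and eps = 0.01 is an arbitrary tolerance):
summary(sim, eps = 0.01)
boxplot(sim, xunit = "min")
dataprof(sim)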
Value
summary Summarizes the number of failures by type across simulation iterations for each
acceleration scheme.
boxplot Shows box plots of algorithm running times for each acceleration scheme.
dataprof Plots the data profile, or the estimated distribution function of the time until
convergence for each acceleration scheme.
pairs Scatterplot matrix showing pairwise comparison of the running times for each
pair of acceleration schemes.
See Also
turboem, turbo
Examples
###########################################################################
# Examples provided in the vignette, which can be seen by typing
# vignette("turboEM")
votes Roll Call Votes
Description
Roll call votes from the U.S. House of Representatives, 2005.
Usage
data(votes)
Format
A 401-by-669 matrix whose (i,j)-entry corresponds to the vote of the ith representative on the jth
roll call. Possible votes are "yea", "nay", or "not voting", which are represented by 1/2, -1/2, and 0,
respectively.
Source
Diaconis PD, Goel S, and <NAME> (2008). Horseshoes in Multidimensional Scaling and Local
Kernel Methods. Annals of Applied Statistics. 2 (3) 777-807. Supplementary material.
github.com/EdgeCast/vflow

README
---
![vFlow](https://github.com/EdgeCast/vflow/raw/v0.9.1/docs/imgs/vflow_logo.png "vFlow logo")
###
[![Build Status](https://github.com/EdgeCast/vflow/workflows/vflow/badge.svg)](https://github.com/EdgeCast/vflow/actions?query=workflow%3Avflow) [![Go Report Card](https://goreportcard.com/badge/github.com/EdgeCast/vflow)](https://goreportcard.com/report/github.com/EdgeCast/vflow) [![GoDev](https://pkg.go.dev/badge/github.com/EdgeCast/vflow?utm_source=godoc)](https://pkg.go.dev/github.com/EdgeCast/vflow)
High-performance, scalable and reliable IPFIX, sFlow and Netflow collector (written in pure Golang).
### Features
* IPFIX RFC7011 collector
* sFlow v5 raw header / counters collector
* Netflow v5 collector
* Netflow v9 collector
* Decoding sFlow raw header L2/L3/L4
* Produce to Apache Kafka, NSQ, NATS
* Replicate IPFIX and sFlow to 3rd party collector
* Supports IPv4 and IPv6
* Prometheus and RESTful APIs monitoring
![Alt text](https://github.com/EdgeCast/vflow/raw/v0.9.1/docs/imgs/vflow.gif "vFlow")
### Documentation
* [Architecture](https://github.com/EdgeCast/vflow/blob/v0.9.1/docs/design.md).
* [Configuration](https://github.com/EdgeCast/vflow/blob/v0.9.1/docs/config.md).
* [Quick Start](https://github.com/EdgeCast/vflow/blob/v0.9.1/docs/quick_start_nsq.md).
* [JUNOS Integration](https://github.com/EdgeCast/vflow/blob/v0.9.1/docs/junos_integration.md).
* [Monitoring](https://github.com/EdgeCast/vflow/blob/v0.9.1/monitor/README.md).
* [Stress / Load Generator](https://github.com/EdgeCast/vflow/blob/v0.9.1/stress/README.md).
* [Kafka consumer examples](https://github.com/EdgeCast/vflow/tree/master/consumers).
### Decoded IPFIX data
The IPFIX data is decoded to JSON format and the IDs are [IANA IPFIX element IDs](http://www.iana.org/assignments/ipfix/ipfix.xhtml)
```
{"AgentID":"192.168.21.15","Header":{"Version":10,"Length":420,"ExportTime":1483484642,"SequenceNo":1434533677,"DomainID":32771},"DataSets":[[{"I":8,"V":"192.16.28.217"},{"I":12,"V":"180.10.210.240"},{"I":5,"V":2},{"I":4,"V":6},{"I":7,"V":443},{"I":11,"V":64381},{"I":32,"V":0},{"I":10,"V":811},{"I":58,"V":0},{"I":9,"V":24},{"I":13,"V":20},{"I":16,"V":4200000000},{"I":17,"V":27747},{"I":15,"V":"180.105.10.210"},{"I":6,"V":"0x10"},{"I":14,"V":1113},{"I":1,"V":22500},{"I":2,"V":15},{"I":52,"V":63},{"I":53,"V":63},{"I":152,"V":1483484581770},{"I":153,"V":1483484622384},{"I":136,"V":2},{"I":243,"V":0},{"I":245,"V":0}]]}
```
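Consumers can decode these messages with ordinary JSON tooling. A minimal Go sketch (the struct is illustrative and covers only a few header fields; it is not part of vFlow):
```
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal subset of the decoded IPFIX message shown above.
type ipfixMsg struct {
	AgentID string `json:"AgentID"`
	Header  struct {
		Version    int   `json:"Version"`
		ExportTime int64 `json:"ExportTime"`
	} `json:"Header"`
}

func main() {
	sample := []byte(`{"AgentID":"192.168.21.15","Header":{"Version":10,"ExportTime":1483484642}}`)
	var m ipfixMsg
	if err := json.Unmarshal(sample, &m); err != nil {
		panic(err)
	}
	fmt.Println(m.AgentID, m.Header.Version, m.Header.ExportTime)
}
```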
### Decoded sFlow data
```
{"Version":5,"IPVersion":1,"AgentSubID":5,"SequenceNo":37591,"SysUpTime":3287084017,"SamplesNo":1,"Samples":[{"SequenceNo":1530345639,"SourceID":0,"SamplingRate":4096,"SamplePool":1938456576,"Drops":0,"Input":536,"Output":728,"RecordsNo":3,"Records":{"ExtRouter":{"NextHop":"115.131.251.90","SrcMask":24,"DstMask":14},"ExtSwitch":{"SrcVlan":0,"SrcPriority":0,"DstVlan":0,"DstPriority":0},"RawHeader":{"L2":{"SrcMAC":"58:00:bb:e7:57:6f","DstMAC":"f4:a7:39:44:a8:27","Vlan":0,"EtherType":2048},"L3":{"Version":4,"TOS":0,"TotalLen":1452,"ID":13515,"Flags":0,"FragOff":0,"TTL":62,"Protocol":6,"Checksum":8564,"Src":"10.1.8.5","Dst":"161.140.24.181"},"L4":{"SrcPort":443,"DstPort":56521,"DataOffset":5,"Reserved":0,"Flags":16}}}}],"IPAddress":"192.168.10.0","ColTime": 1646157296}
```
### Decoded Netflow v5 data
```
{"AgentID":"114.23.3.231","Header":{"Version":5,"Count":3,"SysUpTimeMSecs":51469784,"UNIXSecs":1544476581,"UNIXNSecs":0,"SeqNum":873873830,"EngType":0,"EngID":0,"SmpInt":1000},"Flows":[{"SrcAddr":"125.238.46.48","DstAddr":"114.23.236.96","NextHop":"114.23.3.231","Input":791,"Output":817,"PktCount":4,"L3Octets":1708,"StartTime":51402145,"EndTime":51433264,"SrcPort":49233,"DstPort":443,"Padding1":0,"TCPFlags":16,"ProtType":6,"Tos":0,"SrcAsNum":4771,"DstAsNum":56030,"SrcMask":20,"DstMask":22,"Padding2":0},{"SrcAddr":"125.238.46.48","DstAddr":"114.23.236.96","NextHop":"114.23.3.231","Input":791,"Output":817,"PktCount":1,"L3Octets":441,"StartTime":51425137,"EndTime":51425137,"SrcPort":49233,"DstPort":443,"Padding1":0,"TCPFlags":24,"ProtType":6,"Tos":0,"SrcAsNum":4771,"DstAsNum":56030,"SrcMask":20,"DstMask":22,"Padding2":0},{"SrcAddr":"210.5.53.48","DstAddr":"103.22.200.210","NextHop":"122.56.118.157","Input":564,"Output":802,"PktCount":1,"L3Octets":1500,"StartTime":51420072,"EndTime":51420072,"SrcPort":80,"DstPort":56108,"Padding1":0,"TCPFlags":16,"ProtType":6,"Tos":0,"SrcAsNum":56030,"DstAsNum":13335,"SrcMask":24,"DstMask":23,"Padding2":0}]}
```
### Decoded Netflow v9 data
```
{"AgentID":"10.81.70.56","Header":{"Version":9,"Count":1,"SysUpTime":357280,"UNIXSecs":1493918653,"SeqNum":14,"SrcID":87},"DataSets":[[{"I":1,"V":"0x00000050"},{"I":2,"V":"0x00000002"},{"I":4,"V":2},{"I":5,"V":192},{"I":6,"V":"0x00"},{"I":7,"V":0},{"I":8,"V":"10.81.70.56"},{"I":9,"V":0},{"I":10,"V":0},{"I":11,"V":0},{"I":12,"V":"224.0.0.22"},{"I":13,"V":0},{"I":14,"V":0},{"I":15,"V":"0.0.0.0"},{"I":16,"V":0},{"I":17,"V":0},{"I":21,"V":300044},{"I":22,"V":299144}]]}
```
### Supported platform
* Linux
* Windows
### Build
Given that the Go Language compiler (version 1.14.x preferred) is installed, you can build it with:
```
go get github.com/EdgeCast/vflow/vflow
cd $GOPATH/src/github.com/EdgeCast/vflow
make build
```
or
```
cd vflow; go build
```
### Installation
You can download and install the pre-built Debian package as below ([RPM and Linux binary are available](https://github.com/EdgeCast/vflow/releases)).
dpkg -i [vflow-0.9.0-x86_64.deb](https://github.com/EdgeCast/vflow/releases/download/v0.9.0/vflow-0.9.0-x86_64.deb)
Once installed, you need to configure the files below; for more information check the [configuration guide](https://github.com/EdgeCast/vflow/blob/v0.9.1/docs/config.md):
```
/etc/vflow/vflow.conf
/etc/vflow/mq.conf
```
You can start the service with:
```
service vflow start
```
### Kubernetes
```
kubectl apply -f https://github.com/EdgeCast/vflow/blob/master/kubernetes/deploy.yaml
```
### Docker
```
docker run -d -p 2181:2181 -p 9092:9092 spotify/kafka
docker run -d -p 4739:4739 -p 4729:4729 -p 6343:6343 -p 8081:8081 -e VFLOW_KAFKA_BROKERS="172.17.0.1:9092" mehrdadrad/vflow
```
### License
Licensed under the Apache License, Version 2.0 (the "License")
### Contribute
We welcome any kind of contribution; please follow the next steps:
* Fork the project on github.com.
* Create a new branch.
* Commit changes to the new branch.
* Send a pull request.
Package ‘EnvExpInd’
October 12, 2022
Type Package
Title Environmental Exposure on the Individual Level
Imports gstat,RCurl,dplyr,stringi,sp,maptools,zoo
Version 0.1.0
Depends R(>= 3.5.0)
Description Tools for the assessment of the environmental exposure. The package provides three
methods (nearest monitoring site, inverse distance weighted as described in Li Wu (2017)
<doi:10.1016/j.envint.2016.11.013>, and ordinary kriging) to calculate the environmental
exposure (e.g. air pollution) on the individual level.
URL https://github.com/Spatial-R/EnvExpInd
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 7.1.0
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-10-23 15:50:02 UTC
R topics documented:
exposure_estimate_idw
exposure_estimate_krige
expoure_estimate_simple
get_latlon_china
get_refrence_id_simple
individual_data
pollutant_data
site_data
timeseries_imput
exposure_estimate_idw Estimate the pollutant exposure using the inverse distance weighting
method
Description
Uses the pollutant concentration at the individual's location as the reference point to estimate the
environmental exposure. The pollutant concentration at the reference point is calculated with the
inverse distance weighting method.
Usage
exposure_estimate_idw(
individual_data,
individual_id,
exposure_date,
individual_lat,
individual_lon,
pollutant_data,
pollutant_date = "date",
pollutant_site_lat,
pollutant_site_lon,
pollutant_name = c("pm10", "so2"),
estimate_interval = c(0:30)
)
Arguments
individual_data
data.frame, contains the reference id, individual_id and exposure_date
individual_id
character, variable name in individual_data, represents the unique id for each individual
exposure_date
character, variable name in individual_data, represents the start date from which to estimate the environmental exposure
individual_lat
character, variable name in individual_data, represents the latitude of each individual
individual_lon
character, variable name in individual_data, represents the longitude of each individual
pollutant_data
data.frame, contains the pollutant and site information. One column represents the site information and the other columns represent the concentrations of pollutants
pollutant_date
character, variable name representing the date information in the air pollutant dataset
pollutant_site_lat
character, variable name in pollutant_data, gives the latitude of each monitoring site
pollutant_site_lon
character, variable name in pollutant_data, gives the longitude of each monitoring site
pollutant_name
vector, pollutant names in pollutant_data, the target pollutants to be estimated
estimate_interval
continuous numeric vector, the estimation period; for example 0:30 means that for each individual the exposure is estimated from exposure_date to exposure_date + 30 days
Value
A list. Each element of the list is a data frame whose first column is the individual id and whose
remaining columns are the exposure estimates at the different time points.
Author(s)
<NAME>, https://github.com/Spatial-R/EnvExpInd
Examples
library(EnvExpInd)
individual_data$date <- as.Date(individual_data$date)
pollutant_data$date <- as.Date(pollutant_data$date)
pollutant_data_full <- timeseries_imput(data= pollutant_data,date_var = "date",
site_var = "site.name",imput_col = 3:8)
pollutant_data_tem <- merge(pollutant_data_full,site_data,by.x = "site.name",by.y = "site")
exposure_estimate_idw(
individual_data = individual_data,
individual_id = "id",
exposure_date ="date",
individual_lat ="lat",
individual_lon ="lon",
pollutant_data = pollutant_data_tem,
pollutant_date = "date",
pollutant_site_lat = "lat",
pollutant_site_lon = "lon",
pollutant_name = c("PM10","PM2.5"),
estimate_interval = c(0:10))
exposure_estimate_krige
Assess the environmental exposure using the kriging method
Description
Based on the kriging method, the pollutant exposure at each individual's location is estimated, and
the total pollutant exposure is then assessed over the estimate_interval.
Usage
exposure_estimate_krige(
individual_data,
individual_id,
exposure_date,
individual_lat,
individual_lon,
pollutant_data,
pollutant_date = "date",
pollutant_site_lat,
pollutant_site_lon,
pollutant_name = c("pm10", "so2"),
estimate_interval = c(0:30),
krige_model,
nmax = 7,
krige_method = "med"
)
Arguments
individual_data
data.frame, contains the reference id, individual_id and exposure_date
individual_id
character, variable name in individual_data, represents the unique id for each individual
exposure_date
character, variable name in individual_data, represents the start date from which to estimate the environmental exposure
individual_lat
character, variable name in individual_data, represents the latitude of each individual
individual_lon
character, variable name in individual_data, represents the longitude of each individual
pollutant_data
data.frame, contains the pollutant and site information. One column represents the site information and the other columns represent the concentrations of pollutants
pollutant_date
character, variable name representing the date information in the air pollutant dataset
pollutant_site_lat
character, variable name in pollutant_data, gives the latitude of each monitoring site
pollutant_site_lon
character, variable name in pollutant_data, gives the longitude of each monitoring site
pollutant_name
vector, pollutant names in pollutant_data, the target pollutants to be estimated
estimate_interval
continuous numeric vector, the estimation period; for example 0:30 means that for each individual the exposure is estimated from exposure_date to exposure_date + 30 days
krige_model
the variogram model used for kriging; see ?krige
nmax
the number of nearest observations used for each prediction; see ?krige
krige_method
see ?krige
Value
A list. Each element of the list is a data frame whose first column is the individual id and whose
remaining columns are the exposure estimates at the different time points.
Author(s)
<NAME>, https://github.com/Spatial-R/EnvExpInd
Examples
## Not run:
library(EnvExpInd)
library(maptools)
library(gstat)
individual_data$date <- as.Date(individual_data$date)
pollutant_data$date <- as.Date(pollutant_data$date)
pollutant_data_full <- timeseries_imput(data= pollutant_data,date_var = "date",
site_var = "site.name",imput_col = 3:8)
pollutant_data_tem <- merge(pollutant_data_full,site_data,by.x = "site.name",by.y = "site")
test.pollutant <- pollutant_data_tem[pollutant_data_tem$date == "2014-09-20",]
coordinates(test.pollutant) = ~lat + lon
########## please define the variogram in a right way ####################
m <- fit.variogram(variogram(PM10~1, test.pollutant), vgm(1, "Sph", 200, 1))
exposure_estimate_krige(
individual_data = individual_data,
individual_id = "id",
exposure_date ="date",
individual_lat ="lat",
individual_lon ="lon",
pollutant_data = pollutant_data_tem,
pollutant_date = "date",
pollutant_site_lat = "lat",
pollutant_site_lon = "lon",
pollutant_name = c("PM10","PM2.5"),
krige_model = m,
nmax = 7,
krige_method = "med",
estimate_interval = c(0:10))
## End(Not run)
expoure_estimate_simple
Assess the environmental exposure using the simplest method: nearest
monitoring site method
Description
Uses the nearest surveillance site as the reference site to estimate the pollutant exposure.
Usage
expoure_estimate_simple(
individual_data,
individual_id,
refrence_id,
exposure_date,
pollutant_data,
pollutant_site = "site",
pollutant_date = "date",
pollutant_name = c("pm10", "so2"),
estimate_interval
)
Arguments
individual_data
data.frame, includes the reference id, individual_id and exposure_date
individual_id
character, variable name in the individual_data, which represents the unique id for each individual
refrence_id
character, variable name in the individual_data, which represents the nearest surveillance site for each individual
exposure_date
character, variable name in the individual_data, which represents the start date from which to estimate the environmental exposure
pollutant_data
data.frame, contains the pollutant and site information. One column represents the site information and the other columns represent the concentrations of pollutants
pollutant_site
character, variable name in the pollutant_data, which represents the monitoring site information
pollutant_date
character, variable name in the pollutant_data, which represents the surveillance date for the pollutant concentrations
pollutant_name
vector, variable names in the pollutant_data, which represent the target pollutants to be estimated
estimate_interval
continuous numeric vector, the estimation period; for example 0:30 means that for each individual the exposure is estimated from exposure_date to exposure_date + 30 days
Value
A list. Each element of the list is a data frame whose first column is the individual id and whose
remaining columns are the exposure estimates at the different time points.
Author(s)
<NAME>, https://github.com/Spatial-R/EnvExpInd
Examples
library(EnvExpInd)
individual_data$date <- as.Date(individual_data$date)
pollutant_data$date <- as.Date(pollutant_data$date)
pollutant_data_full <- timeseries_imput(data= pollutant_data,
date_var = "date",site_var = "site.name",imput_col = 3:8)
pollutant_data_tem <- merge(pollutant_data_full,site_data,by.x = "site.name",by.y = "site")
individual_data$refrence_id <- get_refrence_id_simple(
individual_data = individual_data,
individual_lat = "lat",
individual_lon = "lon",
individual_id = "id",
site_data = site_data,
site_lon = "lon",
site_lat = "lat",
site_id = "site")
expoure_estimate_simple(
individual_data = individual_data,
individual_id = "id",
refrence_id = "refrence_id",
exposure_date = "date",
pollutant_data = pollutant_data_tem,
pollutant_site = "site.name",
pollutant_date = "date",
pollutant_name = c("PM10","PM2.5"),
estimate_interval = c(0:10))
get_latlon_china transform the address information into the longitude and latitude
Description
Based on the Baidumap API, the get_latlon_china function converts a detailed address into
longitude and latitude
Usage
get_latlon_china(data, add_var = "address", api_key = "")
Arguments
data data frame, contains the address information
add_var character, variable name in the data, which represents the address information
api_key character, Baidumap API key; see: http://lbsyun.baidu.com/index.php?
title=webapi/guide/webservice-geocoding
Value
Two columns (lon and lat) are added to the original data.frame
Author(s)
<NAME>, https://github.com/Spatial-R/EnvExpInd
Examples
## Not run:
get_latlon_china(wuhan.sem,add_var = "add",api_key = "sksksksksksk")
## End(Not run)
get_refrence_id_simple
Match the nearest monitoring site for each individual
Description
Match the nearest monitoring site for each individual
Usage
get_refrence_id_simple(
individual_data,
individual_lat,
individual_lon,
individual_id,
site_data,
site_lat,
site_lon,
site_id
)
Arguments
individual_data
data.frame, including three variables (individual_lat, individual_lon and
individual_id)
individual_lat character, variable name in individual_data, which includes the latitude of
each individual
individual_lon character, variable name in individual_data, which includes the longitude of
each individual
individual_id character, variable name in individual_data, which includes the unique id for
each individual
site_data data.frame, including three variables (site_lat, site_lon and site_id)
site_lat character, variable name in site_data, which includes the latitude of each site
site_lon character, variable name in site_data, which includes the longitude of each site
site_id character, variable name in site_data, which includes the id of each site
Value
A vector containing the refrence_id for each individual
Author(s)
<NAME>, https://github.com/Spatial-R/EnvExpInd
Examples
get_refrence_id_simple(
individual_data = individual_data,
individual_lat = "lat",
individual_lon = "lon",
individual_id = "id",
site_data = site_data,
site_lon = "lon",
site_lat = "lat",
site_id = "site")
individual_data The detailed information for each individual.
Description
A dataset containing the detailed information for each individual
Usage
individual_data
Format
A data frame with 21 rows and 4 variables:
id id number for each individual
date the monitoring time point
lat the latitude for each individual
lon the longitude for each individual
pollutant_data The concentration of air pollutant at each time point.
Description
A dataset containing the concentration of air pollutant at each time point
Usage
pollutant_data
Format
A data frame with 11090 rows and 8 variables:
date the monitoring time point
site.name the names of the monitoring site
SO2 the concentration of SO2
NO2 the concentration of NO2
PM10 the concentration of PM10
CO the concentration of CO
O3 the concentration of O3
PM2.5 the concentration of PM2.5
site_data Monitoring sites.
Description
A dataset containing the information of the monitoring sites
Usage
site_data
Format
A data frame with 10 rows and 3 variables:
site the name of monitoring sites
lat the latitude for each monitoring site
lon the longitude for each monitoring site
timeseries_imput Impute the missing values in a time series using linear interpolation
Description
Complete the time series using linear interpolation
Usage
timeseries_imput(data, date_var, site_var, imput_col)
Arguments
data data.frame, contains the reference id, individual_id and exposure_date
date_var character, variable name in data, which represents the monitoring date.
site_var character, variable name in data, which represents the name of the monitoring site.
imput_col numeric, the column positions of the target variables to be imputed
Value
A data.frame with the imputed time series
Author(s)
<NAME>, https://github.com/Spatial-R/EnvExpInd
Examples
library(EnvExpInd)
pollutant_data_com <- timeseries_imput(data= pollutant_data,date_var = "date",
site_var = "site.name",imput_col = 3:8) |
meriyah | npm | JavaScript | Meriyah
===
A 100% compliant, self-hosted JavaScript parser with a high focus on both performance and stability. Stable and already used in production.
[Demo](https://meriyah.github.io/meriyah)
---
Features
---
* Conforms to the standard ECMAScript® 2021 (ECMA-262 12th Edition) language specification
* Support TC39 proposals via option
* Support for additional ECMAScript features for Web Browsers
* JSX support via option
* Does **not** support TypeScript or Flow
* Optionally track syntactic node locations
* Emits an ESTree-compatible abstract syntax tree
* No backtracking
* Low memory usage
* Very well tested (~99 000 unit tests with full code coverage)
* Lightweight - ~90 KB minified
ESNext features
---
* [Decorators](https://github.com/tc39/proposal-decorators)
* [Class Public Instance Fields & Private Instance Fields](https://github.com/tc39/proposal-class-fields)
* [Hashbang grammar](https://github.com/tc39/proposal-hashbang)
* [Private methods](https://github.com/tc39/proposal-private-methods)
* [Static class fields and private static methods](https://github.com/tc39/proposal-static-class-features/)
**Note:** These features need to be enabled with the `next` option.
Installation
---
```
npm install meriyah --save-dev
```
API
---
Meriyah generates `AST` according to [ESTree AST format](https://github.com/estree/estree), and can be used to perform [syntactic analysis](https://en.wikipedia.org/wiki/Parsing) (parsing) of a JavaScript program, and with `ES2015` and later a JavaScript program can be either [a script or a module](https://tc39.github.io/ecma262/index.html#sec-ecmascript-language-scripts-and-modules).
The `parse` method exposed by meriyah takes an optional `options` object which allows you to specify whether to parse in [`script`](https://tc39.github.io/ecma262/#sec-parse-script) mode (the default) or in [`module`](https://tc39.github.io/ecma262/#sec-parsemodule) mode.
These are the available options:
```
{
// The flag to allow module code
module: false;
// The flag to enable stage 3 support (ESNext)
next: false;
// The flag to enable start, end offsets and range: [start, end] to each node
ranges: false;
// Enable web compatibility
webcompat: false;
// The flag to enable line/column location information to each node
loc: false;
// The flag to attach raw property to each literal and identifier node
raw: false;
// Enabled directives
directives: false;
// The flag to allow return in the global scope
globalReturn: false;
// The flag to enable implied strict mode
impliedStrict: false;
// Allows comment extraction. Accepts either a function or array
onComment: []
// Allows token extraction. Accepts either a function or array
onToken: []
// Enable non-standard parenthesized expression node
preserveParens: false;
// Enable lexical binding and scope tracking
lexical: false;
// Adds a source attribute in every node’s loc object when the `loc` option is `true`
source: false;
// Distinguish Identifier from IdentifierPattern
identifierPattern: false;
// Enable React JSX parsing
jsx: false
// Allow edge cases that deviate from the spec
specDeviation: false
}
```
### onComment and onToken
If an array is supplied, comments/tokens will be pushed to the array. Each item in the array contains `start/end/range` information when the `ranges` flag is true, and also contains `loc` information when the `loc` flag is true.
If a function callback is supplied, the signature must be
```
function onComment(type: string, value: string, start: number, end: number, loc: SourceLocation): void {}
function onToken(token: string, start: number, end: number, loc: SourceLocation): void {}
```
Note that the `start/end/loc` information is provided to the function callback regardless of the `ranges` and `loc` flag settings. The onComment callback has one extra argument, `value: string`, for the body of the comment.
Example usage
---
```
import { parseScript } from './meriyah';
parseScript('({x: [y] = 0} = 1)');
```
This will return the following when serialized as JSON:
```
{
type: "Program",
sourceType: "script",
body: [
{
type: "ExpressionStatement",
expression: {
type: "AssignmentExpression",
left: {
type: "ObjectPattern",
properties: [
{
type: "Property",
key: {
type: "Identifier",
name: "x"
},
value: {
type: "AssignmentPattern",
left: {
type: "ArrayPattern",
elements: [
{
"type": "Identifier",
"name": "y"
}
]
},
right: {
type: "Literal",
value: 0
}
},
kind: "init",
computed: false,
method: false,
shorthand: false
}
]
},
operator: "=",
right: {
type: "Literal",
value: 1
}
}
}
]
}
```
Readme
---
### Keywords
* parsing
* ecmascript
* javascript
* parser
* performance
* estree
* es2018
* es2019
* es2020
* es2021
* esnext
* lexer
* ast
* lightweight |
replex | hex | Erlang | replex
===
[![Hex version](https://img.shields.io/hexpm/v/replex.svg "Hex version")](https://hex.pm/packages/replex)
[![API docs](https://img.shields.io/hexpm/v/replex.svg?label=hexdocs "API docs")](https://hexdocs.pm/replex/Replex.html)
Use Elixir to replay radio signals on a Raspberry Pi on GPIO 4
About
---
This was inspired by the project [`rpitx`](https://github.com/F5OEO/rpitx) which allows you to transmit signals from 5 KHz - 1500 MHz from a single GPIO pin. There is a lot of really cool stuff in `rpitx`, but this only focuses on the [`sendiq`](https://github.com/F5OEO/rpitx/blob/master/src/sendiq.cpp)
binary for transmitting an I/Q recording file.
If you're new to radio, SDR, and replaying radio signals, I have a full write-up about the motivation for this library and how to go through the full process at
[Nerves @ 434 MHz](https://embedded-elixir.com/post/2019-08-29-nerves-at-434-mhz/)
You can see this in action and a little more on its use case in this lightning talk I presented at ElixirConf 2019:
[![](http://img.youtube.com/vi/PEheIY6gGhY/0.jpg)](http://www.youtube.com/watch?v=PEheIY6gGhY "Radio")
How to Use
---
Install the dep:
```
def deps do
  [
    # ...other deps
    {:replex, "~> 0.1"}
  ]
end
```
Then you need to make sure you have your recording files as part of your project.
The easiest way to do this is to put them into the `priv/` directory under your project root.
From there, you can use it like so:
```
defmodule Radio do
  def fan_light() do
    file = Path.join(:code.priv_dir(:radio), "fan_light.iq")
    Replex.replay(file, 433907740, sample_rate: 250_000)
  end
end
```
```
iex()> Radio.fan_light
:ok
```
Caveats
---
Because of the nature of replaying radio signals, there is no guarantee on the success or failure of your radio signal *actually* being received. Devices won't send back an `ack` or any response to the action. So this will always return `:ok` as long as the binary ran and the signal was attempted, but you won't *really know* that it worked.
A recommendation would be to just obnoxiously blast the signal asynchronously and play the numbers game. *Surely* the device will receive it 1 out of 5 times:
```
Task.async(fn ->
  Room.lights_on
  Room.lights_on
  Room.lights_on
  Room.lights_on
end)
```
That said, if the signal is *binary* (meaning it is the same signal to toggle on and off), then this process won't really work. Unless you're hoping to bring back disco and flashing lights 🕺
Goals
---
* [X] Support compiling `sendiq` (I mainly compile and include in release)
* [X] Support more raspberry pi than `rpi3`
* [ ] Support GPIO 6 and 20 pins for transmitting
[API Reference](api-reference.html)
[Next Page →
Changelog](changelog.html)
Replex
===
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[replay(file, frequency, opts \\ [])](#replay/3)
[sendiq()](#sendiq/0)
[Link to this section](#functions)
Functions
===
API Reference
===
Modules
---
[Replex](Replex.html)
[replex](readme.html) |
graph4lg | cran | R | Package ‘graph4lg’
January 30, 2023
Type Package
Title Build Graphs for Landscape Genetics Analysis
Version 1.8.0
Maintainer <NAME> <<EMAIL>>
Description Build graphs for landscape genetics analysis. This set of
functions can be used to import and convert spatial and genetic data
initially in different formats, import landscape graphs created with
'GRAPHAB' software (Foltete et al., 2012) <doi:10.1016/j.envsoft.2012.07.002>,
make diagnosis plots of isolation by distance relationships in order to
choose how to build genetic graphs, create graphs with a large range of
pruning methods, weight their links with several genetic distances, plot
and analyse graphs, compare them with other graphs. It uses functions from
other packages such as 'adegenet'
(Jombart, 2008) <doi:10.1093/bioinformatics/btn129> and 'igraph' (Csardi
et Nepusz, 2006) <https://igraph.org/>. It also implements methods
commonly used in landscape genetics to create graphs, described by Dyer et
Nason (2004) <doi:10.1111/j.1365-294X.2004.02177.x> and Greenbaum et
Fefferman (2017) <doi:10.1111/mec.14059>, and to analyse distance data
(van Strien et al., 2015) <doi:10.1038/hdy.2014.62>.
Depends R(>= 3.1.0)
License GPL-2
Encoding UTF-8
LazyData true
Imports adegenet, ggplot2, stringr, igraph, stats, spatstat.geom,
spatstat.linnet, Matrix, vegan, utils, methods, pegas, MASS,
tidyr, sp, sf, hierfstat, rappdirs, gdistance, raster, foreign,
ecodist, Rdpack
Suggests knitr, rmarkdown
RdMacros Rdpack
RoxygenNote 7.2.1
VignetteBuilder knitr, rmarkdown
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-2104-9941>),
<NAME> [ctb] (<https://orcid.org/0000-0001-6330-6136>),
<NAME> [ctb],
<NAME> [ctb]
Repository CRAN
Date/Publication 2023-01-30 14:00:05 UTC
R topics documented:
add_nodes_attr
compute_graph_modul
compute_node_metric
convert_cd
data_ex_genind
data_ex_gstud
data_ex_loci
data_simul_genind
data_tuto
df_to_pw_mat
dist_max_corr
genepop_to_genind
genind_to_genepop
gen_graph_indep
gen_graph_thr
gen_graph_topo
get_graphab
get_graphab_linkset
get_graphab_linkset_cost
get_graphab_metric
get_graphab_raster_codes
graphab_capacity
graphab_corridor
graphab_graph
graphab_interpol
graphab_link
graphab_metric
graphab_modul
graphab_pointset
graphab_project
graphab_project_desc
graphab_to_igraph
graph_modul_compar
graph_node_compar
graph_plan
graph_plot_compar
graph_topo_compar
graph_to_df
graph_to_shp
gstud_to_genind
g_percol
kernel_param
loci_to_genind
mat_cost_dist
mat_gen_dist
mat_geo_dist
plot_graph_lg
plot_w_hist
pop_gen_index
pts_pop_ex
pts_pop_simul
pw_mat_to_df
reorder_mat
scatter_dist
scatter_dist_g
structure_to_genind
add_nodes_attr Add attributes to the nodes of a graph
Description
The function adds attributes to the nodes of a graph from either an object of class data.frame or
from a shapefile layer. The node IDs in the input objects must be the same as in the graph object.
Usage
add_nodes_attr(
graph,
input = "df",
data,
dir_path = NULL,
layer = NULL,
index = "Id",
include = "all"
)
Arguments
graph A graph object of class igraph.
input A character string indicating the nature of the input data from which come the
attributes to add to the nodes.
• If ’input = "shp"’, then attributes come from the attribute table of a shapefile
layer of type point.
• If ’input = "df"’, then attributes come from an object of class data.frame
In both cases, input attribute table or dataframe must have a column with the
exact same values as the node IDs.
data (only if ’input = "df"’) The name of the object of class data.frame with the
attributes to add to the nodes.
dir_path (only if ’input = "shp"’) The path (character string) to the directory containing
the shapefile layer of type point whose attribute table contains the attributes to
add to the nodes.
layer (only if ’input = "shp"’) The name (character string) of the shapefile layer of
type point (without extension, ex.: "nodes" refers to "nodes.shp" layer) whose
attribute table contains the attributes to add to the nodes.
index The name (character string) of the column with the nodes names in the input
data (column of the attribute table or of the dataframe).
include A character string (vector) indicating which columns of the input data will be
added as nodes’ attributes. By default, ’include = "all"’, i.e. every column of the
input data is added. Alternatively, ’include’ can be a vector with the names of
the columns to add (ex.: "c(’x’, ’y’, ’pop_name’)").
Details
The graph can be created with the function graphab_to_igraph by importing output from Graphab
projects. Values of the metrics computed at the node level with Graphab can then be added to such
a graph with this function.
Value
A graph object of class igraph
Author(s)
<NAME>
Examples
data("data_tuto")
graph <- data_tuto[[3]]
df_nodes <- data.frame(Id = igraph::V(graph)$name,
Area = runif(50, min = 10, max = 60))
graph <- add_nodes_attr(graph,
data = df_nodes,
input = "df",
index = "Id",
include = "Area")
compute_graph_modul Compute modules from a graph by maximising modularity
Description
The function computes modules from a graph by maximising modularity.
Usage
compute_graph_modul(
graph,
algo = "fast_greedy",
node_inter = NULL,
nb_modul = NULL
)
Arguments
graph An object of class igraph. Its nodes must have names.
algo A character string indicating the algorithm used to create the modules with
igraph.
• If algo = 'fast_greedy' (default), function cluster_fast_greedy from
igraph is used (Clauset et al., 2004).
• If algo = 'walktrap', function cluster_walktrap from igraph is used
(Pons et Latapy, 2006) with 4 steps (default options).
• If algo = 'louvain', function cluster_louvain from igraph is used
(Blondel et al., 2008). In that case, the number of modules created in each
graph is imposed.
• If algo = 'optimal', function cluster_optimal from igraph is used
(Brandes et al., 2008) (can be very long). In that case, the number of
modules created in each graph is imposed.
node_inter (optional, default = NULL) A character string indicating whether the links of
the graph are weighted by distances or by similarity indices. It is only used to
compute the modularity index. It can be:
• ’distance’: Link weights correspond to distances. Nodes that are close to
each other will more likely be in the same module.
• ’similarity’: Link weights correspond to similarity indices. Nodes that are
similar to each other will more likely be in the same module. Inverse link
weights are then used to compute the modularity index.
nb_modul (optional, default = NULL) A numeric or integer value indicating the number
of modules in the graph. When this number is not specified, the optimal value is
retained.
Value
A data.frame with the node names and the corresponding module ID.
Author(s)
<NAME>
Examples
data("data_tuto")
mat_gen <- data_tuto[[1]]
graph <- gen_graph_thr(mat_w = mat_gen, mat_thr = mat_gen,
thr = 0.8)
res_mod <- compute_graph_modul(graph = graph,
algo = "fast_greedy",
node_inter = "distance")
compute_node_metric Compute graph-theoretic metrics from a graph at the node level
Description
The function computes graph-theoretic metric values at the node level.
Usage
compute_node_metric(
graph,
metrics = c("deg", "close", "btw", "str", "siw", "miw"),
weight = TRUE
)
Arguments
graph An object of class igraph. Its nodes must have names.
metrics Character vector specifying the graph-theoretic metrics computed at the node-
level in the graphs Graph-theoretic metrics can be:
• Degree (metrics = c("deg", ...))
• Closeness centrality index (metrics = c("close",...))
• Betweenness centrality index (metrics = c("btw",...))
• Strength (sum of the weights of the links connected to a node) (metrics =
c("str",...))
• Sum of the inverse weights of the links connected to a node (metrics =
c("siw", ...), default)
• Mean of the inverse weights of the links connected to a node (metrics =
c("miw", ...))
By default, the vector metrics includes all these metrics.
weight Logical which indicates whether the links are weighted during the calculation
of the centrality indices betweenness and closeness. (default: weight = TRUE).
Link weights are interpreted as distances when computing the shortest paths.
They should then be inversely proportional to the strength of the relationship
between nodes (e.g. to fluxes).
Value
A data.frame with the node names and the metrics computed.
Author(s)
<NAME>
Examples
data(data_ex_genind)
mat_gen <- mat_gen_dist(x = data_ex_genind, dist = "DPS")
graph <- gen_graph_thr(mat_w = mat_gen, mat_thr = mat_gen,
thr = 0.8)
res_met <- compute_node_metric(graph)
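For comparison, two of these metrics have direct igraph equivalents (a sketch, not necessarily
the internal implementation):
igraph::degree(graph) # 'deg': number of links per node
igraph::strength(graph) # 'str': sum of the weights of the links per node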
convert_cd Fit a model to convert cost-distances into Euclidean distances
Description
The function fits a model to convert cost-distances into Euclidean distances as implemented in
Graphab software.
Usage
convert_cd(
mat_euc,
mat_ld,
to_convert,
method = "log-log",
fig = TRUE,
line_col = "black",
pts_col = "#999999"
)
Arguments
mat_euc A symmetric matrix or dist object with pairwise geographical Euclidean dis-
tances between populations or sample sites. It will be the explanatory variable,
and only values from the off diagonal lower triangle will be used.
mat_ld A symmetric matrix or dist object with pairwise landscape distances between
populations or sample sites. These distances can be cost-distances or resistance
distances, among others. It will be the explained variable, and only values from
the off diagonal lower triangle will be used.
to_convert A numeric value or numeric vector with Euclidean distances to convert into
cost-distances.
method A character string indicating the method used to fit the model.
• If ’method = "log-log"’ (default), then the model takes the following form :
log(ld) ~ A + B * log(euc)
• If ’method = "lm"’, then the model takes the following form : ld ~ A + B *
euc
fig Logical (default = TRUE) indicating whether a figure is plotted
line_col (if ’fig = TRUE’) Character string indicating the color used to plot the line (de-
fault: "blue"). It must be a hexadecimal color code or a color used by default in
R.
pts_col (if ’fig = TRUE’) Character string indicating the color used to plot the points
(default: "#999999"). It must be a hexadecimal color code or a color used by
default in R.
Details
IDs in ’mat_euc’ and ’mat_ld’ must be the same and refer to the same sampling site or populations,
and both matrices must be ordered in the same way. Matrix of Euclidean distance ’mat_euc’ can
be computed using the function mat_geo_dist. Matrix of landscape distance ’mat_ld’ can be
computed using the function mat_cost_dist. Before the log calculation, 0 distance values are
converted into 1, so that they are 0 after this calculation.
Value
A list of output (converted values, estimated parameters, R2) and optionally a ggplot2 object to plot
Author(s)
<NAME>
References
<NAME>, <NAME>, <NAME> (2012). “A software tool dedicated to the modelling of landscape
networks.” Environmental Modelling & Software, 38, 316–327.
Examples
data("data_tuto")
mat_ld <- data_tuto[[2]][1:10, 1:10] * 1000
mat_euc <- data_tuto[[1]][1:10, 1:10] * 50000
to_convert <- c(30000, 40000)
res <- convert_cd(mat_euc = mat_euc,
mat_ld = mat_ld,
to_convert = to_convert, fig = FALSE)
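The 'log-log' option amounts to a linear model on log-transformed distances. A manual sketch
of the fit and the conversion (not the package's internal code):
df <- data.frame(euc = as.vector(as.dist(mat_euc)),
ld = as.vector(as.dist(mat_ld)))
df[df == 0] <- 1 # 0 distances become 1 before the log
mod <- lm(log(ld) ~ log(euc), data = df)
exp(predict(mod, newdata = data.frame(euc = to_convert)))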
data_ex_genind data_ex_genind genetic dataset
Description
Genetic dataset from a genetic simulation with CDPOP: 200 individuals, 10 populations, 20
microsatellite loci (3-digit coding), 100 simulated generations
Usage
data_ex_genind
Format
An object of type ’genind’
Details
The simulation was made with CDPOP during 100 generations. Dispersal was possible between
the 10 populations. Its probability depended on the cost distance between populations, calculated
on a simulated resistance surface (raster). Mutations were not possible. There were initially 600
alleles in total (many disappeared because of drift). Population stayed constant with a sex-ratio of
1. Generations did not overlap. This simulation includes a part of stochasticity and these data result
from only 1 simulation run.
References
<NAME>, <NAME> (2010). “CDPOP: a spatially explicit cost distance population genetics
program.” Molecular Ecology Resources, 10(1), 156–161.
Examples
data("data_ex_genind")
length(unique(data_ex_genind@pop))
data_ex_gstud data_ex_gstud genetic dataset
Description
Genetic dataset from a genetic simulation with CDPOP: 200 individuals, 10 populations, 20
microsatellite loci (3-digit coding), 100 simulated generations
Usage
data_ex_gstud
Format
A ’data.frame’ with columns:
ID Individual ID
POP Population name
LOCI-1 to LOCI-20 20 loci columns with microsatellite data with 3 digits coding, alleles sepa-
rated by ":", and blank missing data (class ’locus’ from gstudio)
Examples
data("data_ex_gstud")
str(data_ex_gstud)
length(unique(data_ex_gstud$POP))
data_ex_loci data_ex_loci genetic dataset
Description
Genetic dataset from a genetic simulation with CDPOP: 200 individuals, 10 populations, 20
microsatellite loci (3-digit coding), 100 simulated generations
Usage
data_ex_loci
Format
An object of class ’loci’ and ’data.frame’ with the columns :
population Population name
Other columns 20 loci columns with microsatellite data with 3 digits coding, alleles separated by
"/", and missing data noted "NA/NA"
Row names correspond to individuals’ ID
Examples
data("data_ex_loci")
length(unique(data_ex_loci$population))
data_simul_genind data_simul_genind genetic dataset
Description
Genetic dataset from a genetic simulation with CDPOP: 1500 individuals, 50 populations, 20
microsatellite loci (3-digit coding), 50 simulated generations
Usage
data_simul_genind
Format
An object of type ’genind’
Details
The simulation was made with CDPOP during 50 generations. Dispersal was possible between
the 50 populations. Its probability depended on the cost distance between populations, calculated
on a simulated resistance surface (raster). Mutations were not possible. There were initially 600
alleles in total (many disappeared because of drift). Population stayed constant with a sex-ratio of
1. Generations did not overlap. This simulation includes a part of stochasticity and these data result
from only 1 simulation run.
References
<NAME>, <NAME> (2010). “CDPOP: a spatially explicit cost distance population genetics
program.” Molecular Ecology Resources, 10(1), 156–161.
Examples
data("data_simul_genind")
length(unique(data_simul_genind@pop))
data_tuto data_tuto : data used to generate the vignette
Description
Data used to generate the vignette
Usage
data_tuto
Format
Several outputs or inputs to show how the package works, in a list:
mat_dps Genetic distance matrix example
mat_pg Second genetic distance matrix example
graph_ci Genetic independence graph example
dmc Output of the function ’dist_max_corr’
land_graph Landscape graph example
mat_ld Landscape distance matrix example
Examples
data("data_tuto")
mat_dps <- data_tuto[[1]]
str(mat_dps)
df_to_pw_mat Convert an edge-list data.frame into a pairwise matrix
Description
The function converts an edge-list data.frame into a symmetric pairwise matrix
Usage
df_to_pw_mat(data, from, to, value)
Arguments
data An object of class data.frame
from A character string indicating the name of the column with the ID of the origins
to A character string indicating the name of the column with the ID of the arrivals
value A character string indicating the name of the column with the values correspond-
ing to each pair
Details
The matrix is a symmetric matrix. Be careful: do not provide a data.frame in which, for example,
the pairs 1-2 and 2-1 have different values. Ideally, for a complete matrix, data
should have n(n-1)/2 rows if values are computed between n objects.
Value
A pairwise matrix
Author(s)
<NAME>
Examples
data(pts_pop_simul)
suppressWarnings(mat_geo <- mat_geo_dist(pts_pop_simul,
ID = "ID",
x = "x",
y = "y"))
g <- gen_graph_topo(mat_w = mat_geo,
mat_topo = mat_geo,
topo = "comp")
df <- data.frame(igraph::as_edgelist(g))
df$w <- igraph::E(g)$weight
df_to_pw_mat(df, from = "X1", to = "X2", value = "w")
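A quick sanity check on the result: the returned matrix should be symmetric, with row and
column names matching the node IDs:
m <- df_to_pw_mat(df, from = "X1", to = "X2", value = "w")
isSymmetric(m)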
dist_max_corr Compute the distance at which the correlation between genetic distance
and landscape distance is maximal
Description
The function computes the distance at which the correlation between genetic distance and
landscape distance is maximal, using a method similar to that employed by van Strien et al. (2015).
Iteratively, distance threshold values are tested. For each value, all the population pairs separated by
a landscape distance larger than the threshold are removed before the Mantel correlation coefficient
between genetic distance and landscape distance is computed. The distance threshold at which the
correlation is the strongest is then identified. A figure showing the evolution of the correlation
coefficients as the landscape distance threshold increases is plotted.
Usage
dist_max_corr(
mat_gd,
mat_ld,
interv,
from = NULL,
to = NULL,
fig = TRUE,
thr_gd = NULL,
line_col = "black",
pts_col = "#999999"
)
Arguments
mat_gd A symmetric matrix or dist object with pairwise genetic distances between
populations or sample sites.
mat_ld A symmetric matrix or dist object with pairwise landscape distances between
populations or sample sites. These distances can be Euclidean distances, cost-
distances or resistance distances, among others.
interv A numeric or integer value indicating the interval between the different distance
thresholds for which the correlation coefficients are computed.
from (optional) The minimum distance threshold value at which the correlation coef-
ficient is computed.
to (optional) The maximum distance threshold value at which the correlation coef-
ficient is computed.
fig Logical (default = TRUE) indicating whether a figure is plotted.
thr_gd (optional) A numeric or integer value used to remove genetic distance values
from the data before the calculation. All genetic distances values above ’thr_gd’
are removed from the data. This parameter can be used especially when there
are outliers.
line_col (optional, if fig = TRUE) A character string indicating the color used to plot the
line (default: "black"). It must be a hexadecimal color code or a color used by
default in R.
pts_col (optional, if fig = TRUE) A character string indicating the color used to plot the
points (default: "#999999"). It must be a hexadecimal color code or a color used
by default in R.
Details
IDs in ’mat_gd’ and ’mat_ld’ must be the same and refer to the same sampling sites or populations,
and both matrices must be ordered in the same way. The correlation coefficient computed between
genetic distance and landscape distance is a Mantel correlation coefficient. If there are fewer
than 50 pairwise values, the correlation is not computed, as in van Strien et al. (2015). Such a
method can be criticized from a strict statistical point of view, given that correlation coefficients
computed from samples of different sizes are compared. The matrix of genetic distance ’mat_gd’
can be computed using mat_gen_dist. The matrix of landscape distance ’mat_ld’ can be computed
using mat_geo_dist when the landscape distance needed is a Euclidean geographical distance.
Mantel correlation coefficients are computed using the function mantel.
Value
A list of objects:
• The distance at which the correlation is the highest.
• The vector of correlation coefficients at the different distance thresholds
• The vector of the different distance thresholds
• A ggplot2 object to plot
Author(s)
<NAME>
References
<NAME>, <NAME>, <NAME> (2015). “Isolation-by-distance in landscapes: consid-
erations for landscape genetics.” Heredity, 114(1), 27.
Examples
data("data_tuto")
mat_gen <- data_tuto[[1]]
mat_dist <- data_tuto[[2]]*1000
res_dmc <- dist_max_corr(mat_gd = mat_gen,
mat_ld = mat_dist,
from = 32000, to = 42000,
interv = 5000,
fig = FALSE)
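The core of the procedure can be sketched with vegan::mantel() (vegan is among the package
imports); a conceptual sketch, not the package's internal code: for each threshold, pairs
separated by a landscape distance above the threshold are dropped before the Mantel correlation
is computed.
thresholds <- seq(32000, 42000, by = 5000)
corr <- sapply(thresholds, function(thr) {
gd <- as.dist(mat_gen)
ld <- as.dist(mat_dist)
gd[ld > thr] <- NA # drop pairs beyond the threshold
if (sum(!is.na(gd)) < 50) return(NA) # as in van Strien et al. (2015)
vegan::mantel(gd, ld, na.rm = TRUE)$statistic
})
thresholds[which.max(corr)]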
genepop_to_genind Convert a GENEPOP file into a genind object
Description
The function converts a text file in the format used by GENEPOP software into a genind object
Usage
genepop_to_genind(path, n.loci, pop_names = NULL, allele.digit.coding = 3)
Arguments
path A character string with the path leading to the GENEPOP file in format .txt, or
alternatively the name of this file in the working directory.
n.loci The number of loci in the GENEPOP file (integer or numeric).
pop_names (optional) Populations’ names in the same order as in the GENEPOP file. Vec-
tor object (class character) of the same length as the number of populations.
Without this parameter, populations are numbered from 1 to the number of pop-
ulations.
allele.digit.coding
Number indicating whether alleles are coded with 3 (default) or 2 digits.
Details
This function uses functions from the pegas package. GENEPOP files can include microsatellite
loci or SNPs with allele names of length 2 or 3 (noted as 01, 02, 03 or 04 for SNPs). The loci line(s)
must not start with a space.
Value
An object of type genind.
Author(s)
<NAME>
References
<NAME> (1995). “GENEPOP: Population genetics software for exact tests and ecumenism.
Vers. 1.2.” Journal of Heredity, 86, 248–249.
See Also
For more details about GENEPOP file formatting : https://genepop.curtin.edu.au:443/help_
input.html For the opposite conversion, see genind_to_genepop. The output file can be used to
compute pairwise FST matrix with mat_pw_fst
Examples
path_in <- system.file('extdata', 'gpop_simul_10_g100_04_20.txt',
package = 'graph4lg')
file_n <- file.path(tempdir(), "gpop_simul_10_g100_04_20.txt")
file.copy(path_in, file_n, overwrite = TRUE)
genepop_to_genind(path = file_n, n.loci = 20,
pop_names = as.character(order(as.character(1:10))))
file.remove(file_n)
genind_to_genepop Convert a genind object into a GENEPOP file
Description
The function converts an object of class genind into a GENEPOP file. It then allows to use the
functionalities of the GENEPOP software and its derived package GENEPOP on R, as well as
some functions from other packages (differentiation test, F-stats calculations, HWE test,...). It is
designed to be used with diploid microsatellite data with alleles coded with 2 or 3 digits or SNPs
genind objects.
Usage
genind_to_genepop(x, output = "data.frame")
Arguments
x An object of class genind from package adegenet.
output A character string indicating the option used to select what the function will
return:
• If output = "data.frame"(default), then the function will return an object
’x’ of class data.frame ready to be saved as a text file with the following
command: write.table(x, file = "file_name.txt", quote=FALSE,row.names=FALSE,
col.names=FALSE)
• If output = "path_to_file/file_name.txt", then the function will write
a text file named ’file_name.txt’ in the directory corresponding to ’path_to_file’.
Without ’path_to_file’, the text file is written in the current working direc-
tory. The text file has the format required by GENEPOP software.
Value
An object of type data.frame if output = "data.frame". If output is the path and/or the file name
of a text file, then nothing is returned in the R environment but a text file is created with the
specified file name, either in the current working directory or in the specified folder.
Warning
Confusion: Do not confound this function with genind2genpop from adegenet. The latter
converts an object of class genind into an object of class genpop, whereas genind_to_genepop
converts an object of class genind into a text file compatible with GENEPOP software (Rousset,
2008).
Allele coding: This function can handle genetic data with different allele coding: 2 or 3 digit
coding for microsatellite data or 2 digit coding for SNPs (A,C,T,G become respectively 01, 02,
03, 04).
Individuals order: When individuals in the input data are not ordered by population, individuals
from the same population can be separated by individuals from other populations. This can be
problematic when subsequently calculating pairwise distance matrices. Therefore, in such a case,
individuals are ordered by population and populations are sorted in alphabetical order.
Author(s)
<NAME>
References
<NAME> (1995). “GENEPOP: Population genetics software for exact tests and ecumenism.
Vers. 1.2.” Journal of Heredity, 86, 248–249.
See Also
For more details about GENEPOP file formatting : https://genepop.curtin.edu.au:443/help_
input.html. For the opposite conversion, see genepop_to_genind. The output file can be used to
compute pairwise FST matrix with mat_pw_fst
Examples
data(data_ex_genind)
x <- data_ex_genind
df_genepop <- suppressWarnings(genind_to_genepop(x,
output = "data.frame"))
gen_graph_indep Create an independence graph of genetic differentiation from genetic
data of class genind
Description
The function creates genetic graphs from genetic data by applying the conditional independence
principle. Populations whose allelic frequencies covary significantly once the covariance
with the other populations has been taken into account are linked on the graphs.
Usage
gen_graph_indep(
x,
dist = "basic",
cov = "sq",
pcor = "magwene",
alpha = 0.05,
test = "EED",
adj = "none",
output = "igraph"
)
Arguments
x An object of class genind that contains the multilocus genotype (format ’locus’)
of the individuals as well as their population and their geographical coordinates.
dist A character string indicating the method used to compute the multilocus genetic
distance between populations
• If ’dist = ’basic” (default), then the multilocus genetic distance is computed
using a Euclidean genetic distance formula (Excoffier et al., 1992)
• If ’dist = ’weight”, then the multilocus genetic distance is computed as in
Fortuna et al. (2009). It is a Euclidean genetic distance giving more weight
to rare alleles
• If ’dist = ’PG”, then the multilocus genetic distance is computed as in pop-
graph::popgraph function, following several steps of PCA and SVD (Dyer
et Nason, 2004).
• If ’dist = ’PCA”, then the genetic distance is computed following a PCA
of the matrix of allelic frequencies by population. It is a Euclidean genetic
distance between populations in the multidimensional space defined by all
the independent principal components.
cov A character string indicating the formula used to compute the covariance matrix
from the distance matrix
• If ’cov = ’sq” (default), then the covariance matrix is calculated from the
matrix of squared distances as in Everitt et Hothorn (2011)
• If ’cov = ’dist”, then the covariance matrix is calculated from the matrix of
distances as in Dyer et Nason (2004) and popgraph function
pcor A character string indicating the way the partial correlation matrix is computed
from the covariance matrix.
• If ’pcor = ’magwene”, the steps followed are the same as in Magwene
(2001) and in popgraph::popgraph function. It is the recommended option
as it meets mathematical requirements.
• If ’pcor = ’other”, the steps followed are the same as used by Fortuna et al.
(2009). They are not consistent with the approach of Magwene (2001).
alpha A numeric value corresponding to the statistical tolerance threshold used to test
the difference from 0 of the partial correlation coefficients. By default, ’al-
pha=0.05’.
test A character string indicating the method used to test the significance of the par-
tial correlation coefficients.
• If ’test = ’EED” (default), then the Edge Exclusion Deviance criterion is
used (Whittaker, 2009). Although other methods exist, this is the most
common and thus the only one implemented here.
adj A character string indicating the way of adjusting p-values to assess the signifi-
cance of the p-values
• If ’adj = ’none” (default), there is no p-value adjustment correction
• If ’adj = ’holm”, p-values are adjusted using the sequential Bonferroni cor-
rection (Holm, 1979)
• If ’adj = ’bonferroni”, p-values are adjusted using the classic Bonferroni
correction
• If ’adj = ’BH”, p-values are adjusted using Benjamini et Hochberg (1995)
correction controlling false discovery rate
output A character string indicating the matrices included in the output list.
• If ’output = ’all” (default), then D (distance matrix), C (covariance matrix),
Rho (partial correlation matrix), M (graph incidence matrix) and S (strength
matrix) are included
• If ’output = ’dist_graph”, then the distance matrix D is returned only with
the values corresponding to the graph edges
• If ’output = ’str_graph”, then the strength values matrix S is returned only
with the values corresponding to the graph edges
• If ’output = ’inc”, then the binary adjacency matrix M is returned
• If ’output = ’igraph”, then a graph of class igraph is returned
Details
The function lets the user vary many parameters, such as the genetic distance used, the formula used
to compute the covariance, the statistical tolerance threshold, and the p-value adjustment, among others.
Value
A list of objects of class matrix, an object of class matrix or a graph object of class igraph
Author(s)
<NAME>
References
<NAME>, Nason JD (2004). “Population graphs: the graph theoretic shape of genetic structure.”
Molecular ecology, 13(7), 1713–1727. <NAME>, Hochberg Y (1995). “Controlling the false
discovery rate: a practical and powerful approach to multiple testing.” Journal of the royal statis-
tical society. Series B (Methodological), 289–300. Bowcock AM, Ruiz-Linares A, Tomfohrde J,
Minch E, Kidd JR, Cavalli-Sforza LL (1994). “High resolution of human evolutionary trees with
polymorphic microsatellites.” nature, 368(6470), 455–457. Everitt B, Hothorn T (2011). An in-
troduction to applied multivariate analysis with R. Springer. Excoffier L, Smouse PE, Quattro JM
(1992). “Analysis of molecular variance inferred from metric distances among DNA haplotypes:
application to human mitochondrial DNA restriction data.” Genetics, 131(2), 479–491. Fortuna
MA, Albaladejo RG, <NAME>, <NAME>, <NAME> (2009). “Networks of spatial genetic
variation across species.” Proceedings of the National Academy of Sciences, 106(45), 19044–19049.
Holm S (1979). “A simple sequentially rejective multiple test procedure.” Scandinavian journal of
statistics, 65–70. Magwene PM (2001). “New tools for studying integration and modularity.” Evo-
lution, 55(9), 1734–1745. Wermuth N, <NAME> (1977). “Algorithm AS 105: fitting a covariance
selection model to a matrix.” Journal of the Royal Statistical Society. Series C (Applied Statis-
tics), 26(1), 88–92. <NAME> (2009). Graphical models in applied multivariate statistics. Wiley
Publishing.
Examples
data(data_ex_genind)
dist_graph_test <- gen_graph_indep(x = data_ex_genind, dist = "basic",
cov = "sq", pcor = "magwene",
alpha = 0.05, test = "EED",
adj = "none", output = "igraph")
gen_graph_thr Create a graph of genetic differentiation using a link weight threshold
Description
The function constructs a genetic graph by pruning links whose weights are larger (or lower) than a
specific threshold
Usage
gen_graph_thr(mat_w, mat_thr = NULL, thr, mode = "larger")
Arguments
mat_w A symmetric (pairwise) matrix or a dist object whose elements will be the
links’ weights
mat_thr (optional) A symmetric (pairwise) distance matrix or a dist object whose val-
ues will be used for the pruning based on the threshold value.
thr The threshold value, logically between min(mat_thr) and max(mat_thr) (integer
or numeric)
mode • If ’mode = ’larger” (default), all the links whose weight is larger than ’thr’
are removed.
• If ’mode = ’lower”, all the links whose weight is lower than ’thr’ are re-
moved.
Details
If ’mat_thr’ is not defined, ’mat_w’ is used for the pruning. Matrices ’mat_w’ and ’mat_thr’ must
have the same dimensions and the same rows’ and columns’ names. Values in ’mat_thr’ matrix
must be positive. Negative values from ’mat_w’ are transformed into zeros. The function works
only for undirected graphs. If dist objects are specified, it is assumed that colnames and row.names
of mat_w and mat_thr refer to the same populations/locations.
Value
A graph object of class igraph
Author(s)
<NAME>
Examples
mat_w <- mat_gen_dist(x = data_ex_genind, dist = 'DPS')
suppressWarnings(mat_thr <- mat_geo_dist(pts_pop_ex,
ID = "ID",
x = "x",
y = "y"))
mat_thr <- mat_thr[row.names(mat_w), colnames(mat_w)]
graph <- gen_graph_thr(mat_w, mat_thr, thr = 6000, mode = "larger")
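When 'mat_thr' is not supplied, the pruning reduces to deleting from the complete weighted graph
every link whose weight exceeds 'thr'. An igraph sketch of that special case (not the internal
code; the 0.7 threshold is arbitrary):
g <- igraph::graph_from_adjacency_matrix(mat_w, mode = "undirected",
weighted = TRUE, diag = FALSE)
g_pruned <- igraph::delete_edges(g, which(igraph::E(g)$weight > 0.7))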
gen_graph_topo Create a graph of genetic differentiation with a specific topology
Description
The function constructs a genetic graph with a specific topology from genetic and/or geographical
distance matrices
Usage
gen_graph_topo(mat_w, mat_topo = NULL, topo = "gabriel", k = NULL)
Arguments
mat_w A symmetric (pairwise) matrix or a dist object whose elements will be the
links’ weights
mat_topo (optional) A symmetric (pairwise) distance matrix or a dist object whose val-
ues will be used for the pruning method.
topo Which topology does the created graph have?
• If ’topo = ’gabriel” (default), the resulting graph will be a Gabriel graph
(Gabriel et al., 1969). Itq
means that there is a link between nodes x and y if
2
and only if dxy ≤ min( d2xz + d2yz ), with z any other node of the graph.
• If ’topo = ’mst”, the resulting graph will have the topology of a minimum
spanning tree. It means that the graph will not include any cycle (tree) and
it will be the subgraph with a tree topology with the minimum total links’
weight (based on ’mat_topo’ values).
• If ’topo = ’percol”, if the link of the resulting graph with the minimum
weight is removed, then the graph breaks into two components.
• If ’topo = ’comp”, a complete graph whose links are weighted with values
from ’mat_w’ is created.
• If ’topo = ’knn”, a k-nearest neighbor graph whose links are weighted with
values from ’mat_w’ is created. If the distance between node i and node
j is among the k-th smallest distances between node i and the other nodes
according to distances in matrix ’mat_topo’, then there is a link between i
and j in the resulting graph. Therefore, a node can be connected to more
than two nodes because the nearest node to node j is not necessarily among
the k nearest neighbors to node i. Let d1 be the smallest distance from node
i to other nodes, if there are k nodes or more at this distance from node i,
they are all connected to i. If there are less than k nodes connected to i at
a distance d1, then we consider nodes at a distance d2 from i. In the latter
case, all the nodes at a distance d2 from i are connected to i.
k (if ’topo = ’knn”) An integer which indicates the number of nearest neighbors
considered to create the K-nearest neighbor graph. k must be lower than the
total number of nodes minus 1.
Details
If ’mat_topo’ is not defined, ’mat_w’ is used for the pruning. Matrices ’mat_w’ and ’mat_topo’
must have the same dimensions and the same rows’ and columns’ names. Values in ’mat_topo’
matrix must be positive. Negative values from ’mat_w’ are transformed into zeros. The function
works only for undirected graphs. Note that the topology ’knn’ works best when ’mat_topo’
contains distance values from a continuous value range, thereby avoiding equal distances between a
node and the others. If dist
objects are specified, it is assumed that colnames and row.names of mat_w and mat_topo refer to
the same populations/locations.
Value
A graph object of class igraph
Author(s)
<NAME>
References
<NAME>R, Sokal RR (1969). “A new statistical approach to geographic variation analysis.” Sys-
tematic zoology, 18(3), 259–278.
Examples
mat_w <- mat_gen_dist(x = data_ex_genind, dist = 'DPS')
suppressWarnings(mat_topo <- mat_geo_dist(pts_pop_ex,
ID = "ID",
x = "x",
y = "y"))
mat_topo <- mat_topo[row.names(mat_w), colnames(mat_w)]
graph <- gen_graph_topo(mat_w, mat_topo, topo = "mst")
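For instance, the ’mst’ topology corresponds to igraph’s minimum spanning tree computed on the
complete graph weighted by ’mat_topo’ (a conceptual sketch, not the internal code):
g_comp <- igraph::graph_from_adjacency_matrix(mat_topo, mode = "undirected",
weighted = TRUE, diag = FALSE)
g_mst <- igraph::mst(g_comp) # uses the 'weight' edge attribute by default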
get_graphab Download Graphab if not present on the user’s machine
Description
The function checks for the presence of Graphab (.jar) on the user’s machine and downloads it if
absent. It also checks that users have installed java on their machine.
Usage
get_graphab(res = TRUE, return = FALSE)
Arguments
res Logical indicating whether a message says if Graphab has been downloaded or
not.
return Logical indicating whether the function returns a 1 or a 0 to indicate if Graphab
has been downloaded or not.
Details
If the download does not work, you can create a directory named ’graph4lg_jar’ in the directory
rappdirs::user_data_dir() and copy the Graphab software downloaded from
https://thema.univ-fcomte.fr/productions/download.php?name=graphab&version=2.8&username=Graph4lg&institution=R
Value
If res = TRUE, the function displays a message indicating to users what has been done. If return =
TRUE, it returns a 0 if Graphab is already on the machine and a 1 if it has been downloaded.
Author(s)
<NAME>
Examples
## Not run:
get_graphab()
## End(Not run)
get_graphab_linkset Get linkset computed in the Graphab project
Description
The function gets a linkset computed in the Graphab project
Usage
get_graphab_linkset(proj_name, linkset, proj_path = NULL)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is.
linkset A character string indicating the name of the link set whose properties are im-
ported. The link set has been created with Graphab or using graphab_link
function.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
Details
See more information in the Graphab 2.8 manual: https://sourcesup.renater.fr/www/graphab/
download/manual-2.8-en.pdf. This function works if the get_graphab function works correctly.
Value
A data.frame with the link properties (from, to, cost-distance, Euclidean distance)
Author(s)
<NAME>
Examples
## Not run:
get_graphab_linkset(proj_name = "grphb_ex",
linkset = "lkst1")
## End(Not run)
get_graphab_linkset_cost
Get cost values associated with a linkset in a Graphab project
Description
The function extracts the cost values associated with a linkset in a Graphab project
Usage
get_graphab_linkset_cost(proj_name, linkset, proj_path = NULL)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml will be
created.
linkset (optional, default=NULL) A character string indicating the name of the link set
used to create the graph. Link sets can be created with graphab_link.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
Value
The function returns a data.frame with the cost values corresponding to every raster code value.
Author(s)
<NAME>
Examples
## Not run:
proj_name <- "grphb_ex"
get_graphab_linkset_cost(proj_name = proj_name,
linkset = "lkst1")
## End(Not run)
get_graphab_metric Get metrics computed at the node in the Graphab project
Description
The function gets the metrics computed at the node-level in the Graphab project
Usage
get_graphab_metric(proj_name, proj_path = NULL)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
Details
The imported metrics describe the patches and have been computed from the different graphs cre-
ated in the Graphab project. See more information in Graphab 2.8 manual: https://sourcesup.
renater.fr/www/graphab/download/manual-2.8-en.pdf
Value
A data.frame with metrics computed at the patch level.
Author(s)
<NAME>
Examples
## Not run:
get_graphab_metric(proj_name = "grphb_ex")
## End(Not run)
get_graphab_raster_codes
Get unique raster codes from a Graphab project
Description
The function extracts unique raster codes from a Graphab project
Usage
get_graphab_raster_codes(proj_name, mode = "all", proj_path = NULL)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml will be
created.
mode A character string equal to either ’all’ (default) or ’habitat’ indicating whether
the returned codes are all the codes of the source raster used for creating the
project or only the code corresponding to the habitat patches.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
Value
The function returns a vector of integer values corresponding to the source raster codes (all the
codes or only the one corresponding to habitat patches).
Author(s)
<NAME>
Examples
## Not run:
proj_name <- "grphb_ex"
get_graphab_raster_codes(proj_name = proj_name,
mode = "all")
## End(Not run)
graphab_capacity Computes custom capacities of patches in the Graphab project
Description
The function computes custom capacities of patches in the Graphab project
Usage
graphab_capacity(
proj_name,
mode = "area",
patch_codes = NULL,
exp = NULL,
ext_file = NULL,
thr = NULL,
linkset = NULL,
codes = NULL,
cost_conv = FALSE,
weight = FALSE,
proj_path = NULL,
alloc_ram = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is. It can
be created with graphab_project
mode A character string indicating the way capacities are computed. It must be either:
• mode='area' (default): The capacity of the patches is computed as the area
of each habitat patch. The argument exp makes it possible to raise the area
to a power given by an exponent.
• mode='ext_file': The capacity of the patches is given by an external .csv
file. See argument ext_file below.
• mode='neigh': The capacity is computed depending on the neighbouring
raster cells from each habitat patch. The number of cells with a value given
by codes argument is summed up to the distance thr. This number can be
weighted according to the weight argument.
patch_codes (optional, default=NULL) An integer value or vector specifying the codes cor-
responding to the habitat pixel whose corresponding patches are included to
compute the capacity as the area of the habitat when mode='area'. Patches
corresponding to other initial habitat codes are weighted by 0.
exp An integer value specifying the power to which patch area are raised when
mode='area'. When not specified, exp=1 by default.
ext_file A character string specifying the name of the .csv file in which patch capacities
are stored. It must be located either in the working directory or in the directory
defined by proj_path. It must have as many rows as there are patches in the
project and its column names must include ’Id’ and ’Capacity’. The ’Id’ column
must correspond to the patch ID in the ’patches’ layer (see get_graphab_metric).
The ’Capacity’ column must contain the corresponding patch capacities to as-
sign each patch.
thr (optional, default=NULL) An integer or numeric value indicating the maximum
distance in cost distance units (except when cost_conv = TRUE) at which cells
are considered for computing the capacity when mode='neigh'.
linkset (optional, default=NULL) A character string indicating the name of the link set
used to take distance into account when computing the capacity. Only used
when mode='neigh'. Link sets can be created with graphab_link.
codes An integer value or a vector of integer values specifying the codes of the raster
cells taken into account when computing the capacity in the neighbourhood of
the patches, when mode='neigh'.
cost_conv FALSE (default) or TRUE. Logical indicating whether numeric thr values are
converted from cost-distance into Euclidean distance using a log-log linear re-
gression. See also convert_cd function. Only used when mode='neigh'.
weight A logical indicating whether the cells are weighted by a weight decreasing with
the distance from the patches (TRUE) or not (FALSE). The weights follow a
negative exponential decline such that wi = exp(-alpha*di), where wi is the
weight of cell i, di its distance from the patch and alpha a parameter determined
such that wi = 0.05 when di = thr.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
Details
See more information in Graphab 2.8 manual: https://sourcesup.renater.fr/www/graphab/
download/manual-2.8-en.pdf. Be careful when the capacity has been changed: the last changes
are taken into account for subsequent calculations in a project.
Author(s)
<NAME>
Examples
## Not run:
graphab_capacity(proj_name = "grphb_ex",
mode = "area")
## End(Not run)
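The mode='neigh' variant combines several of the arguments above. A sketch, assuming a
cost-based link set named "lcp" already exists in the project and that raster code 2
denotes the cells to count (both assumptions, for illustration only):
## Not run:
# Capacity = number of cells of code 2 within 1000 cost units of each patch,
# weighted by a negative exponential decline (w_i = 0.05 at d_i = thr)
graphab_capacity(proj_name = "grphb_ex",
mode = "neigh",
thr = 1000,
linkset = "lcp",
codes = 2,
weight = TRUE)
## End(Not run)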
graphab_corridor Computes corridors from least-cost paths already computed in the
Graphab project
Description
The function computes corridors around the least-cost paths which have been computed in the
Graphab project.
Usage
graphab_corridor(
proj_name,
graph,
maxcost,
format = "raster",
cost_conv = FALSE,
proj_path = NULL,
alloc_ram = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is. It can
be created with graphab_project
graph A character string indicating the name of the graph with the links from which
the corridors are computed. This graph has been created with Graphab or using
graphab_graph function and is associated with a link set. Only the links present
in the graph will be used in the computation.
maxcost An integer or numeric value indicating the maximum cost distance from the
least-cost paths considered for creating the corridors, in cost distance units (ex-
cept when cost_conv = TRUE).
format (optional, default = "raster") A character string indicating whether the output is
a raster file or a shapefile layer.
cost_conv FALSE (default) or TRUE. Logical indicating whether numeric maxcost values
are converted from cost-distance into Euclidean distance using a log-log linear
regression. See also convert_cd function.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
Details
See more information in Graphab 2.8 manual: https://sourcesup.renater.fr/www/graphab/
download/manual-2.8-en.pdf. Be careful when the capacity has been changed: the last changes
are taken into account for subsequent calculations in a project.
Author(s)
<NAME>
Examples
## Not run:
graphab_corridor(proj_name = "grphb_ex",
graph = "graph",
maxcost = 1000,
format = "raster",
cost_conv = FALSE)
## End(Not run)
graphab_graph Create a graph in the Graphab project
Description
The function creates a graph from a link set in a Graphab project
Usage
graphab_graph(
proj_name,
linkset = NULL,
name = NULL,
thr = NULL,
cost_conv = FALSE,
proj_path = NULL,
alloc_ram = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is. It can
be created with graphab_project
linkset (optional, default=NULL) A character string indicating the name of the link set
used to create the graph. If linkset=NULL, every link set present in the project
will be used to create a graph. Link sets can be created with graphab_link.
name (optional, default=NULL) A character string indicating the name of the graph
created. If name=NULL, a name will be created. If both linkset=NULL and
name=NULL, then a graph will be created for every link set present in the project
and a name will be created every time. In the latter case, a unique name cannot
be specified. Link sets can be created with graphab_link.
thr (optional, default=NULL) An integer or numeric value indicating the maximum
distance associated with the links of the created graph. It allows users to create
a pruned graph based on a distance threshold. Note that when the link set used
has a planar topology, the graph is necessarily a pruned graph (not complete)
and adding this threshold parameter can remove other links. When the link set
has been created with cost-distances, the parameter is expressed in cost-distance
units whereas when the link set is based upon Euclidean distances, the parameter
is expressed in meters.
cost_conv FALSE (default) or TRUE. Logical indicating whether numeric thr values are
converted from cost-distance into Euclidean distance using a log-log linear re-
gression. See also convert_cd function.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
Details
By default, intra-patch distances are considered for metric calculation. See more information
in Graphab 2.8 manual: https://sourcesup.renater.fr/www/graphab/download/manual-2.
8-en.pdf
Author(s)
<NAME>
Examples
## Not run:
graphab_graph(proj_name = "grphb_ex",
linkset = "lcp",
name = "graph")
## End(Not run)
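A pruned graph can be obtained from the same link set by adding the distance threshold
described above. A sketch, assuming the "lcp" link set is cost-based so that thr is
expressed in cost-distance units:
## Not run:
graphab_graph(proj_name = "grphb_ex",
linkset = "lcp",
name = "graph_pruned",
thr = 5000)
## End(Not run)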
graphab_interpol Creates a raster with interpolated connectivity metric values from met-
rics already computed in the Graphab project
Description
The function creates a raster with interpolated connectivity metric values from a metric already
computed in the Graphab project.
Usage
graphab_interpol(
proj_name,
name,
reso,
linkset,
graph,
var,
dist,
prob = 0.05,
thr = NULL,
summed = FALSE,
proj_path = NULL,
alloc_ram = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is. It can
be created with graphab_project
name A character string indicating the name of the raster to be created after the inter-
polation.
reso An integer indicating the spatial resolution in meters of the raster resulting from
the metric interpolation.
linkset A character string indicating the name of the link set used for the interpolation.
It should be the link set used to create the graph and the metric.
graph A character string indicating the name of the graph from which the metric was
computed and whose links are considered for a potential multi-linkage with
patches. This graph has been created with Graphab or using graphab_graph
function and is associated with a link set.
var A character string indicating the name of the already computed metric to be
interpolated.
dist A numeric or integer value specifying the distance at which we assume a prob-
ability equal to prob during the interpolation. It is used to set alpha for computing
probabilities associated with distances between each pixel and the neighbouring
patch(es), such that the probability between patch i and pixel j is
p_ij = exp(-alpha * d_ij).
prob A numeric or integer value specifying the probability at distance dist. By de-
fault, prob=0.05. It is used to set alpha (see param dist above).
thr (default NULL) If NULL, the value of each pixel is computed from the value
of the metric at the nearest habitat patch, weighted by a probability depending
on distance. If an integer, the value of each pixel depends on the values of the
metric taken at several of the nearest habitat patches, up to a distance (cost or
Euclidean distance, depending on the type of linkset) equal to thr.
summed Logical (default = FALSE), only used if thr is not NULL, specifying whether
multiple values are summed up (TRUE) or averaged (FALSE) after being weighted by
probabilities.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
Details
See more information in Graphab 2.8 manual: https://sourcesup.renater.fr/www/graphab/
download/manual-2.8-en.pdf. Be careful when the capacity has been changed: the last changes
are taken into account for subsequent calculations in a project.
Author(s)
<NAME>
Examples
## Not run:
graphab_interpol(proj_name = "grphb_ex",
name = "F_interp",
reso = 20,
linkset = "lcp",
graph = "graph",
var = "F_d600_p0.5_beta1_graph",
dist = 600,
prob = 0.5)
## End(Not run)
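The multi-patch variant uses the thr and summed arguments. A sketch, reusing the same
(hypothetical) metric name and averaging the values taken at all patches up to 1200
distance units:
## Not run:
graphab_interpol(proj_name = "grphb_ex",
name = "F_interp_multi",
reso = 20,
linkset = "lcp",
graph = "graph",
var = "F_d600_p0.5_beta1_graph",
dist = 600,
prob = 0.5,
thr = 1200,
summed = FALSE)
## End(Not run)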
graphab_link Create a link set in the Graphab project
Description
The function creates a link set between habitat patches in the Graphab project.
Usage
graphab_link(
proj_name,
distance = "cost",
name,
cost = NULL,
topo = "planar",
remcrosspath = FALSE,
proj_path = NULL,
alloc_ram = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is. It can
be created with graphab_project
distance A character string indicating whether links between patches are computed based
on:
• Shortest cost distances: distance='cost' (default)
• Straight Euclidean distances: distance='euclid'
In the resulting link set, each link will be associated with its corresponding cost-
distance and the length of the least-cost path in meters (if distance='cost') or
with its length in Euclidean distance (if distance='euclid')
name A character string indicating the name of the created linkset.
cost This argument could be:
• A data.frame indicating the cost values associated to each raster cell value.
These values refer to the raster used to create the project with graphab_project.
The data.frame must have two columns:
– ’code’: raster cell values
– ’cost’: corresponding cost values
• The path to an external raster file in .tif format with cost values.
topo A character string indicating the topology of the created link set. It can be:
• Planar (topo='planar' (default)): a planar set of links is created. It speeds
up the computation but will prevent from creating complete graphs with
graphab_graph.
• Complete (topo='complete'): a complete set of links is created. A link is
computed between every pair of patches.
remcrosspath (optional, default = FALSE) A logical indicating whether links crossing patches
are removed (TRUE).
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
Details
By default, links crossing patches are neither ignored nor broken into two links. For example, a link
from patch A to patch C crossing patch B is created. It takes into account the distance inside patch
B. This can be a problem when computing the BC index. See more information in Graphab 2.8 manual:
https://sourcesup.renater.fr/www/graphab/download/manual-2.8-en.pdf
Author(s)
<NAME>, <NAME>
Examples
## Not run:
df_cost <- data.frame(code = 1:5,
cost = c(1, 10, 100, 1000, 1))
graphab_link(proj_name = "grphb_ex",
distance = "cost",
name = "lcp",
cost = df_cost,
topo = "complete")
## End(Not run)
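The cost argument can alternatively point to an external cost raster. A sketch, where
"cost_surface.tif" is a hypothetical .tif file of cost values:
## Not run:
graphab_link(proj_name = "grphb_ex",
distance = "cost",
name = "lcp_ext",
cost = "cost_surface.tif", # hypothetical external cost raster
topo = "planar")
## End(Not run)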
graphab_metric Compute connectivity metrics from a graph in the Graphab project
Description
The function computes connectivity metrics on a graph from a link set in a Graphab project
Usage
graphab_metric(
proj_name,
graph,
metric,
multihab = FALSE,
dist = NULL,
prob = 0.05,
beta = 1,
cost_conv = FALSE,
return_val = TRUE,
proj_path = NULL,
alloc_ram = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is.
graph A character string indicating the name of the graph on which the metric is com-
puted. This graph has been created with Graphab or using graphab_graph
function and is associated with a link set. Only the links present in the graph
and their corresponding weights will be used in the computation, together with
patch areas.
metric A character string indicating the metric which will be computed on the graph.
This metric can be:
• A global metric:
– Probability of Connectivity (metric = 'PC'): Sum of products of area
of all pairs of patches weighted by their interaction probability, divided
by the square of the area of the study zone. This ratio is equivalent
to the probability that two points randomly placed in the study area are
connected.
– Equivalent Connectivity (metric = 'EC'): Square root of the sum of
products of capacity of all pairs of patches weighted by their interaction
probability. This is the size of a single patch (maximally connected)
that would provide the same probability of connectivity as the actual
habitat pattern in the landscape (Saura et al., 2011).
– Integral Index of Connectivity (metric = 'IIC'): For the entire graph:
sum over all pairs of patches of the product of patch areas divided by the
number of links between them; this sum is divided by the square of the area
of the study zone. IIC is built like the PC index but using the inverse of a
topological distance rather than a negative exponential function of the
distance based on the link weight.
• A local metric:
– Flux (metric = 'F'): For the focal patch i: sum of the areas of all patches
other than i, weighted according to their minimum distance to the
focal patch through the graph. This sum is an indicator of the potential
dispersal from the patch i or, conversely, to the patch i.
– Betweenness Centrality index (metric = 'BC'): Sum of the shortest
paths through the focal patch i, each path being weighted by the product
of the areas of the connected patches and of their interaction probabil-
ity. All possible paths between every pair of patches are considered in
this computation.
– Interaction Flux (metric = 'IF'): Sum of products of the focal patch
area with all the other patches, weighted by their interaction probabil-
ity.
– Degree (metric = 'Dg'): Number of edges connected to the node i, i.e.
the number of patches directly connected to the patch i.
– Closeness Centrality index (metric = 'CCe'): Mean distance from the
patch i to all other patches of its component k.
– Current Flux (metric = 'CF'): Sum of currents passing through the
patch i. c_ji represents the current through the patch i when currents are
sent from all patches (except j) to the patch j. The patch j is connected
to the ground.
• A delta metric:
– delta Probability of Connectivity (metric = 'dPC'): Rate of variation
between the value of the PC index and the value of PC' corresponding to
the removal of the patch i. The value of dPC is decomposed into three
parts:
* dPC_area is the variation induced by the area lost after removal;
* dPC_flux is the variation induced by the loss of interaction between
the patch i and other patches;
* dPC_connector is the variation induced by the modification of paths
connecting other patches and initially routed through i.
For most metrics, the interaction probability is computed for each pair of patches
from the path that minimizes the distance d (or the cost) between them. It then
maximizes exp(-alpha * d_ij) for patches i and j. To use patch capacity values different
from the patch area, please use Graphab software directly.
multihab A logical (default = FALSE) indicating whether the ’multihabitat’ mode is used
when computing the metric. It only applies to the following metrics: ’EC’, ’F’,
’IF’ and ’BC’. If TRUE, then the project must have been created with the op-
tion nomerge=TRUE. It then returns several columns with metric values including
the decomposition of the computation according to the type of habitat of every
patch. Be careful, this option is in development and we cannot guarantee the
results are correct.
dist A numeric or integer value specifying the distance at which dispersal probability
is equal to prob. This argument is mandatory for weighted metrics (PC, F, IF,
BC, dPC, CCe, CF) but not used for others. It is used to set alpha for computing
dispersal probabilities associated with all inter-patch distances, such that the
dispersal probability between patches i and j is p_ij = exp(-alpha * d_ij).
prob A numeric or integer value specifying the dispersal probability at distance dist.
By default, prob=0.05. It is used to set alpha (see param dist above).
beta A numeric or integer value between 0 and 1 specifying the exponent associated
with patch areas in the computation of metrics weighted by patch area. By
default, beta=1. When beta=0, patch areas do not have any influence in the
computation.
cost_conv FALSE (default) or TRUE. Logical indicating whether numeric dist values are
converted from cost-distance into Euclidean distance using a log-log linear re-
gression. See also convert_cd function.
return_val Logical (default = TRUE) indicating whether metric values are returned in R
(TRUE) or only stored in the patch attribute layer (FALSE)
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
Details
The metrics are described in Graphab 2.8 manual: https://sourcesup.renater.fr/www/graphab/
download/manual-2.8-en.pdf. Graphab software makes the computation of other metrics possible.
Be careful: when the same metric is computed several times, the option return=TRUE does not
return the right columns. In these cases, use get_graphab_metric.
Value
If return_val=TRUE, the function returns a data.frame with the computed metric values and the
corresponding patch ID when the metric is local or delta metric, or the numeric value of the global
metric.
Author(s)
<NAME>
Examples
## Not run:
graphab_metric(proj_name = "grphb_ex",
graph = "graph",
metric = "PC",
dist = 1000,
prob = 0.05,
beta = 1)
## End(Not run)
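The dist and prob arguments jointly set alpha through p = exp(-alpha * d). A short base R
check (a sketch, not a package function) of the alpha implied by the example above:
dist <- 1000
prob <- 0.05
alpha <- -log(prob) / dist
# Dispersal probability at any distance d is exp(-alpha * d)
exp(-alpha * 1000) # = 0.05 by construction
exp(-alpha * 2000) # = 0.0025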
graphab_modul Create modules from a graph in the Graphab project
Description
The function creates modules from a graph by maximising modularity
Usage
graphab_modul(
proj_name,
graph,
dist,
prob = 0.05,
beta = 1,
nb = NULL,
return = TRUE,
proj_path = NULL,
alloc_ram = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is.
graph A character string indicating the name of the graph on which the modular-
ity index is computed. This graph has been created with Graphab or using
graphab_graph function and is associated with a link set. Only the links present
in the graph and their corresponding weights will be used in the computation,
together with patch areas.
dist A numeric or integer value specifying the distance at which dispersal probability
is equal to prob. This argument is mandatory for weighted metrics (PC, F, IF,
BC, dPC, CCe, CF) but not used for others. It is used to set alpha for computing
dispersal probabilities associated with all inter-patch distances, such that the
dispersal probability between patches i and j is p_ij = exp(-alpha * d_ij).
prob A numeric or integer value specifying the dispersal probability at distance dist.
By default, prob=0.05. It is used to set alpha (see param dist above).
beta A numeric or integer value between 0 and 1 specifying the exponent associated
with patch areas in the computation of metrics weighted by patch area. By
default, beta=1. When beta=0, patch areas do not have any influence in the
computation.
nb (optional, default=NULL) An integer or numeric value indicating the number of
modules to be created. By default, it is the number that maximises the modular-
ity index.
return Logical (default=TRUE) indicating whether results are returned to user.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
Details
This function maximises a modularity index by searching for the node partition that involves a
large number of links within modules and a small number of inter-module links. Each link is given
a weight in the computation, such that the weight w_ij of the link between patches i and j is:
w_ij = (a_i * a_j)^beta * exp(-alpha * d_ij)
This function does not automatically convert Euclidean distances into cost-distances.
See more information in Graphab 2.8 manual: https://sourcesup.renater.fr/www/graphab/
download/manual-2.8-en.pdf
Value
If return=TRUE, the function returns a message indicating whether the partition has been done.
New options are being developed.
Author(s)
<NAME>
Examples
## Not run:
graphab_modul(proj_name = "grphb_ex",
graph = "graph",
dist = 1000,
prob = 0.05,
beta = 1)
## End(Not run)
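A fixed number of modules can also be imposed with the nb argument instead of the
modularity-maximising number. A sketch with four modules:
## Not run:
graphab_modul(proj_name = "grphb_ex",
graph = "graph",
dist = 1000,
prob = 0.05,
beta = 1,
nb = 4)
## End(Not run)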
graphab_pointset Add a point set to the Graphab project
Description
The function adds a spatial point set to the Graphab project, allowing users to identify the closest
habitat patch to each point and to get the corresponding connectivity metrics.
Usage
graphab_pointset(
proj_name,
linkset,
pointset,
id = "ID",
return_val = TRUE,
proj_path = NULL,
alloc_ram = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is.
linkset A character string indicating the name of the link set used. The link set is here
used to get the defined cost values and compute the distance from the point to
the patches. Link sets can be created with graphab_link.
pointset Can be either:
• A character string indicating the path (absolute or relative) to a shapefile
point layer
• A character string indicating the path to a .csv file with three columns: ID,
x and y, respectively indicating the point ID, longitude and latitude.
• A data.frame with three columns: ID, x and y, respectively indicating the
point ID, longitude and latitude.
• A SpatialPointsDataFrame
The point ID column must be 'ID' by default but can also be specified by the id
argument in every case.
id A character string indicating the name of the column in either the .csv table,
data.frame or attribute table, corresponding to the ID of the points. By default,
it should be ’ID’. This column is used for naming the points when returning the
output.
return_val Logical (default=TRUE) indicating whether the metrics associated with closest
habitat patches from the points are returned to users.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
Details
Point coordinates must be in the same coordinate reference system as the habitat patches (and initial
raster layer). See more information in Graphab 2.8 manual: https://sourcesup.renater.fr/
www/graphab/download/manual-2.8-en.pdf
Value
If return_val=TRUE, the function returns a data.frame with the properties of the nearest patch to
every point in the point set, as well as the distance from each point to the nearest patch.
Author(s)
<NAME>
Examples
## Not run:
graphab_pointset(proj_name = "grphb_ex",
linkset = "lcp",
pointset = "pts.shp")
## End(Not run)
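The point set can also be passed as a data.frame. A sketch with hypothetical coordinates,
which must be in the same reference system as the project raster:
## Not run:
pts <- data.frame(ID = c("p1", "p2"), # hypothetical points
x = c(102400, 104800),
y = c(6714500, 6716300))
graphab_pointset(proj_name = "grphb_ex",
linkset = "lcp",
pointset = pts,
id = "ID")
## End(Not run)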
graphab_project Create a Graphab project
Description
The function creates a Graphab project from a raster file on which habitat patches can be delimited.
Usage
graphab_project(
proj_name,
raster,
habitat,
nomerge = FALSE,
minarea = 0,
nodata = NULL,
maxsize = NULL,
con8 = FALSE,
alloc_ram = NULL,
proj_path = NULL
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml will be
created.
raster A character string indicating the name of the .tif raster file or of its path. If the
path is not specified, the raster must be present in the current working directory.
Raster cell values must be in INT2S encoding.
habitat An integer or numeric value or vector indicating the code(s) (cell value(s)) of the
habitat cells in the raster file.
nomerge (optional, default=FALSE) A logical indicating whether contiguous patches cor-
responding to different pixel codes are merged (FALSE, default) or not merged
(TRUE). Be careful, the nomerge = TRUE option is in development and we can-
not guarantee the results are correct.
minarea (optional, default=0) An integer or numeric value specifying the minimum area
in hectares for a habitat patch size to become a graph node.
nodata (optional, default=NULL) An integer or numeric value specifying the code in
the raster file associated with nodata values (often corresponding to peripheral
cells)
maxsize (optional, default=NULL) An integer or numeric value specifying the maximum
side length of the rectangular full extent of each habitat patch in metric units. If
this side length exceeds maxsize m, then several patches are created.
con8 (optional, default=FALSE) A logical indicating whether a neighborhood of 8
pixels (TRUE) is used for patch definition. By default, con8=FALSE, corresponding
to a 4-pixel neighborhood.
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process. Increasing this value can speed up the computa-
tions. Too large values may not be compatible with your machine settings.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
Details
A habitat patch consists of the central pixel with its eight neighbors if they are of the same value
(8-connexity), and the patch geometry is not simplified. See more information in Graphab 2.8 manual:
https://sourcesup.renater.fr/www/graphab/download/manual-2.8-en.pdf
Author(s)
<NAME>, <NAME>
Examples
## Not run:
proj_name <- "grphb_ex"
raster <- "rast_ex.tif"
habitat <- 5
graphab_project(proj_name = proj_name,
raster = raster,
habitat = habitat)
## End(Not run)
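The optional arguments can restrict patch creation. A sketch keeping only patches of at
least 1 ha, with a hypothetical nodata code of 0 and an 8-pixel connexity:
## Not run:
graphab_project(proj_name = "grphb_ex",
raster = "rast_ex.tif",
habitat = 5,
minarea = 1,
nodata = 0,
con8 = TRUE)
## End(Not run)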
graphab_project_desc Describe the objects of a Graphab project
Description
The function describes the objects of a Graphab project
Usage
graphab_project_desc(
proj_name,
mode = "patches",
linkset = NULL,
proj_path = NULL,
fig = FALSE,
return_val = TRUE
)
Arguments
proj_name A character string indicating the Graphab project name. The project name is
also the name of the project directory in which the file proj_name.xml is.
mode A character string indicating the objects of the project that are described. It must
be either:
• mode='patches'(default): The habitat patches are described with synthetic
descriptors (code, number, mean capacity, median capacity, capacity har-
monic mean, capacity Gini coefficient) and a histogram of capacity distri-
bution.
• mode='linkset': The links of a link set are described with synthetic de-
scriptors (codes, costs, number, mean cost distance, median cost distance,
cost distance harmonic mean, cost distance Gini coefficient) and a his-
togram of cost distance distribution.
• mode='both': Both the patches and links of a linkset are described
linkset A character string indicating the name of the link set whose properties are im-
ported. The link set has been created with Graphab or using graphab_link
function.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory. It should be used when the project directory is not in the
current working directory. Default is NULL. When ’proj_path = NULL’, the
project directory is equal to getwd().
fig Logical (default = FALSE) indicating whether to plot a figure of the resulting
spatial graph. The figure is plotted using function plot_graph_lg. The plotting
can be long if the graph has many nodes and links.
return_val Logical (default = TRUE) indicating whether the project features are returned
as a list (TRUE) or only displayed in the R console (FALSE).
Author(s)
<NAME>
Examples
## Not run:
graphab_project_desc(proj_name = "grphb_ex",
mode = "patches",
fig = FALSE)
## End(Not run)
graphab_to_igraph Create landscape graphs from Graphab link set
Description
The function creates a landscape graph from a link set created with Graphab software or with
functions of this package and converts it into a graph object of class igraph. The graph has weighted
links and is undirected. Node attributes present in the Graphab project are included, including
connectivity metrics when computed.
Usage
graphab_to_igraph(
proj_name,
linkset,
nodes = "patches",
weight = "cost",
proj_path = NULL,
fig = FALSE,
crds = FALSE
)
Arguments
proj_name A character string indicating the project name. It is also the name of the directory
in which the proj_name.xml file is found. By default, 'proj_name' is searched for
in the current working directory.
linkset A character string indicating the name of the linkset used to create the graph
links. The linkset must have been created previously (see the function graphab_link).
It can be complete or planar. The graph is given the topology of the selected link
set.
nodes A character string indicating whether the nodes of the created graph are given all
the attributes or metrics computed in Graphab or only those specific to a given
graph previously created with graphab_graph It can be:
• nodes = "patches"(default): all the attributes and metrics of the habitat
patches are included as node attributes in igraph object.
• nodes = "graph_name"(default): only the metrics of the habitat patches
computed from the graph ’graph_name’ created with graphab_graph are
included as node attributes in igraph object, along with some basic patch
attributes.
weight A character string ("euclid" or "cost") indicating whether to weight the links
with Euclidean distance or cost-distance (default) values.
proj_path (optional) A character string indicating the path to the directory that contains
the project directory ('proj_name'). By default, 'proj_name' is searched for in
the current working directory.
fig Logical (default = FALSE) indicating whether to plot a figure of the resulting
spatial graph. The figure is plotted using function plot_graph_lg. The plotting
can be long if the graph has many nodes and links.
crds Logical (default = FALSE) indicating whether to create an object of class data.frame
with the node centroid spatial coordinates. Such a data.frame has 3 columns:
’ID’, ’x’, ’y’.
Value
A graph object of class igraph (if crds = FALSE) or a list of objects: a graph object of class igraph
and a data.frame with the nodes spatial coordinates (if crds = TRUE).
Author(s)
<NAME>
References
<NAME>, <NAME>, <NAME> (2012). “A software tool dedicated to the modelling of landscape
networks.” Environmental Modelling & Software, 38, 316–327.
Examples
## Not run:
proj_path <- system.file('extdata',package='graph4lg')
proj_name <- "grphb_ex"
linkset <- "lkst1"
nodes <- "graph"
graph <- graphab_to_igraph(proj_name = proj_name,
linkset = "lkst1",
nodes = "graph",
weight = "cost",
proj_path = proj_path,
crds = FALSE,
fig = FALSE)
## End(Not run)
graph_modul_compar Compare the partition into modules of two graphs
Description
The function computes the Adjusted Rand Index (ARI) to compare two graphs’ partitions into
modules or clusters more generally. Both graphs must have the same number of nodes, but not
necessarily the same number of links. They must also have the same node names and in the same
order.
Usage
graph_modul_compar(
x,
y,
mode = "graph",
nb_modul = NULL,
algo = "fast_greedy",
node_inter = "distance",
data = NULL
)
Arguments
x The first graph object
• If mode = 'graph' (default), x is a graph object of class igraph. Then, its
nodes must have the same names as in graph y.
• If mode = 'data.frame', x refers to a column of the data.frame ’data’.
Then x must be a character string indicating the name of the column of
’data’ with the modules’ labels of the nodes in the first graph. In that case,
the column can be of class numeric, character or factor but will be
converted into a numeric vector in any case.
• If mode = 'vector', x is a vector of class character, factor or numeric.
In that case, it must have the same length as vector y and will be converted
into a numeric vector.
y The second graph object Same classes possible as for x. Must be of the same
format as x
mode A character string indicating whether x and y are igraph objects, vectors or
columns from a data.frame. mode can be ’graph’, ’data.frame’ or ’vector’.
nb_modul (if x and y are igraph objects) A numeric or integer value or a numeric vector
with 2 elements indicating the number of modules to create in both graphs.
• If nb_modul is a numeric value, then the same number of modules are cre-
ated in both graphs.
• If nb_modul is a numeric vector of length 2, then the numbers of modules
created in graphs x and y are the first and second elements of nb_modul,
respectively.
algo (if x and y are igraph objects) A character string indicating the algorithm used
to create the modules with igraph.
• If algo = 'fast_greedy' (default), function cluster_fast_greedy from
igraph is used (Clauset et al., 2004).
• If algo = 'walktrap', function cluster_walktrap from igraph
is used (Pons and Latapy, 2006) with 4 steps (default options).
• If algo = 'louvain', function cluster_louvain from igraph is used (Blondel
et al., 2008). In that case, the number of modules created in each graph
is imposed.
• If algo = 'optimal', function cluster_optimal from igraph is used (Brandes
et al., 2008) (can be very long). In that case, the number of modules
created in each graph is imposed.
node_inter (optional, if x and y are igraph objects, default is ’none’) A character string indi-
cating whether the links of the graph are weighted by distances or by similarity
indices. It is only used to compute the modularity index. It can be:
• ’distance’: Link weights correspond to distances. Nodes that are close to
each other will more likely be in the same module.
• ’similarity’: Link weights correspond to similarity indices. Nodes that are
similar to each other will more likely be in the same module. Inverse link
weights are then used to compute the modularity index.
• ’none’: Links are not weighted for the computation, which is only based on
graph topology.
Two different weightings can be used to create the modules of the two graphs.
• If node_inter is a character string, then the same link weighting is used
for both graphs.
• If node_inter is a character vector of length 2, then the link weighting
used by the algorithm to create the modules of graphs x and y is determined
by the first and second elements of node_inter, respectively.
data (if x and y are columns from a data.frame) An object of class data.frame with at
least two columns and as many rows as there are nodes in the graphs compared.
The columns indicate the modules of each node in 2 different classifications.
Details
This index takes values between -1 and 1. It measures how often pairs of nodes pertaining to the
same module in one graph also pertain to the same module in the other graph. Therefore, large
values indicate that both partitions are similar. The Rand Index can be defined as the frequency of
agreement between two classifications into discrete classes. It is the number of times a pair of ele-
ments is classified into the same class or into two different classes in both compared classifications,
divided by the total number of possible pairs of elements. The Rand Index is between 0 and 1 but
its maximum value depends on the number of elements. Thus, another 'adjusted' index was cre-
ated, the Adjusted Rand Index. According to Hubert and Arabie's formula, the ARI is computed
as follows:
ARI = (Index - Expected index) / (Maximum index - Expected index)
where the values of Index, Expected index and Maximum index are computed from a contingency
table. This function uses adjustedRandIndex from package mclust which applies Hubert and
Arabie's formula for the ARI. This function works for undirected graphs only.
Value
The value of the ARI
Author(s)
<NAME>
References
<NAME>, <NAME> (2004). “Population graphs: the graph theoretic shape of genetic structure.”
Molecular ecology, 13(7), 1713–1727. <NAME>, <NAME> (1985). “Comparing partitions.” Journal
of classification, 2(1), 193–218. <NAME>, <NAME>, <NAME> (2004). “Finding community
structure in very large networks.” Physical review E, 70(6). Blondel VD, <NAME>, Lambiotte
R, Lefebv<NAME> (2008). “Fast unfolding of communities in large networks.” Journal of Statistical
Mechanics - Theory and Experiment, 10. <NAME>, <NAME>, <NAME>, <NAME>, <NAME>,
<NAME>, <NAME> (2008). “On modularity clustering.” IEEE transactions on knowledge and
data engineering, 20(2), 172–188. <NAME>, <NAME> (2006). “Computing communities in large
networks using random walks.” J. Graph Algorithms Appl., 10(2), 191–218.
Examples
data(data_ex_genind)
data(pts_pop_ex)
mat_dist <- suppressWarnings(graph4lg::mat_geo_dist(data=pts_pop_ex,
ID = "ID",
x = "x",
y = "y"))
mat_dist <- mat_dist[order(as.character(row.names(mat_dist))),
order(as.character(colnames(mat_dist)))]
graph_obs <- gen_graph_thr(mat_w = mat_dist, mat_thr = mat_dist,
thr = 24000, mode = "larger")
mat_gen <- mat_gen_dist(x = data_ex_genind, dist = "DPS")
graph_pred <- gen_graph_topo(mat_w = mat_gen, mat_topo = mat_dist,
topo = "gabriel")
ARI <- graph_modul_compar(x = graph_obs, y = graph_pred)
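Since the computation relies on adjustedRandIndex from the mclust package, the index can
also be sketched directly from two module membership vectors (hypothetical partitions, for
illustration only):
part1 <- c(1, 1, 1, 2, 2, 3, 3, 3)
part2 <- c(1, 1, 2, 2, 2, 3, 3, 1)
mclust::adjustedRandIndex(part1, part2)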
graph_node_compar Compare the local properties of the nodes from two graphs
Description
The function computes a correlation coefficient between the graph-theoretic metric values computed
at the node level in two graphs sharing the same nodes. It makes it possible to assess whether the
connectivity properties of the nodes in one graph are similar to those of the same nodes in the other
graph. Alternatively, the correlation is computed between the values of a graph-theoretic metric and
the values of an attribute associated with the nodes of a graph.
Usage
graph_node_compar(
x,
y,
metrics = c("siw", "siw"),
method = "spearman",
weight = TRUE,
test = TRUE
)
Arguments
x An object of class igraph. Its nodes must have the same names as in graph y.
y An object of class igraph. Its nodes must have the same names as in graph x.
metrics Two-element character vector specifying the graph-theoretic metrics computed
at the node-level in the graphs or the node attribute values to be correlated to
these metrics. Graph-theoretic metrics can be:
• Degree (metrics = c("deg", ...))
• Closeness centrality index (metrics = c("close",...))
• Betweenness centrality index (metrics = c("btw",...))
• Strength (sum of the weights of the links connected to a node) (metrics =
c("str",...))
• Sum of the inverse weights of the links connected to a node (metrics =
c("siw", ...), default)
• Mean of the inverse weights of the links connected to a node (metrics =
c("miw", ...))
Node attributes must have the same names as in the igraph object, and must
refer to an attribute with numerical values. The vector metrics is composed
of two character values. When a node attribute has the same name as a metric
computable from the graph, node attributes are given priority.
method A character string indicating which correlation coefficient is to be computed
("pearson", "kendall" or "spearman" (default)).
weight Logical which indicates whether the links are weighted during the calculation
of the centrality indices betweenness and closeness. (default: weight = TRUE).
Link weights are interpreted as distances when computing the shortest paths.
They should then be inversely proportional to the strength of the relationship
between nodes (e.g. to fluxes).
test Logical. Should significance testing be performed? (default = TRUE)
Details
The correlation coefficients between the metrics can be computed in different ways, as initial as-
sumptions (e.g. linear relationship) are rarely verified. Pearson’s r, Spearman’s rho and Kendall’s
tau can be computed (from function cor). When x is similar to y, then the correlation is computed
between two metrics characterizing the nodes of the same graph.
Value
A list summarizing the correlation analysis.
Author(s)
<NAME>
Examples
data(data_ex_genind)
data(pts_pop_ex)
mat_dist <- suppressWarnings(graph4lg::mat_geo_dist(data = pts_pop_ex,
ID = "ID",
x = "x",
y = "y"))
mat_dist <- mat_dist[order(as.character(row.names(mat_dist))),
order(as.character(colnames(mat_dist)))]
graph_obs <- gen_graph_thr(mat_w = mat_dist, mat_thr = mat_dist,
thr = 9500, mode = "larger")
mat_gen <- mat_gen_dist(x = data_ex_genind, dist = "DPS")
graph_pred <- gen_graph_topo(mat_w = mat_gen, mat_topo = mat_dist,
topo = "gabriel")
res_cor <- graph_node_compar(x = graph_obs, y = graph_pred,
metrics = c("siw", "siw"), method = "spearman",
test = TRUE, weight = TRUE)
graph_plan Create a graph with a minimum planar graph topology
Description
The function constructs a graph with a minimum planar graph topology
Usage
graph_plan(crds, ID = NULL, x = NULL, y = NULL, weight = TRUE)
Arguments
crds A data.frame with the spatial coordinates of the point set (the graph nodes). It
must have three columns:
• ID: A character string indicating the name of the points (graph nodes).
• x: A numeric or integer indicating the longitude of the graph nodes.
• y: A numeric or integer indicating the latitude of the graph nodes.
ID A character string indicating the name of the column of crds with the point IDs
x A character string indicating the name of the column of crds with the point
longitude
y A character string indicating the name of the column of crds with the point
latitude
weight A logical indicating whether the links of the graph are weighted by
Euclidean distances (TRUE, default) or not (FALSE). When the graph links are
not weighted with Euclidean distances, each link is given a weight of 1.
Details
A Delaunay triangulation is performed in order to get the planar graph.
Value
A planar graph of class igraph
Author(s)
<NAME>
Examples
data(pts_pop_ex)
g_plan <- graph_plan(crds = pts_pop_ex,
ID = "ID",
x = "x",
y = "y")
graph_plot_compar Visualize the topological differences between two spatial graphs on a
map
Description
The function compares two spatial graphs by plotting them and highlighting their topological
similarities and differences. Both graphs must share the same nodes and cannot be
directed graphs.
Usage
graph_plot_compar(x, y, crds)
Arguments
x A graph object of class igraph. Its nodes must have the same names as in graph
y.
y A graph object of class igraph. Its nodes must have the same names as in graph
x.
crds A data.frame with the spatial coordinates of the graph nodes (both x and y). It
must have three columns:
• ID: Name of the graph nodes (character string). The names must be the
same as the node names of the graphs of class igraph (igraph::V(graph)$name)
• x: Longitude of the graph nodes (numeric or integer).
• y: Latitude of the graph nodes (numeric or integer).
Details
The graphs x and y of class igraph must have node names (not necessarily in the same order as the
IDs in crds, since a merge is performed).
Value
A ggplot2 object to plot
Author(s)
<NAME>
Examples
data(pts_pop_ex)
data(data_ex_genind)
mat_w <- mat_gen_dist(data_ex_genind, dist = "DPS")
mat_dist <- mat_geo_dist(data = pts_pop_ex,
ID = "ID",
x = "x",
y = "y")
mat_dist <- mat_dist[order(as.character(row.names(mat_dist))),
order(as.character(colnames(mat_dist)))]
g1 <- gen_graph_topo(mat_w = mat_w, topo = "mst")
g2 <- gen_graph_topo(mat_w = mat_w, mat_topo = mat_dist, topo = "gabriel")
g <- graph_plot_compar(x = g1, y = g2,
crds = pts_pop_ex)
graph_topo_compar Compute an index comparing graph topologies
Description
The function computes several indices in order to compare two graph topologies. One of the graphs
has the "true" topology that the other is supposed to reproduce. The indices are then a way to assess
the reliability of the latter graph. Both graphs must have the same number of nodes, but not necessarily
the same number of links. They must also have the same node names, in the same order.
Usage
graph_topo_compar(obs_graph, pred_graph, mode = "mcc", directed = FALSE)
Arguments
obs_graph A graph object of class igraph with n nodes. It is the observed graph that
pred_graph is supposed to approach.
pred_graph A graph object of class igraph with n nodes. It is the predicted graph that is
supposed to be akin to obs_graph.
mode A character string specifying which index to compute in order to compare the
topologies of the graphs.
• If ’mode = ’mcc” (default), the Matthews Correlation Coefficient (MCC) is
computed.
• If ’mode = ’kappa”, the Kappa index is computed.
• If ’mode = ’fdr”, the False Discovery Rate (FDR) is computed.
• If ’mode = ’acc”, the Accuracy is computed.
• If ’mode = ’sens”, the Sensitivity is computed.
• If ’mode = ’spec”, the Specificity is computed.
• If ’mode = ’prec”, the Precision is computed.
directed Logical (TRUE or FALSE) specifying whether both graphs are directed or not.
Details
The indices are calculated from a confusion matrix counting the number of links that are in the
"observed" graph ("true") and also in the "predicted" graph (true positives: TP), that are in the "ob-
served" graph but not in the "predicted" graph (false negatives: FN), that are not in the "observed"
graph but are in the "predicted" graph (false positives: FP), and that are neither in the "observed"
graph nor in the "predicted" graph (true negatives: TN). K is the total number of possible links in
the graphs. K is equal to n(n-1) if the graphs are directed and to n(n-1)/2 if they are not,
with n the number of nodes. OP = TP + FN, ON = TN + FP, PP = TP + FP and PN = FN + TN.
The Matthews Correlation Coefficient (MCC) is computed as follows:
MCC = (TP*TN - FP*FN) / sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN))
The Kappa index is computed as follows:
Kappa = (K*(TP+TN) - (ON*PN) - (OP*PP)) / (K^2 - (ON*PN) - (OP*PP))
The False Discovery Rate (FDR) is calculated as follows:
FDR = FP / (TP+FP)
The Accuracy is calculated as follows:
Acc = (TP+TN) / K
The Sensitivity is calculated as follows:
Sens = TP / (TP+FN)
The Specificity is calculated as follows:
Spec = TN / (TN+FP)
The Precision is calculated as follows:
Prec = TP / (TP+FP)
Self loops are not taken into account.
Value
The value of the index computed
Author(s)
<NAME>
References
<NAME>, <NAME> (2004). “Population graphs: the graph theoretic shape of genetic structure.”
Molecular ecology, 13(7), 1713–1727. <NAME>, <NAME>, <NAME>, Andersen CA, <NAME>
(2000). “Assessing the accuracy of prediction algorithms for classification: an overview.” Bioin-
formatics, 16(5), 412–424. Matthews BW (1975). “Comparison of the predicted and observed
secondary structure of T4 phage lysozyme.” Biochimica et Biophysica Acta (BBA)-Protein Struc-
ture, 405(2), 442–451.
Examples
data(data_ex_genind)
data(pts_pop_ex)
mat_dist <- suppressWarnings(graph4lg::mat_geo_dist(data=pts_pop_ex,
ID = "ID",
x = "x",
y = "y"))
mat_dist <- mat_dist[order(as.character(row.names(mat_dist))),
order(as.character(colnames(mat_dist)))]
graph_obs <- gen_graph_thr(mat_w = mat_dist, mat_thr = mat_dist,
thr = 15000, mode = "larger")
mat_gen <- mat_gen_dist(x = data_ex_genind, dist = "DPS")
graph_pred <- gen_graph_topo(mat_w = mat_gen, mat_topo = mat_dist,
topo = "gabriel")
graph_topo_compar(obs_graph = graph_obs,
pred_graph = graph_pred,
mode = "mcc",
directed = FALSE)
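For illustration, the MCC can be recomputed by hand from the four confusion counts given in
Details (a sketch, not a package function; the counts are hypothetical):
mcc_from_counts <- function(TP, TN, FP, FN) {
num <- TP * TN - FP * FN
den <- sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
num / den
}
mcc_from_counts(TP = 12, TN = 80, FP = 5, FN = 8)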
graph_to_df Convert a graph into an edge list data.frame
Description
The function converts a graph into an edge list data.frame
Usage
graph_to_df(graph, weight = TRUE)
Arguments
graph A graph object of class igraph
weight Logical. If TRUE (default), then the column ’link’ of the output data.frame
contains the weights of the links. If FALSE, it contains only 0 and 1.
Details
The ’graph’ nodes must have names. Links must have weights if ’weight = TRUE’.
Value
An object of class data.frame with a link ID, the origin nodes ('from'), the arrival nodes ('to') and
the link value ('link') (weighted or binary)
Author(s)
<NAME>
Examples
data(pts_pop_ex)
suppressWarnings(mat_geo <- mat_geo_dist(pts_pop_ex,
ID = "ID",
x = "x",
y = "y"))
g1 <- gen_graph_thr(mat_w = mat_geo,
mat_thr = mat_geo,
thr = 20000)
g1_df <- graph_to_df(g1,
weight = TRUE)
graph_to_shp Export a spatial graph to shapefile layers
Description
The function exports a spatial graph to shapefile layers.
Usage
graph_to_shp(
graph,
crds,
mode = "both",
crds_crs,
layer,
dir_path,
metrics = FALSE
)
Arguments
graph A graph object of class igraph
crds (if ’mode = ’spatial”) A data.frame with the spatial coordinates of the graph
nodes. It must have three columns:
• ID: Name of the graph nodes (will be converted into a character string). The
names must be the same as the node names of the graph object of class igraph
(igraph::V(graph)$name)
• x: Longitude (numeric or integer) of the graph nodes in the coordinates
reference system indicated with the argument crds_crs.
• y: Latitude (numeric or integer) of the graph nodes in the coordinates ref-
erence system indicated with the argument crds_crs.
mode Indicates which shapefile layers will be created
• If ’mode = ’both” (default), then two shapefile layers are created, one for
the nodes and another for the links.
• If ’mode = ’node”, a shapefile layer is created for the nodes only.
• If ’mode = ’link”, a shapefile layer is created for the links only.
crds_crs An integer indicating the EPSG code of the coordinates reference system to use.
The projection and datum are given in the PROJ.4 format.
layer A character string indicating the suffix of the name of the layers to be created.
dir_path A character string corresponding to the path to the directory in which the shape-
file layers will be exported. If dir_path = "wd", then the layers are created in
the current working directory.
metrics (not considered if 'mode = 'link'') Logical. Should graph node attributes be
integrated into the attribute table of the node shapefile layer? (default: FALSE)
Value
Create shapefile layers in the directory specified with the parameter ’dir_path’.
Author(s)
<NAME>
Examples
## Not run:
data(data_tuto)
mat_w <- data_tuto[[1]]
gp <- gen_graph_topo(mat_w = mat_w, topo = "gabriel")
crds_crs <- 2154
crds <- pts_pop_simul
layer <- "graph_dps_gab"
graph_to_shp(graph = gp, crds = pts_pop_simul, mode = "both",
crds_crs = crds_crs,
layer = "test_fonct",
dir_path = tempdir(),
metrics = FALSE)
## End(Not run)
gstud_to_genind Convert a file from gstudio or popgraph into a genind object
Description
The function converts a file formatted to use gstudio or popgraph package into a genind object
(adegenet package)
Usage
gstud_to_genind(x, pop_col, ind_col = NULL)
Arguments
x An object of class data.frame with loci columns in format locus (defined in
package gstudio) with as many rows as individuals and as many columns in
format locus as there are loci and additional columns
pop_col A character string indicating the name of the column with populations’ names
in x
ind_col (optional) A character string indicating the name of the column with individuals’
ID in x
Details
This function uses functions from the pegas package. It can handle genetic data where allele codings
do not have the same length (99:101, for example). If the names of the loci include '.' characters, they
will be replaced by '_'.
Value
An object of class genind.
Author(s)
<NAME>
Examples
data("data_ex_gstud")
x <- data_ex_gstud
pop_col <- "POP"
ind_col <- "ID"
data_genind <- gstud_to_genind(x, pop_col, ind_col)
g_percol Prune a graph using the ’percolation threshold’ method
Description
The function prunes a graph by removing the links with the largest weights until the graph
breaks into two components. The returned graph is the last graph with only one component.
Usage
g_percol(x, val_step = 20)
Arguments
x A symmetric matrix or a dist object with pairwise distances between nodes
val_step The number of classes to create to search for the threshold value without testing
all the possibilities. By default, ’val_step = 20’.
Value
A graph object of type igraph
Author(s)
<NAME>
Examples
data(pts_pop_ex)
suppressWarnings(mat_w <- graph4lg::mat_geo_dist(data = pts_pop_ex,
ID = "ID",
x = "x",
y = "y"))
g_percol(x = mat_w)
kernel_param Compute dispersal kernel parameters
Description
The function computes the constant parameters of a dispersal kernel with a negative exponential
distribution
Usage
kernel_param(p, d_disp, mode = "A")
Arguments
p A numeric value indicating the dispersal probability at a distance equal to ’d_disp’
under a negative exponential distribution.
d_disp A numeric value indicating the distance to which dispersal probability is equal
to ’p’ under a negative exponential distribution.
mode A character string indicating the value to return:
• If 'mode = 'A'' (default), the returned value 'alpha' is such that
exp(-alpha * d_disp) = p
• If 'mode = 'B'', the returned value 'alpha' is such that
10^(-alpha * d_disp) = p
Details
If the resulting parameter when mode = "A" is a and the resulting parameter when mode = "B" is b,
then we have: p = exp(-a * d_disp) = 10^(-b * d_disp) and a = b * ln(10)
Value
A numeric value
Author(s)
<NAME>
Examples
p <- 0.5
d_disp <- 3000
alpha <- kernel_param(p, d_disp, mode = "A")
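Following the relation a = b * ln(10) given in Details, the two modes should differ by a
constant factor. A quick numerical check:
alpha_A <- kernel_param(p = 0.5, d_disp = 3000, mode = "A")
alpha_B <- kernel_param(p = 0.5, d_disp = 3000, mode = "B")
alpha_A / alpha_B # equals log(10), i.e. about 2.3026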
loci_to_genind Convert a loci object into a genind object
Description
This function is exactly the same as loci2genind from the pegas package
Usage
loci_to_genind(x, ploidy = 2, na.alleles = c("NA"))
Arguments
x An object of class loci to convert
ploidy An integer indicating the ploidy level (by default, ’ploidy = 2’)
na.alleles A character vector indicating the coding of the alleles to be treated as missing
data (by default, ’na.alleles = c("NA")’)
Value
An object of class genind
Author(s)
<NAME>
Examples
data("data_ex_loci")
genind <- loci_to_genind(data_ex_loci, ploidy = 2, na.alleles = "NA")
mat_cost_dist Compute cost distances between points on a raster
Description
The function computes cost-distances associated to least cost paths between point pairs on a raster
with specified cost values.
Usage
mat_cost_dist(
raster,
pts,
cost,
method = "gdistance",
return = "mat",
direction = 8,
parallel.java = 1,
alloc_ram = NULL
)
Arguments
raster A parameter indicating the raster file on which cost distances are computed. It
can be:
• A character string indicating the path to a raster file in format .tif or .asc.
• A RasterLayer object already loaded in R environment
All the raster cell values must be present in the column ’code’ from cost argu-
ment.
pts A parameter indicating the points between which cost distances are computed.
It can be either:
• A character string indicating the path to a .csv file. It must have three
columns:
– ID: The ID of the points.
– x: A numeric or integer indicating the longitude of the points.
– y: A numeric or integer indicating the latitude of the points.
• A data.frame with the spatial coordinates of the points. It must have three
columns:
– ID: The ID of the points.
– x: A numeric or integer indicating the longitude of the points.
– y: A numeric or integer indicating the latitude of the points.
• A SpatialPointsDataFrame with at least an attribute column named "ID"
with the point IDs.
The point coordinates must be in the same spatial coordinate reference system
as the raster file.
cost A data.frame indicating the cost values associated to each raster value. It must
have two columns:
• ’code’: raster cell values
• ’cost’: corresponding cost values
method A character string indicating the method used to compute the cost distances. It
must be:
• ’gdistance’: uses the functions from the package gdistance assuming that
movement is possible in 8 directions from each cell, that a geo-correction is
applied to correct for diagonal movement lengths and that raster cell values
correspond to resistance (and not conductance).
• ’java’: uses a .jar file which is downloaded on the user’s machine if neces-
sary and if java is installed. This option substantially reduces computation
times and makes possible the parallelisation.
return A character string indicating whether the returned object is a data.frame (return="df")
or a pairwise matrix (return="mat").
direction An integer (4, 8, 16) indicating the directions in which movement can take place
from a cell. Only used when method="gdistance". By default, direction=8.
parallel.java An integer indicating how many computer cores are used to run the .jar file. By
default, parallel.java=1.
alloc_ram (optional, default = NULL) Integer or numeric value indicating RAM gigabytes
allocated to the java process when used. Increasing this value can speed up
the computations. Too large values may not be compatible with your machine
settings.
Value
The function returns:
• If return="mat", a pairwise matrix with cost-distance values between points.
• If return="df", an object of type data.frame with three columns:
– from: A character string indicating the ID of the point of origin.
– to: A character string indicating the ID of the point of destination.
– cost_dist: A numeric indicating the accumulated cost-distance along the least-cost path
between point ID1 and point ID2.
Author(s)
<NAME>
Examples
## Not run:
x <- raster::raster(ncol=10, nrow=10, xmn=0, xmx=100, ymn=0, ymx=100)
raster::values(x) <- sample(c(1,2,3,4), size = 100, replace = TRUE)
pts <- data.frame(ID = 1:4,
x = c(10, 90, 10, 90),
y = c(90, 10, 10, 90))
cost <- data.frame(code = 1:4,
cost = c(1, 10, 100, 1000))
mat_cost_dist(raster = x,
pts = pts, cost = cost,
method = "gdistance")
## End(Not run)
mat_gen_dist Compute a pairwise matrix of genetic distances between populations
Description
The function computes a pairwise matrix of genetic distances between populations and allows to
implement several formula.
Usage
mat_gen_dist(x, dist = "basic", null_val = FALSE)
Arguments
x An object of class genind that contains the multilocus genotypes (format ’lo-
cus’) of the individuals as well as their populations.
dist A character string indicating the method used to compute the multilocus genetic
distance between populations
• If ’dist = ’basic” (default), then the multilocus genetic distance is computed
using a formula of Euclidean genetic distance (Excoffier et al., 1992)
• If ’dist = ’weight”, then the multilocus genetic distance is computed as in
Fortuna et al. (2009). It is a Euclidean genetic distance giving more weight
to rare alleles
• If ’dist = ’PG”, then the multilocus genetic distance is computed as in pop-
graph::popgraph function, following several steps of PCA and SVD (Dyer
et Nason, 2004).
• If ’dist = ’DPS”, then the genetic distance used is equal to 1 - the proportion
of shared alleles (Bowcock, 1994)
• If ’dist = ’FST”, then the genetic distance used is the pairwise FST (Weir et
Cockerham, 1984)
• If ’dist = ’FST_lin”, then the genetic distance used is the linearised pairwise
FST (Weir et Cockerham, 1984)(FST_lin = FST/(1-FST))
• If ’dist = ’PCA”, then the genetic distance is computed following a PCA
of the matrix of allelic frequencies by population. It is a Euclidean genetic
distance between populations in the multidimensional space defined by all
the independent principal components.
• If ’dist = ’GST”, then the genetic distance used is the G’ST (Hedrick, 2005).
See graph4lg <= 1.6.0 only, because it used diveRsity
• If ’dist = ’D”, then the genetic distance used is Jost’s D (Jost, 2008). See
graph4lg <= 1.6.0 only, because it used diveRsity
null_val (optional) Logical. Should negative and null FST, FST_lin, GST or D values
be replaced by half the minimum positive value? This option allows to compute
Gabriel graphs from these "distances". Default is null_val = FALSE. This option
only works if ’dist = ’FST” or ’FST_lin’ or ’GST’ or ’D’
Details
Negative values are converted into 0. Euclidean genetic distance dij between population i and j is
computed as follows:
Xn
d2ij = (xki − xkj )2
where xki is the allelic frequency of allele k in population i and n is the total number of alleles.
Note that when ’dist = ’weight”, the formula becomes
Xn
d2ij = (1/(K ∗ pk ))(xki − xkj )2
k=1
where K is the number of alleles at the locus of the allele k and pk is the frequency of the allele k in
all populations. Note that when ’dist = ’PCA”, n is the number of conserved independent principal
components and xki is the value taken by the principal component k in population i.
Value
An object of class matrix
Author(s)
<NAME>
References
Bowcock AM, <NAME>, <NAME>, <NAME>, Kidd JR, Cavalli-Sforza LL (1994). “High
resolution of human evolutionary trees with polymorphic microsatellites.” nature, 368(6470), 455–
457. Excoffier L, Smouse PE, Quattro JM (1992). “Analysis of molecular variance inferred from
metric distances among DNA haplotypes: application to human mitochondrial DNA restriction
data.” Genetics, 131(2), 479–491. Dyer RJ, Nason JD (2004). “Population graphs: the graph the-
oretic shape of genetic structure.” Molecular ecology, 13(7), 1713–1727. Fortuna MA, Albaladejo
RG, <NAME>, <NAME>, <NAME> (2009). “Networks of spatial genetic variation across
species.” Proceedings of the National Academy of Sciences, 106(45), 19044–19049. Weir BS,
Cockerham CC (1984). “Estimating F-statistics for the analysis of population structure.” evolution,
38(6), 1358–1370. Hedrick PW (2005). “A standardized genetic differentiation measure.” Evo-
lution, 59(8), 1633–1638. Jost L (2008). “GST and its relatives do not measure differentiation.”
Molecular ecology, 17(18), 4015–4026.
Examples
data(data_ex_genind)
x <- data_ex_genind
D <- mat_gen_dist(x = x, dist = "basic")
mat_geo_dist Compute Euclidean geographic distances between points
Description
The function computes Euclidean geographic distance between points given their spatial coordi-
nates either in a metric projected Coordinate Reference System or in a polar coordinates system.
Usage
mat_geo_dist(
data,
ID = NULL,
x = NULL,
y = NULL,
crds_type = "proj",
gc_formula = "vicenty"
)
Arguments
data An object of class :
• data.frame with 3 columns: 2 columns with the point spatial coordinates
and another column with point IDs
• SpatialPointsDataFrame
ID (if data is of class data.frame) A character string indicating the name of the
column of data with the point IDs
x (if data is of class data.frame) A character string indicating the name of the
column of data with the point longitude
y (if data is of class data.frame) A character string indicating the name of the
column of data with the point latitude
crds_type A character string indicating the type of coordinate reference system:
• ’proj’ (default): a projected coordinate reference system
• ’polar’: a polar coordinate reference system, such as WGS84
gc_formula A character string indicating the formula used to compute the Great Circle dis-
tance:
• ’vicenty’(default): Vincenty inverse formula for ellipsoids
• ’slc’: Spherical Law of Cosines
• ’hvs’: Harversine formula
Details
When a projected coordinate reference system is used, it calculates classical Euclidean geographic
distance between two points using Pythagora’s theorem. When a polar coordinate reference sys-
tem is used, it calculates the Great circle distance between points using different methods. Unless
method = "polar", when data is a data.frame, it assumes projected coordinates by default.
Value
A pairwise matrix of geographic distances between points in meters
Author(s)
<NAME>
Examples
# Projected CRS
data(pts_pop_simul)
mat_dist <- mat_geo_dist(data=pts_pop_simul,
ID = "ID",
x = "x",
y = "y")
#Polar CRS
city_us <- data.frame(name = c("New York City", "Chicago",
"Los Angeles", "Atlanta"),
lat = c(40.75170, 41.87440,
34.05420, 33.75280),
lon = c(-73.99420, -87.63940,
-118.24100, -84.39360))
mat_geo_us <- mat_geo_dist(data = city_us,
ID = "name", x = "lon", y = "lat",
crds_type = "polar")
plot_graph_lg Plot graphs
Description
The function enables to plot graphs, whether spatial or not.
Usage
plot_graph_lg(
graph,
crds = NULL,
mode = "aspatial",
node_inter = NULL,
link_width = NULL,
node_size = NULL,
module = NULL,
pts_col = NULL
)
Arguments
graph A graph object of class igraph
crds (optional, default = NULL) If ’mode = ’spatial”, it is a data.frame with the
spatial coordinates of the graph nodes. It must have three columns :
• ID: A character string indicating the name of the graph nodes. The names
must be the same as the node names of the graph of class igraph (igraph::V(graph)$name)
• x: A numeric or integer indicating the longitude of the graph nodes.
• y: A numeric or integer indicating the latitude of the graph nodes.
This argument is not used when ’mode = ’aspatial” and mandatory when ’mode
= ’spatial”.
mode A character string indicating whether the graph is spatial (’mode = ’spatial”) or
not (’mode = ’aspatial” (default))
node_inter (optional, default = NULL) A character string indicating whether the links of
the graph are weighted by distances or by similarity indices. It is only used
when ’mode = ’aspatial” to compute the node positions with Fruchterman and
Reingold algorithm. It can be equal to:
• ’distance’: Link weights correspond to distances. Nodes that are close to
each other will be close on the figure.
• ’similarity’: Link weights correspond to similarity indices. Nodes that are
similar to each other will be close on the figure.
link_width (optional, default = NULL) A character string indicating how the width of the
link is set on the figure. Their width can be:
• inversely proportional to link weights ("inv_w", convenient with distances,
default)
• proportional to link weights ("w")
node_size (optional, default = NULL) A character string indicating the graph node attribute
used to set the node size on the figure. It must be the name of a numeric or
integer node attribute from the graph.
module (optional, default = NULL) A character string indicating the graph node modules
used to set the node color on the figure. It must be the name of a node attribute
from the graph with discrete values.
pts_col (optional, default = NULL) A character string indicating the color used to plot
the nodes (default: "#F2B950"). It must be a hexadecimal color code or a color
used by default in R. It cannot be used if ’module’ is specified.
Details
When the graph is not spatial (’mode = ’aspatial”), the nodes coordinates are calculated with
Fruchterman et Reingold algorithm. The graph object graph of class igraph must have node names
(not necessarily in the same order as IDs in crds, given a merging is done).
Value
A ggplot2 object to plot
Author(s)
<NAME>
References
Fruchterman TM, Reingold EM (1991). “Graph drawing by force-directed placement.” Software:
Practice and experience, 21(11), 1129–1164.
Examples
data(pts_pop_ex)
data(data_ex_genind)
mat_w <- mat_gen_dist(data_ex_genind, dist = "DPS")
gp <- gen_graph_topo(mat_w = mat_w, topo = "mst")
g <- plot_graph_lg(graph = gp,
crds = pts_pop_ex,
mode = "spatial",
link_width = "inv_w")
plot_w_hist Plot histograms of link weights
Description
The function enables to plot histogram to visualize the distribution of the link weights
Usage
plot_w_hist(graph, fill = "#396D35", class_width = NULL)
Arguments
graph A graph object of class igraph whose links are weighted
fill A character string indicating the color used to fill the bars (default: "#396D35").
It must be a hexadecimal color code or a color used by default in R.
class_width (default values: NULL) A numeric or an integer specifying the width of the
classes displayed on the histogram. When it is not specified, the width is equal
to the difference between the minimum and maximum values divided by 80.
Value
A ggplot2 object to plot
Author(s)
<NAME>
Examples
data(data_ex_genind)
mat_w <- mat_gen_dist(data_ex_genind, dist = "DPS")
gp <- gen_graph_topo(mat_w = mat_w, topo = "gabriel")
hist <- plot_w_hist(gp)
pop_gen_index Compute population-level genetic indices
Description
The function computes population-level genetic indices from an object of class genind.
Usage
pop_gen_index(x, pop_names = NULL, indices = c("Nb_ind", "A", "He", "Ho"))
Arguments
x An object of class genind from package adegenet.
pop_names (optional) A character vector indicating population names. It is of the same
length as the number of populations. Without this argument, populations are
given the names they have initially in the ’genind’ object (which is sometimes
only a number). The order of the population names must match with their order
in the ’genind’ object. The function does not reorder them. Users must be
careful.
indices (optional) A character vector indicating the population-level indices to compute.
These indices can be:
• Mean allelic richness by locus by population (indices = c("A", ...))
• Mean expected heterozygosity by locus by population (indices = c("He",...))
• Mean observed heterozygosity by locus by population (indices = c("Ho",...))
• Number of individuals by population (indices = c("Nb_ind", ...))
• Total allelic richness by population (indices = c("A_tot",...))
By default, indices = c("Nb_ind", "A", "He", "Ho").
Value
An object of class data.frame whose rows correspond to populations and columns to population
attributes (ID, size, genetic indices). By default, the first column corresponds to the population
names (ID). The order of the columns depends on the vector ’indices’.
Author(s)
<NAME>
Examples
data(data_ex_genind)
x <- data_ex_genind
pop_names <- levels(x@pop)
df_pop_indices <- pop_gen_index(x = x,
pop_names = pop_names,
indices = c("Nb_ind", "A"))
pts_pop_ex pts_pop_ex : details on simulated populations
Description
Simulation dataset 10 populations located on a simulated landscape
Usage
pts_pop_ex
Format
An object of class ’data.frame’ with the following columns :
ID Population ID of the 10 populations
x Site longitude (RGF93)
y Site latitude (RGF93)
References
Landguth EL, Cushman SA (2010). “CDPOP: a spatially explicit cost distance population genetics
program.” Molecular Ecology Resources, 10(1), 156–161. There are as many rows as there are
sampled populations.
Examples
data("pts_pop_ex")
str(pts_pop_ex)
pts_pop_simul pts_pop_simul : details on simulated populations
Description
Simulation dataset 50 populations located on a simulated landscape
Usage
pts_pop_simul
Format
An object of class ’data.frame’ with the following columns :
ID Population ID of the 50 populations
x Site longitude (RGF93)
y Site latitude (RGF93)
References
Landguth EL, Cushman SA (2010). “CDPOP: a spatially explicit cost distance population genetics
program.” Molecular Ecology Resources, 10(1), 156–161. There are as many rows as there are
sampled populations.
Examples
data("pts_pop_simul")
str(pts_pop_simul)
pw_mat_to_df Convert a pairwise matrix into an edge-list data.frame
Description
The function converts a pairwise matrix into an edge-list data.frame
Usage
pw_mat_to_df(pw_mat)
Arguments
pw_mat A pairwise matrix which can be:
• An object of class matrix. It must have the same row names and column
names. If values represent distances, diagonal elements should be equal to
0.
• An object of class dist. In that, its column numbers are used to create IDs
in the resulting data.frame.
Value
An object of class data.frame
Author(s)
<NAME>
Examples
data(data_tuto)
pw_mat <- data_tuto[[1]]
df <- pw_mat_to_df(pw_mat)
reorder_mat Reorder the rows and columns of a symmetric matrix
Description
The function reorders the rows and columns of a symmetric matrix according to a specified order.
Usage
reorder_mat(mat, order)
Arguments
mat An object of class matrix
order A character vector with the rows and columns names of the matrix in the order
in which they will be ordered by the function. All its elements must be rows and
columns names of the matrix mat.
Details
The matrix mat must be symmetric and have rows and columns names. Its values are not modified.
Value
A reordered symmetric matrix
Author(s)
<NAME>
Examples
mat <- matrix(rnorm(36), 6)
mat[lower.tri(mat)] <- t(mat)[lower.tri(mat)]
row.names(mat) <- colnames(mat) <- c("A", "C", "E", "B", "D", "F")
order <- c("A", "B", "C", "D", "E", "F")
mat <- reorder_mat(mat = mat, order = order)
scatter_dist Plot scatterplots of genetic distance vs landscape distance
Description
The function enables to plot scatterplots to visualize the relationship between genetic distance (or
differentiation) and landscape distance (Euclidean distance, cost-distance, etc.)between populations
or sample sites.
Usage
scatter_dist(
mat_gd,
mat_ld,
method = "loess",
thr_gd = NULL,
thr_ld = NULL,
se = TRUE,
smooth_col = "black",
pts_col = "#999999"
)
Arguments
mat_gd A symmetric matrix or dist object with pairwise genetic distances between
populations or sample sites.
mat_ld A symmetric matrix or dist object with pairwise landscape distances between
populations or sample sites. These distances can be Euclidean distances, cost-
distances or resistance distances, among others.
method A character string indicating the smoothing method used to fit a line on the
scatterplot. Possible values are the same as with function ’geom_smooth()’ from
ggplot2 : ’lm’, ’glm’, ’gam’, ’loess’ (default).
thr_gd (optional) A numeric or integer value used to remove values from the data before
to plot. All genetic distances values above thr_gd are removed from the data.
thr_ld (optional) A numeric or integer value used to remove values from the data before
to plot. All landscape distances values above thr_ld are removed from the data.
se Logical (optional, default = TRUE) indicating whether the confidence interval
around the smooth line is displayed.
smooth_col (optional) A character string indicating the color used to plot the smoothing line
(default: "blue"). It must be a hexadecimal color code or a color used by default
in R.
pts_col (optional) Character string indicating the color used to plot the points (default:
"#999999"). It must be a hexadecimal color code or a color used by default in
R.
Details
IDs in mat_gd and mat_ld must be the same and refer to the same sampling sites or populations, and
both matrices must be ordered in the same way. Matrix of genetic distance mat_gd can be computed
using mat_gen_dist. Matrix of landscape distance mat_ld can be computed using mat_geo_dist
when the landscape distance needed is a Euclidean geographical distance.
Value
A ggplot2 object to plot
Author(s)
<NAME>
Examples
data(data_tuto)
mat_dps <- data_tuto[[1]]
mat_dist <- suppressWarnings(mat_geo_dist(data = pts_pop_simul,
ID = "ID",
x = "x",
y = "y"))
mat_dist <- mat_dist[order(as.character(row.names(mat_dist))),
order(as.character(colnames(mat_dist)))]
scatterplot_ex <- scatter_dist(mat_gd = mat_dps,
mat_ld = mat_dist)
scatter_dist_g Plot scatterplots of distances to visualize the graph pruning intensity
Description
The function enables to plot scatterplots of the relationship between two distances (often a genetic
distance and a landscape distance between populations or sample sites), while highlighting the
population pairs between which a link was conserved during the creation of a graph whose nodes
are populations (or sample sites). It thereby allows to visualize the graph pruning intensity.
Usage
scatter_dist_g(
mat_y,
mat_x,
graph,
thr_y = NULL,
thr_x = NULL,
pts_col_1 = "#999999",
pts_col_2 = "black"
)
Arguments
mat_y A symmetric (complete) matrix or dist object with pairwise (genetic or land-
scape) distances between populations or sample sites. These values will be the
point coordinates on the y axis. mat_y is the matrix used to weight the links of
the graph x, whose nodes correspond to row and column names of mat_y.
mat_x A symmetric (complete) matrix or dist object with pairwise (genetic or land-
scape) distances between populations or sample sites. These values will be the
point coordinates on the x axis. mat_x and mat_y must have the same row and
column names, ordered in the same way.
graph A graph object of class igraph. Its nodes must have the same names as the row
and column of mat_y and mat_x matrices. x must have weighted links. Link
weights have to be values from mat_y matrix. graph must be an undirected
graph.
thr_y (optional) A numeric or integer value used to remove values from the data before
to plot. All values from mat_y above thr_y are removed from the data.
thr_x (optional) A numeric or integer value used to remove values from the data before
to plot. All values from mat_x above thr_x are removed from the data.
pts_col_1 (optional) A character string indicating the color used to plot the points associ-
ated to all populations or sample sites pairs (default: "#999999"). It must be a
hexadecimal color code or a color used by default in R.
pts_col_2 (optional) A character string indicating the color used to plot the points as-
sociated to populations or sample sites pairs connected on the graph (default:
"black"). It must be a hexadecimal color code or a color used by default in R.
Details
IDs in mat_y and mat_x must be the same and refer to the same sampling sites or populations,
and both matrices must be ordered in the same way. Matrices of genetic distance can be computed
using mat_gen_dist. Matrices of landscape distance can be computed using mat_geo_dist when
the landscape distance needed is a Euclidean geographical distance. This function is based upon
scatter_dist function.
Value
A ggplot2 object to plot
Author(s)
<NAME>
Examples
data(data_tuto)
mat_gen <- data_tuto[[1]]
mat_dist <- suppressWarnings(mat_geo_dist(data=pts_pop_simul,
ID = "ID",
x = "x",
y = "y"))
mat_dist <- mat_dist[order(as.character(row.names(mat_dist))),
order(as.character(colnames(mat_dist)))]
x <- gen_graph_topo(mat_w = mat_gen, mat_topo = mat_dist, topo = "gabriel")
scat <- scatter_dist_g(mat_y = mat_gen, mat_x = mat_dist,
graph = x)
structure_to_genind Convert a file in STRUCTURE format into a genind object
Description
The function converts a text file in STRUCTURE format into a genind object to use in R
Usage
structure_to_genind(
path,
pop_names = NULL,
loci_names = NULL,
ind_names = NULL
)
Arguments
path A character string indicating the path to the STRUCTURE file in format .txt, or
alternatively the name of the file in the working directory. The STRUCTURE
file must only have :
• A first column with the IDs of the individuals (can be a simple number)
• A second column with the IDs of the populations (can be a simple number)
• Some loci columns : as many columns as loci in the data
The row for loci names is optional but recommended. Each individual is dis-
played on 2 rows.
pop_names (optional) A character vector indicating the population names in the same order
as in the STRUCTURE file. It is of the same length as the number of popu-
lations. Without this argument, populations are numbered from 1 to the total
number of individuals.
loci_names A character vector with the names of the loci if not specified in the file first row.
This argument is mandatory if the STRUCTURE file does not include the names
of the loci in the first row. In other cases, the names of the loci is extracted from
the file first row
ind_names (optional) A character vector indicating the individual names in the same order
as in the STRUCTURE file. It is of the same length as the number of individuals.
Without this argument, individuals are numbered from 1 to the total number of
individuals.
Details
The column order of the resulting object can be different from that of objects returned by gstud_to_genind
and genepop_to_genind, depending on allele and loci coding This function uses functions from
pegas package. For details about STRUCTURE file format : STRUCTURE user manual
Value
An object of type genind.
Author(s)
<NAME>
Examples
data("data_ex_genind")
loci_names <- levels([email protected])
pop_names <- levels(data_ex_genind@pop)
ind_names <- row.names(data_ex_genind@tab)
path_in <- system.file('extdata', 'data_ex_str.txt',
package = 'graph4lg')
file_n <- file.path(tempdir(), "data_ex_str.txt")
file.copy(path_in, file_n, overwrite = TRUE)
str <- structure_to_genind(path = file_n, loci_names = loci_names,
pop_names = pop_names, ind_names = ind_names)
file.remove(file_n) |
@middy/function-shield | npm | JavaScript | Middy FunctionShield middleware
===
**FunctionShield middleware for the middy framework, the stylish Node.js middleware engine for AWS Lambda**
⚠️ **Warning: FunctionShield is no longer actively maintained and will unlikely be updated to have Node.js v12 support. [See #460](https://github.com/middyjs/middy/issues/460)** ⚠️
Hardens AWS Lambda execution environment:
* By monitoring (or blocking) outbound network traffic to public internet, you can be certain that your data is never leaked (traffic to AWS services is not affected)
* By disabling read/write operations on the /tmp/ directory, you make sure that files are not persisted across invocations. Storing data in `/tmp` is a bad practice as it may be leaked in subsequent invocations
* By disabling the ability to launch child processes, you can make sure that no rogue processes are spawned without your knowledge by potentially malicious packages
* By disabling the ability to read the function's (handler) source code through the file system, you can prevent handler source code leakage, which is oftentimes the first step in a serverless attack
More info:
* <https://www.puresec.io/function-shield>
* <https://www.jeremydaly.com/serverless-security-with-functionshield/Get a free token
---
Please visit: <https://www.puresec.io/function-shield-token-form###
Modes
* `'block'` - Block and log to Cloudwatch Logs
* `'alert'` - Allow and log to Cloudwatch Logs
* `'allow'` - Allow
###
Options
* `policy.outbound_connectivity` - `'block'/'alert'/'allow'` (default: `'block'`)
* `policy.read_write_tmp` - `'block'/'alert'/'allow'` (default: `'block'`)
* `policy.create_child_process` - `'block'/'alert'/'allow'` (default: `'block'`)
* `policy.read_handler` - `'block'/'alert'/'allow'` (default: `'block'`)
* `token` - By default looks for `FUNCTION_SHIELD_TOKEN` in `process.env` and `context`
* `disable_analytics` - Periodically, during cold starts, FunctionShield sends basic analytics information to its backend. To disable analytics module set: `true`. (default: `false`)
###
Sample Usage
```
'use strict';
const fs = require('fs');
const middy = require('middy');
const {ssm, functionShield} = require('middy/middlewares');
async function hello(event) {
fs.openSync('/tmp/test', 'w');
}
const handler = middy(hello)
.use(ssm({
cache: true,
setToContext: true,
names: {
FUNCTION_SHIELD_TOKEN: 'function_shield_token'
}
}))
.use(functionShield(
{
policy: {
outbound_connectivity: 'alert'
}
}
));
module.exports = {
handler
};
```
```
START RequestId: f7b7305d-d785-11e8-baf1-9136b5c7aa75 Version: $LATEST
[TOKEN VERIFICATION] license is OK
{"function_shield":true,"policy":"read_write_tmp","details":{"path":"/tmp/test"},"mode":"block"}
2018-10-24 15:11:45.427 (+03:00) f7b7305d-d785-11e8-baf1-9136b5c7aa75 {"errorMessage":"Unknown system error -999: Unknown system error -999, open '/tmp/test'","errorType":"Error","stackTrace":["Object.fs.openSync (fs.js:646:18)","Function.hello (/var/task/handler.js:8:6)","runMiddlewares (/var/task/node_modules/middy/src/middy.js:180:42)","runNext (/var/task/node_modules/middy/src/middy.js:85:14)","before (/var/task/node_modules/middy/src/middlewares/functionShield.js:20:5)","runNext (/var/task/node_modules/middy/src/middy.js:70:24)","<anonymous>","process._tickDomainCallback (internal/process/next_tick.js:228:7)"]}
END RequestId: f7b7305d-d785-11e8-baf1-9136b5c7aa75 REPORT RequestId: f7b7305d-d785-11e8-baf1-9136b5c7aa75 Duration: 458.65 ms Billed Duration: 500 ms Memory Size: 1024 MB Max Memory Used: 38 MB
```
Middy documentation and examples
---
For more documentation and examples, refers to the main [Middy monorepo on GitHub](https://github.com/middyjs/middy) or [Middy official website](https://middy.js.org).
Contributing
---
Everyone is very welcome to contribute to this repository. Feel free to [raise issues](https://github.com/middyjs/middy/issues) or to [submit Pull Requests](https://github.com/middyjs/middy/pulls).
License
---
Licensed under [MIT License](https://github.com/middyjs/middy/blob/HEAD/LICENSE). Copyright (c) 2017-2018 <NAME> and the [Middy team](https://github.com/middyjs/middy/graphs/contributors).
Readme
---
### Keywords
* Lambda
* Middleware
* Serverless
* Framework
* AWS
* AWS Lambda
* Middy
* Security
* Hardening |
indicado | hex | Erlang | BadDeviationError exception
===
BadPeriodError exception
===
Indicado
===
Indicado helps you analyze historical data to generate future price movement predictions on numerical datasets.
Indicado.ADI
===
This is the ADI module used for calculating Accumulation Distribution Line.
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[adi_data_map()](#t:adi_data_map/0)
The argument passed to eval functions should be a list of adi_data_map type.
[Functions](#functions)
---
[eval(list)](#eval/1)
Calculates ADI for the list. The list argument passed to eval function should be list of adi_data_map type.
[eval!(list)](#eval!/1)
Calculates ADI for the list. The list argument passed to eval function should be list of adi_data_map type spec. Raises exceptions when arguments does not satisfy needed conditions.
[Link to this section](#types)
Types
===
[Link to this section](#functions)
Functions
===
Indicado.Bollinger
===
This is the Bollinger module used for calculating Bollinger Bands.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[eval(list, period, devation)](#eval/3)
Calculates BB for the list.
[eval!(list, period, deviation)](#eval!/3)
Calculates BB for the list. Raises exceptions when argument does not satisfy needed conditions to calculate Bollinger Bands.
[Link to this section](#functions)
Functions
===
Indicado.EMA
===
This is the EMA module used for calculating Exponential Moving Average
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[eval(list, period)](#eval/2)
Calculates EMA for the list. It needs non empty list of numbers and a positive period argument.
[eval!(list, period)](#eval!/2)
Calculates EMA for the list. It needs non empty list of numbers and a positive period argument.
[Link to this section](#functions)
Functions
===
Indicado.MACD
===
This is the MACD module used for calculating Moving Average Convergence Divergence
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[eval(list, fast_period, slow_period, signal_period)](#eval/4)
Calculates MACD for the list.
[eval!(list, fast_period, slow_period, signal_period)](#eval!/4)
Calculates MACD for the list.
[Link to this section](#functions)
Functions
===
Indicado.MFI
===
This is the MFI module used for calculating Money Flow Index
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[mfi_data_map()](#t:mfi_data_map/0)
The argument passed to eval functions should be a list of mfi_data_map type.
[Functions](#functions)
---
[eval(list, period)](#eval/2)
Calculates MFI for the list. It needs list of mfi_data_map and lenght of list should be at least 1 higher then period.
[eval!(list, period)](#eval!/2)
Calculates MFI for the list. It needs list of mfi_data_map and lenght of list should be at least 1 higher then period.
[Link to this section](#types)
Types
===
[Link to this section](#functions)
Functions
===
Indicado.Math
===
This is the helper module holding common math functions for Indicado.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[mean(list)](#mean/1)
Calculated mean of a given numeric list.
Returns `nil` if list is empty.
[stddev(list)](#stddev/1)
Calculates standard deviation of a given numeric list.
Returns `nil` if list is empty.
[stddev(list, calculated_mean)](#stddev/2)
Calculates standard deviation of a given numeric list when mean is pre calculated and passed.
Returns `nil` if list is empty.
[variance(list)](#variance/1)
Calculates variance of a given numeric list.
Returns `nil` if list is empty.
[variance(list, calculated_mean)](#variance/2)
Calculates variance of a given numeric list when mean is pre calculated and passed.
Returns `nil` if list is empty.
[Link to this section](#functions)
Functions
===
Indicado.OBV
===
This is the OBV module used for calculating On-Balance Volume
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[ovb_data_map()](#t:ovb_data_map/0)
The argument passed to eval functions should be a list of ovb_data_map type.
[Functions](#functions)
---
[eval(list)](#eval/1)
Calculates OBV for the list. The list argument passed to eval function should be list of ovb_data_map type.
[eval!(list)](#eval!/1)
Calculates OBV for the list. The list argument passed to eval function should be list of ovb_data_map type.
[Link to this section](#types)
Types
===
[Link to this section](#functions)
Functions
===
Indicado.RSI
===
This is the RSI module used for calculating Relative Strength Index
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[eval(list, period)](#eval/2)
Calculates RSI for the list. It needs list of numbers and the length of list argument should at least be 1 more than period.
[eval!(list, period)](#eval!/2)
Calculates RSI for the list. It needs list of numbers and the length of list argument should at least be 1 more than period.
[Link to this section](#functions)
Functions
===
Indicado.SMA
===
This is the SMA module used for calculating Simple Moving Average.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[eval(list, period)](#eval/2)
Calculates SMA for the list.
[eval!(list, period)](#eval!/2)
Calculates SMA for the list. Raises exceptions when arguments does not satisfy needed conditions to calculate SMA.
[Link to this section](#functions)
Functions
===
Indicado.SR
===
This is the SR module used for calculating Stochastic Oscillator.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[eval(list, period)](#eval/2)
Calculates SR for the list.
[eval!(list, period)](#eval!/2)
Calculates SR for the list. Raises exceptions when arguments does not satisfy needed conditions to calculate SR.
[Link to this section](#functions)
Functions
===
Indicado.WR
===
This is the WR module used for calculating Williams %R.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[eval(list, period)](#eval/2)
Calculates WR for the list.
[eval!(list, period)](#eval!/2)
Calculates WR for the list. Raises exceptions when arguments does not satisfy needed conditions to calculate WR.
[Link to this section](#functions)
Functions
===
NotEnoughDataError exception
===
Indicado 🚀🌕
---
[![Hex Version](https://img.shields.io/hexpm/v/indicado.svg)](https://hex.pm/packages/indicado) [![Hex Docs](http://img.shields.io/badge/hex.pm-docs-green.svg?style=flat)](https://hexdocs.pm/indicado) [![CI Status](https://github.com/thisiscetin/indicado/workflows/ci/badge.svg)](https://github.com/thisiscetin/indicado/actions) [![Apache 2 License](https://img.shields.io/hexpm/l/oban)](https://opensource.org/licenses/Apache-2.0)
[Technical indicator](https://www.investopedia.com/terms/t/technicalindicator.asp) library for Elixir with no dependencies. Indicado helps you analyze historical data to generate future price movement predictions on numerical datasets. Many traders and automated trading platforms use technical analysis because past actions may indicate future prices. Indicado might also be used outside financial markets if data hold patterns and not random.
What can you do with this library ❔
---
This library can be used as an add-on to [<NAME>'s](https://twitter.com/kamilskowron) great project [Hands-on Elixir & OTP: Cryptocurrency trading bot](https://www.elixircryptobot.com), at some point. So, you can create sophisticated trading strategies that may better fit your risk appetite. You can also use this library for your custom solutions around automated trading/testing/strategy building.
In the future, in addition to supporting the community, [I](https://twitter.com/thisiscetin) plan to release more open source tools around strategy building, backtesting, and numerical analysis.
Table of Contents 📋
---
* [Indicado](#indicado-)
* [What can you do with this library](#what-can-you-do-with-this-library-)
* [Table of Contents](#table-of-contents-)
* [Supported Indicators](#supported-indicators-)
* [Installation](#installation-)
* [Usage](#usage-️)
* [Contributing](#contributing-)
Supported Indicators 📈
---
Indicators below are supported. New indicators being added regularly.
* Accumulation/Distribution Line ([ADI](https://www.investopedia.com/terms/a/accumulationdistribution.asp))
* Bollinger Bands ([BB](https://www.investopedia.com/terms/b/bollingerbands.asp))
* Exponential Moving Average ([EMA](https://www.investopedia.com/terms/e/ema.asp))
* Money Flow Index ([MFI](https://www.investopedia.com/terms/m/mfi.asp))
* Moving Average Convergence Divergence ([MACD](https://www.investopedia.com/terms/m/macd.asp))
* On-Balance Volume ([OBV](https://www.investopedia.com/terms/o/onbalancevolume.asp))
* Relative Strength Index ([RSI](https://www.investopedia.com/terms/r/rsi.asp))
* Simple Moving Average ([SMA](https://www.investopedia.com/terms/s/sma.asp))
* Stochastic Oscillator ([SR](https://www.investopedia.com/terms/s/stochasticoscillator.asp))
* Williams %R ([WR](https://www.investopedia.com/terms/w/williamsr.asp))
Helper math functions such as mean, stddev, variance is accessible through [`Indicado.Math`](Indicado.Math.html) module.
Installation 💻
---
Indicado published to [Hex](https://hex.pm/packages/indicado). Just add it to your dependencies in `mix.exs`.
```
def deps do
[
{:indicado, "~> 0.0.4"}
]
end
```
Then run [`mix deps.get`](https://hexdocs.pm/mix/Mix.Tasks.Deps.Get.html) to install indicado.
Usage 🛠️
---
Indicado provides two functions on the public API of indicators. Namely `eval` and `eval!` function.
* `eval` function calls return `{:ok, result}` or `{:error, reason}`.
* `eval!` functions return a single result list or raises exceptions such as [`NotEnoughDataError`](NotEnoughDataError.html).
Because every other indicator may expect different arguments, I recommend you check [online documentation on hexdocs](https://hexdocs.pm/indicado/Indicado.html) before using the indicado. For demonstration purposes how you can calculate a four day Simple Moving Average is shown below.
```
iex(2)> Indicado.SMA.eval([1.0, 5.0, 7.4, 12.5, 16,4], 4)
{:ok, [6.475, 10.225, 9.975]}
```
Contributing 🧵
---
Please follow standard convention such as `eval` and `eval!` functions defined for all indicators inside `lib` folder.
Rest is easy;
* Fork it!
* Create your feature branch (git checkout -b my-new-feature)
* Commit your changes (git commit -am 'Add some feature')
* Push to the branch (git push origin my-new-feature)
* Create new Pull Request
To ensure a commit passes CI run `mix test.ci` before opening a pull request to execute commands below.
[API Reference](api-reference.html)
API Reference
===
Modules
---
[BadDeviationError](BadDeviationError.html)
[BadPeriodError](BadPeriodError.html)
[Indicado](Indicado.html)
Indicado helps you analyze historical data to generate future price movement predictions on numerical datasets.
[Indicado.ADI](Indicado.ADI.html)
This is the ADI module used for calculating Accumulation Distribution Line.
[Indicado.Bollinger](Indicado.Bollinger.html)
This is the Bollinger module used for calculating Bollinger Bands.
[Indicado.EMA](Indicado.EMA.html)
This is the EMA module used for calculating Exponential Moving Average
[Indicado.MACD](Indicado.MACD.html)
This is the MACD module used for calculating Moving Average Convergence Divergence
[Indicado.MFI](Indicado.MFI.html)
This is the MFI module used for calculating Money Flow Index
[Indicado.Math](Indicado.Math.html)
This is the helper module holding common math functions for Indicado.
[Indicado.OBV](Indicado.OBV.html)
This is the OBV module used for calculating On-Balance Volume
[Indicado.RSI](Indicado.RSI.html)
This is the RSI module used for calculating Relative Strength Index
[Indicado.SMA](Indicado.SMA.html)
This is the SMA module used for calculating Simple Moving Average.
[Indicado.SR](Indicado.SR.html)
This is the SR module used for calculating Stochastic Oscillator.
[Indicado.WR](Indicado.WR.html)
This is the WR module used for calculating Williams %R.
[NotEnoughDataError](NotEnoughDataError.html)
[README](readme.html) |
evm-gasometer | rust | Rust | Crate evm_gasometer
===
EVM gasometer.
Structs
---
* GasometerEVM gasometer.
* MemoryCostMemory cost.
Enums
---
* GasCostGas cost.
* StorageTargetStorage opcode will access. Used for tracking accessed storage (EIP-2929).
* TransactionCostTransaction cost.
Functions
---
* call_transaction_costCalculate the call transaction cost.
* create_transaction_costCalculate the create transaction cost.
* dynamic_opcode_costCalculate the opcode cost.
* init_code_cost
* static_opcode_cost
Crate evm_gasometer
===
EVM gasometer.
Structs
---
* GasometerEVM gasometer.
* MemoryCostMemory cost.
Enums
---
* GasCostGas cost.
* StorageTargetStorage opcode will access. Used for tracking accessed storage (EIP-2929).
* TransactionCostTransaction cost.
Functions
---
* call_transaction_costCalculate the call transaction cost.
* create_transaction_costCalculate the create transaction cost.
* dynamic_opcode_costCalculate the opcode cost.
* init_code_cost
* static_opcode_cost
Struct evm_gasometer::Gasometer
===
```
pub struct Gasometer<'config> { /* private fields */ }
```
EVM gasometer.
Implementations
---
### impl<'config> Gasometer<'config#### pub fn new(gas_limit: u64, config: &'config Config) -> Self
Create a new gasometer with given gas limit and config.
#### pub fn gas_cost(&self, cost: GasCost, gas: u64) -> Result<u64, ExitErrorReturns the numerical gas cost value.
#### pub fn config(&self) -> &'config Config
Reference of the config.
#### pub fn gas(&self) -> u64
Remaining gas.
#### pub fn total_used_gas(&self) -> u64
Total used gas.
#### pub fn refunded_gas(&self) -> i64
Refunded gas.
#### pub fn fail(&mut self) -> ExitError
Explicitly fail the gasometer with out of gas. Return `OutOfGas` error.
#### pub fn record_cost(&mut self, cost: u64) -> Result<(), ExitErrorRecord an explicit cost.
#### pub fn record_refund(&mut self, refund: i64) -> Result<(), ExitErrorRecord an explicit refund.
#### pub fn record_deposit(&mut self, len: usize) -> Result<(), ExitErrorRecord `CREATE` code deposit.
#### pub fn record_dynamic_cost(
&mut self,
cost: GasCost,
memory: Option<MemoryCost>
) -> Result<(), ExitErrorRecord opcode gas cost.
#### pub fn record_stipend(&mut self, stipend: u64) -> Result<(), ExitErrorRecord opcode stipend.
#### pub fn record_transaction(
&mut self,
cost: TransactionCost
) -> Result<(), ExitErrorRecord transaction cost.
Trait Implementations
---
### impl<'config> Clone for Gasometer<'config#### fn clone(&self) -> Gasometer<'configReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter. Read moreAuto Trait Implementations
---
### impl<'config> RefUnwindSafe for Gasometer<'config### impl<'config> Send for Gasometer<'config### impl<'config> Sync for Gasometer<'config### impl<'config> Unpin for Gasometer<'config### impl<'config> UnwindSafe for Gasometer<'configBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct evm_gasometer::MemoryCost
===
```
pub struct MemoryCost {
pub offset: U256,
pub len: U256,
}
```
Memory cost.
Fields
---
`offset: U256`Affected memory offset.
`len: U256`Affected length.
Implementations
---
### impl MemoryCost
#### pub fn join(self, other: MemoryCost) -> MemoryCost
Join two memory cost together.
Trait Implementations
---
### impl Clone for MemoryCost
#### fn clone(&self) -> MemoryCost
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for MemoryCost
### impl Send for MemoryCost
### impl Sync for MemoryCost
### impl Unpin for MemoryCost
### impl UnwindSafe for MemoryCost
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum evm_gasometer::GasCost
===
```
pub enum GasCost {
Zero,
Base,
VeryLow,
Low,
Invalid(Opcode),
ExtCodeSize {
target_is_cold: bool,
},
Balance {
target_is_cold: bool,
},
BlockHash,
ExtCodeHash {
target_is_cold: bool,
},
Call {
value: U256,
gas: U256,
target_is_cold: bool,
target_exists: bool,
},
CallCode {
value: U256,
gas: U256,
target_is_cold: bool,
target_exists: bool,
},
DelegateCall {
gas: U256,
target_is_cold: bool,
target_exists: bool,
},
StaticCall {
gas: U256,
target_is_cold: bool,
target_exists: bool,
},
Suicide {
value: U256,
target_is_cold: bool,
target_exists: bool,
already_removed: bool,
},
SStore {
original: H256,
current: H256,
new: H256,
target_is_cold: bool,
},
Sha3 {
len: U256,
},
Log {
n: u8,
len: U256,
},
ExtCodeCopy {
target_is_cold: bool,
len: U256,
},
VeryLowCopy {
len: U256,
},
Exp {
power: U256,
},
Create,
Create2 {
len: U256,
},
SLoad {
target_is_cold: bool,
},
}
```
Gas cost.
Variants
---
### Zero
Zero gas cost.
### Base
Base gas cost.
### VeryLow
Very low gas cost.
### Low
Low gas cost.
### Invalid(Opcode)
Fail the gasometer.
### ExtCodeSize
#### Fields
`target_is_cold: bool`True if address has not been previously accessed in this transaction
Gas cost for `EXTCODESIZE`.
### Balance
#### Fields
`target_is_cold: bool`True if address has not been previously accessed in this transaction
Gas cost for `BALANCE`.
### BlockHash
Gas cost for `BLOCKHASH`.
### ExtCodeHash
#### Fields
`target_is_cold: bool`True if address has not been previously accessed in this transaction
Gas cost for `EXTBLOCKHASH`.
### Call
#### Fields
`value: U256`Call value.
`gas: U256`Call gas.
`target_is_cold: bool`True if target has not been previously accessed in this transaction
`target_exists: bool`Whether the target exists.
Gas cost for `CALL`.
### CallCode
#### Fields
`value: U256`Call value.
`gas: U256`Call gas.
`target_is_cold: bool`True if target has not been previously accessed in this transaction
`target_exists: bool`Whether the target exists.
Gas cost for `CALLCODE.
### DelegateCall
#### Fields
`gas: U256`Call gas.
`target_is_cold: bool`True if target has not been previously accessed in this transaction
`target_exists: bool`Whether the target exists.
Gas cost for `DELEGATECALL`.
### StaticCall
#### Fields
`gas: U256`Call gas.
`target_is_cold: bool`True if target has not been previously accessed in this transaction
`target_exists: bool`Whether the target exists.
Gas cost for `STATICCALL`.
### Suicide
#### Fields
`value: U256`Value.
`target_is_cold: bool`True if target has not been previously accessed in this transaction
`target_exists: bool`Whether the target exists.
`already_removed: bool`Whether the target has already been removed.
Gas cost for `SUICIDE`.
### SStore
#### Fields
`original: H256`Original value.
`current: H256`Current value.
`new: H256`New value.
`target_is_cold: bool`True if target has not been previously accessed in this transaction
Gas cost for `SSTORE`.
### Sha3
#### Fields
`len: U256`Length of the data.
Gas cost for `SHA3`.
### Log
#### Fields
`n: u8`Topic length.
`len: U256`Data length.
Gas cost for `LOG`.
### ExtCodeCopy
#### Fields
`target_is_cold: bool`True if target has not been previously accessed in this transaction
`len: U256`Length.
Gas cost for `EXTCODECOPY`.
### VeryLowCopy
#### Fields
`len: U256`Length.
Gas cost for some copy opcodes that is documented as `VERYLOW`.
### Exp
#### Fields
`power: U256`Power of `EXP`.
Gas cost for `EXP`.
### Create
Gas cost for `CREATE`.
### Create2
#### Fields
`len: U256`Length.
Gas cost for `CREATE2`.
### SLoad
#### Fields
`target_is_cold: bool`True if target has not been previously accessed in this transaction
Gas cost for `SLOAD`.
Trait Implementations
---
### impl Clone for GasCost
#### fn clone(&self) -> GasCost
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for GasCost
### impl Send for GasCost
### impl Sync for GasCost
### impl Unpin for GasCost
### impl UnwindSafe for GasCost
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum evm_gasometer::StorageTarget
===
```
pub enum StorageTarget {
None,
Address(H160),
Slot(H160, H256),
}
```
Storage opcode will access. Used for tracking accessed storage (EIP-2929).
Variants
---
### None
No storage access
### Address(H160)
Accessing address
### Slot(H160, H256)
Accessing storage slot within an address
Trait Implementations
---
### impl Clone for StorageTarget
#### fn clone(&self) -> StorageTarget
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for StorageTarget
### impl Send for StorageTarget
### impl Sync for StorageTarget
### impl Unpin for StorageTarget
### impl UnwindSafe for StorageTarget
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum evm_gasometer::TransactionCost
===
```
pub enum TransactionCost {
Call {
zero_data_len: usize,
non_zero_data_len: usize,
access_list_address_len: usize,
access_list_storage_len: usize,
},
Create {
zero_data_len: usize,
non_zero_data_len: usize,
access_list_address_len: usize,
access_list_storage_len: usize,
initcode_cost: u64,
},
}
```
Transaction cost.
Variants
---
### Call
#### Fields
`zero_data_len: usize`Length of zeros in transaction data.
`non_zero_data_len: usize`Length of non-zeros in transaction data.
`access_list_address_len: usize`Number of addresses in transaction access list (see EIP-2930)
`access_list_storage_len: usize`Total number of storage keys in transaction access list (see EIP-2930)
Call transaction cost.
### Create
#### Fields
`zero_data_len: usize`Length of zeros in transaction data.
`non_zero_data_len: usize`Length of non-zeros in transaction data.
`access_list_address_len: usize`Number of addresses in transaction access list (see EIP-2930)
`access_list_storage_len: usize`Total number of storage keys in transaction access list (see EIP-2930)
`initcode_cost: u64`Cost of initcode = 2 * ceil(len(initcode) / 32) (see EIP-3860)
Create transaction cost.
Trait Implementations
---
### impl Clone for TransactionCost
#### fn clone(&self) -> TransactionCost
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for TransactionCost
### impl Send for TransactionCost
### impl Sync for TransactionCost
### impl Unpin for TransactionCost
### impl UnwindSafe for TransactionCost
Function evm_gasometer::call_transaction_cost
===
```
pub fn call_transaction_cost(
data: &[u8],
access_list: &[(H160, Vec<H256>)]
) -> TransactionCost
```
Calculate the call transaction cost.
Function evm_gasometer::create_transaction_cost
===
```
pub fn create_transaction_cost(
data: &[u8],
access_list: &[(H160, Vec<H256>)]
) -> TransactionCost
```
Calculate the create transaction cost.
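A minimal usage sketch, assuming `primitive_types` for `H160`/`H256` (the calldata bytes and access list are arbitrary):

```
use evm_gasometer::{call_transaction_cost, create_transaction_cost};
use primitive_types::{H160, H256};

fn main() {
    // Two zero bytes and two non-zero bytes of calldata.
    let data = [0x00u8, 0x00, 0x01, 0xff];
    // One address with one storage key in the access list (EIP-2930).
    let access_list = vec![(H160::zero(), vec![H256::zero()])];

    let call_cost = call_transaction_cost(&data, &access_list);
    let create_cost = create_transaction_cost(&data, &access_list);
    // Either value can then be recorded against a gasometer as the
    // transaction's intrinsic cost.
    let _ = (call_cost, create_cost);
}
```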
Function evm_gasometer::dynamic_opcode_cost
===
```
pub fn dynamic_opcode_cost<H: Handler>(
address: H160,
opcode: Opcode,
stack: &Stack,
is_static: bool,
config: &Config,
handler: &mut H
) -> Result<(GasCost, StorageTarget, Option<MemoryCost>), ExitError>
```
Calculate the opcode cost. |
dr-niels-intl-messageformat | npm | JavaScript | These are some minor adjustments to the intl-messageformat package from formatjs (https://github.com/formatjs/intl-messageformat) to get it working with Polymer 3. The README below was not updated and is still the one from the original intl-messageformat package!
Intl MessageFormat
===
Formats ICU Message strings with number, date, plural, and select placeholders to create localized messages.
Overview
---
### Goals
This package aims to provide a way for you to manage and format your JavaScript app's string messages into localized strings for people using your app. You can use this package in the browser and on the server via Node.js.
This implementation is based on the [Strawman proposal](http://wiki.ecmascript.org/doku.php?id=globalization:messageformatting), but there are a few places this implementation diverges.
*Note: This `IntlMessageFormat` API may change to stay in sync with ECMA-402, but this package will follow [semver](http://semver.org/).*
### How It Works
Messages are provided into the constructor as a `String` message, or a [pre-parsed AST](https://github.com/yahoo/intl-messageformat-parser) object.
```
var msg = new IntlMessageFormat(message, locales, [formats]);
```
The string `message` is parsed, then stored internally in a compiled form that is optimized for the `format()` method to produce the formatted string for displaying to the user.
```
var output = msg.format(values);
```
### Common Usage Example
A very common example is formatting messages that have numbers with plural labels. With this package you can make sure that the string is properly formatted for a person's locale, e.g.:
```
var MESSAGES = {
    'en-US': {
        NUM_PHOTOS: 'You have {numPhotos, plural, ' +
            '=0 {no photos.}' +
            '=1 {one photo.}' +
            'other {# photos.}}'
    },

    'es-MX': {
        NUM_PHOTOS: 'Usted {numPhotos, plural, ' +
            '=0 {no tiene fotos.}' +
            '=1 {tiene una foto.}' +
            'other {tiene # fotos.}}'
    }
};

var output;

var enNumPhotos = new IntlMessageFormat(MESSAGES['en-US'].NUM_PHOTOS, 'en-US');
output = enNumPhotos.format({numPhotos: 1000});
console.log(output); // => "You have 1,000 photos."

var esNumPhotos = new IntlMessageFormat(MESSAGES['es-MX'].NUM_PHOTOS, 'es-MX');
output = esNumPhotos.format({numPhotos: 1000});
console.log(output); // => "Usted tiene 1,000 fotos."
```
### Message Syntax
The message syntax that this package uses is not proprietary, in fact it's a common standard message syntax that works across programming languages and one that professional translators are familiar with. This package uses the **[ICU Message syntax](http://userguide.icu-project.org/formatparse/messages)** and works for all [CLDR languages](http://cldr.unicode.org/) which have pluralization rules defined.
### Features
* Uses industry standards: [ICU Message syntax](http://userguide.icu-project.org/formatparse/messages) and [CLDR locale data](http://cldr.unicode.org/).
* Supports **plural**, **select**, and **selectordinal** message arguments.
* Formats numbers and dates/times in messages using [`Intl.NumberFormat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/NumberFormat) and [`Intl.DateTimeFormat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DateTimeFormat), respectively.
* Optimized for repeated calls to an `IntlMessageFormat` instance's `format()` method.
* Supports defining custom format styles/options.
* Supports escape sequences for message syntax chars, e.g.: `"\\{foo\\}"` will output: `"{foo}"` in the formatted output instead of interpreting it as a `foo` argument.
Usage
---
### `Intl` Dependency
This package assumes that the [`Intl`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl) global object exists in the runtime. `Intl` is present in all modern browsers and there's work happening to [integrate `Intl` into Node.js](https://github.com/joyent/node/issues/6371).
**Luckily, there's the [Intl.js](https://github.com/andyearnshaw/Intl.js) polyfill!** You will need to conditionally load the polyfill if you want to support runtimes in which `Intl` is not already built-in.
#### Loading Intl.js Polyfill in a browser
If the browser does not already have the `Intl` APIs built-in, the Intl.js Polyfill will need to be loaded on the page along with the locale data for any locales that need to be supported:
```
<script src="intl/Intl.min.js"></script>
<script src="intl/locale-data/jsonp/en-US.js"></script>
```
*Note: Modern browsers already have the `Intl` APIs built-in, so you can load the Intl.js Polyfill conditionally by checking for `window.Intl`.*
#### Loading Intl.js Polyfill in Node.js
Conditionally require the Intl.js Polyfill if it doesn't already exist in the runtime. For Node.js <= 0.10, this polyfill is required.
```
if (!global.Intl) {
    require('intl');
}
```
*Note: When using the Intl.js Polyfill in Node.js, it will automatically load the locale data for all supported locales.*
### Loading Intl MessageFormat in a browser
```
<script src="intl-messageformat/intl-messageformat.min.js"></script>
```
By default, Intl MessageFormat ships with the locale data for English (`en`) built-in to the library's runtime. When you need to format data in another locale, include its data; e.g., for French:
```
<script src="intl-messageformat/locale-data/fr.js"></script>
```
*Note: All 200+ languages supported by this package use their root BCP 47 language tag; i.e., the part before the first hyphen (if any).*
### Loading Intl MessageFormat in Node.js
Simply `require()` this package:
```
var IntlMessageFormat = require('intl-messageformat');
```
*Note: in Node.js, the data for all 200+ languages is loaded along with the library.*
### Public API
#### `IntlMessageFormat` Constructor
To create a message to format, use the `IntlMessageFormat` constructor. The constructor takes three parameters:
* **message** - *{String | AST}* - String message (or pre-parsed AST) that serves as formatting pattern.
* **locales** - *{String | String[]}* - A string with a BCP 47 language tag, or an array of such strings. If you do not provide a locale, the default locale will be used. When an array of locales is provided, each item and its ancestor locales are checked and the first one with registered locale data is returned. **See: [Locale Resolution](#locale-resolution) for more details.**
* **[formats]** - *{Object}* - Optional object with user defined options for format styles.
```
var msg = new IntlMessageFormat('My name is {name}.', 'en-US');
```
#### Locale Resolution
`IntlMessageFormat` uses a locale resolution process similar to that of the built-in `Intl` APIs to determine which locale data to use based on the `locales` value passed to the constructor. The result of this resolution process can be determined by calling the `resolvedOptions()` prototype method.
The following are the abstract steps `IntlMessageFormat` goes through to resolve the locale value:
* If no extra locale data is loaded, the locale will *always* resolve to `"en"`.
* If locale data is missing for a leaf locale like `"fr-FR"`, but there *is* data for one of its ancestors, `"fr"` in this case, then its ancestor will be used.
* If there's data for the specified locale, then that locale will be resolved; i.e.,
```
var mf = new IntlMessageFormat('', 'en-US');
assert(mf.resolvedOptions().locale === 'en-US'); // true
```
* The resolved locales are now normalized; e.g., `"en-us"` will resolve to: `"en-US"`.
*Note: When an array is provided for `locales`, the above steps happen for each item in that array until a match is found.*
#### `resolvedOptions()` Method
This method returns an object with the options values that were resolved during instance creation. It currently only contains a `locale` property; here's an example:
```
var msg = new IntlMessageFormat('', 'en-us');
console.log(msg.resolvedOptions().locale); // => "en-US"
```
Notice how the specified locale was the all lower-case value: `"en-us"`, but it was resolved and normalized to: `"en-US"`.
#### `format(values)` Method
Once the message is created, formatting the message is done by calling the `format()` method on the instance and passing a collection of `values`:
```
var output = msg.format({name: "Eric"});
console.log(output); // => "My name is Eric."
```
*Note: A value **must** be supplied for every argument in the message pattern the instance was constructed with.*
#### User Defined Formats
Defining custom format styles is useful when you need to supply a set of options to the underlying formatter; e.g., outputting a number in USD:
```
var msg = new IntlMessageFormat('The price is: {price, number, USD}', 'en-US', {
    number: {
        USD: {
            style   : 'currency',
            currency: 'USD'
        }
    }
});

var output = msg.format({price: 100});
console.log(output); // => "The price is: $100.00"
```
In this example, we're defining a `USD` number format style which is passed to the underlying `Intl.NumberFormat` instance as its options.
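The `date` styles can presumably be extended the same way; e.g., a custom `numeric` style passed through to `Intl.DateTimeFormat` (this particular style name is our own, not a built-in):

```
var msg = new IntlMessageFormat('Sale ends {deadline, date, numeric}', 'en-US', {
    date: {
        numeric: { // illustrative custom style name
            year : 'numeric',
            month: 'numeric',
            day  : 'numeric'
        }
    }
});

var output = msg.format({deadline: new Date(2014, 5, 1)});
console.log(output); // e.g. => "Sale ends 6/1/2014"
```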
Examples
---
### Plural Label
This example shows how to use the [ICU Message syntax](http://userguide.icu-project.org/formatparse/messages) to define a message that has a plural label; e.g., `"You have 10 photos"`:
```
You have {numPhotos, plural,
=0 {no photos.}
=1 {one photo.}
other {# photos.}
}
```
```
var MESSAGES = {
    photos: '...', // String from code block above.
    ...
};

var msg = new IntlMessageFormat(MESSAGES.photos, 'en-US');

console.log(msg.format({numPhotos: 0}));    // => "You have no photos."
console.log(msg.format({numPhotos: 1}));    // => "You have one photo."
console.log(msg.format({numPhotos: 1000})); // => "You have 1,000 photos."
```
*Note how, when `numPhotos` was `1000`, the number was formatted with the correct thousands separator.*
License
---
This software is free to use under the Yahoo! Inc. BSD license.
See the [LICENSE file](https://github.com/yahoo/intl-messageformat/blob/master/LICENSE) for license text and copyright information.
Readme
---
### Keywords
* i18n
* intl
* internationalization
* localization
* globalization
* messageformat
* parser
* plural
* icu |
perchance | rust | Rust | Crate perchance
===
`perchance` is a simple random number generation library, tuned for ease of use: create an instance of `PerchanceContext` and go.
Note that `perchance` is **not** cryptographically secure and ***should not*** be used in any context where security is a concern.
When the `std` feature is enabled (by default), there’s a global `PerchanceContext` provided, too.
Example
---
First, create a `PerchanceContext`:
```
let mut rng = perchance::PerchanceContext::new(my_seed);
```
Or, if you have the `std` feature enabled, obtain the global `PerchanceContext` object:
```
// Seed the global context first. You may do so manually, or, on platforms that support it,
// obtain a seed to pass into it by calling `perchance::gen_time_seed()`.
perchance::seed_global(0x5F3759DF); // ;)
let mut rng = perchance::global();
```
Then, start letting things happen perchance!
```
let between_0_and_1 = rng.uniform_f32();
let dice_roll = rng.uniform_range_i32(1..=6);
let random_direction = rng.uniform_sphere_surface_vec3();
let thing_should_happen = rng.get_bool();
enum Event {
Thing1,
Thing2,
Thing3,
}
let which_should_happen = rng.choose(&[Event::Thing1, Event::Thing2, Event::Thing3]);
```
Some of the convenience functions exist in multiple flavors with different return types.
Perchance does NOT guarantee the same random number generator across new versions of the crate.
Structs
---
* PerchanceContext
* WeightedSampler
Functions
---
* gen_time_seed: Generates a seed that may be passed to `perchance::seed_global` based on the system clock.
* global: Returns the global `PerchanceContext`. You **must** seed it by calling
`perchance::seed_global` at least once before calling this function or else it will panic.
* global_has_been_seeded: Returns `true` if `perchance::seed_global` has been called at least once.
* seed_global: Seed (or reseed) the global `PerchanceContext` returned by `perchance::global`.
Struct perchance::PerchanceContext
===
```
pub struct PerchanceContext(_);
```
Implementations
---
### impl PerchanceContext
#### pub const fn new(seed: u128) -> Self
Returns a new `PerchanceContext`, seeded with `seed`.
Two `PerchanceContext`s seeded with the same number will generate the same sequence of outputs, given an identical sequence of calls to its member functions.
Since `perchance` is not yet stabilized we reserve the right to change the random number generator algorithm, meaning the same seed may produce different numbers on different version of `perchance`.
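For instance:

```
let mut a = perchance::PerchanceContext::new(42);
let mut b = perchance::PerchanceContext::new(42);
assert_eq!(a.get_u32(), b.get_u32()); // same seed, same sequence
```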
##### Examples found in repository
examples/gen_test_answers.rs (line 4)
```
fn main() {
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_0_and_1 = rng.uniform_f32();
println!("{between_0_and_1},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let dice_roll = rng.uniform_range_i32(1..=6);
println!("{dice_roll},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_minus_one_and_five = rng.uniform_range_f32(-1.0..5.0);
println!("{between_minus_one_and_five},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
}
```
#### pub fn get_bool(&mut self) -> bool
50/50 chance
#### pub fn get_u32(&mut self) -> u32
All 32 bits of the number are random.
##### Examples found in repository
examples/gen_test_answers.rs (line 6)
```
fn main() {
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_0_and_1 = rng.uniform_f32();
println!("{between_0_and_1},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let dice_roll = rng.uniform_range_i32(1..=6);
println!("{dice_roll},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_minus_one_and_five = rng.uniform_range_f32(-1.0..5.0);
println!("{between_minus_one_and_five},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
}
```
#### pub fn get_u64(&mut self) -> u64
All 64 bits of the number are random.
#### pub fn get_u128(&mut self) -> u128
All 128 bits of the number are random.
#### pub fn u64_less_than(&mut self, max: u64) -> u64
Returns a number strictly less than the given max.
Returns 0 if max is zero.
#### pub fn usize_less_than(&mut self, end: usize) -> usize
Returns a `usize` that is strictly less than the given number.
Returns 0 if the given number is zero.
#### pub fn choose<'slice, T>(&mut self, slice: &'slice [T]) -> &'slice T
Chooses one of the given values at random.
Panics if the given slice is empty.
#### pub fn choose_mut<'slice, T>(&mut self, slice: &'slice mut [T]) -> &'slice mut T
Chooses one of the given values at random.
Panics if the given slice is empty.
#### pub fn choose_it<T>(&mut self, it: impl Iterator<Item = T>) -> Option<T>
Chooses one of the given values at random.
Returns `None` if the given iterator is empty.
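For example, a small sketch combining the three variants above:

```
let mut rng = perchance::PerchanceContext::new(0x5F3759DF);

let colors = ["red", "green", "blue"];
let picked = rng.choose(&colors); // borrows one element at random

let mut scores = [1, 2, 3];
*rng.choose_mut(&mut scores) += 10; // mutates one element at random

// `choose_it` accepts any iterator and returns an Option:
let picked_even = rng.choose_it((0..100).filter(|n| n % 2 == 0));
assert!(picked_even.is_some());
```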
#### pub fn uniform_range_i32<R: RangeBounds<i32>>(&mut self, range: R) -> i32
Returns a random integer uniformly distributed over `range`.
Panics if the range is invalid (end < start) or if the range bounds would overflow an i32.
Example: `let dice = rng.uniform_range_i32(0..6);`
##### Examples found in repository
examples/gen_test_answers.rs (line 16)
```
fn main() {
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_0_and_1 = rng.uniform_f32();
println!("{between_0_and_1},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let dice_roll = rng.uniform_range_i32(1..=6);
println!("{dice_roll},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_minus_one_and_five = rng.uniform_range_f32(-1.0..5.0);
println!("{between_minus_one_and_five},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
}
```
#### pub fn uniform_range_usize<R: RangeBounds<usize>>(&mut self, range: R) -> usize
Returns a random usize uniformly distributed over `range`.
Panics if the range is invalid (end < start) or if the range bounds would overflow a usize.
Example: `let array_index = rng.uniform_range_usize(0..array.len());`
#### pub fn uniform_f32(&mut self) -> f32
Returns a random floating point number in the range [0.0, 1.0).
##### Examples found in repository
examples/gen_test_answers.rs (line 11)
```
fn main() {
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_0_and_1 = rng.uniform_f32();
println!("{between_0_and_1},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let dice_roll = rng.uniform_range_i32(1..=6);
println!("{dice_roll},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_minus_one_and_five = rng.uniform_range_f32(-1.0..5.0);
println!("{between_minus_one_and_five},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
}
```
#### pub fn uniform_f64(&mut self) -> f64
Returns a random floating point number in the range [0.0, 1.0).
#### pub fn uniform_range_f32<R: RangeBounds<f32>>(&mut self, range: R) -> f32
Returns a random floating point number uniformly distributed over `range`.
Note that this ignores the inclusivity of the range (it always assumes an inclusive lower bound and an exclusive upper bound). Unbounded lower and upper bounds will be set to
`f32::MIN` and `f32::MAX`, respectively.
##### Examples found in repository
examples/gen_test_answers.rs (line 21)
```
fn main() {
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_0_and_1 = rng.uniform_f32();
println!("{between_0_and_1},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let dice_roll = rng.uniform_range_i32(1..=6);
println!("{dice_roll},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let between_minus_one_and_five = rng.uniform_range_f32(-1.0..5.0);
println!("{between_minus_one_and_five},");
}
let mut rng = PerchanceContext::new(0x5F3759DF);
for _ in 0..10 {
let next_u32 = rng.get_u32();
println!("{next_u32},");
}
}
```
#### pub fn normal_f32(&mut self) -> f32
A normally distributed value with standard deviation=1
#### pub fn normal_vec2(&mut self) -> Vec2
Two independent normally distributed values
#### pub fn normal_vec3(&mut self) -> Vec3
Three independent normally distributed values
#### pub fn uniform_square_vec2<R: RangeBounds<f32>>(&mut self, range: R) -> Vec2
Returns a Vec2 where both components are uniformly distributed over `range`.
#### pub fn uniform_cube_vec3<R: RangeBounds<f32>>(&mut self, range: R) -> Vec3
Returns a Vec3 where all three components are uniformly distributed over `range`.
#### pub fn uniform_bounds_vec3(&mut self, bounds: BoundingBox) -> Vec3
Returns a uniformly random point that lies inside the bounding box.
#### pub fn uniform_rectangle_vec2(&mut self, min: Vec2, max: Vec2) -> Vec2
Returns a uniformly random point that lies inside the rectangle.
#### pub fn uniform_circle_edge_vec2(&mut self) -> Vec2
Returns a random Vec2 on the edge of a circle with radius 1.0.
#### pub fn uniform_circle_area_vec2(&mut self) -> Vec2
Returns a random Vec2 within the area of a circle with radius 1.0.
#### pub fn uniform_disc_vec2(&mut self) -> Vec2
👎Deprecated: Deprecated in favor of the equivalent `uniform_circle_area_vec2()`
Returns a random Vec2 within a circular disc.
#### pub fn uniform_sphere_surface_vec3(&mut self) -> Vec3
Returns a random Vec3 on the surface of a sphere with radius 1.0.
#### pub fn uniform_sphere_volume_vec3(&mut self) -> Vec3
Returns a random Vec3 within the volume of a sphere.
#### pub fn weighted_choice_f32(&mut self, weights: &[f32]) -> usize
Returns an integer index, weighted by the given weights.
For instance, `rng.weighted_choice_f32(&[1.0, 3.0, 2.0])` will return
`0` with a `1/6` probability,
`1` with a `3/6` probability, and
`2` with a `2/6` probability.
If the weights sum to zero, a fair choice is returned.
Panics if the given weight list is empty or contains non-finite or negative numbers.
#### pub fn weighted_choices_f32(&mut self, weights: &[f32]) -> WeightedSampler<'_>
Like `Self::weighted_choice_f32`, but returns a sampler that allows you to sample multiple choices with the same weights much more efficiently.
Complexity for `N` samples with `C` weights is `C + N*log(C)` instead of `C + N*C` when using `weighted_choice_f32`.
Panics if given a NaN weight.
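A usage sketch contrasting the two (weights as in the example above):

```
let mut rng = perchance::PerchanceContext::new(0x5F3759DF);
let weights = [1.0, 3.0, 2.0];

// One-off weighted choice:
let index = rng.weighted_choice_f32(&weights);
assert!(index < weights.len());

// Many samples with the same weights; the sampler amortizes the setup:
let mut sampler = rng.weighted_choices_f32(&weights);
let samples: Vec<usize> = (0..1_000).map(|_| sampler.sample()).collect();
assert_eq!(samples.len(), 1_000);
```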
Trait Implementations
---
### impl Clone for PerchanceContext
#### fn clone(&self) -> PerchanceContext
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Auto Trait Implementations
---
### impl RefUnwindSafe for PerchanceContext
### impl Send for PerchanceContext
### impl Sync for PerchanceContext
### impl Unpin for PerchanceContext
### impl UnwindSafe for PerchanceContext
Struct perchance::WeightedSampler
===
```
pub struct WeightedSampler<'a> { /* private fields */ }
```
Implementations
---
### impl<'a> WeightedSampler<'a>
#### pub fn sample(&mut self) -> usize
Returns an index sampled according to the given weights.
See `PerchanceContext::weighted_choice_f32` for more details.
Auto Trait Implementations
---
### impl<'a> RefUnwindSafe for WeightedSampler<'a>
### impl<'a> Send for WeightedSampler<'a>
### impl<'a> Sync for WeightedSampler<'a>
### impl<'a> Unpin for WeightedSampler<'a>
### impl<'a> !UnwindSafe for WeightedSampler<'a>
Function perchance::gen_time_seed
===
```
pub fn gen_time_seed() -> u128
```
Generates a seed that may be passed to `perchance::seed_global` based on the system clock.
Function perchance::seed_global
===
```
pub fn seed_global(seed: u128)
```
Seed (or reseed) the global `PerchanceContext` returned by `perchance::global`.
After calling this at least once, `perchance::global_has_been_seeded` will return true.
Function perchance::global
===
```
pub fn global<'a>() -> MutexGuard<'a, PerchanceContext>
```
Returns the global `PerchanceContext`. You **must** seed it by calling
`perchance::seed_global` at least once before calling this function or else it will panic.
Call methods on it like `PerchanceContext::get_u32` to get random numbers from anywhere in your module.
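For example:

```
perchance::seed_global(perchance::gen_time_seed());
assert!(perchance::global_has_been_seeded());

let roll = perchance::global().uniform_range_i32(1..=6);
```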
Function perchance::global_has_been_seeded
===
```
pub fn global_has_been_seeded() -> bool
```
Returns `true` if `perchance::seed_global` has been called at least once. |
github.com/pocketbase/pocketbase | go | Go | README
[¶](#section-readme)
---
[![PocketBase - open source backend in 1 file](https://i.imgur.com/5qimnm5.png)](https://pocketbase.io)
[![build](https://github.com/pocketbase/pocketbase/actions/workflows/release.yaml/badge.svg)](https://github.com/pocketbase/pocketbase/actions/workflows/release.yaml)
[![Latest releases](https://img.shields.io/github/release/pocketbase/pocketbase.svg)](https://github.com/pocketbase/pocketbase/releases)
[![Go package documentation](https://godoc.org/github.com/ganigeorgiev/fexpr?status.svg)](https://pkg.go.dev/github.com/pocketbase/pocketbase)
[PocketBase](https://pocketbase.io) is an open source Go backend, consisting of:
* embedded database (*SQLite*) with **realtime subscriptions**
* built-in **files and users management**
* convenient **Admin dashboard UI**
* and simple **REST-ish API**
**For documentation and examples, please visit <https://pocketbase.io/docs>.**
> ⚠️ Please keep in mind that PocketBase is still under active development
> and therefore full backward compatibility is not guaranteed before reaching v1.0.0.
### API SDK clients
The easiest way to interact with the API is to use one of the official SDK clients:
* **JavaScript - [pocketbase/js-sdk](https://github.com/pocketbase/js-sdk)** (*browser and node*)
* **Dart - [pocketbase/dart-sdk](https://github.com/pocketbase/dart-sdk)** (*web, mobile, desktop*)
### Overview
PocketBase can be [downloaded directly as a standalone app](https://github.com/pocketbase/pocketbase/releases), or it can be used as a Go framework/toolkit that lets you build your own custom, app-specific business logic and still end up with a single portable executable.
#### Installation
```
# go 1.19+
go get github.com/pocketbase/pocketbase
```
#### Example
```
package main
import (
"log"
"net/http"
"github.com/labstack/echo/v5"
"github.com/pocketbase/pocketbase"
"github.com/pocketbase/pocketbase/apis"
"github.com/pocketbase/pocketbase/core"
)
func main() {
app := pocketbase.New()
app.OnBeforeServe().Add(func(e *core.ServeEvent) error {
// add new "GET /hello" route to the app router (echo)
e.Router.AddRoute(echo.Route{
Method: http.MethodGet,
Path: "/hello",
Handler: func(c echo.Context) error {
return c.String(200, "Hello world!")
},
Middlewares: []echo.MiddlewareFunc{
apis.ActivityLogger(app),
},
})
return nil
})
if err := app.Start(); err != nil {
log.Fatal(err)
}
}
```
#### Running and building
Running/building the application is the same as for any other Go program, aka. just `go run` and `go build`.
**PocketBase embeds SQLite, but doesn't require CGO.**
If CGO is enabled (aka. `CGO_ENABLED=1`), it will use [mattn/go-sqlite3](https://pkg.go.dev/github.com/mattn/go-sqlite3) driver, otherwise - [modernc.org/sqlite](https://pkg.go.dev/modernc.org/sqlite).
Enable CGO only if you really need to squeeze the read/write query performance at the expense of complicating cross compilation.
To build the minimal standalone executable, like the prebuilt ones in the releases page, you can simply run `go build` inside the `examples/base` directory:
1. [Install Go 1.19+](https://go.dev/doc/install) (*if you haven't already*)
2. Clone/download the repo
3. Navigate to `examples/base`
4. Run `GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build`
(*<https://go.dev/doc/install/source#environment>*)
5. Start the created executable by running `./base serve`.
The supported build targets by the non-cgo driver at the moment are:
```
darwin   amd64
darwin   arm64
freebsd  amd64
freebsd  arm64
linux    386
linux    amd64
linux    arm
linux    arm64
linux    ppc64le
linux    riscv64
windows  amd64
windows  arm64
```
#### Testing
PocketBase comes with a mixed bag of unit and integration tests.
To run them, use the default `go test` command:
```
go test ./...
```
Check also the [Testing guide](http://pocketbase.io/docs/testing) to learn how to write your own custom application tests.
### Security
If you discover a security vulnerability within PocketBase, please send an e-mail to **support at pocketbase.io**.
All reports will be promptly addressed, and you'll be credited accordingly.
### Contributing
PocketBase is free and open source project licensed under the [MIT License](https://github.com/pocketbase/pocketbase/blob/v0.19.0/LICENSE.md).
You are free to do whatever you want with it, even offering it as a paid service.
You could help continuing its development by:
* [Contribute to the source code](https://github.com/pocketbase/pocketbase/blob/v0.19.0/CONTRIBUTING.md)
* [Suggest new features and report issues](https://github.com/pocketbase/pocketbase/issues)
* [Donate a small amount](https://pocketbase.io/support-us)
PRs for new OAuth2 providers, bug fixes, code optimizations and documentation improvements are more than welcome.
But please refrain from creating PRs for *new features* without previously discussing the implementation details.
PocketBase has a [roadmap](https://github.com/orgs/pocketbase/projects/2) and I try to work on issues in a specific order; such PRs often come out of nowhere and skew all initial planning with tedious back-and-forth communication.
Don't get upset if I close your PR, even if it is well executed and tested. This doesn't mean that it will never be merged.
Later we can always refer to it and/or take pieces of your implementation when the time comes to work on the issue (don't worry you'll be credited in the release notes).
*Please also note that PocketBase was initially created to serve as a new backend for my other open source project - [Presentator](https://presentator.io) (see [#183](https://github.com/presentator/presentator/issues/183)),
so all feature requests will be first aligned with what we need for Presentator v3.*
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [Variables](#pkg-variables)
* [type Config](#Config)
* [type PocketBase](#PocketBase)
* + [func New() *PocketBase](#New)
+ [func NewWithConfig(config Config) *PocketBase](#NewWithConfig)
* + [func (pb *PocketBase) Execute() error](#PocketBase.Execute)
+ [func (pb *PocketBase) Start() error](#PocketBase.Start)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
```
var Version = "(untracked)"
```
Version of PocketBase
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [Config](https://github.com/pocketbase/pocketbase/blob/v0.19.0/pocketbase.go#L44) [¶](#Config)
added in v0.7.2
```
type Config struct {
// optional default values for the console flags
DefaultDebug [bool](/builtin#bool)
DefaultDataDir [string](/builtin#string) // if not set, it will fallback to "./pb_data"
DefaultEncryptionEnv [string](/builtin#string)
// hide the default console server info on app startup
HideStartBanner [bool](/builtin#bool)
// optional DB configurations
DataMaxOpenConns [int](/builtin#int) // default to core.DefaultDataMaxOpenConns
DataMaxIdleConns [int](/builtin#int) // default to core.DefaultDataMaxIdleConns
LogsMaxOpenConns [int](/builtin#int) // default to core.DefaultLogsMaxOpenConns
LogsMaxIdleConns [int](/builtin#int) // default to core.DefaultLogsMaxIdleConns
}
```
Config is the PocketBase initialization config struct.
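For example, a minimal sketch that overrides a couple of the defaults via `NewWithConfig` (the field values here are arbitrary):

```
package main

import (
	"log"

	"github.com/pocketbase/pocketbase"
)

func main() {
	app := pocketbase.NewWithConfig(pocketbase.Config{
		DefaultDataDir:  "./custom_pb_data", // instead of "./pb_data"
		HideStartBanner: true,
	})

	if err := app.Start(); err != nil {
		log.Fatal(err)
	}
}
```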
####
type [PocketBase](https://github.com/pocketbase/pocketbase/blob/v0.19.0/pocketbase.go#L31) [¶](#PocketBase)
```
type PocketBase struct {
// RootCmd is the main console command
RootCmd *[cobra](/github.com/spf13/cobra).[Command](/github.com/spf13/cobra#Command)
// contains filtered or unexported fields
}
```
PocketBase defines a PocketBase app launcher.
It implements [core.App](/github.com/pocketbase/[email protected]/core#App) via embedding and all of the app interface methods could be accessed directly through the instance (eg. PocketBase.DataDir()).
####
func [New](https://github.com/pocketbase/pocketbase/blob/v0.19.0/pocketbase.go#L68) [¶](#New)
```
func New() *[PocketBase](#PocketBase)
```
New creates a new PocketBase instance with the default configuration.
Use [NewWithConfig()] if you want to provide a custom configuration.
Note that the application will not be initialized/bootstrapped yet,
aka. DB connections, migrations, app settings, etc. will not be accessible.
Everything will be initialized when [Start()] is executed.
If you want to initialize the application before calling [Start()],
then you'll have to manually call [Bootstrap()].
####
func [NewWithConfig](https://github.com/pocketbase/pocketbase/blob/v0.19.0/pocketbase.go#L83) [¶](#NewWithConfig)
added in v0.7.2
```
func NewWithConfig(config [Config](#Config)) *[PocketBase](#PocketBase)
```
NewWithConfig creates a new PocketBase instance with the provided config.
Note that the application will not be initialized/bootstrapped yet,
aka. DB connections, migrations, app settings, etc. will not be accessible.
Everything will be initialized when [Start()] is executed.
If you want to initialize the application before calling [Start()],
then you'll have to manually call [Bootstrap()].
####
func (*PocketBase) [Execute](https://github.com/pocketbase/pocketbase/blob/v0.19.0/pocketbase.go#L145) [¶](#PocketBase.Execute)
```
func (pb *[PocketBase](#PocketBase)) Execute() [error](/builtin#error)
```
Execute initializes the application (if not already) and executes the pb.RootCmd with graceful shutdown support.
This method differs from pb.Start() by not registering the default system commands!
####
func (*PocketBase) [Start](https://github.com/pocketbase/pocketbase/blob/v0.19.0/pocketbase.go#L132) [¶](#PocketBase.Start)
```
func (pb *[PocketBase](#PocketBase)) Start() [error](/builtin#error)
```
Start starts the application, aka. registers the default system commands (serve, migrate, version) and executes pb.RootCmd. |
fashiontale-rest-hooks | npm | JavaScript | Making dynamic sites performant, scalable, simple to build with any API design.
**[📖Read The Docs](https://resthooks.io)** | [🏁Getting Started](https://resthooks.io/docs/getting-started/installation) |
[🎮Demo](https://codesandbox.io/s/rest-hooks-hinux?fontsize=14&module=%2Fsrc%2Fpages%2FIssueList.tsx)
###
Simple TypeScript definition
```
class ArticleResource extends Resource {
readonly id: number | undefined = undefined;
readonly title: string = '';
readonly body: string = '';
pk() { return this.id; }
static urlRoot = '/articles/';
}
```
###
One line data hookup
```
const article = useResource(ArticleResource.detailShape(), { id });
return (
<>
<h2>{article.title}</h2>
<p>{article.body}</p>
</>
);
```
###
Mutation
```
const update = useFetcher(ArticleResource.updateShape());
return <ArticleForm onSubmit={data => update({ id }, data)} />;
```
###
And subscriptions
```
const price = useResource(PriceResource.detailShape(), { symbol });
useSubscription(PriceResource.detailShape(), { symbol });
return price.value;
```
###
...all typed ...fast ...and consistent
For the small price of 7kB gzipped. [🏁Get started now](https://resthooks.io/docs/getting-started/installation)
Features
---
* [x] Strong [Typescript](https://www.typescriptlang.org/) types
* [x] 🛌 React [Suspense](https://resthooks.io/docs/guides/loading-state) support
* [x] ⛓️ React [Concurrent mode](https://reactjs.org/docs/concurrent-mode-patterns.html) compatible
* [x] 🎣 Simple declarative API
* [x] 💰 Normalized response [configurable](https://resthooks.io/docs/guides/resource-lifetime) caching
* [x] 💥 Tiny bundle footprint
* [x] 🛑 Automatic overfetching elimination
* [x] ✨ Optimistic updates
* [x] 🧘 [Flexible](https://resthooks.io/docs/api/FetchShape) to fit any API design (one size fits all)
* [x] 🌳 Tree-shakable (only use what you need)
* [x] 🔁 [Subscriptions](https://resthooks.io/docs/api/useSubscription)
* [x] ♻️ Optional [redux integration](https://resthooks.io/docs/guides/redux)
* [x] 📙 [Storybook mocking](https://resthooks.io/docs/guides/storybook)
* [x] 📱 [React Native](https://facebook.github.io/react-native/) support
* [ ] 🚯 Pluggable garbage collection policy
###
Special thanks
Thanks to [@0xcaff](https://github.com/0xcaff), [@melissafzhang](https://github.com/melissafzhang)
and [@alexiswolfish](https://github.com/alexiswolfish) for their valuable feedback.
Readme
---
### Keywords
* rest
* react
* flux
* ajax
* networking
* suspense
* concurrent mode
* fetch
* hook
* typescript
* redux
* data fetching
* data cache
* api
* api call
* normalized cache
* swr |
someMTP | cran | R | Package ‘someMTP’
October 14, 2022
Type Package
Title Some Multiple Testing Procedures
Version 1.4.1.1
Date 2013-11-04
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Depends methods
Description It's a collection of functions for Multiplicity Correction and Multiple Testing.
License GPL (>= 2)
LazyLoad yes
NeedsCompilation no
Repository CRAN
Date/Publication 2021-03-01 07:10:10 UTC
R topics documented:
someMTP-package
*OrNULL-class
draw
fdrOrd/kfweOrd
lsd.object class
lsd.test
p.adjust.w
someMTP.object class
step.adj
someMTP-package Some Multiple Testing Procedures
Description
It is a collection of functions for Multiplicity Correction and Multiple Testing.
Details
Package: someMTP
Type: Package
Version: 1.2
Date: 2011-01-10
License: GPL (>= 2)
LazyLoad: yes
Author(s)
<NAME>
Maintainer: <<EMAIL>>
References
For weighted methods:
<NAME> (1997). Multiple hypotheses testing with weights. Scand. J. Statist. 24,
407-418.
<NAME> (2007). FDR- and FWE-controlling methods using data-driven weights. Journal of
Statistical Planning and Inference, 137,12, 3859-3870.
For LSD test:
<NAME>, <NAME> and <NAME> (1998). Multivariate test based on Left-Spherically Distributed
Linear Scores. The Annals of Statistics, Vol. 26, No. 5, 1972-1988
<NAME> (2011). A note on Left-Spherically Distributed Test with covariates, Statistics and
Probability Letters, Volume 81, Issue 6, June 2011, Pages 639-641
Examples
set.seed(13)
y <- matrix(rnorm(5000),5,1000) #create toy data
y[,1:100] <- y[,1:100]+3 #create toy data
p <- apply(y,2,function(y) t.test(y)$p.value) #compute p-values
M2 <- apply(y^2,2,mean) #compute ordering criterion
fdr <- p.adjust(p,method="BH") #(unweighted) procedure, fdr control
sum(fdr<.05)
fdr.w <- p.adjust.w(p,method="BH",w=M2) #weighted procedure, weighted fdr control
sum(fdr.w<.05)
fwer <- p.adjust(p,method="holm") #(unweighted) procedure, fwer control
sum(fwer<.05)
fwer.w <- p.adjust.w(p,method="BHfwe",w=M2) #weighted procedure, weighted fwer (=fwer) control
sum(fwer.w<.05)
plot(M2,-log10(p))
*OrNULL-class Class *OrNULL
Description
class * or Null
Objects from the Class
A virtual Class: No objects may be created from it.
Methods
No methods defined with class "*OrNULL" in the signature.
Examples
showClass("callOrNULL")
draw Plots results of fdrOrd()
Description
Plots results of fdrOrd()
Usage
draw(object, what = c("all", "ordVsP", "stepVsR"), pdfName = NULL)
Arguments
object a someMTP.object resulting from fdrOrd()
what what to plot; "all" is the default
pdfName it is the pdf filename where the plot will be saved. If pdfName is NULL (the default),
the plot will be shown in a window.
Value
No value is returned
Author(s)
<NAME>
See Also
See Also fdrOrd.
Examples
set.seed(17)
x=matrix(rnorm(60),3,20)
x[,1:10]=x[,1:10]+2 ##variables 1:10 have tests under H1
ts=apply(x,2,function(x) t.test(x)$statistic)
ps=apply(x,2,function(x) t.test(x)$p.value)
m2=apply(x^2,2,mean)
pOrd <- fdrOrd(ps,q=.05,ord=m2)
draw(pOrd)
fdrOrd/kfweOrd Controlling the False Discovery Rate and the Generalized FWER
in ordered Tests
Description
Ordinal procedure controlling the FDR and the Generalized FWER
Usage
fdrOrd(p, q = .01, ord = NULL, GD=FALSE)
kfweOrd(p, k = 1, alpha = 0.01, ord = NULL, alpha.prime = alpha,
J = qnbinom(alpha, k, alpha.prime), GD = FALSE)
Arguments
p vector of p-values
ord Values on the basis of which the procedure selects the hypotheses (following
decreasing order). The vector must have the same length as p. If NULL, the natural
ordering is considered.
q average FDR level
alpha global significance level
k number of allowed errors in kFWE controls
J number of allowed jumps before stopping
alpha.prime univariate alpha for single step Guo and Romano procedure
GD Logical value. Should the correction for general dependence be applied?
Value
The function returns an object of class someMTP.object.
rej: a logical vector indicating whether the related hypotheses have been rejected.
p: the vector of p-values used in the call
ord: The vector used to sort the p-values (decreasing).
MTP: "fdrOrd" or "kfweOrd"
GD: A logical value indicating whether the correction for General Dependence has been
used or not.
q: The level of controlled FDR.
alpha: The level of controlled k-FWER
alphaprime: The significance level of individual tests
k: Number of allowed Errors
J: Number of allowed Jumps
Author(s)
<NAME> and <NAME>
References
<NAME>, <NAME> (2011). k-FWER Control without p-value Adjustment, with Application to
Detection of Genetic Determinants of Multiple Sclerosis in Italian Twins. Biometrics.
<NAME>, <NAME> (2013). FDR Control with Pseudo-Gatekeeping Based on a Possibly Data
Driven Order of the Hypotheses. Biometrics.
See Also
See also draw
Examples
set.seed(17)
x=matrix(rnorm(60),3,20)
x[,1:10]=x[,1:10]+2 ##variables 1:10 have tests under H1
ts=apply(x,2,function(x) t.test(x)$statistic)
ps=apply(x,2,function(x) t.test(x)$p.value) #compute p-values
m2=apply(x^2,2,mean) #compute ordering criterion
pOrd <- fdrOrd(ps,q=.05,ord=m2) #ordinal Procedure
pOrd
draw(pOrd)
sum(p.adjust(ps,method="BH")<=.05) #rejections with BH
kOrd <- kfweOrd(ps,k=5,ord=m2)#ordinal procedure
kOrd
kOrdGD <- kfweOrd(ps,k=5,ord=m2,GD=TRUE)#ord. proc. (any dependence)
kOrdGD
lsd.object class Class "lsd.object" for storing the result of the function lsd
Description
The class lsd.object is the output of a call to lsd.test
Slots
F : the test statistic
df : the degrees of freedom of F
globalP: the associated p-value
D: the matrix used in the test (it describes the influence of the columns of resp on the test statistic)
call: The matched call to lsd.
MTP: The procedure used ("fdrOrd", "kfweOrd" or others).
Methods
p.value (lsd.object): Extracts the p-values.
show lsd.object: Prints the test results: p-value, test statistic, expected value of the test statistic
under the null hypothesis, standard deviation of the test statistic under the null hypothesis, and
number of covariates tested.
summary lsd.object: Prints the test results: p-value, test statistic, expected value of the test statistic
under the null hypothesis, standard deviation of the test statistic under the null hypothesis, and
number of covariates tested.
weights lsd.object: diagonal of matrix D used in the test (i.e. the influence of the columns of resp
on the test statistic)
Author(s)
<NAME>: <<EMAIL>>
See Also
lsd
Examples
# Simple examples with random data here
set.seed(1)
#Standard multivariate LSD test for one sample case
X=matrix(rnorm(50),5,10)+5
res <- lsd.test(resp=X,alternative=~1)
print(res)
p.value(res)
summary(res,showD=TRUE)
lsd.test Multivariate Left Spherically Distributed (LSD) linear scores test.
Description
It performs the multivariate Left Spherically Distributed linear scores test of Läuter et al. (The
Annals of Statistics, 1998) (see also details below).
Usage
lsd.test(resp, alternative = 1, null = NULL, D = NULL, data=NULL)
Arguments
resp The response vector of the regression model. May be supplied as a vector or
as a formula object. In the latter case, the right hand side of Y is passed on to
alternative if that argument is missing, or otherwise to null.
alternative The part of the design matrix corresponding to the alternative hypothesis. The
covariates of the null model do not have to be supplied again here. May be
given as a half formula object (e.g. ~a+b). In that case the intercept is always
suppressed.
null The part of the design matrix corresponding to the null hypothesis. May be
given as a design matrix or as a half formula object (e.g. ~a+b). The default for
null is ~1, i.e. only an intercept. This intercept may be suppressed, if desired, with
null = ~0.
data Only used when Y, X, or Z is given in formula form. An optional data frame,
list or environment containing the variables used in the formulae. If the variables
in a formula are not found in data, the variables are taken from
environment(formula), typically the environment from which lsd.test is called.
D a q x p matrix, or a function with arguments resp and null returning the q x
p transformation matrix. When D = NULL, then D = diag(t(resp)%*%IP0%*%resp)
with IP0 = diag(n) - null%*%solve(t(null)%*%null)%*%t(null)
Value
The function returns an object of class lsd.object.
F the test statistic
df the degrees of freedom of F
p the associated p-value
D the matrix used in the test (it provides information on the influence of the columns of
resp on the test)
call: The matched call to lsd.test.
Author(s)
<NAME>
References
<NAME>, <NAME> and <NAME> (1998) Multivariate test based on Left-Spherically Distributed
Linear Scores. The Annals of Statistics, Vol. 26, No. 5, 1972-1988
<NAME> (2011). A note on Left-Spherically Distributed Test with covariates, Statistics and
Probability Letters, Volume 81, Issue 6, June 2011, Pages 639-641
Examples
set.seed(1)
#Standard multivariate LSD test for one sample case
X=matrix(rnorm(50),5,10)+2
lsd.test(resp=X,alternative=~1)
#Standard multivariate LSD test for two sample case
X2=X+matrix(c(0,0,1,1,1),5,10)*10
lsd.test(resp=X2,null=~1,alternative=c(0,0,1,1,1))
#General multivariate LSD test for linear predictor with covariates
lsd.test(resp=X2,null=cbind(rep(1,5),c(0,0,1,1,1)),alternative=1:5)
p.adjust.w Adjust P-values for Multiple Comparisons
Description
Given a set of p-values, returns p-values adjusted using one of several (weighted) methods. It
extends the method of p.adjust {stats}.
Usage
p.adjust.w(p, method = c("bonferroni","holm","BHfwe","BH","BY"), n = length(p),w=NULL)
Arguments
p vector of p-values (possibly with NAs)
method correction method
n number of comparisons, must be at least length(p); only set this (to non-default)
when you know what you are doing!
w weights to be used. p.adjust.w(..., w = rep(1,length(p))) produces the same
results as p.adjust(...) (i.e. the unweighted counterpart).
Value
A vector of corrected p-values (same length as p) having two attributes: attributes(...)$w is the
vector of used weights and attributes(...)$method is the method used.
Author(s)
<NAME>
References
<NAME> (1997). Multiple hypotheses testing with weights. Scand. J. Statist. 24,
407-418.
<NAME> (2007). FDR- and FWE-controlling methods using data-driven weights. Journal of
Statistical Planning and Inference, 137,12, 3859-3870.
See Also
p.adjust
Examples
set.seed(13)
y <- matrix(rnorm(5000),5,1000) #create toy data
y[,1:100] <- y[,1:100]+3 #create toy data
p <- apply(y,2,function(y) t.test(y)$p.value) #compute p-values
M2 <- apply(y^2,2,mean) #compute ordering criterion
fdr <- p.adjust(p,method="BH") #(unweighted) procedure, fdr control
sum(fdr<.05)
fdr.w <- p.adjust.w(p,method="BH",w=M2) #weighted procedure, weighted fdr control
sum(fdr.w<.05)
fwer <- p.adjust(p,method="holm") #(unweighted) procedure, fwer control
sum(fwer<.05)
fwer.w <- p.adjust.w(p,method="BHfwe",w=M2) #weighted procedure, weighted fwer (=fwer) control
sum(fwer.w<.05)
plot(M2,-log10(p))
someMTP.object class Class "someMTP.object" for storing the result of the function fdrOrd
Description
The class someMTP.object is the output of a call to fdrOrd. It also stores the information needed
for related plots.
Slots
rej: a logical vector indicating whether the related hypotheses have been rejected.
p: The vector of (raw) p-values used in the procedure.
ord: The vector used to sort the p-values (decreasing).
idOrd: The vector of indices used in sorting.
MTP: The type of procedure used.
GD: A logical value indicating whether the correction for General Dependence has been used or not.
q: The level of controlled FDR when MTP=="fdrOrd".
k: The number of allowed false rejections when MTP=="kfweOrd"
J: The number of allowed Jumps when MTP=="kfweOrd"
alpha: The significance level when MTP=="kfweOrd"
alphaprime: The significance level of individual tests.
call: The call that generated the object.
Methods
show someMTP.object: Prints the test results.
summary someMTP.object: Prints the test results (as show).
draw someMTP.object: Plots results; what = c("all","ordVsP", "stepVsR")
sort signature(x = "someMTP.object"): Sorts the p-values to decreasing order of ord.
length signature(x = "someMTP.object"): The number of tests performed.
names signature(x = "someMTP.object"): Extracts the row names of the results matrix.
names<- signature(x = "someMTP.object"): Changes the row names of the results matrix. Duplicate
names are not allowed, but see alias.
Author(s)
<NAME>: <<EMAIL>>
See Also
someMTP.object
Examples
# Simple examples with random data
set.seed(17)
x=matrix(rnorm(60),3,20)
x[,1:10]=x[,1:10]+2 ##variables 1:10 have tests under H1
ts=apply(x,2,function(x) t.test(x)$statistic)
ps=apply(x,2,function(x) t.test(x)$p.value)
m2=apply(x^2,2,mean)
pOrd <- fdrOrd(ps,q=.05,ord=m2)
pOrd
length(pOrd)
names(pOrd) <- paste("V",1:20,sep="")
names(pOrd)
step.adj Multiplicity correction for Stepwise Selected models
Description
Corrects the p-value due to model selection. It works with models of class glm selected with the
function step {stats}.
Usage
step.adj(object, MC = 1000, scope = NULL, scale = 0,
direction = c("both", "backward", "forward"),
trace = 0, keep = NULL, steps = 1000, k = 2)
Arguments
object object of class glm. Note that the formula has to be written using variable names, like
y~var1+var2+var3, data is a data.frame (see example below), offset is not
yet implemented, avoid its use. glm(formula, data, family=gaussian) produces
the same result as lm(formula, data), so a linear model can always be
fitted
MC number of random permutations for the dependent variable
scope as in function step
scale as in function step
direction as in function step
trace as in function step
keep as in function step
steps as in function step
k as in function step, other arguments are not implemented yet.
Details
It applies the anova function (stats package) to the model selected by function step vs the null model
containing only the intercept, and it corrects for multiplicity. For lm models and gaussian glm models
it computes an F-test; for other models it uses a Chi-squared test (see also the anova.glm and anova.lm
help).
Value
An anova table with an extra column reporting the corrected p-value
Author(s)
<NAME> and <NAME>
References
<NAME>, <NAME>, <NAME> (2010). Adjusting stepwise p-values in generalized linear models.
Communications in Statistics - Theory and Methods.
See Also
glm, anova
Examples
set.seed(17)
y=rnorm(10)
x=matrix(rnorm(50),10,5)
#define a data.frame to be used in the glm function
DATA=data.frame(y,x)
#fit the model on a toy dataset
mod=glm(y~X1+X2+X3+X4+X5,data=DATA)
#select the model using function step
mod.step=step(mod, trace=0)
#test the selected model vs the null model
anova(glm(y~1, data=DATA),mod.step,test="F")
#step.adj do the same, but it also provides multiplicity control
step.adj(mod,MC=101, trace=0) |
bootwar | cran | R | Package ‘bootwar’
October 1, 2023
Title Nonparametric Bootstrap Test with Pooled Resampling Card Game
Version 0.2.1
Description The card game War is simple in its rules but can be lengthy. In
another domain, the nonparametric bootstrap test with pooled resampling
(nbpr) methods, as outlined in Dwivedi, Mallawaarachchi, and Alvarado (2017) <doi:10.1002/sim.7263>,
is optimal for comparing paired or unpaired means in non-normal data,
especially for small sample size studies. However, many researchers are
unfamiliar with these methods. The 'bootwar' package bridges this gap by
enabling users to grasp the concepts of nbpr via Boot War, a variation of the
card game War designed for small samples. The package provides functions like
score_keeper() and play_round() to streamline gameplay and scoring. Once a
predetermined number of rounds concludes, users can employ the analyze_game()
function to derive game results. This function leverages the 'npboottprm'
package's nonparboot() to report nbpr results and, for comparative analysis,
also reports results from the 'stats' package's t.test() function. Additionally,
'bootwar' features an interactive 'shiny' web application, bootwar(). This
offers a user-centric interface to experience Boot War, enhancing understanding
of nbpr methods across various distributions, sample sizes, number of bootstrap
resamples, and confidence intervals.
License MIT + file LICENSE
Encoding UTF-8
RoxygenNote 7.2.3
URL https://github.com/mightymetrika/bootwar
BugReports https://github.com/mightymetrika/bootwar/issues
Imports ggplot2, mmcards, npboottprm, shiny, shinyjs, shinythemes
Depends R (>= 2.10)
LazyData true
Suggests knitr, rmarkdown, testthat (>= 3.0.0)
Config/testthat/edition 3
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre],
mightymetrika, LLC [cph, fnd]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-10-01 16:30:10 UTC
R topics documented:
analyze_game
bootwar
deck
play_round
score_keeper
analyze_game Analyze Game Results and Determine Winner
Description
This function analyzes the results of the game using both nonparametric bootstrap with pooled
resampling and classical t-tests. It then determines the winner based on the bootstrap results and
effect size.
Usage
analyze_game(plyr_vv, comp_vv, mode = "t", conf.level = 0.95, ...)
Arguments
plyr_vv A numeric vector storing the values of the cards dealt to the player.
comp_vv A numeric vector storing the values of the cards dealt to the computer.
mode A character string indicating the type of test. Valid options are "t" for independent t-test and "pt" for paired t-test. Default is "t".
conf.level A confidence level for npboottprm::nonparboot, stats::t.test. The confidence level is also used to set the alpha level to alpha = 1 - conf.level
... Additional arguments passed to the npboottprm::nonparboot function.
Value
A list containing:
• bootstrap_results: A list containing results from the bootstrap test.
• classical_results: A list containing results from the classical t-test.
• winner: A character string indicating the winner ("Player Wins", "Computer Wins", or "Draw").
Examples
# Analyze a sample game
plyr_values <- c(4, 3, 2, 1)
comp_values <- c(1, 2, 3, 4)
game_results <- analyze_game(plyr_values, comp_values, nboot = 1000,
mode = "t", seed = 150)
bootwar Bootwar Shiny App
Description
Launches a Shiny application for the Bootwar card game. The app allows users to play a card game
where they can analyze the game results using nonparametric bootstrap test with pooled resampling
methods.
Usage
bootwar()
Details
The Bootwar card game is a bootstrap variation of the card game War. The Bootwar application
has options to select different modes (’t’ for independent t-test and ’pt’ for paired t-test) and decks.
Players can use a standard 52 card deck and they can also input a custom anonymous function to
generate a deck. The app will let users deal cards, play the game, and then score and analyze results
using nonparametric bootstrap test with pooled resampling methods. The game is designed to help
users gain greater intuition on nonparametric bootstrap test with pooled resampling methods; as
such, players are encouraged to experiment with different confidence levels, number of rounds,
number of bootstrap resamples, and custom decks.
Value
A Shiny application object. Running this function will launch the Shiny app in the user’s default
web browser.
Examples
if(interactive()){
bootwar()
}
deck Deck of Cards
Description
A 52 card deck of playing cards with suit ranking.
Usage
deck
Format
deck:
A data frame with 52 rows and 4 columns:
rank A factor representing card rank taking values 2 - A
suit A card suit with ranked order Club (C), Diamond (D), Heart (H), and Spade (S)
card A card
value A card value ranging from 2.00 (2C) to 14.75 (AS)
Source
Standard Deck of Playing Cards
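The manual does not state how value is constructed, but the documented range, 2.00 for 2C up to 14.75 for AS, is consistent with the rank value (2-14, Ace high) plus a quarter-point suit increment (C = 0.00, D = 0.25, H = 0.50, S = 0.75). A hypothetical reconstruction, not the package's code:

```r
# Hypothetical reconstruction of the deck's value column
ranks <- c(2:10, "J", "Q", "K", "A")
suits <- c("C", "D", "H", "S")
deck <- expand.grid(rank = ranks, suit = suits)
deck$card  <- paste0(deck$rank, deck$suit)
deck$value <- rep(2:14, times = 4) + rep(c(0, 0.25, 0.5, 0.75), each = 13)
range(deck$value)   # 2.00 (2C) to 14.75 (AS), matching the documented range
```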
play_round Play a Round of the Card Game
Description
This function simulates a single round of the card game, where both the computer and the player
are dealt a card. The function returns the updated state of the game after the round.
Usage
play_round(
cdeck,
plyr_cv,
plyr_vv,
plyr_ic = NULL,
comp_cv,
comp_vv,
comp_ic = NULL
)
Arguments
cdeck A dataframe representing the current deck of cards.
plyr_cv A character vector storing the cards dealt to the player so far.
plyr_vv A numeric vector storing the values of the cards dealt to the player so far.
plyr_ic A character vector storing the image cards dealt to the player. Default is NULL.
comp_cv A character vector storing the cards dealt to the computer so far.
comp_vv A numeric vector storing the values of the cards dealt to the computer so far.
comp_ic A character vector storing the image cards dealt to the computer. Default is
NULL.
Value
A list containing:
• updated_deck: A dataframe representing the updated deck of cards after the round.
• plyr_cv: Updated character vector of cards dealt to the player.
• plyr_vv: Updated numeric vector of values of cards dealt to the player.
• plyr_ic: Updated character vector of image cards dealt to the player.
• comp_cv: Updated character vector of cards dealt to the computer.
• comp_vv: Updated numeric vector of values of cards dealt to the computer.
• comp_ic: Updated character vector of image cards dealt to the computer.
Examples
# Simulate a round of the game with a sample deck
deck <- mmcards::shuffle_deck()
plyr_cards <- character(0)
plyr_values <- numeric(0)
comp_cards <- character(0)
comp_values <- numeric(0)
round_result <- play_round(deck, plyr_cv = plyr_cards, plyr_vv = plyr_values,
comp_cv = comp_cards, comp_vv = comp_values)
score_keeper Calculate Scores and Effect Size
Description
This function computes the sum and mean of the player’s and computer’s values and calculates the
effect size based on the given mode (t or pt).
Usage
score_keeper(player_values, comp_values, mode)
Arguments
player_values A numeric vector representing the values of the player’s cards.
comp_values A numeric vector representing the values of the computer’s cards.
mode A character string representing the mode of the game, either ’t’ for independent
t-test or ’pt’ for paired t-test.
Value
A list containing:
• player_sum: Sum of player’s values.
• player_mean: Mean of player’s values.
• comp_sum: Sum of computer’s values.
• comp_mean: Mean of computer’s values.
• effect_size: Calculated effect size based on the given mode (see the sketch below).
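The manual does not spell out the effect-size formula. A plausible sketch, an assumption rather than bootwar's actual code: Cohen's d with pooled standard deviation for mode "t", and the mean difference scaled by the standard deviation of the differences for mode "pt":

```r
# Hypothetical effect-size computation (assumed, not bootwar's actual code)
effect_size <- function(player_values, comp_values, mode = c("t", "pt")) {
  mode <- match.arg(mode)
  if (mode == "t") {
    n1 <- length(player_values); n2 <- length(comp_values)
    sp <- sqrt(((n1 - 1) * var(player_values) +
                (n2 - 1) * var(comp_values)) / (n1 + n2 - 2))
    (mean(player_values) - mean(comp_values)) / sp   # Cohen's d
  } else {
    d <- player_values - comp_values
    mean(d) / sd(d)                                  # paired effect size
  }
}
effect_size(c(2.5, 3.0, 4.5), c(3.5, 2.0, 4.0), mode = "t")
```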
Examples
# Calculate scores for a simple game
player_vals <- c(2.5, 3.0, 4.5)
comp_vals <- c(3.5, 2.0, 4.0)
scores <- score_keeper(player_vals, comp_vals, mode = "t") |
django-frontend-static | readthedoc | Markdown | Django Frontend Static — django-frontend-static 1.2.0 documentation
Django Frontend Static
===
A basic Django application to install often used static files.
With the convenience of an installable Django application, easily add some of the most widely used static files.
There are no templates included. If you want a skeleton application built on HTML5 Boilerplate and Twitter Bootstrap then checkout [django-frontend-skeleton](https://github.com/jonfaustman/django-frontend-skeleton). If you want a light-weight template built on HTML5 Boilerplate without any static files then check out [django-frontend-template](https://github.com/jonfaustman/django-frontend-template).
| Package: | <https://pypi.python.org/pypi/django-frontend-static> |
| Source: | <https://github.com/jonfaustman/django-frontend-static> |
Inactive
---
Django Frontend Static is now inactive. Please use [django-frontend](https://github.com/jonfaustman/django-frontend).
While continuing to use Django Frontend Static, be sure to force upgrade to avoid namespacing problems.
[Read this article](http://jonfaustman.com/2013/08/07/django-frontend/) for more information.
Starring
---
* [HTML5 Boilerplate (based on 4.2.0)](https://github.com/h5bp/html5-boilerplate)
* [Modernizr (2.6.2)](https://github.com/Modernizr/Modernizr)
* [jQuery (1.10.2) and (2.0.3)](https://github.com/jquery/jquery)
* [jQuery UI (1.10.3)](https://github.com/jquery/jquery-ui)
* [jQuery DataTables (1.9.4)](https://github.com/DataTables/DataTables)
* [jQuery Dynamic Formset (1.2)](https://code.google.com/p/django-dynamic-formset)
* [jQuery ScrollTo (1.4.6)](https://github.com/flesler/jquery.scrollTo)
* [jQuery Smooth Scroll (1.4.11)](https://github.com/kswedberg/jquery-smooth-scroll)
* [Twitter Bootstrap (3.0.0 RC2)](https://github.com/twbs/bootstrap)
* [iOS-Orientationchange-Fix](https://github.com/scottjehl/iOS-Orientationchange-Fix)
* [famfamfam’s Silk Icons](http://www.famfamfam.com/lab/icons/silk/)
Contents
---
* [Getting Started](getting_started.html)
+ [Install](getting_started.html#install)
* [Template tags](template_tags.html)
+ [djfrontend](template_tags.html#djfrontend)
* [Optional Settings](optional_settings.html)
+ [DJFRONTEND_STATIC_URL](optional_settings.html#djfrontend-static-url)
+ [DJFRONTEND_GA_SETDOMAINNAME](optional_settings.html#djfrontend-ga-setdomainname)
+ [DJFRONTEND_GA_SETALLOWLINKER](optional_settings.html#djfrontend-ga-setallowlinker)
* [License](license.html)
+ [Component Specific Licenses](license.html#component-specific-licenses)
* [Changelog](changelog.html)
+ [1.2.0](changelog.html#id1)
+ [1.1.2](changelog.html#id2)
+ [1.1.1](changelog.html#id3)
+ [1.1.0](changelog.html#id4)
+ [1.0.1](changelog.html#id5)
+ [1.0.0](changelog.html#id6)
+ [0.1.0](changelog.html#id7)
* [Road Map](road_map.html)
Changelog
===
1.2.0
---
* Marked as inactive - no more updates.
* Twitter Bootstrap updated to v3.0.0
* Added djfrontend_twbs_theme_css template tag
* Added bootstrap-theme.css and bootstrap-theme.min.css
* Added djfrontend_jquery_scrollto template tag
* Added jquery.scrollTo.js and jquery.scrollTo.min.js
* Removed djfrontend_twbs_glyphicons template tag
* Removed bootstrap-glyphicons.css
1.1.2
---
* Twitter Bootstrap updated to v3.0.0 RC2
1.1.1
---
* Fixed missing static files.
1.1.0
---
* jQuery updated to v1.10.2 and v2.0.3
* jQuery smooth-scroll updated to v.1.4.11
* Twitter Bootstrap (TWBS) updated to v3.0.0 RC1
* TWBS typeahead, glyphicons and bootstrap-responsive removed per TWBS v3.0.0 RC1
1.0.1
---
* Moved Silk icons out of recursive img dirs.
1.0.0
---
* There was some wide-sweeping, non-backwards compatible changes - read carefully!
* Packaged renamed to djfrontend. This will affect INSTALLED_APPS settings as well as the static location.
* Icons now included in the default setup.
* Template tags renamed to djfrontend.py.
* {% load djfrontend %} loads all template tags.
0.1.0
---
* Initial release
License
===
MIT License
Component Specific Licenses
---
* HTML5 Boilerplate: MIT License
* Modernizr: MIT License
* jQuery: MIT License
* jQuery UI: MIT License
* jQuery DataTables: Dual GPL v2.0 and BSD License
* jQuery Dynamic Formset: BSD New License
* jQuery ScrollTo: Dual MIT and GPL License
* jQuery Smooth Scroll: MIT License
* Twitter Bootstrap: Apache License, Version 2.0
* iOS-Orientationchange-Fix: MIT/GPL v2.0 License
* famfamfam Silk Icons: Creative Commons Attribution 3.0 License
Template tags
===
Use the included djfrontend template tags to suit your needs.
djfrontend
---
```
{% load djfrontend %}
```
### djfrontend_h5bp_html
**Not a direct part of django-frontend-static.**
Returns HTML tag according to chosen language - ‘en’ is the default.
```
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7" lang="en"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8" lang="en"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9" lang="en"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en"> <!--<![endif]-->
```
### djfrontend_h5bp_css
Returns HTML5 Boilerplate CSS file according to version number. The latest ‘4.2.0’ is included.
```
<link rel="stylesheet" href="/static/djfrontend/css/h5bp/4.2.0/h5bp.css">
```
### djfrontend_normalize
Returns Normalize CSS file according to version number. The latest ‘1.1.1’ is included.
```
<link rel="stylesheet" href="/static/djfrontend/css/normalize/1.1.1/normalize.css">
```
### djfrontend_modernizr
Returns Modernizr JavaScript file according to version number. TEMPLATE_DEBUG returns full file, otherwise returns minified file from cdnjs with local fallback. The latest ‘2.6.2’ is included.
```
<script src="/static/djfrontend/js/modernizr/2.6.2/modernizr.js"></script>
```
Or
```
<script src="//cdnjs.cloudflare.com/ajax/libs/modernizr/2.6.2/modernizr.min.js"></script>
<script>window.Modernizr || document.write('<script src="/static/djfrontend/js/modernizr/2.6.2/modernizr.min.js"><\/script>')</script>
```
### djfrontend_jquery
Returns jQuery JavaScript file according to version number. TEMPLATE_DEBUG returns full file, otherwise returns minified file from Google CDN with local fallback. The latest ‘1.10.2’ and ‘2.0.3’ is included.
```
<script src="/static/djfrontend/js/jquery/1.10.2/jquery.js"></script>
```
Or
```
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="/static/djfrontend/js/jquery/1.10.2/jquery.min.js"><\/script>')</script>
```
### djfrontend_jqueryui
Returns jQuery UI plugin JavaScript file according to version number. TEMPLATE_DEBUG returns full file, otherwise returns minified file from Google CDN with local fallback. The latest ‘1.10.3’ is included.
```
<script src="/static/djfrontend/js/jquery/jqueryui/1.10.3/jquery-ui.js"></script>
```
Or
```
<script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"></script>
<script>window.jQueryUI || document.write('<script src="/static/djfrontend/js/jquery/jqueryui/1.10.3/jquery-ui.min.js"><\/script>')</script>
```
### djfrontend_jquery_datatables
Returns the jQuery DataTables plugin JavaScript file according to version number. TEMPLATE_DEBUG returns full file, otherwise returns minified file from cdnjs with local fallback.
```
<script src="/static/djfrontend/js/jquery/jquery.dataTables/1.9.4/jquery.dataTables.js"></script>
```
Or
```
<script src="//cdnjs.cloudflare.com/ajax/libs/datatables/1.9.4/jquery.dataTables.min.js"></script>
<script>window.jQuery.fn.DataTable || document.write('<script src="/static/djfrontend/js/jquery/jquery.dataTables/1.9.4/jquery.dataTables.min.js"><\/script>')</script>
```
### djfrontend_jquery_datatables_css
Returns the jQuery DataTables CSS file according to version number. The latest ‘1.9.4’ is included.
```
<link rel="stylesheet" href="/static/djfrontend/css/jquery/jquery.dataTables/1.9.4/jquery.dataTables.css">
```
### djfrontend_jquery_formset
Returns the jQuery Dynamic Formset plugin JavaScript file according to version number. TEMPLATE_DEBUG returns full file, otherwise returns minified file. The latest ‘1.2’ is included.
```
<script src="/static/djfrontend/js/jquery/jquery.formset/1.2/jquery.formset.js"></script>
```
Or
```
<script src="/static/djfrontend/js/jquery/jquery.formset/1.2/jquery.formset.min.js"></script>
```
### djfrontend_jquery_scrollto
Returns the jQuery ScrollTo plugin JavaScript file according to version number. TEMPLATE_DEBUG returns full file, otherwise returns minified file.
```
<script src="/static/djfrontend/js/jquery/jquery.scrollTo/1.4.6/jquery.scrollTo.js"></script>
```
Or
```
<script src="/static/djfrontend/js/jquery/jquery.scrollTo/1.4.6/jquery.scrollTo.min.js"></script>
```
### djfrontend_jquery_smoothscroll
Returns the jQuery Smooth Scroll plugin JavaScript file according to version number. TEMPLATE_DEBUG returns full file, otherwise returns minified file. The latest ‘1.4.11’ is included.
```
<script src="/static/djfrontend/js/jquery/jquery.smooth-scroll/1.4.11/jquery.smooth-scroll.js"></script>
```
Or
```
<script src="/static/djfrontend/js/jquery/jquery.smooth-scroll/1.4.11/jquery.smooth-scroll.min.js"></script>
```
### djfrontend_twbs_css
Returns Twitter Bootstrap CSS file according to version number. TEMPLATE_DEBUG returns full file, otherwise returns minified file. The latest ‘3.0.0’ is included.
```
<link rel="stylesheet" href="/static/djfrontend/css/twbs/3.0.0/bootstrap.css">
```
Or
```
<link rel="stylesheet" href="/static/djfrontend/css/twbs/3.0.0/bootstrap.min.css">
```
### djfrontend_twbs_theme_css
Returns Twitter Bootstrap Theme CSS file according to version number.
```
<link rel="stylesheet" href="/static/djfrontend/css/twbs/3.0.0/bootstrap-theme.css">
```
Or
```
<link rel="stylesheet" href="/static/djfrontend/css/twbs/3.0.0/bootstrap-theme.min.css">
```
### djfrontend_twbs_js
Returns Twitter Bootstrap (3.0.0) JavaScript file(s). all returns concatenated file; full file for TEMPLATE_DEBUG, minified otherwise. Other choices include:
* affix
* alert
* button
* carousel
* collapse
* dropdown
* modal
* popover (adds tooltip if not included)
* scrollspy
* tab
* tooltip
* transition
Individual files are not minified.
{% bootstrap_js all %} would render
```
<script src="/static/djfrontend/js/twbs/3.0.0/bootstrap.js"></script>
```
Or
```
<script src="/static/djfrontend/js/twbs/3.0.0/bootstrap.min.js"></script>
```
{% bootstrap_js alert affix %} would render
```
<script src="/static/djfrontend/js/twbs/3.0.0/bootstrap-affix.js"></script>
<script src="/static/djfrontend/js/twbs/3.0.0/bootstrap-alert.js"></script>
```
Shout out to <NAME> and his [Django Bootstrapped](https://github.com/rbrady/django-bootstrapped) for inspiration and initial code.
### djfrontend_ga
Returns Google Analytics asynchronous snippet if TEMPLATE_DEBUG is not set. Use DJFRONTEND_GA_SETDOMAINNAME to set domain for multiple, or cross-domain tracking. Set DJFRONTEND_GA_SETALLOWLINKER to use _setAllowLinker method on target site for cross-domain tracking.
```
<script>var _gaq=[["_setAccount","UA-XXXXX-X"],["_trackPageview"]];(function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];g.src="//www.google-analytics.com/ga.js";s.parentNode.insertBefore(g,s)}(document,"script"));</script>
```
Or
```
<script>var _gaq=[["_setAccount","UA-XXXXX-X"],["_setDomainName","%s"],["_setAllowLinker", true],["_trackPageview"]];(function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];g.src="//www.google-analytics.com/ga.js";s.parentNode.insertBefore(g,s)}(document,"script"));</script>
```
Or
```
<script>var _gaq=[["_setAccount","UA-XXXXX-X"],["_setDomainName","%s"],["_trackPageview"]];(function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];g.src="//www.google-analytics.com/ga.js";s.parentNode.insertBefore(g,s)}(document,"script"));</script>
```
### djfrontend_ios_fix
Returns the iOS-Orientationchange-Fix.
```
<script>/*! A fix for the iOS orientationchange zoom bug. Script by @scottjehl, rebound by @wilto.MIT / GPLv2 License.*/(function(a){function m(){d.setAttribute("content",g),h=!0}function n(){d.setAttribute("content",f),h=!1}function o(b){l=b.accelerationIncludingGravity,i=Math.abs(l.x),j=Math.abs(l.y),k=Math.abs(l.z),(!a.orientation||a.orientation===180)&&(i>7||(k>6&&j<8||k<8&&j>6)&&i>5)?h&&n():h||m()}var b=navigator.userAgent;if(!(/iPhone|iPad|iPod/.test(navigator.platform)&&/OS [1-5]_[0-9_]* like Mac OS X/i.test(b)&&b.indexOf("AppleWebKit")>-1))return;var c=a.document;if(!c.querySelector)return;var d=c.querySelector("meta[name=viewport]"),e=d&&d.getAttribute("content"),f=e+",maximum-scale=1",g=e+",maximum-scale=10",h=!0,i,j,k,l;if(!d)return;a.addEventListener("orientationchange",m,!1),a.addEventListener("devicemotion",o,!1)})(this);</script>
```
Road Map
===
* None. This project is now inactive. Use django-frontend instead.
Getting Started
===
Install
---
1. install django-frontend-static (pip install, add to your requirements files, etc.)
2. add ‘djfrontend’ to your INSTALLED_APPS
Optional Settings
===
There are a few optional settings for customization.
DJFRONTEND_STATIC_URL
---
Set a dedicated static server or CDN for serving static files.
DJFRONTEND_GA_SETDOMAINNAME
---
Set domain for multiple, or cross-domain tracking with Google Analytics.
DJFRONTEND_GA_SETALLOWLINKER
---
To use the _setAllowLinker method on the target site for cross-domain tracking with Google Analytics, set to ‘True’ to enable. Requires DJFRONTEND_GA_SETDOMAINNAME to be set.
funData | cran | R | Package ‘funData’
October 13, 2022
Type Package
Title An S4 Class for Functional Data
Version 1.3-8
Date 2021-10-17
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Description S4 classes for univariate and multivariate functional data with
utility functions. See <doi:10.18637/jss.v093.i05> for a detailed description
of the package functionalities and its interplay with the MFPCA package for
multivariate functional principal component analysis
<https://CRAN.R-project.org/package=MFPCA>.
URL https://github.com/ClaraHapp/funData
License GPL-2
Depends methods
Imports abind, fields, foreach, graphics, grDevices, stats
Suggests covr, fda, ggplot2 (>= 3.0.0), gridExtra, reshape2, zoo,
testthat (>= 2.0.0)
RoxygenNote 7.1.1
Encoding UTF-8
Collate 'funDataClass.R' 'coerce.R' 'funDataMethods.R' 'get_set.R'
'names.R' 'plotMethods.R' 'simulation.R' 'str.R' 'subset.R'
'summary.R' 'zzz.R'
NeedsCompilation no
Repository CRAN
Date/Publication 2021-10-17 16:40:02 UTC
R topics documented:
.intWeights
.scalarProduct
addError
approxNA
Arith.funData
as.data.frame.funData
as.funData
as.irregFunData
as.multiFunData
autoplot.funData
autoplot.irregFunData
autoplot.multiFunData
dimSupp
eFun
eVal
extractObs
fd2funData
flipFuns
funData-class
funData2fd
ggplot
integrate
irregFunData-class
Math.funData
meanFunction
multiFunData-class
nObs
nObsPoints
norm
plot.funData
plot.irregFunData
plot.multiFunData
scalarProduct
simFunData
simMultiFunData
sparsify
tensorProduct
.intWeights Calculate weights for numerical integration
Description
This function calculates the weights for numerical integration
Usage
.intWeights(argvals, method = "trapezoidal")
Arguments
argvals A numeric vector of x-Values
method A character string, giving the numerical integration method to use (default is
trapezoidal, alternatively use midpoint)
Value
A vector of integration weights
See Also
integrate
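For a grid x_1 < … < x_n, the trapezoidal rule gives weight (x_2 − x_1)/2 to the first point, (x_{i+1} − x_{i−1})/2 to interior points, and (x_n − x_{n−1})/2 to the last point. Since the function is internal (not exported), here is a minimal sketch of the idea rather than the package's code:

```r
# Trapezoidal integration weights for a (possibly non-equidistant) grid
trapWeights <- function(argvals) {
  n <- length(argvals)
  c(argvals[2] - argvals[1],              # first point: half the first interval
    argvals[3:n] - argvals[1:(n - 2)],    # interior: half of both intervals
    argvals[n] - argvals[n - 1]) / 2      # last point: half the last interval
}
x <- seq(0, 1, 0.01)
sum(trapWeights(x) * sin(pi * x))   # ~ 2/pi, the integral of sin(pi*t) on [0,1]
```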
.scalarProduct Generic method for scalar products, based on integrate
Description
Generic method for scalar products, based on integrate
Usage
.scalarProduct(object1, object2, ...)
Arguments
object1, object2
Generic objects
... Further objects passed to integrate
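Conceptually, the scalar product of two functions is the integral of their pointwise product. With the package's exported integrate and Arith methods (both documented in this manual), this can be sketched as:

```r
library(funData)
x <- seq(0, 1, 0.01)
f <- funData(argvals = x, X = matrix(sqrt(2) * sin(2 * pi * x), nrow = 1))
g <- funData(argvals = x, X = matrix(sqrt(2) * cos(2 * pi * x), nrow = 1))
integrate(f * g)   # ~ 0: the two functions are orthogonal
integrate(f * f)   # ~ 1: f has unit norm on [0, 1]
```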
addError Add Gaussian white noise to functional data objects
Description
This function generates an artificial noisy version of a functional data object of class funData
(univariate) or multiFunData (multivariate) by adding iid. realizations of Gaussian random variables
ε ~ N(0, σ²) to the observations. The standard deviation σ can be supplied by the user.
Usage
addError(funDataObject, sd)
Arguments
funDataObject A functional data object of class funData or multiFunData.
sd The standard deviation σ of the Gaussian white noise that is added to the data.
Defaults to 1. See Description.
Value
An object of the same class as funDataObject, which is a noisy version of the original data.
See Also
funData, multiFunData, simFunData, simMultiFunData.
Examples
oldPar <- par(no.readonly = TRUE)
set.seed(1)
# Univariate functional data
plain <- simFunData(argvals = seq(0,1,0.01), M = 10, eFunType = "Fourier",
eValType = "linear", N = 1)$simData
noisy <- addError(plain , sd = 0.5)
veryNoisy <- addError(plain, sd = 2)
plot(plain, main = "Add error", ylim = range(veryNoisy@X))
plot(noisy, type = "p", pch = 20, add = TRUE)
plot(veryNoisy, type = "p", pch = 4, add = TRUE)
legend("topright", c("Plain", "Noisy", "Very Noisy"), lty = c(1, NA, NA), pch = c(NA, 20 ,4))
# Multivariate functional data
plain <- simMultiFunData(type = "split", argvals = list(seq(0,1,0.01), seq(-.5,.5,0.02)), M = 10,
eFunType = "Fourier", eValType = "linear", N = 1)$simData
noisy <- addError(plain , sd = 0.5)
veryNoisy <- addError(plain, sd = 2)
par(mfrow = c(1,2))
plot(plain[[1]], main = "Add error (multivariate)", ylim = range(veryNoisy[[1]]@X))
plot(noisy[[1]], type = "p", pch = 20, add = TRUE)
plot(veryNoisy[[1]], type = "p", pch = 4, add = TRUE)
plot(plain[[2]], main = "Add error (multivariate)", ylim = range(veryNoisy[[2]]@X))
plot(noisy[[2]], type = "p", pch = 20, add = TRUE)
plot(veryNoisy[[2]], type = "p", pch = 4, add = TRUE)
legend("topright", c("Plain", "Noisy", "Very Noisy"), lty = c(1, NA, NA), pch = c(NA, 20 ,4))
par(oldPar)
approxNA Approximate missing values for funData objects
Description
This function approximates missing values for funData objects based on the na.approx interpolation
method from the package zoo.
Usage
approxNA(object)
Arguments
object An object of class funData with missing values (coded by NA).
Value
A funData object where missing values have been imputed.
Warning
This function requires the package zoo to be installed, otherwise it will throw a warning.
Examples
# Simulate some data
f <- simFunData(N = 10, M = 8, eVal = "linear", eFun = "Poly", argvals = seq(0, 1, 0.01))$simData
# Sparsify, i.e. generate artificial missings in the data
fSparse <- sparsify(f, minObs = 10, maxObs = 50)
# plot
oldpar <- par(no.readonly = TRUE)
par(mfrow = c(1,3))
plot(f, main = "Original Data")
plot(fSparse, main = "Sparse Data")
plot(approxNA(fSparse), main = "Reconstructed Data")
# faster with plot(fSparse, plotNA = TRUE, main = "Reconstructed Data")
par(oldpar)
Arith.funData Arithmetics for functional data objects
Description
These functions allow basic arithmetics (such as '+', '-', '*', 'sqrt') for functional data and numerics
based on Arith. The operations are made pointwise for each observation. See examples below.
Usage
## S4 method for signature 'funData,funData'
Arith(e1, e2)
## S4 method for signature 'funData,numeric'
Arith(e1, e2)
## S4 method for signature 'numeric,funData'
Arith(e1, e2)
## S4 method for signature 'multiFunData,multiFunData'
Arith(e1, e2)
## S4 method for signature 'multiFunData,numeric'
Arith(e1, e2)
## S4 method for signature 'numeric,multiFunData'
Arith(e1, e2)
## S4 method for signature 'irregFunData,numeric'
Arith(e1, e2)
## S4 method for signature 'numeric,irregFunData'
Arith(e1, e2)
## S4 method for signature 'irregFunData,irregFunData'
Arith(e1, e2)
## S4 method for signature 'irregFunData,funData'
Arith(e1, e2)
## S4 method for signature 'funData,irregFunData'
Arith(e1, e2)
Arguments
e1, e2 Objects of class funData, irregFunData, multiFunData or numeric. If two
functional data objects are used, they must be of the same class, have the same
domain and the same number of observations. For exceptions, see Details.
Details
If two objects of a functional data class (funData, irregFunData or multiFunData) are used, they
normally must be of the same class, have the same domain and the same number of observations.
Exceptions are accepted if
• one object has only one observation. In this case, the arithmetic operations (‘+‘, ‘-‘, ‘*‘, ...) are
done pairwise for this single function and all functions of the other object. A typical example
would be when subtracting the mean function from all observations in a funData object. This
single function must be defined on the same domain as the other functions (or, in case of
irregFunData, on the union of all observation grids).
• one of the two objects is of class irregFunData. Then, the other object can be of class
funData, too, if it is defined on the union of all observation grids. The result is an irregFunData
object which is defined on the same observation grid as the original irregFunData object.
Value
An object of the same functional data class as e1 or e2, respectively.
Warning
Note that not all combinations of operations and classes make sense, e.g. e1 ^ e2 is sensible if e1
is of class funData, irregFunData or multiFunData and e2 is numeric. The reverse is not true.
See Also
funData, irregFunData, multiFunData, Arith
Examples
oldpar <- par(no.readonly = TRUE)
par(mfrow = c(3,2), mar = rep(2.1,4))
argvals <- seq(0, 2*pi, 0.01)
object1 <- funData(argvals, outer(seq(0.75, 1.25, by = 0.05), sin(argvals)))
object2 <- funData(argvals, outer(seq(0.75, 1.25, by = 0.05), cos(argvals)))
plot(object1, main = "Object1")
plot(object2, main = "Object2")
# Only functional data objects
plot(object1 + object2, main = "Sum")
plot(object1 - object2, main = "Difference")
# Mixed
plot(4 * object1 + 5, main = "4 * Object1 + 5") # Note y-axis!
plot(object1^2 + object2^2, main = "Pythagoras")
### Irregular
ind <- replicate(11, sort(sample(1:length(argvals), sample(5:10, 1))))
i1 <- irregFunData(
argvals = lapply(1:11, function(i, ind, x){x[ind[[i]]]}, ind = ind, x = object1@argvals[[1]]),
X = lapply(1:11, function(i, ind, y){y[i, ind[[i]]]}, ind = ind, y = object1@X))
i2 <- irregFunData(
argvals = lapply(1:11, function(i, ind, x){x[ind[[i]]]}, ind = ind, x = object2@argvals[[1]]),
X = lapply(1:11, function(i, ind, y){y[i, ind[[i]]]}, ind = ind, y = object2@X))
plot(i1, main = "Object 1 (irregular)")
plot(i2, main = "Object 2 (irregular)")
# Irregular and regular functional data objects
plot(i1 + i2, main = "Sum")
plot(i1 - object2, main = "Difference")
# Mixed
plot(4 * i1 + 5, main = "4 * i1 + 5") # Note y-axis!
plot(i1^2 + i2^2, main = "Pythagoras")
par(oldpar)
as.data.frame.funData Coerce functional data objects to a data.frame
Description
Coerce objects of class funData, multiFunData and irregFunData to a data frame.
Usage
## S4 method for signature 'funData'
as.data.frame(x)
## S4 method for signature 'multiFunData'
as.data.frame(x)
## S4 method for signature 'irregFunData'
as.data.frame(x)
Arguments
x The functional data object that is to be transformed to a data.frame
Value
A data frame with columns obs (gives index/name of observed curve), argvals1, ... argvalsd
with d the dimension of the support and X for the observed values. One-dimensional functions have
only argvals1, two-dimensional functions (images) have argvals1 and argvals2, etc.
See Also
funData, irregFunData, multiFunData, data.frame
Examples
# one-dimensional domain
f1 <- funData(argvals = 1:5, X = matrix(1:20, nrow = 4))
head(as.data.frame(f1))
# two-dimensional domain
f2 <- funData(argvals = list(1:5, 1:6), X = array(1:120, c(4,5,6)))
head(as.data.frame(f2))
# multivariate functional data
m1 <- multiFunData(f1, f2)
str(as.data.frame(m1))
# irregular functional data
i1 <- irregFunData(argvals = list(1:5, 2:4, 3:5), X = list(1:5, 2:4, -(3:1)))
head(as.data.frame(i1))
as.funData Coerce an irregFunData object to class funData
Description
This function coerces an object of class irregFunData to a funData object with missing values,
which is defined on the union of all observation points.
Usage
as.funData(object)
## S4 method for signature 'irregFunData'
as.funData(object)
Arguments
object The irregFunData object that is to be converted to a funData object with miss-
ing values.
See Also
funData, irregFunData
as.irregFunData Coerce a funData object to class irregFunData
Description
This function coerces an object of class funData to a irregFunData object.
Usage
as.irregFunData(object)
## S4 method for signature 'funData'
as.irregFunData(object)
Arguments
object The funData object that is to be converted to a irregFunData object.
See Also
funData, irregFunData
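Neither coercion page ships an example; a minimal sketch of the round trip, using the constructors shown elsewhere in this manual:

```r
library(funData)
# irregular object: two curves observed on different grids
i1 <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
f1 <- as.funData(i1)   # funData on the union grid 1:5, gaps coded as NA
f1@X                   # second row has NA at argvals 1 and 5
as.irregFunData(f1)    # back to irregular form (missing values presumably dropped)
```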
as.multiFunData Coerce a funData object to class multiFunData
Description
Coerce a funData object to class multiFunData with one element.
Usage
as.multiFunData(object)
## S4 method for signature 'funData'
as.multiFunData(object)
Arguments
object The funData object that is to be converted to a multiFunData object of length
1.
See Also
funData, multiFunData
Examples
# create funData object with 5 observations
x <- seq(0,1,0.01)
f1 <- funData(argvals = x, X = 1:5 %o% x)
f1
class(f1)
# coerce to multiFunData object (of length 1)
m1 <- as.multiFunData(f1)
m1
class(m1)
autoplot.funData Visualize functional data objects using ggplot
Description
This function allows to plot funData objects based on the ggplot2 package. The function provides
a wrapper that rearranges the data in a funData object on a one- or two-dimensional domain and
provides a basic ggplot object, which can be customized using all functionalities of the ggplot2
package.
Usage
autoplot.funData(
object,
obs = seq_len(nObs(object)),
geom = "line",
plotNA = FALSE,
...
)
autolayer.funData(
object,
obs = seq_len(nObs(object)),
geom = "line",
plotNA = FALSE,
...
)
Arguments
object A funData object on a one- or two-dimensional domain.
obs A vector of numerics giving the observations to plot. Defaults to all observations
in object. For two-dimensional functions (images) obs must have length 1.
geom A character string describing the geometric object to use. Defaults to "line".
See ggplot2 for details.
plotNA Logical. If TRUE, missing values are interpolated using the approxNA function
(only for one-dimensional functions). Defaults to FALSE. See Details.
... Further parameters passed to geom_line (for one dimensional domains, e.g.
alpha, color, fill, linetype, size) or to geom_raster (for two-dimensional
domains, e.g. hjust, vjust, interpolate).
Details
If some observations contain missing values (coded via NA), the functions can be interpolated using
the option plotNA = TRUE. This option relies on the na.approx function in package zoo and is
currently implemented for one-dimensional functions only in the function approxNA.
Value
A ggplot object that can be customized using all functionalities of the ggplot2 package.
See Also
funData, ggplot, plot.funData
Examples
# Install / load package ggplot2 before running the examples
library("ggplot2")
# One-dimensional
argvals <- seq(0,2*pi,0.01)
object <- funData(argvals,
outer(seq(0.75, 1.25, length.out = 11), sin(argvals)))
g <- autoplot(object) # returns ggplot object
g # plot the object
# add the mean function in red
g + autolayer(meanFunction(object), col = 2)
# Two-dimensional
X <- array(0, dim = c(2, length(argvals), length(argvals)))
X[1,,] <- outer(argvals, argvals, function(x,y){sin((x-pi)^2 + (y-pi)^2)})
X[2,,] <- outer(argvals, argvals, function(x,y){sin(2*x*pi) * cos(2*y*pi)})
object2D <- funData(list(argvals, argvals), X)
autoplot(object2D, obs = 1)
autoplot(object2D, obs = 2)
## Not run: autoplot(object2D) # must specify obs!
### More examples ###
par(mfrow = c(1,1))
# using plotNA (needs packages zoo and gridExtra)
objectMissing <- funData(1:5, rbind(c(1, NA, 5, 4, 3), c(10, 9, NA, NA, 6)))
g1 <- autoplot(objectMissing) # the default
g2 <- autoplot(objectMissing, plotNA = TRUE) # requires zoo
gridExtra::grid.arrange(g1 + ggtitle("plotNA = FALSE (default)"),
g2 + ggtitle("plotNA = TRUE")) # requires gridExtra
# Customizing plots (see ggplot2 documentation for more details)
# parameters passed to geom_line are passed via the ... argument
gFancy <- autoplot(object, color = "red", linetype = 2)
gFancy
# new layers can be added directly to the ggplot object
gFancy + theme_bw() # add new layers to the ggplot object
gFancy + ggtitle("Fancy Plot with Title and Axis Legends") +
xlab("The x-Axis") + ylab("The y-Axis")
autoplot(object2D, obs = 1) + ggtitle("Customized 2D plot") + theme_minimal() +
scale_fill_gradient(high = "green", low = "blue", name = "Legend here")
autoplot.irregFunData Visualize irregular functional data objects using ggplot
Description
This function allows to plot irregFunData objects on their domain based on the ggplot2 package.
The function provides a wrapper that returns a basic ggplot object, which can be customized using
all functionalities of the ggplot2 package.
Usage
autoplot.irregFunData(object, obs = seq_len(nObs(object)), geom = "line", ...)
autolayer.irregFunData(object, obs = seq_len(nObs(object)), geom = "line", ...)
Arguments
object A irregFunData object.
obs A vector of numerics giving the observations to plot. Defaults to all observations
in object. For two-dimensional functions (images) obs must have length 1.
geom A character string describing the geometric object to use. Defaults to "line".
See ggplot2 for details.
... Further parameters passed to stat_identity, e.g. alpha, color, fill, linetype,
size).
Value
A ggplot object that can be customized using all functionalities of the ggplot2 package.
See Also
irregFunData, ggplot, plot.irregFunData
Examples
# Install / load package ggplot2 before running the examples
library("ggplot2")
# Generate data
argvals <- seq(0,2*pi,0.01)
ind <- replicate(5, sort(sample(1:length(argvals), sample(5:10,1))))
object <- irregFunData(argvals = lapply(ind, function(i){argvals[i]}),
X = lapply(ind, function(i){sample(1:10,1) / 10 * argvals[i]^2}))
# Plot the data
autoplot(object)
# Parameters passed to geom_line are passed via the ... argument
autoplot(object, color = "red", linetype = 3)
# Plot the data and add green dots for the 2nd function
autoplot(object) + autolayer(object, obs = 2, geom = "point", color = "green")
# New layers can be added directly to the ggplot object using functions from the ggplot2 package
g <- autoplot(object)
g + theme_bw() + ggtitle("Plot with minimal theme and axis labels") +
xlab("The x-Axis") + ylab("The y-Axis")
autoplot.multiFunData Visualize multivariate functional data objects using ggplot
Description
This function allows to plot multiFunData objects based on the ggplot2 package. The function
applies the autoplot.funData function to each element and returns either a combined plot with
all elements plotted in one row or a list containing the different subplots as ggplot objects. The
individual objects can be customized using all functionalities of the ggplot2 package.
Usage
autoplot.multiFunData(
object,
obs = seq_len(nObs(object)),
dim = seq_len(length(object)),
plotGrid = FALSE,
...
)
Arguments
object A multiFunData object that is to be plotted.
obs A vector of numerics giving the observations to plot. Defaults to all observations
in object. For two-dimensional functions (images) obs must have length 1.
dim The dimensions to plot. Defaults to length(object), i.e. all functions in
object are plotted.
plotGrid Logical. If TRUE, the data is plotted using grid.arrange and the list of ggplot
objects is returned invisibly. If FALSE, only the list of objects is returned. De-
faults to FALSE.
... Further parameters passed to the univariate autoplot.funData functions for
funData objects.
Value
A list of ggplot objects that are also printed directly as a grid if plotGrid = TRUE.
Warning
Currently, the function does not accept different parameters for the univariate elements.
See Also
multiFunData, ggplot, plot.multiFunData
Examples
# Load packages ggplot2 and gridExtra before running the examples
library("ggplot2"); library("gridExtra")
# One-dimensional elements
argvals <- seq(0, 2*pi, 0.01)
f1 <- funData(argvals, outer(seq(0.75, 1.25, length.out = 11), sin(argvals)))
f2 <- funData(argvals, outer(seq(0.75, 1.25, length.out = 11), cos(argvals)))
m1 <- multiFunData(f1, f2)
g <- autoplot(m1) # default
g[[1]] # plot first element
g[[2]] # plot second element
gridExtra::grid.arrange(grobs = g, nrow = 1) # requires gridExtra package
autoplot(m1, plotGrid = TRUE) # the same directly with plotGrid = TRUE
# Mixed-dimensional elements
X <- array(0, dim = c(11, length(argvals), length(argvals)))
X[1,,] <- outer(argvals, argvals, function(x,y){sin((x-pi)^2 + (y-pi)^2)})
f2 <- funData(list(argvals, argvals), X)
m2 <- multiFunData(f1, f2)
autoplot(m2, obs = 1, plotGrid = TRUE)
# Customizing plots (see ggplot2 documentation for more details)
g2 <- autoplot(m2, obs = 1)
g2[[1]] <- g2[[1]] + ggtitle("First element") + theme_bw()
g2[[2]] <- g2[[2]] + ggtitle("Second element") +
scale_fill_gradient(high = "green", low = "blue")
gridExtra::grid.arrange(grobs = g2, nrow = 1) # requires gridExtra package
dimSupp Support dimension of functional data
Description
This function returns the support dimension of an object of class funData, irregFunData or
multiFunData.
Usage
dimSupp(object)
Arguments
object An object of class funData, irregFunData or multiFunData.
Value
If object is univariate (i.e. of class funData or irregFunData), the function returns the dimension
of the support of object. If object is multivariate (i.e. of class multiFunData), the function
returns a vector, giving the support dimension of each element.
See Also
funData, irregFunData, multiFunData
Examples
# Univariate (one-dimensional)
object1 <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
dimSupp(object1)
# Univariate (two-dimensional)
object2 <- funData(argvals = list(1:10, 1:5), X = array(rnorm(100), dim = c(2,10,5)))
dimSupp(object2)
# Univariate (irregular)
irregObject <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
dimSupp(irregObject)
# Multivariate
multiObject <- multiFunData(object1, object2)
dimSupp(multiObject)
eFun Generate orthonormal eigenfunctions
Description
This function calculates M (orthonormal) basis functions on a given interval, that can be interpreted
as the first M eigenfunctions of an appropriate data generating process of functional data.
Usage
eFun(argvals, M, ignoreDeg = NULL, type)
Arguments
argvals A vector of numerics, defining a (fine) grid on the interval for which the basis
functions are computed.
M An integer, specifying the number of functions that are calculated.
ignoreDeg A vector of numerics, specifying the degrees to be ignored for type "PolyHigh".
Defaults to NULL. See Details.
type A character string, specifying the type of functions that are calculated. See
Details.
Details
The function implements three families of orthonormal basis functions plus variations of them. The
parameter type, that specifies the functions to be calculated, can have the following values:
• "Poly": Calculate orthonormal Legendre polynomials of degree 0,...,M-1.
• "PolyHigh": Calculate M orthonormal Legendre Polynomials of higher degree. The vector
of indices ignoreDeg specifies the functions to be ignored. If ignoreDeg is not specified, the
function returns an error.
• "Fourier": Calculate the first M Fourier basis functions.
• "FourierLin": Calculate the first M − 1 Fourier basis functions plus the linear function,
orthonormalized to the previous functions via the Gram-Schmidt method. This type is currently
implemented for functions on the unit interval [0, 1] only. If the function is called with other
argvals, an error is thrown.
• "Wiener": Calculate the first M orthonormal eigenfunctions of the Wiener process.
Value
A univariate functional data object of class funData containing the basis functions on the given
interval.
See Also
funData, simFunData, simMultiFunData
Examples
oldPar <- par(no.readonly = TRUE)
argvals <- seq(0,1,0.01)
par(mfrow = c(3,2))
plot(eFun(argvals, M = 4, type = "Poly"), main = "Poly", ylim = c(-3,3))
plot(eFun(argvals, M = 4, ignoreDeg = 1:2, type = "PolyHigh"), main = "PolyHigh", ylim = c(-3,3))
plot(eFun(argvals, M = 4, type = "Fourier"), main = "Fourier", ylim = c(-3,3))
plot(eFun(argvals, M = 4, type = "FourierLin"), main = "FourierLin", ylim = c(-3,3))
plot(eFun(argvals, M = 4, type = "Wiener"), main = "Wiener", ylim = c(-3,3))
par(oldPar)
eVal Generate a sequence of simulated eigenvalues
Description
This function generates M decreasing eigenvalues.
Usage
eVal(M, type)
Arguments
M An integer, the number of eigenvalues to be generated.
type A character string specifying the type of eigenvalues that should be calculated.
See Details.
Details
The function implements three types of eigenvalues:
• "linear": The eigenvalues start at 1 and decrease linearly towards 0:
νm = .
m
• "exponential": The eigenvalues start at 1 and decrease exponentially towards 0:
νm = exp − .
• "wiener": The eigenvalues correspond to the eigenvalues of the Wiener process:
νm = .
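These definitions transcribe directly to R; a minimal sketch (the constants follow the formulas as reconstructed above, so verify against eVal itself if in doubt):

```r
M <- 10; m <- 1:M
nuLin  <- (M + 1 - m) / M               # "linear": from 1 down to 1/M
nuExp  <- exp(-(m - 1) / 2)             # "exponential": geometric decay from 1
nuWien <- (pi / 2 * (2 * m - 1))^(-2)   # "wiener": Wiener process eigenvalues
all.equal(nuLin, eVal(M, type = "linear"))   # should be TRUE under this reading
```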
Value
A vector containing the M decreasing eigenvalues.
Examples
oldpar <- par(no.readonly = TRUE)
# simulate M = 10 eigenvalues
M <- 10
eLin <- eVal(M = M, type = "linear")
eExp <- eVal(M = M, type = "exponential")
eWien <- eVal(M = M, type = "wiener")
par(mfrow = c(1,1))
plot(1:M, eLin, pch = 20, xlab = "m", ylab = expression(nu[m]), ylim = c(0,1))
points(1:M, eExp, pch = 20, col = 3)
points(1:M, eWien, pch = 20, col = 4)
legend("topright", legend = c("linear", "exponential", "wiener"), pch = 20, col = c(1,3,4))
par(oldpar)
extractObs Extract observations of functional data
Description
This function extracts one or more observations and/or observations on a part of the domain from a
funData, irregFunData or multiFunData object.
Usage
extractObs(
object,
obs = seq_len(nObs(object)),
argvals = funData::argvals(object)
)
## S4 method for signature 'funData'
subset(x, obs = seq_len(nObs(x)), argvals = funData::argvals(x))
## S4 method for signature 'multiFunData'
subset(x, obs = seq_len(nObs(x)), argvals = funData::argvals(x))
## S4 method for signature 'irregFunData'
subset(x, obs = seq_len(nObs(x)), argvals = funData::argvals(x))
## S4 method for signature 'funData,ANY,missing,missing'
x[i, j, ..., drop = TRUE]
## S4 method for signature 'multiFunData,ANY,missing,missing'
x[i, j, ..., drop = TRUE]
## S4 method for signature 'irregFunData,ANY,missing,missing'
x[i = seq_len(nObs(x)), j, ..., drop = TRUE]
Arguments
object An object of class funData, irregFunData or multiFunData.
obs A numeric vector, giving the indices of the observations to extract (default: all
observations).
argvals The part of the domain to be extracted (default: the whole domain object@argvals).
Must be a list or a numeric vector (only for one-dimensional domains, see also
the definition of funData, multiFunData).
x An object of class funData, irregFunData or multiFunData (for subset).
i A numeric vector, giving the indices of the observations to extract when using
x[i]. Defaults to all observations.
j, drop not used
... Used to pass further arguments to extractObs. Here only usable for argvals.
Details
In case of an irregFunData object, some functions may not have observation points in the given
part of the domain. In this case, the functions are removed from the extracted dataset and a warning
is thrown.
If only observations are to be extracted, the usual notation object[1:3] is equivalent to extractObs(object,
obs = 1:3). This works only if the domain remains unchanged.
Value
An object of class funData, irregFunData or multiFunData containing the desired observations.
Functions
• [,funData,ANY,missing,missing-method:
Warning
The function is currently implemented only for functional data with up to three-dimensional do-
mains.
Alias
The function subset is an alias for extractObs.
See Also
funData, irregFunData, multiFunData
Examples
# Univariate - one-dimensional domain
object1 <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
extractObs(object1, obs = 1)
extractObs(object1, argvals = 1:3)
extractObs(object1, argvals = list(1:3)) # the same as the statement before
# alias
subset(object1, argvals = 1:3)
# Univariate - two-dimensional domains
object2 <- funData(argvals = list(1:5, 1:6), X = array(1:60, dim = c(2, 5, 6)))
extractObs(object2, obs = 1)
extractObs(object2, argvals = list(1:3, c(2,4,6))) # argvals must be supplied as list
# Univariate - irregular
irregObject <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
extractObs(irregObject, obs = 2)
extractObs(irregObject, argvals = 1:3)
extractObs(irregObject, argvals = c(1,5)) # throws a warning, as second function has no observations
# Multivariate
multiObject <- multiFunData(object1, object2)
extractObs(multiObject, obs = 2)
multiObject[2] # shorthand
extractObs(multiObject, argvals = list(1:3, list(1:3, c(2,4,6))))
### Shorthand via "[]"
object1[1]
object1[argvals = 1:3]
object2[1]
object2[argvals = list(1:3, c(2,4,6))]
irregObject[2]
irregObject[argvals = 1:3]
fd2funData Convert an fd object to funData
Description
This function converts an object of class fd (from package fda) to an object of class funData. It
heavily builds on the function eval.fd from the fda package. The fd representation assumes a basis
representation for the observed functions and therefore implicitly smoothes the data. In funData
objects, the data is saved in ’raw’ format.
Usage
fd2funData(fdobj, argvals, ...)
Arguments
fdobj An fd object
argvals A vector or a list of length one, containing a vector with argument values at
which the functions in fdobj should be evaluated.
... Other parameters passed to eval.fd.
Value
An object of class funData.
Warning
Time names in fdobj$fdnames$time are not preserved.
See Also
funData, fd, eval.fd
Examples
# Install / load package fda before running the examples
library("fda")
# from Data2fd help
daybasis <- create.fourier.basis(c(0, 365), nbasis=65)
# fd object of daily temperatures
tempfd <- Data2fd(argvals = day.5, y = CanadianWeather$dailyAv[,,"Temperature.C"], daybasis)
# convert to funData
tempFun <- fd2funData(tempfd, argvals = day.5)
# plot to compare
par(mfrow = c(1,2))
plot(tempfd, main = "fd object")
plot(tempFun, main = "funData object")
flipFuns Flip functional data objects
Description
This function flips an object newObject of class funData, irregFunData or multiFunData with
respect to a reference object refObject of the same class (or of class funData, if newObject is
irregular). This is particularly useful when dealing with functional principal components, as they
are only defined up to a sign change. For details, see below.
Usage
flipFuns(refObject, newObject, ...)
Arguments
refObject An object of class funData, irregFunData or multiFunData that serves as
reference. It must have the same number of observations as newObject or have
only one observation. In this case, all observations in newObject are flipped
with respect to this single observation.
newObject An object of class funData, irregFunData or multiFunData that is to be
flipped with respect to refObject.
... Further parameters passed to norm.
Details
Functional principal component analysis is an important tool in functional data analysis. Just as
eigenvectors, eigenfunctions (or functional principal components) are only defined up to a sign
change. This may lead to difficulties in simulation studies or when bootstrapping pointwise con-
fidence bands, as in these cases one wants the estimates to have the same "orientation" as the true
function (in simulation settings) or the non-bootstrapped estimate (when calculating bootstrap con-
fidence bands). This function allows to flip (i.e. multiply by −1) all observations in newObject that
have a different orientation than their counterparts in refObject.
Technically, the function compares the distance between newObject and refObject,
|||f_new − f_ref |||,
with the distance between newObject and -1 * refObject,
|||f_new + f_ref |||.
If newObject is closer to -1 * refObject, it is flipped, i.e. multiplied by -1.
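A minimal sketch of this criterion for two single-observation funData objects (refFun and newFun are hypothetical names), using the norm and arithmetic methods of this package:
# Flip newFun if it is closer to -refFun than to refFun
flipOne <- function(refFun, newFun) {
  if (norm(newFun - refFun) > norm(newFun + refFun)) -1 * newFun else newFun
}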
Value
An object of the same class as newData with flipped observations.
Warning
The function is currently implemented only for functional data with one- and two-dimensional
domains.
See Also
funData, irregFunData, multiFunData, Arith.funData
Examples
### Univariate
argvals <- seq(0,2*pi,0.01)
refData <- funData(argvals, rbind(sin(argvals))) # one observation as reference
newData <- funData(argvals, outer(sample(c(-1,1), 11, replace = TRUE) * seq(0.75, 1.25, by = 0.05),
sin(argvals)))
oldpar <- par(no.readonly = TRUE)
par(mfrow = c(1,2))
plot(newData, col = "grey", main = "Original data")
plot(refData, col = "red", lwd = 2, add = TRUE)
plot(flipFuns(refData, newData), col = "grey", main = "Flipped data")
plot(refData, col = "red", lwd = 2, add = TRUE)
### Univariate (irregular)
ind <- replicate(11, sort(sample(1:length(argvals), sample(5:10,1)))) # sample observation points
argvalsIrreg <- lapply(ind, function(i){argvals[i]})
argvalsIrregAll <- unique(sort(unlist(argvalsIrreg)))
# one observation as reference (fully observed)
refDataFull <- funData(argvals, rbind(sin(argvals)))
# one observation as reference (irregularly observed)
refDataIrreg <- irregFunData(argvals = list(argvalsIrregAll), X = list(sin(argvalsIrregAll)))
newData <- irregFunData(argvals = argvalsIrreg, X = mapply(function(x, a, s){s * a * sin(x)},
x = argvalsIrreg, a = seq(0.75, 1.25, by = 0.05), s = sample(c(-1,1), 11, replace = TRUE)))
plot(newData, col = "grey", main = "Original data (regular reference)")
plot(refDataFull, col = "red", lwd = 2, add = TRUE)
plot(flipFuns(refDataFull, newData), col = "grey", main = "Flipped data")
plot(refDataFull, col = "red", lwd = 2, add = TRUE)
plot(newData, col = "grey", main = "Original data (irregular reference)")
plot(refDataIrreg, col = "red", lwd = 2, add = TRUE)
plot(flipFuns(refDataIrreg, newData), col = "grey", main = "Flipped data")
plot(refDataIrreg, col = "red", lwd = 2, add = TRUE)
### Multivariate
refData <- multiFunData(funData(argvals, rbind(sin(argvals))), # one observation as reference
funData(argvals, rbind(cos(argvals))))
sig <- sample(c(-1,1), 11, replace = TRUE)
newData <- multiFunData(funData(argvals, outer(sig * seq(0.75, 1.25, by = 0.05), sin(argvals))),
funData(argvals, outer(sig * seq(0.75, 1.25, by = 0.05), cos(argvals))))
par(mfrow = c(2,2))
plot(newData[[1]], col = topo.colors(11), main = "Original data")
plot(refData[[1]], col = "red", lwd = 2, add = TRUE)
plot(newData[[2]], col = topo.colors(11), main = "Original data")
plot(refData[[2]], col = "red", lwd = 2, add = TRUE)
plot(flipFuns(refData, newData)[[1]], col = topo.colors(11), main = "Flipped data")
plot(refData[[1]], col = "red", lwd = 2, add = TRUE)
plot(flipFuns(refData, newData)[[2]], col = topo.colors(11), main = "Flipped data")
plot(refData[[2]], col = "red", lwd = 2, add = TRUE)
par(oldpar)
funData-class A class for (univariate) functional data
Description
The funData class represents functional data on d-dimensional domains. The two slots represent
the domain (x-values) and the values of the different observations (y-values).
Usage
## S4 method for signature 'list,array'
funData(argvals, X)
## S4 method for signature 'numeric,array'
funData(argvals, X)
## S4 method for signature 'funData'
show(object)
## S4 method for signature 'funData'
names(x)
## S4 replacement method for signature 'funData'
names(x) <- value
## S4 method for signature 'funData'
str(object, ...)
## S4 method for signature 'funData'
summary(object, ...)
Arguments
argvals A list of numeric vectors or a single numeric vector, giving the sampling points
in the domains. See Details.
X An array of dimension N × M (for one-dimensional domains, or N × M1 ×
. . . × Md for higher-dimensional domains), giving the observed values for N
individuals. Missing values can be included via NA. See Details.
object A funData object.
x The funData object.
value The names to be given to the funData curves.
... Other parameters passed to summary.
Details
Functional data can be seen as realizations of a random process
X : 𝒯 → IR
on a d-dimensional domain 𝒯. The data is usually sampled on a fine grid T ⊂ 𝒯, which is
represented in the argvals slot of a funData object. All observations are assumed to be sampled
over the same grid T, but can contain missing values (see below). If 𝒯 is one-dimensional, argvals
can be supplied either as a numeric vector, containing the x-values, or as a list containing such a
vector. If 𝒯 is higher-dimensional, argvals must always be supplied as a list, containing numeric
vectors of the x-values in dimensions 1, . . . , d.
The observed values are represented in the X slot of a funData object, which is an array of di-
mension N × M (for one-dimensional domains, or N × M1 × . . . × Md for higher-dimensional
domains). Here N equals the number of observations and M denotes the number of sampling
points (for higher dimensional domains Mi denotes the number of sampling points in dimension
i, i = 1, . . . , d). Missing values in the observations are allowed and must be marked by NA. If miss-
ing values occur due to irregular observation points, the data can be stored alternatively as an object
of class irregFunData.
Generic functions for the funData class include a print method, plotting and basic arithmetics.
Further methods for funData:
• dimSupp, nObs: Information about the support dimensions and the number of observations,
• getArgvals, extractObs: Getting/setting slot values (instead of accessing them directly via
funData@argvals, funData@X) and extracting single observations or data on a subset of the
domain,
• integrate, norm: Integrate all observations over their domain or calculate the L2 norm.
A funData object can be coerced to a multiFunData object using as.multiFunData(funDataObject).
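A short sketch of this coercion:
# Coerce a funData object to a multiFunData object with a single element
f <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
m <- as.multiFunData(f)
length(m) # 1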
Methods (by generic)
• funData: Constructor for functional data objects with argvals given as list.
• funData: Constructor for functional data objects with argvals given as vector of numerics
(only valid for one-dimensional domains).
• show: Print basic information about the funData object in the console. The default console
output for funData objects.
• names: Get the names of the funData object.
• names<-: Set the names of the funData object.
• str: A str method for funData objects, giving a compact overview of the structure.
• summary: A summary method for funData objects.
Slots
argvals The domain T of the data. See Details.
X The functional data samples. See Details.
See Also
irregFunData, multiFunData
Examples
### Creating a one-dimensional funData object with 2 observations
# Basic
f1 <- new("funData", argvals = list(1:5), X = rbind(1:5,6:10))
# Using the constructor with first argument supplied as list
f2 <- funData(argvals = list(1:5), X = rbind(1:5, 6:10))
# Using the constructor with first argument supplied as numeric vector
f3 <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
# Test if all the same
all.equal(f1,f2)
all.equal(f1,f3)
# Display funData object in the console
f3
# A more realistic object
argvals <- seq(0,2*pi,0.01)
object <- funData(argvals, outer(seq(0.75, 1.25, by = 0.05), sin(argvals)))
# Display / summary give basic information
object
summary(object)
# Use the plot function to get an impression of the data
plot(object)
### Higher-dimensional funData objects with 2 observations
# Basic
g1 <- new("funData", argvals = list(1:5, 1:3),
X = array(1:30, dim = c(2,5,3)))
# Using the constructor
g2 <- funData(argvals = list(1:5, 1:3),
X = array(1:30, dim = c(2,5,3)))
# Test if the same
all.equal(g1,g2)
# Display funData object in the console
g2
# Summarize information
summary(g2)
funData2fd Convert a funData object to fd
Description
This function converts an object of class funData to an object of class fd (from package fda). It
heavily builds on the function Data2fd from the fda package. The fd representation assumes a basis
representation for the observed functions and therefore implicitly smoothes the data. In funData
objects, the data is saved in ’raw’ format.
Usage
funData2fd(object, ...)
Arguments
object A funData object
... Other parameters passed to Data2fd.
Value
An object of class fd.
Warning
This function works only for funData objects on one-dimensional domains.
See Also
funData, fd, Data2fd, fd2funData
Examples
# Install / load package fda before running the examples
library("fda")
# from Data2fd help
daybasis <- create.fourier.basis(c(0, 365), nbasis=65)
# funData object with temperature
tempFun <- funData(day.5, t(CanadianWeather$dailyAv[, , "Temperature.C"]))
# convert to fd
tempfd <- funData2fd(tempFun, daybasis)
# plot to compare
par(mfrow = c(1,2))
plot(tempFun, main = "funData object (raw data)")
plot(tempfd, main = "fd object (smoothed)")
ggplot ggplot Graphics for Functional Data Objects
Description
This function is deprecated. Use autoplot.funData / autolayer.funData for funData objects,
autoplot.multiFunData for multiFunData objects and autoplot.irregFunData / autolayer.irregFunData
for irregFunData objects instead.
Usage
ggplot(data, ...)
## S4 method for signature 'funData'
ggplot(data, add = FALSE, ...)
## S4 method for signature 'multiFunData'
ggplot(data, ...)
## S4 method for signature 'irregFunData'
ggplot(data, add = FALSE, ...)
Arguments
data A funData, multiFunData or irregFunData object.
... Further parameters passed to the class-specific methods.
add Logical. If TRUE, add to current plot (only for one-dimensional functions). De-
faults to FALSE.
Details
In the default case, this function calls ggplot (if available).
Value
A ggplot object
See Also
ggplot, autoplot, autolayer from package ggplot2
integrate Integrate functional data
Description
Integrate all observations of a funData, irregFunData or multiFunData object over their domain.
Usage
integrate(object, ...)
Arguments
object An object of class funData, irregFunData or multiFunData.
... Further parameters (see Details).
Details
Further parameters passed to this function may include:
• method: Character string. The integration rule to be used, passed to the internal function
.intWeights. Defaults to "trapezoidal" (alternative: "midpoint").
• fullDom: Logical. If object is of class irregFunData, setting fullDom = TRUE extrapolates
all functions linearly to the full domain before calculating the integrals. Defaults to FALSE.
For details on the extrapolation, see extrapolateIrreg.
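A brief sketch of the method option described above:
# The integration rule can be switched between trapezoidal (default) and midpoint
object <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
integrate(object, method = "trapezoidal")
integrate(object, method = "midpoint")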
Value
A vector of numerics, containing the integral values for each observation.
Warning
The function is currently implemented only for functional data with up to three-dimensional do-
mains. In the default case, this function calls integrate.
See Also
funData, irregFunData, multiFunData
Examples
# Univariate
object <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
integrate(object)
# Univariate (irregular)
irregObject <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
integrate(irregObject) # fullDom = FALSE
integrate(irregObject, fullDom = TRUE)
# Multivariate
multiObject <- multiFunData(object, funData(argvals = 1:3, X = rbind(3:5, 6:8)))
integrate(multiObject)
irregFunData-class A class for irregularly sampled functional data
Description
The irregFunData class represents functional data that is sampled irregularly on one-dimensional
domains. The two slots represent the observation points (x-values) and the observed function values
(y-values).
Usage
## S4 method for signature 'list,list'
irregFunData(argvals, X)
## S4 method for signature 'irregFunData'
show(object)
## S4 method for signature 'irregFunData'
names(x)
## S4 replacement method for signature 'irregFunData'
names(x) <- value
## S4 method for signature 'irregFunData'
str(object, ...)
## S4 method for signature 'irregFunData'
summary(object, ...)
Arguments
argvals A list of numerics, corresponding to the observation points for each realization
Xi (see Details).
X A list of numerics, corresponding to the observed functions Xi (see Details).
object An irregFunData object.
x The irregFunData object.
value The names to be given to the irregFunData curves.
... Other parameters passed to summary.
Details
Irregular functional data are realizations of a random process
X : 𝒯 → IR,
where each realization X_i of X is given on an individual grid T_i ⊂ 𝒯 of observation points.
As for the funData class, each object of the irregFunData class has two slots; the argvals slot
represents the observation points and the X slot represents the observed data. In contrast to the
regularly sampled data, both slots are defined as lists of vectors, where each entry corresponds to
one observed function:
• argvals[[i]] contains the vector of observation points Ti for the i-th function,
• X[[i]] contains the corresponding observed data Xi (tij ), tij ∈ Ti .
Generic functions for the irregFunData class include a print method, plotting and basic arith-
metics. Further methods for irregFunData:
• dimSupp, nObs: Information about the support dimensions and the number of observations,
• getArgvals, extractObs: Getting/setting slot values (instead of accessing them directly via
irregObject@argvals, irregObject@X) and extracting single observations or data on a subset
of the domain,
• integrate, norm: Integrate all observations over their domain or calculate the L2 norm.
An irregFunData object can be coerced to a funData object using as.funData(irregObject).
The regular functional data object is defined on the union of all observation grids of the irregular
object. The value of the new object is marked as missing (NA) for observation points that are in the
union, but not in the original observation grid.
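A short sketch of this coercion:
# Coerce to a regular funData object; unobserved grid points become NA
i1 <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
f1 <- as.funData(i1)
f1@X # second row contains NA at argvals 1 and 5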
Methods (by generic)
• irregFunData: Constructor for irregular functional data objects.
• show: Print basic information about the irregFunData object in the console. The default
console output for irregFunData objects.
• names: Get the names of the irregFunData object.
• names<-: Set the names of the irregFunData object.
• str: A str method for irregFunData objects, giving a compact overview of the structure.
• summary: A summary method for irregFunData objects.
Slots
argvals A list of numerics, representing the observation grid Ti for each realization Xi of X.
X A list of numerics, representing the values of each observation Xi of X on the corresponding
observation points Ti .
Warning
Currently, the class is implemented only for functional data on one-dimensional domains T ⊂ IR.
See Also
funData, multiFunData
Examples
# Construct an irregular functional data object
i1 <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
# Display in the console
i1
# Summarize
summary(i1)
# A more realistic object
argvals <- seq(0,2*pi, 0.01)
ind <- replicate(11, sort(sample(1:length(argvals), sample(5:10,1)))) # sample observation points
argvalsIrreg <- lapply(ind, function(i){argvals[i]})
i2 <- irregFunData(argvals = argvalsIrreg, X = mapply(function(x, a){a * sin(x)},
x = argvalsIrreg, a = seq(0.75, 1.25, by = 0.05)))
# Display/summary gives basic information
i2
summary(i2)
# Use the plot function to get an impression of the data
plot(i2)
Math.funData Mathematical operations for functional data objects
Description
These functions allow to apply mathematical operations (such as exp(), log(), sin(), cos() or abs())
to functional data objects based on Math. The operations are applied pointwise for each observation.
Usage
## S4 method for signature 'funData'
Math(x)
## S4 method for signature 'multiFunData'
Math(x)
## S4 method for signature 'irregFunData'
Math(x)
Arguments
x An object of class funData, irregFunData or multiFunData.
Value
An object of the same functional data class as x.
See Also
funData, irregFunData, multiFunData, Math
Examples
oldpar <- par(no.readonly = TRUE)
par(mfrow = c(1,2))
# simulate a funData object on 0..1 with 10 observations
argvals <- seq(0, 1, 0.01)
f <- simFunData(argvals = argvals, N = 10,
M = 5, eFunType = "Fourier", eValType = "linear")$simData
### FunData
plot(f, main = "Original data")
plot(abs(f), main = "Absolute values")
### Irregular
# create an irregFunData object by sparsifying f
i <- as.irregFunData(sparsify(f, minObs = 5, maxObs = 10))
plot(i, main = "Sparse data")
plot(cumsum(i), main = "'cumsum' of sparse data")
### Multivariate
m <- multiFunData(f, -1*f)
plot(m, main = "Multivariate Data")
plot(exp(m), main = "Exponential")
par(oldpar)
meanFunction Mean for functional data
Description
This function calculates the pointwise mean function for objects of class funData, irregFunData
or multiFunData.
Usage
meanFunction(object, na.rm = FALSE)
Arguments
object An object of class funData, irregFunData or multiFunData.
na.rm Logical. If TRUE, NA values are removed before computing the mean. Defaults
to FALSE.
Value
An object of the same class as object with one observation that corresponds to the pointwise mean
function of the functions in object.
Warning
If object is of class irregFunData, the option na.rm = TRUE is not implemented and throws an
error. If na.rm = FALSE, the functions must be observed on the same domain.
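A small sketch of the na.rm option for funData objects:
# Pointwise mean in the presence of missing values
fNA <- funData(1:5, rbind(c(1, NA, 3, 4, 5), c(2, 3, 4, 5, 6)))
meanFunction(fNA) # second value of the mean function is NA
meanFunction(fNA, na.rm = TRUE) # NA values are ignored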
See Also
funData, irregFunData, multiFunData, Arith.funData
Examples
### Univariate (one-dimensional support)
x <- seq(0, 2*pi, 0.01)
f1 <- funData(x, outer(seq(0.75, 1.25, 0.05), sin(x)))
plot(f1)
plot(meanFunction(f1), col = 1, lwd = 2, add = TRUE)
### Univariate (two-dimensional support)
f2 <- funData(list(1:5, 1:3), array(rep(1:5,each = 11, times = 3), dim = c(11,5,3)))
all.equal(f2[1], meanFunction(f2)) # f2 has 11 identical observations
### Multivariate
m1 <- multiFunData(f1,f2)
all.equal(m1[6], meanFunction(m1)) # observation 6 equals the pointwise mean
### Irregular
i1 <- irregFunData(argvals = list(1:3,1:3,1:3), X = list(1:3,2:4,3:5))
all.equal(meanFunction(i1), i1[2])
# don't run: functions are not defined on the same domain
## Not run: meanFunction(irregFunData(argvals = list(1:3,1:5), X = list(1:3,1:5)))
multiFunData-class A class for multivariate functional data
Description
The multiFunData class represents multivariate functional data on (potentially) different domains,
i.e. a multivariate functional data object is a vector of (univariate) functional data objects, just as a
vector in IRn is a vector of n scalars. In this implementation, a multiFunData object is represented
as a list of univariate funData objects, see Details.
Usage
## S4 method for signature 'ANY'
multiFunData(...)
## S4 method for signature 'multiFunData'
names(x)
## S4 replacement method for signature 'multiFunData'
names(x) <- value
## S4 method for signature 'multiFunData'
str(object, ...)
## S4 method for signature 'multiFunData'
summary(object, ...)
Arguments
... A list of funData objects, or several funData objects passed as separate
arguments. See Details.
x The multiFunData object.
value The names to be given to the multiFunData curves.
object A multiFunData object.
Details
A multiFunData object is represented as a list of univariate funData objects, each having a argvals
and X slot, representing the x-values and the observed y-values (see the funData class). When con-
structing a multiFunData object, the elements can be supplied as a list of funData objects or can
be passed directly as arguments to the constructor function.
Most functions implemented for the funData class are also implemented for multiFunData ob-
jects. In most cases, they simply apply the corresponding univariate method to each element of the
multivariate object and return it as a vector (if the result of the univariate function is scalar, such as
dimSupp) or as a multiFunData object (if the result of the univariate function is a funData object,
such as extractObs).
The norm of a multivariate functional data object f = (f1 , . . . , fp ) is defined as
|||f ||| := ( Σ_{j=1}^p ||f_j ||² )^(1/2).
A funData object can be coerced to a multiFunData object with one element using as.multiFunData(funDataObject).
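As norm returns squared norms by default, the multivariate norm can be sketched as the sum of the element-wise squared norms:
f1 <- funData(1:5, rbind(1:5, 6:10))
f2 <- funData(1:3, rbind(3:5, 6:8))
m <- multiFunData(f1, f2)
all.equal(norm(m), norm(f1) + norm(f2)) # squared norms add up elementwise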
Methods (by generic)
• multiFunData: Constructor for multivariate functional data objects.
• names: Get the names of the multiFunData object.
• names<-: Set the names of the multiFunData object.
• str: A str method for multiFunData objects, giving a compact overview of the structure.
• summary: A summary method for multiFunData objects.
See Also
funData
Examples
### Creating a multiFunData object with 2 observations on the same domain
# Univariate elements
x <- 1:5
f1 <- funData(x, rbind(x, x+1))
f2 <- funData(x,rbind(x^2, sin(x)))
# Basic
m1 <- new("multiFunData", list(f1,f2))
# Using the constructor, passing the elements as list
m2 <- multiFunData(list(f1,f2))
# Using the constructor, passing the elements directly
m3 <- multiFunData(f1,f2)
# Test if all the same
all.equal(m1,m2)
all.equal(m1,m3)
# Display multiFunData object in the console
m3
# Summarize
summary(m3)
### Creating a multiFunData object with 2 observations on different domains (both 1D)
# A new element
y <- 1:3
g1 <- funData(y, rbind(3*y, y+4))
# Create the multiFunData object
m4 <- multiFunData(f1,g1)
# Display multiFunData object in the console
m4
### Creating a multiFunData object with 2 observations on different domains (1D and 2D)
# A new element
y <- 1:3; z <- 1:4
g2 <- funData(list(y,z), array(rnorm(24), dim = c(2,3,4)))
# Create the multiFunData object
m5 <- multiFunData(f1,g2)
# Display multiFunData object in the console
m5
### A more realistic object
# element 1
x <- seq(0,2*pi, 0.01)
f1 <- funData(x, outer(seq(0.75, 1.25, length.out = 6), sin(x)))
# element 2
y <- seq(-1,1, 0.01); z <- seq(-0.5, 0.5, 0.01)
X2 <- array(NA, c(6, length(y), length(z)))
for(i in 1:6) X2[i,,] <- outer(y, z, function(y,z){sin(i*pi*y)*cos(i*pi*z)})
f2 <- funData(list(y,z), X2)
# MultiFunData Object
m6 <- multiFunData(f1,f2)
# Display multiFunData object in the console for basic information
m6
# Summarize
summary(m6)
# Use the plot function to get an impression of the data
## Not run: plot(m6) # m6 has 2D element, must specify one observation for plotting
plot(m6, obs = 1, main = c("1st element (obs 1)", "2nd element (obs 1)"))
plot(m6, obs = 6, main = c("1st element (obs 6)", "2nd element (obs 6)"))
nObs Get the number of observations
Description
This function returns the number of observations in a funData, irregFunData or multiFunData
object.
Usage
nObs(object)
Arguments
object An object of class funData, irregFunData or multiFunData.
Value
The number of observations in object.
See Also
funData, irregFunData, multiFunData
Examples
# Univariate
object <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
nObs(object)
# Univariate (irregular)
irregObject <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
nObs(irregObject)
# Multivariate
multiObject <- multiFunData(object, funData(argvals = 1:3, X = rbind(3:5, 6:8)))
nObs(multiObject)
nObsPoints Get the number of observation points
Description
This function returns the number of observation points in an object of class funData, multiFunData
or irregFunData.
Usage
nObsPoints(object)
Arguments
object An object of class funData, multiFunData or irregFunData.
Details
Depending on the class of object, the function returns different values:
• If object is of class funData, the function returns a vector of length dimSupp(object),
giving the number of observations in each dimension.
• If object is of class multiFunData, the function returns a list of the same length as object,
where the j-th entry is a vector, corresponding to the observation points of object[[j]].
• If object is of class irregFunData, the function returns an array of length nObs(object),
where the j-th entry corresponds to the number of observations in the j-th observed function.
Value
The number of observation points in object. See Details.
Warning
Do not confuse this with nObs, which returns the number of observations (i.e. the number of observed
functions) in an object of a functional data class.
See Also
irregFunData, extractObs
Examples
# Univariate (one-dimensional)
object1 <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
nObsPoints(object1)
# Univariate (two-dimensional)
object2 <- funData(argvals = list(1:5, 1:6), X = array(1:60, dim = c(2, 5, 6)))
nObsPoints(object2)
# Multivariate
multiObject <- multiFunData(object1, object2)
nObsPoints(multiObject)
# Univariate (irregular)
irregObject <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
nObsPoints(irregObject)
norm Calculate the norm of functional data
Description
This function calculates the norm for each observation of a funData, irregFunData or multiFunData
object.
Arguments
object An object of class funData, irregFunData or multiFunData.
... Further parameters (see Details).
Details
For funData objects, the standard L2 norm is calculated:
||f || = ( ∫_T f (t)² dt )^(1/2).
For irregFunData objects, each observed function is integrated only on the observed grid points
(unless fullDom = TRUE).
The (weighted) norm of a multivariate functional data object f = (f1 , . . . , fp ) is defined as
|||f ||| := ( Σ_{j=1}^p w_j ||f_j ||² )^(1/2).
Further parameters passed to this function may include:
• squared: Logical. If TRUE (default), the function calculates the squared norm, otherwise the
result is not squared.
• obs: A numeric vector, giving the indices of the observations, for which the norm is to be
calculated. Defaults to all observations.
• method: A character string, giving the integration method to be used. See integrate for
details.
• weight: An optional vector of weights for the scalar product; particularly useful for multivari-
ate functional data, where each entry can be weighted in the scalar product / norm. Defaults
to 1 for each element.
• fullDom: Logical. If object is of class irregFunData and fullDom = TRUE, all functions are
extrapolated to the same domain. Defaults to FALSE. See integrate for details.
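A brief sketch of the squared option described above:
object <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
all.equal(sqrt(norm(object)), norm(object, squared = FALSE)) # default is squared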
Value
A numeric vector representing the norm of each observation.
Warning
The function is currently implemented only for functional data with one- and two-dimensional
domains.
See Also
funData, irregFunData, multiFunData, integrate
Examples
# Univariate
object <- funData(argvals = 1:5, X = rbind(1:5, 6:10))
norm(object)
# Univariate (irregular)
irregObject <- irregFunData(argvals = list(1:5, 2:4), X = list(2:6, 3:5))
norm(irregObject) # no extrapolation
norm(irregObject, fullDom = TRUE) # extrapolation (of second function)
# Multivariate
multiObject <- multiFunData(object, funData(argvals = 1:3, X = rbind(3:5, 6:8)))
norm(multiObject)
norm(multiObject, weight = c(2,1)) # with weight vector, giving more weight to the first element
plot.funData Plotting univariate functional data
Description
This function plots observations of univariate functional data on their domain.
Usage
plot.funData(
x,
y,
obs = seq_len(nObs(x)),
type = "l",
lty = 1,
lwd = 1,
col = NULL,
xlab = "argvals",
ylab = "",
legend = TRUE,
plotNA = FALSE,
add = FALSE,
...
)
## S4 method for signature 'funData,missing'
plot(x, y, ...)
Arguments
x An object of class funData.
y Missing.
obs A vector of numerics giving the observations to plot. Defaults to all observations
in x. For two-dimensional functions (images) obs must have length 1.
type The type of plot. Defaults to "l" (line plot). See plot for details.
lty The line type. Defaults to 1 (solid line). See par for details.
lwd The line width. Defaults to 1. See par for details.
col The color of the functions. If not supplied (NULL, default value), one-dimensional
functions are plotted in the rainbow palette and two-dimensional functions are
plotted using tim.colors from the fields package.
xlab, ylab The titles for x- and y-axis. Defaults to "argvals" for the x-axis and no title for
the y-axis. See plot for details.
legend Logical. If TRUE, a color legend is plotted for two-dimensional functions (im-
ages). Defaults to TRUE.
plotNA Logical. If TRUE, missing values are interpolated using the approxNA function
(only for one-dimensional functions). Defaults to FALSE.
add Logical. If TRUE, add to current plot (only for one-dimensional functions). De-
faults to FALSE.
... Additional arguments to matplot (one-dimensional functions) or image.plot/
image (two-dimensional functions).
Details
If some observations contain missing values (coded via NA), the functions can be interpolated using
the option plotNA = TRUE. This option relies on the na.approx function in package zoo and is
currently implemented for one-dimensional functions only in the function approxNA.
Warning
The function is currently implemented only for functional data with one- and two-dimensional
domains.
See Also
funData, matplot, image.plot, image
Examples
oldpar <- par(no.readonly = TRUE)
# One-dimensional
argvals <- seq(0,2*pi,0.01)
object <- funData(argvals,
outer(seq(0.75, 1.25, length.out = 11), sin(argvals)))
plot(object, main = "One-dimensional functional data")
# Two-dimensional
X <- array(0, dim = c(2, length(argvals), length(argvals)))
X[1,,] <- outer(argvals, argvals, function(x,y){sin((x-pi)^2 + (y-pi)^2)})
X[2,,] <- outer(argvals, argvals, function(x,y){sin(2*x*pi) * cos(2*y*pi)})
object2D <- funData(list(argvals, argvals), X)
plot(object2D, main = "Two-dimensional functional data (obs 1)", obs = 1)
plot(object2D, main = "Two-dimensional functional data (obs 2)", obs = 2)
## Not run: plot(object2D, main = "Two-dimensional functional data") # must specify obs!
### More examples ###
par(mfrow = c(1,1))
# using plotNA
if(requireNamespace("zoo", quietly = TRUE))
{
objectMissing <- funData(1:5, rbind(c(1, NA, 5, 4, 3), c(10, 9, NA, NA, 6)))
par(mfrow = c(1,2))
plot(objectMissing, type = "b", pch = 20, main = "plotNA = FALSE") # the default
plot(objectMissing, type = "b", pch = 20, plotNA = TRUE, main = "plotNA = TRUE") # requires zoo
}
# Changing colors
plot(object, main = "1D functional data in grey", col = "grey")
plot(object, main = "1D functional data in heat.colors", col = heat.colors(nObs(object)))
plot(object2D, main = "2D functional data in topo.colors", obs = 1, col = topo.colors(64))
par(oldpar)
plot.irregFunData Plotting irregular functional data
Description
This function plots observations of irregular functional data on their domain.
Usage
plot.irregFunData(
x,
y,
obs = seq_len(nObs(x)),
type = "b",
pch = 20,
col = grDevices::rainbow(length(obs)),
xlab = "argvals",
ylab = "",
xlim = range(x@argvals[obs]),
ylim = range(x@X[obs]),
log = "",
add = FALSE,
...
)
## S4 method for signature 'irregFunData,missing'
plot(x, y, ...)
Arguments
x An object of class irregFunData.
y Missing.
obs A vector of numerics giving the observations to plot. Defaults to all observations
in x.
type The type of plot. Defaults to "b" (line and point plot). See plot for details.
pch The point type. Defaults to 20 (solid small circles). See par for details.
col The color of the functions. Defaults to the rainbow palette.
xlab, ylab The titles for x- and y-axis. Defaults to "argvals" for the x-axis and no title for
the y-axis. See plot for details.
xlim, ylim The limits for x- and y-axis. Defaults to the total range of the data to be plotted.
See plot for details.
log A character string, specifying the axis that is to be logarithmic. Can be "" (non-
logarithmic axis, the default), "x", "y", "xy" or "yx". See plot.default for
details. This parameter is ignored, if add = TRUE.
add Logical. If TRUE, add to current plot (only for one-dimensional functions). De-
faults to FALSE.
... Additional arguments to plot.
See Also
plot.funData, irregFunData, plot
Examples
oldpar <- par(no.readonly = TRUE)
# Generate data
argvals <- seq(0,2*pi,0.01)
ind <- replicate(5, sort(sample(1:length(argvals), sample(5:10,1))))
object <- irregFunData(argvals = lapply(ind, function(i){argvals[i]}),
X = lapply(ind, function(i){sample(1:10,1) / 10 * argvals[i]^2}))
plot(object, main = "Irregular functional data")
par(oldpar)
plot.multiFunData Plotting multivariate functional data
Description
This function plots observations of multivariate functional data on their domain. The graphic device
is split in a number of subplots (specified by dim) via mfrow (par) and the univariate elements are
plotted using plot.
Usage
plot.multiFunData(
x,
y,
obs = seq_len(nObs(x)),
dim = seq_len(length(x)),
par.plot = NULL,
main = names(x),
xlab = "argvals",
ylab = "",
log = "",
ylim = NULL,
...
)
## S4 method for signature 'multiFunData,missing'
plot(x, y, ...)
Arguments
x An object of class multiFunData.
y Missing.
obs A vector of numerics giving the observations to plot. Defaults to all observations
in x. For two-dimensional functions (images) obs must have length 1.
dim The dimensions to plot. Defaults to seq_len(length(x)), i.e. all elements in x are plotted.
par.plot Graphic parameters to be passed to the plotting regions. The option mfrow is
ignored. Defaults to NULL. See par for details.
main A string vector, giving the title of the plot. Can have the same length as dim
(different titles for each dimension) or length 1 (one title for all dimensions).
Defaults to names(x).
xlab, ylab The titles for x- and y-axis. Defaults to "argvals" for the x-axis and no title
for the y-axis for all elements. Can be supplied as a vector of the same length
as dim (one x-/y-lab for each element) or a single string that is applied for all
elements. See plot for details.
log A character string, specifying the axis that is to be logarithmic. Can be "" (non-
logarithmic axis), "x", "y", "xy" or "yx". Defaults to "" for all plots. Can be
supplied as a vector of the same length as dim (one log-specification for each
element) or a single string that is applied for all elements. See plot.default
for details.
ylim Specifies the limits of the y-axis. Can be either NULL (the default, limits are
chosen automatically), a vector of length 2 (giving the minimum and maximum
range for all elements at the same time) or a list of the same length as dim
(specifying the limits for each element separately).
... Additional arguments to plot.
Warning
The function is currently implemented only for functional data with one- and two-dimensional
domains.
See Also
funData, multiFunData, plot.funData
Examples
oldpar <- par(no.readonly = TRUE)
argvals <- seq(0, 2*pi, 0.1)
# One-dimensional elements
f1 <- funData(argvals, outer(seq(0.75, 1.25, length.out = 11), sin(argvals)))
f2 <- funData(argvals, outer(seq(0.75, 1.25, length.out = 11), cos(argvals)))
m1 <- multiFunData(f1, f2)
plot(m1, main = c("1st element", "2nd element")) # different titles
plot(m1, main = "Multivariate Functional Data") # one title for all
# Mixed-dimensional elements
X <- array(0, dim = c(11, length(argvals), length(argvals)))
X[1,,] <- outer(argvals, argvals, function(x,y){sin((x-pi)^2 + (y-pi)^2)})
g <- funData(list(argvals, argvals), X)
m2 <- multiFunData(f1, g)
# different titles and labels
plot(m2, main = c("1st element", "2nd element"), obs = 1,
xlab = c("xlab1", "xlab2"),
ylab = "one ylab for all")
# one title for all
plot(m2, main = "Multivariate Functional Data", obs = 1)
## Not run: plot(m2, main = c("1st element", "2nd element")) # must specify obs!
par(oldpar)
scalarProduct Calculate the scalar product for functional data objects
Description
This function calculates the scalar product between two objects of the class funData, irregFunData
and multiFunData. For univariate functions f, g on a domain T , the scalar product is defined as
∫_T f (t) g(t) dt,
and for multivariate functions f, g on domains T1 , . . . , Tp , it is defined as
Σ_{j=1}^p ∫_{T_j} f^(j) (t) g^(j) (t) dt.
As seen in the formula, the objects must be defined on the same domain. The scalar product is
calculated pairwise for all observations, thus the objects must also have the same number of ob-
servations or one object may have only one observation (for which the scalar product is calculated
with all observations of the other object). Objects of the classes funData and irregFunData can
be combined, see integrate for details.
Usage
scalarProduct(object1, object2, ...)
Arguments
object1, object2
Two objects of classfunData, irregFunData or multiFunData, for that the
scalar product is to be calculated.
... Additional parameters passed to integrate. For multiFunData objects, one
can also pass a weight argument. See Details.
Details
For multiFunData one can pass an optional vector weight for calculating a weighted scalar prod-
uct. This vector must have the same number of elements as the multiFunData objects and have to
be non-negative with at least one weight that is different from 0. Defaults to 1 for each element.
See also norm.
Value
A vector of length nObs(object1) (or nObs(object2), if object1 has only one observation),
containing the pairwise scalar product for each observation.
See Also
integrate, norm,
Examples
# create two funData objects with 5 observations on [0,1]
f <- simFunData(N = 5, M = 7, eValType = "linear",
eFunType = "Fourier", argvals = seq(0,1,0.01))$simData
g <- simFunData(N = 5, M = 4, eValType = "linear",
eFunType = "Poly", argvals = seq(0,1,0.01))$simData
# calculate the scalar product
scalarProduct(f,g)
# the scalar product of an object with itself equals the squared norm
all.equal(scalarProduct(f,f), norm(f, squared = TRUE))
# This works of course also for multiFunData objects...
m <- multiFunData(f,g)
all.equal(scalarProduct(m,m), norm(m, squared = TRUE))
# ...and for irregFunData objects
i <- as.irregFunData(sparsify(f, minObs = 5, maxObs = 10))
all.equal(scalarProduct(i,i), norm(i, squared = TRUE))
# Scalar product between funData and irregFunData objects
scalarProduct(i,f)
# Weighted scalar product for multiFunData objects
scalarProduct(m,m, weight = c(1,2))
simFunData Simulate univariate functional data
Description
This function simulates (univariate) functional data f1 , . . . , fN based on a truncated Karhunen-
Loeve representation:
f_i (t) = Σ_{m=1}^M ξ_{i,m} φ_m (t)
on one- or higher-dimensional domains. The eigenfunctions (basis functions) φm (t) are gener-
ated using eFun, the scores ξi,m are simulated independently from a normal distribution with zero
mean and decreasing variance based on the eVal function. For higher-dimensional domains, the
eigenfunctions are constructed as tensors of marginal orthonormal function systems.
Usage
simFunData(argvals, M, eFunType, ignoreDeg = NULL, eValType, N)
Arguments
argvals A numeric vector, containing the observation points (a fine grid on a real in-
terval) of the functional data that is to be simulated or a list of the marginal
observation points.
M An integer, giving the number of univariate basis functions to use. For higher-
dimensional data, M is a vector with the marginal number of eigenfunctions. See
Details.
eFunType A character string specifying the type of univariate orthonormal basis functions
to use. For data on higher-dimensional domains, eFunType can be a vector,
specifying the marginal type of eigenfunctions to use in the tensor product. See
eFun for details.
ignoreDeg A vector of integers, specifying the degrees to ignore when generating the uni-
variate orthonormal bases. Defaults to NULL. For higher-dimensional data, ignoreDeg
can be supplied as list with vectors for each marginal. See eFun for details.
eValType A character string, specifying the type of eigenvalues/variances used for the
generation of the simulated functions based on the truncated Karhunen-Loeve
representation. See eVal for details.
N An integer, specifying the number of multivariate functions to be generated.
Value
simData A funData object with N observations, representing the simulated functional
data.
trueFuns A funData object with M observations, representing the true eigenfunction basis
used for simulating the data.
trueVals A vector of numerics, representing the true eigenvalues used for simulating the
data.
See Also
funData, eFun, eVal, addError, sparsify
Examples
oldPar <- par(no.readonly = TRUE)
# Use Legendre polynomials as eigenfunctions and a linear eigenvalue decrease
test <- simFunData(seq(0,1,0.01), M = 10, eFunType = "Poly", eValType = "linear", N = 10)
plot(test$trueFuns, main = "True Eigenfunctions")
plot(test$simData, main = "Simulated Data")
# The use of ignoreDeg for eFunType = "PolyHigh"
test <- simFunData(seq(0,1,0.01), M = 4, eFunType = "Poly", eValType = "linear", N = 10)
test_noConst <- simFunData(seq(0,1,0.01), M = 4, eFunType = "PolyHigh",
ignoreDeg = 1, eValType = "linear", N = 10)
test_noLinear <- simFunData(seq(0,1,0.01), M = 4, eFunType = "PolyHigh",
ignoreDeg = 2, eValType = "linear", N = 10)
test_noBoth <- simFunData(seq(0,1,0.01), M = 4, eFunType = "PolyHigh",
ignoreDeg = 1:2, eValType = "linear", N = 10)
par(mfrow = c(2,2))
plot(test$trueFuns, main = "Standard polynomial basis (M = 4)")
plot(test_noConst$trueFuns, main = "No constant basis function")
plot(test_noLinear$trueFuns, main = "No linear basis function")
plot(test_noBoth$trueFuns, main = "Neither linear nor constant basis function")
# Higher-dimensional domains
simImages <- simFunData(argvals = list(seq(0,1,0.01), seq(-pi/2, pi/2, 0.02)),
M = c(5,4), eFunType = c("Wiener","Fourier"), eValType = "linear", N = 4)
for(i in 1:4)
plot(simImages$simData, obs = i, main = paste("Observation", i))
par(oldPar)
simMultiFunData Simulate multivariate functional data
Description
This function provides a unified simulation structure for multivariate functional data f1 , . . . , fN on
one- or two-dimensional domains, based on a truncated multivariate Karhunen-Loeve representa-
tion:
f_i (t) = Σ_{m=1}^M ρ_{i,m} ψ_m (t).
The multivariate eigenfunctions (basis functions) ψm are constructed from univariate orthonormal
bases. There are two different concepts for the construction, that can be chosen by the parameter
type: A split orthonormal basis (split, only one-dimensional domains) and weighted univari-
ate orthonormal bases (weighted, one- and two-dimensional domains). The scores ρi,m in the
Karhunen-Loeve representation are simulated independently from a normal distribution with zero
mean and decreasing variance. See Details.
Usage
simMultiFunData(type, argvals, M, eFunType, ignoreDeg = NULL, eValType, N)
Arguments
type A character string, specifying the construction method for the multivariate eigen-
functions (either "split" or "weighted"). See Details.
argvals A list, containing the observation points for each element of the multivariate
functional data that is to be simulated. The length of argvals determines the
number of elements in the resulting simulated multivariate functional data. See
Details.
M An integer (type = "split") or a list of integers (type = "weighted"), giving
the number of univariate basis functions to use. See Details.
eFunType A character string (type = "split") or a list of character strings (type = "weighted"),
specifying the type of univariate orthonormal basis functions to use. See Details.
ignoreDeg A vector of integers (type = "split") or a list of integer vectors (type = "weighted"),
specifying the degrees to ignore when generating the univariate orthonormal
bases. Defaults to NULL. See Details.
eValType A character string, specifying the type of eigenvalues/variances used for the
simulation of the multivariate functions based on the truncated Karhunen-Loeve
representation. See eVal for details.
N An integer, specifying the number of multivariate functions to be generated.
Details
The parameter type defines how the eigenfunction basis for the multivariate Karhunen-Loeve rep-
resentation is constructed:
• type = "split": The basis functions of an underlying ’big’ orthonormal basis are split in
M parts, translated and possibly reflected. This yields an orthonormal basis of multivariate
functions with M elements. This option is implemented only for one-dimensional domains.
• type = "weighted": The multivariate eigenfunction basis consists of weighted univariate or-
thonormal bases. This yields an orthonormal basis of multivariate functions with M elements.
For data on two-dimensional domains (images), the univariate basis is constructed as a tensor
product of univariate bases in each direction (x- and y-direction).
Depending on type, the other parameters have to be specified as follows:
Split ’big’ orthonormal basis: The parameters M (integer), eFunType (character string) and
ignoreDeg (integer vector or NULL) are passed to the function eFun to generate a univariate or-
thonormal basis on a ’big’ interval. Subsequently, the basis functions are split and translated, such
that the j-th part of the split function is defined on the interval corresponding to argvals[[j]].
The elements of the multivariate basis functions are given by these split parts of the original basis
functions multiplied by a random sign σj ∈ {−1, 1}, j = 1, . . . , p.
Weighted orthonormal bases: The parameters argvals, M,eFunType and ignoreDeg are all
lists of a similar structure. They are passed element-wise to the function eFun to generate or-
thonormal basis functions for each element of the multivariate functional data to be simulated. In
case of bivariate elements (images), the corresponding basis functions are constructed as tensor
products of orthonormal basis functions in each direction (x- and y-direction).
If the j-th element of the simulated data should be defined on a one-dimensional domain, then
• argvals[[j]] is a list, containing one vector of observation points.
• M[[j]] is an integer, specifying the number of basis functions to use for this entry.
• eFunType[[j]] is a character string, specifying the type of orthonormal basis functions to
use for this entry (see eFun for possible options).
• ignoreDeg[[j]] is a vector of integers, specifying the degrees to ignore when constructing
the orthonormal basis functions. The default value is NULL.
If the j-th element of the simulated data should be defined on a two-dimensional domain, then
• argvals[[j]] is a list, containing two vectors of observation points, one for each direction
(observation points in x-direction and in y-direction).
• M[[j]] is a vector of two integers, giving the number of basis functions for each direction
(x- and y-direction).
• eFunType[[j]] is a vector of two character strings, giving the type of orthonormal basis
functions for each direction (x- and y-direction, see eFun for possible options). The corre-
sponding basis functions are constructed as tensor products of orthonormal basis functions in
each direction.
• ignoreDeg[[j]] is a list, containing two integer vectors that specify the degrees to ignore
when constructing the orthonormal basis functions in each direction. The default value is
NULL.
The total number of basis functions (i.e. the product of the entries in M[[j]]) must be the same for all elements j!
Value
simData A multiFunData object with N observations, representing the simulated multi-
variate functional data.
trueFuns A multiFunData object with M observations, representing the multivariate eigen-
function basis used for simulating the data.
trueVals A vector of numerics, representing the eigenvalues used for simulating the data.
References
<NAME>, <NAME> (2018): Multivariate Functional Principal Component Analysis for Data Ob-
served on Different (Dimensional) Domains. Journal of the American Statistical Association,
113(522): 649-659.
See Also
multiFunData, eFun, eVal, simFunData, addError, sparsify.
Examples
oldPar <- par(no.readonly = TRUE)
# split
split <- simMultiFunData(type = "split", argvals = list(seq(0,1,0.01), seq(-0.5,0.5,0.02)),
M = 5, eFunType = "Poly", eValType = "linear", N = 7)
par(mfrow = c(1,2))
plot(split$trueFuns, main = "Split: True Eigenfunctions", ylim = c(-2,2))
plot(split$simData, main = "Split: Simulated Data")
# weighted (one-dimensional domains)
weighted1D <- simMultiFunData(type = "weighted",
argvals = list(list(seq(0,1,0.01)), list(seq(-0.5,0.5,0.02))),
M = c(5,5), eFunType = c("Poly", "Fourier"), eValType = "linear", N = 7)
plot(weighted1D$trueFuns, main = "Weighted (1D): True Eigenfunctions", ylim = c(-2,2))
plot(weighted1D$simData, main = "Weighted (1D): Simulated Data")
# weighted (one- and two-dimensional domains)
weighted <- simMultiFunData(type = "weighted",
argvals = list(list(seq(0,1,0.01), seq(0,10,0.1)), list(seq(-0.5,0.5,0.01))),
M = list(c(5,4), 20), eFunType = list(c("Poly", "Fourier"), "Wiener"),
eValType = "linear", N = 7)
plot(weighted$trueFuns, main = "Weighted: True Eigenfunctions (m = 2)", obs = 2)
plot(weighted$trueFuns, main = "Weighted: True Eigenfunctions (m = 15)", obs = 15)
plot(weighted$simData, main = "Weighted: Simulated Data (1st observation)", obs = 1)
plot(weighted$simData, main = "Weighted: Simulated Data (2nd observation)", obs = 2)
par(oldPar)
sparsify Generate a sparse version of functional data objects
Description
This function generates an artificially sparsified version of a functional data object of class funData
(univariate) or multiFunData (multivariate). The minimal and maximal number of observation
points for all observations can be supplied by the user.
Usage
sparsify(funDataObject, minObs, maxObs)
Arguments
funDataObject A functional data object of class funData or multiFunData.
minObs, maxObs The minimal/maximal number of observation points. Must be a scalar for uni-
variate functional data (funData class) or a vector of the same length as funDataObject
for multivariate functional data (multiFunData class), giving the minimal/maximal
number of observations for each element. See Details.
Details
The technique for artificially sparsifying the data is as described in Yao et al. (2005): For each
element x_i^(j) of an observed (multivariate) functional data object x_i , a random number R_i^(j) ∈
{minObs, . . . , maxObs} of observation points is generated. The points are sampled uniformly
from the full grid {t_{j,1} , . . . , t_{j,S_j}} ⊂ T_j , resulting in observations
x_{i,r}^(j) = x_i^(j) (t_{j,r} ), r = 1, . . . , R_i^(j) , j = 1, . . . , p.
Value
An object of the same class as funDataObject, which is a sparse version of the original data.
Warning
This function is currently implemented for 1D data only.
References
<NAME>., <NAME> and <NAME> (2005): Functional Data Analysis for Sparse Longitudinal
Data. Journal of the American Statistical Association, 100 (470), 577–590.
See Also
funData, multiFunData, simFunData, simMultiFunData, addError.
Examples
oldPar <- par(no.readonly = TRUE)
par(mfrow = c(1,1))
set.seed(1)
# univariate functional data
full <- simFunData(argvals = seq(0,1, 0.01), M = 10, eFunType = "Fourier",
eValType = "linear", N = 3)$simData
sparse <- sparsify(full, minObs = 4, maxObs = 10)
plot(full, main = "Sparsify")
plot(sparse, type = "p", pch = 20, add = TRUE)
legend("topright", c("Full", "Sparse"), lty = c(1, NA), pch = c(NA, 20))
# Multivariate
full <- simMultiFunData(type = "split", argvals = list(seq(0,1, 0.01), seq(-.5,.5, 0.02)),
M = 10, eFunType = "Fourier", eValType = "linear", N = 3)$simData
sparse <- sparsify(full, minObs = c(4, 30), maxObs = c(10, 40))
par(mfrow = c(1,2))
plot(full[[1]], main = "Sparsify (multivariate)", sub = "minObs = 4, maxObs = 10")
plot(sparse[[1]], type = "p", pch = 20, add = TRUE)
plot(full[[2]], main = "Sparsify (multivariate)", sub = "minObs = 30, maxObs = 40")
plot(sparse[[2]], type = "p", pch = 20, add = TRUE)
legend("bottomright", c("Full", "Sparse"), lty = c(1, NA), pch = c(NA, 20))
par(oldPar)
tensorProduct Tensor product for univariate functions on one-dimensional domains
Description
This function calculates tensor product functions for up to three objects of class funData defined
on one-dimensional domains.
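For orientation (a standard definition, not quoted from the package documentation): for univariate functions $f$ on $\mathcal{X}$ and $g$ on $\mathcal{Y}$, the tensor product is the bivariate function
$$(f \otimes g)(x, y) = f(x)\,g(y), \qquad x \in \mathcal{X},\ y \in \mathcal{Y},$$
and the three-function case extends this pointwise to $f(x)\,g(y)\,h(z)$.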
Usage
tensorProduct(...)
Arguments
... Two or three objects of class funData, each of which must be defined on a one-dimensional domain.
Value
An object of class funData that corresponds to the tensor product of the input functions.
Warning
The function is only implemented for up to three functions on one-dimensional domains.
See Also
funData
Examples
### Tensor product of two functional data objects
x <- seq(0, 2*pi, 0.1)
f1 <- funData(x, outer(seq(0.75, 1.25, 0.1), sin(x)))
y <- seq(-pi, pi, 0.1)
f2 <- funData(y, outer(seq(0.25, 0.75, 0.1), sin(y)))
plot(f1, main = "f1")
plot(f2, main = "f2")
tP <- tensorProduct(f1, f2)
dimSupp(tP)
plot(tP, obs = 1)
### Tensor product of three functional data objects
z <- seq(-1, 1, 0.05)
f3 <- funData(z, outer(seq(0.75, 1.25, 0.1), z^2))
plot(f1, main = "f1")
plot(f2, main = "f2")
plot(f3, main = "f3")
tP2 <- tensorProduct(f1, f2, f3)
dimSupp(tP2)
Crate actix_web
===
Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust.
Examples
---
```
use actix_web::{get, web, App, HttpServer, Responder};
#[get("/hello/{name}")]
async fn greet(name: web::Path<String>) -> impl Responder {
format!("Hello {}!", name)
}
#[actix_web::main] // or #[tokio::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| {
App::new().service(greet)
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}
```
Documentation & Community Resources
---
In addition to this API documentation, several other resources are available:
* Website & User Guide
* Examples Repository
* Community Chat on Discord
To get started navigating the API docs, you may consider looking at the following pages first:
* `App`: This struct represents an Actix Web application and is used to configure routes and other common application settings.
* `HttpServer`: This struct represents an HTTP server instance and is used to instantiate and configure servers.
* `web`: This module provides essential types for route registration as well as common utilities for request handlers.
* `HttpRequest` and `HttpResponse`: These structs represent HTTP requests and responses and expose methods for creating, inspecting,
and otherwise utilizing them.
Features
---
* Supports HTTP/1.x and HTTP/2
* Streaming and pipelining
* Powerful request routing with optional macros
* Full Tokio compatibility
* Keep-alive and slow requests handling
* Client/server WebSockets support
* Transparent content compression/decompression (br, gzip, deflate, zstd)
* Multipart streams
* Static assets
* SSL support using OpenSSL or Rustls
* Middlewares (Logger, Session, CORS, etc)
* Integrates with the `awc` HTTP client
* Runs on stable Rust 1.54+
Crate Features
---
* `cookies` - cookies support (enabled by default)
* `macros` - routing and runtime macros (enabled by default)
* `compress-brotli` - brotli content encoding compression support (enabled by default)
* `compress-gzip` - gzip and deflate content encoding compression support (enabled by default)
* `compress-zstd` - zstd content encoding compression support (enabled by default)
* `openssl` - HTTPS support via `openssl` crate, supports `HTTP/2`
* `rustls` - HTTPS support via `rustls` crate, supports `HTTP/2`
* `secure-cookies` - secure cookies support
Re-exports
---
* `pub use crate::error::Error;`
* `pub use crate::error::ResponseError;`
Modules
---
* `body`: Traits and structures to aid consuming and writing HTTP payloads.
* `cookie` (crate feature `cookies`): HTTP cookie parsing and cookie jar management.
* `dev`: Lower-level types and re-exports.
* `error`: Error and Result module.
* `guard`: Route guards.
* `http`: Various HTTP related types.
* `middleware`: A collection of common middleware.
* `rt`: A selection of re-exports from `tokio` and `actix-rt`.
* `test`: Various helpers for Actix applications to use during testing.
* `web`: Essential helper functions and types for application registration.
Macros
---
* `services`: Macro to help register different types of services at the same time.
Structs
---
* `App`: The top-level builder for an Actix Web application.
* `CustomizeResponder`: Allows overriding status code and headers for a `Responder`.
* `HttpRequest`: An incoming request.
* `HttpResponse`: An outgoing response.
* `HttpResponseBuilder`: An HTTP response builder.
* `HttpServer`: An HTTP server.
* `Resource`: A collection of `Route`s that respond to the same path pattern.
* `Route`: A request handler with guards.
* `Scope`: A collection of `Route`s, `Resource`s, or other services that share a common path prefix.
Enums
---
* `Either`: Combines two extractor or responder types into a single type.
Traits
---
* `FromRequest`: A type that implements `FromRequest` is called an **extractor** and can extract data from the request. Some types that implement this trait are: `Json`, `Header`, and `Path`.
* `Handler`: The interface for request handlers.
* `HttpMessage`: Trait that implements general purpose operations on HTTP messages.
* `Responder`: Trait implemented by types that can be converted to an HTTP response.
Type Aliases
---
* `Result`: A convenience `Result` for Actix Web operations.
Attribute Macros
---
* `connect` (crate feature `macros`): Creates route handler with `actix_web::guard::Connect`.
* `delete` (crate feature `macros`): Creates route handler with `actix_web::guard::Delete`.
* `get` (crate feature `macros`): Creates route handler with `actix_web::guard::Get`.
* `head` (crate feature `macros`): Creates route handler with `actix_web::guard::Head`.
* `main` (crate feature `macros`): Marks async main function as the Actix Web system entry-point.
* `options` (crate feature `macros`): Creates route handler with `actix_web::guard::Options`.
* `patch` (crate feature `macros`): Creates route handler with `actix_web::guard::Patch`.
* `post` (crate feature `macros`): Creates route handler with `actix_web::guard::Post`.
* `put` (crate feature `macros`): Creates route handler with `actix_web::guard::Put`.
* `route` (crate feature `macros`): Creates resource handler, allowing multiple HTTP method guards.
* `routes` (crate feature `macros`): Creates resource handler, allowing multiple HTTP methods and paths.
* `test` (crate feature `macros`): Marks async test functions to use the Actix Web system entry-point.
* `trace` (crate feature `macros`): Creates route handler with `actix_web::guard::Trace`.
Struct actix_web::App
===
```
pub struct App<T> { /* private fields */ }
```
The top-level builder for an Actix Web application.
Implementations
---
### impl App<AppEntry>
#### pub fn new() -> Self
Create application builder. Application can be configured with a builder-like pattern.
### impl<T> App<T>where
T: ServiceFactory<ServiceRequest, Config = (), Error = Error, InitError = ()>,
#### pub fn app_data<U: 'static>(self, ext: U) -> Self
Set application (root level) data.
Application data stored with `App::app_data()` method is available through the
`HttpRequest::app_data` method at runtime.
##### `Data<T>`
Any `Data<T>` type added here can utilize its extractor implementation in handlers.
Types not wrapped in `Data<T>` cannot use this extractor. See its docs for more about its usage and patterns.
```
use std::cell::Cell;
use actix_web::{web, App, HttpRequest, HttpResponse, Responder};
struct MyData {
count: std::cell::Cell<usize>,
}
async fn handler(req: HttpRequest, counter: web::Data<MyData>) -> impl Responder {
// note this cannot use the Data<T> extractor because it was not added with it
let incr = *req.app_data::<usize>().unwrap();
assert_eq!(incr, 3);
// update counter using other value from app data
counter.count.set(counter.count.get() + incr);
HttpResponse::Ok().body(counter.count.get().to_string())
}
let app = App::new().service(
web::resource("/")
.app_data(3usize)
.app_data(web::Data::new(MyData { count: Default::default() }))
.route(web::get().to(handler))
);
```
##### Shared Mutable State
`HttpServer::new` accepts an application factory rather than an application instance; the factory closure is called on each worker thread independently.
Therefore, if you want to share a data object between different workers, a shareable object needs to be created first, outside the `HttpServer::new` closure and cloned into it.
`Data<T>` is an example of such a sharable object.
```
let counter = web::Data::new(AppStateWithCounter {
counter: Mutex::new(0),
});
HttpServer::new(move || {
// move counter object into the closure and clone for each worker
App::new()
.app_data(counter.clone())
.route("/", web::get().to(handler))
})
```
#### pub fn data<U: 'static>(self, data: U) -> Self
Deprecated since 4.0.0: use `.app_data(Data::new(val))` instead. Adds application (root) data after wrapping in `Data<T>`.
Deprecated in favor of `app_data`.
#### pub fn data_factory<F, Out, D, E>(self, data: F) -> Selfwhere
F: Fn() -> Out + 'static,
Out: Future<Output = Result<D, E>> + 'static,
D: 'static,
E: Debug,
Add application data factory that resolves asynchronously.
Data items are constructed during application initialization, before the server starts accepting requests.
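A minimal sketch of registering such a factory; `load_greeting` is a hypothetical initializer, not part of the documented API:
```
use actix_web::{web, App, Responder};
// Hypothetical async initializer, for illustration only.
async fn load_greeting() -> Result<String, std::io::Error> {
    Ok("hello from startup".to_owned())
}
async fn index(msg: web::Data<String>) -> impl Responder {
    msg.to_string()
}
let app = App::new()
    // the factory future is resolved during app initialization; its Ok
    // value is stored as app data (extractable here as `Data<String>`)
    .data_factory(|| load_greeting())
    .route("/", web::get().to(index));
```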
#### pub fn configure<F>(self, f: F) -> Selfwhere
F: FnOnce(&mut ServiceConfig),
Run external configuration as part of the application building process
This function is useful for moving parts of configuration to a different module or even library. For example,
some of the resource’s configuration could be moved to a different module.
```
use actix_web::{web, App, HttpResponse};
// this function could be located in a different module
fn config(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("/test")
.route(web::get().to(|| HttpResponse::Ok()))
.route(web::head().to(|| HttpResponse::MethodNotAllowed()))
);
}
App::new()
.configure(config) // <- register resources
.route("/index.html", web::get().to(|| HttpResponse::Ok()));
```
#### pub fn route(self, path: &str, route: Route) -> Self
Configure route for a specific path.
This is a simplified version of the `App::service()` method.
This method can be used multiple times with the same path; in that case, multiple resources with one route each are registered for the same resource path.
```
use actix_web::{web, App, HttpResponse};
async fn index(data: web::Path<(String, String)>) -> &'static str {
"Welcome!"
}
let app = App::new()
.route("/test1", web::get().to(index))
.route("/test2", web::post().to(|| HttpResponse::MethodNotAllowed()));
```
#### pub fn service<F>(self, factory: F) -> Selfwhere
F: HttpServiceFactory + 'static,
Register HTTP service.
An HTTP service is any type that implements the `HttpServiceFactory` trait.
Actix Web provides several service implementations:
* *Resource* is an entry in the resource table which corresponds to a requested URL.
* *Scope* is a set of resources with a common root path.
* “StaticFiles” is a service for static file support (see the sketch below).
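A minimal sketch (not from the original docs) registering a *Resource* and a *Scope*, both of which implement `HttpServiceFactory`:
```
use actix_web::{web, App, HttpResponse};
let app = App::new()
    // a single resource
    .service(web::resource("/ping").route(web::get().to(|| HttpResponse::Ok())))
    // a scope grouping resources under a common path prefix
    .service(
        web::scope("/api")
            .service(web::resource("/status").route(web::get().to(|| HttpResponse::Ok()))),
    );
```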
#### pub fn default_service<F, U>(self, svc: F) -> Selfwhere
F: IntoServiceFactory<U, ServiceRequest>,
U: ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse, Error = Error> + 'static,
U::InitError: Debug,
Default service that is invoked when no matching resource could be found.
You can use a `Route` as a default service.
If a default service is not registered, an empty `404 Not Found` response will be sent to the client instead.
##### Examples
```
use actix_web::{web, App, HttpResponse};
async fn index() -> &'static str {
"Welcome!"
}
let app = App::new()
.service(web::resource("/index.html").route(web::get().to(index)))
.default_service(web::to(|| HttpResponse::NotFound()));
```
#### pub fn external_resource<N, U>(self, name: N, url: U) -> Selfwhere
N: AsRef<str>,
U: AsRef<str>,
Register an external resource.
External resources are useful for URL generation purposes only and are never considered for matching at request time. Calls to
`HttpRequest::url_for()` will work as expected.
```
use actix_web::{web, App, HttpRequest, HttpResponse, Result};
async fn index(req: HttpRequest) -> Result<HttpResponse> {
let url = req.url_for("youtube", &["asdlkjqme"])?;
assert_eq!(url.as_str(), "https://youtube.com/watch/asdlkjqme");
Ok(HttpResponse::Ok().into())
}
let app = App::new()
.service(web::resource("/index.html").route(
web::get().to(index)))
.external_resource("youtube", "https://youtube.com/watch/{video_id}");
```
#### pub fn wrap<M, B>(
self,
mw: M
) -> App<impl ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse<B>, Error = Error, InitError = ()>>where
M: Transform<T::Service, ServiceRequest, Response = ServiceResponse<B>, Error = Error, InitError = ()> + 'static,
B: MessageBody,
Registers an app-wide middleware.
Registers middleware, in the form of a middleware component (type), that runs during inbound and/or outbound processing in the request life-cycle (request -> response),
modifying request/response as necessary, across all requests managed by the `App`.
Use middleware when you need to read or modify *every* request or response in some way.
Middleware can be applied similarly to individual `Scope`s and `Resource`s.
See `Scope::wrap` and `Resource::wrap`.
For more info on middleware take a look at the `middleware` module.
##### Examples
```
use actix_web::{middleware, web, App};
async fn index() -> &'static str {
"Welcome!"
}
let app = App::new()
.wrap(middleware::Logger::default())
.route("/index.html", web::get().to(index));
```
#### pub fn wrap_fn<F, R, B>(
self,
mw: F
) -> App<impl ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse<B>, Error = Error, InitError = ()>>where
F: Fn(ServiceRequest, &T::Service) -> R + Clone + 'static,
R: Future<Output = Result<ServiceResponse<B>, Error>>,
B: MessageBody,
Registers an app-wide function middleware.
`mw` is a closure that runs during inbound and/or outbound processing in the request life-cycle (request -> response), modifying request/response as necessary, across all requests handled by the `App`.
Use middleware when you need to read or modify *every* request or response in some way.
Middleware can also be applied to individual `Scope`s and `Resource`s.
See `App::wrap` for details on how middlewares compose with each other.
##### Examples
```
use actix_web::{dev::Service as _, middleware, web, App};
use actix_web::http::header::{CONTENT_TYPE, HeaderValue};
async fn index() -> &'static str {
"Welcome!"
}
let app = App::new()
.wrap_fn(|req, srv| {
let fut = srv.call(req);
async {
let mut res = fut.await?;
res.headers_mut()
.insert(CONTENT_TYPE, HeaderValue::from_static("text/plain"));
Ok(res)
}
})
.route("/index.html", web::get().to(index));
```
Auto Trait Implementations
---
### impl<T> !RefUnwindSafe for App<T### impl<T> !Send for App<T### impl<T> !Sync for App<T### impl<T> Unpin for App<T>where
T: Unpin,
### impl<T> !UnwindSafe for App<TBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct actix_web::HttpServer
===
```
pub struct HttpServer<F, I, S, B>where
F: Fn() -> I + Send + Clone + 'static,
I: IntoServiceFactory<S, Request>,
S: ServiceFactory<Request, Config = AppConfig>,
S::Error: Into<Error>,
S::InitError: Debug,
S::Response: Into<Response<B>>,
B: MessageBody,
{ /* private fields */ }
```
An HTTP Server.
Create new HTTP server with application factory.
Automatic HTTP Version Selection
---
There are two ways to select the HTTP version of an incoming connection:
* One is to rely on the ALPN information that is provided when using TLS (HTTPS); both versions are supported automatically when using either of the `.bind_rustls()` or
`.bind_openssl()` methods.
* The other is to read the first few bytes of the TCP stream. This is the only viable approach for supporting H2C, which allows the HTTP/2 protocol to work over plaintext connections. Use the `.bind_auto_h2c()` method to enable this behavior.
Examples
---
```
use actix_web::{web, App, HttpResponse, HttpServer};
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| {
App::new()
.service(web::resource("/").to(|| async { "hello world" }))
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}
```
Implementations
---
### impl<F, I, S, B> HttpServer<F, I, S, B>where
F: Fn() -> I + Send + Clone + 'static,
I: IntoServiceFactory<S, Request>,
S: ServiceFactory<Request, Config = AppConfig> + 'static,
S::Error: Into<Error> + 'static,
S::InitError: Debug,
S::Response: Into<Response<B>> + 'static,
<S::Service as Service<Request>>::Future: 'static,
S::Service: 'static,
B: MessageBody + 'static,
#### pub fn new(factory: F) -> Self
Create new HTTP server with application factory
#### pub fn workers(self, num: usize) -> Self
Sets number of workers to start (per bind address).
By default, the number of available physical CPUs is used as the worker count.
#### pub fn keep_alive<T: Into<KeepAlive>>(self, val: T) -> Self
Sets server keep-alive preference.
By default keep-alive is set to 5 seconds.
#### pub fn backlog(self, backlog: u32) -> Self
Sets the maximum number of pending connections.
This refers to the number of clients that can be waiting to be served. Exceeding this number results in the client getting an error when attempting to connect. It should only affect servers under significant load.
Generally set in the 64–2048 range. Default value is 2048.
This method will have no effect if called after a `bind()`.
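A minimal sketch showing how these builder methods chain; the values are illustrative, not recommendations:
```
use std::time::Duration;
use actix_web::{App, HttpServer};
let server = HttpServer::new(|| App::new())
    .workers(4)                          // worker threads per bind address
    .keep_alive(Duration::from_secs(75)) // keep-alive preference
    .backlog(1024);                      // pending-connection queue; set before bind()
```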
#### pub fn max_connections(self, num: usize) -> Self
Sets the per-worker maximum number of concurrent connections.
All socket listeners will stop accepting connections when this limit is reached for each worker.
By default, max connections is set to 25k.
#### pub fn max_connection_rate(self, num: usize) -> Self
Sets the per-worker maximum concurrent TLS connection limit.
All listeners will stop accepting connections when this limit is reached. It can be used to limit the global TLS CPU usage.
By default, the max concurrent TLS connection limit is set to 256.
#### pub fn worker_max_blocking_threads(self, num: usize) -> Self
Sets max number of threads for each worker’s blocking task thread pool.
One thread pool is set up **per worker**; not shared across workers.
By default set to 512 divided by the number of workers.
#### pub fn client_request_timeout(self, dur: Duration) -> Self
Sets server client timeout for first request.
Defines a timeout for reading the client request head. If a client does not transmit the entire set of headers within this time, the request is terminated with a 408 (Request Timeout) error.
To disable timeout set value to 0.
By default client timeout is set to 5000 milliseconds.
#### pub fn client_disconnect_timeout(self, dur: Duration) -> Self
Sets server connection shutdown timeout.
Defines a timeout for connection shutdown. If a shutdown procedure does not complete within this time, the request is dropped.
To disable timeout set value to 0.
By default client timeout is set to 5000 milliseconds.
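A minimal sketch of disabling both timeouts by passing a zero duration, as described above:
```
use std::time::Duration;
use actix_web::{App, HttpServer};
let server = HttpServer::new(|| App::new())
    .client_request_timeout(Duration::ZERO)     // no limit for reading the request head
    .client_disconnect_timeout(Duration::ZERO); // no limit for connection shutdown
```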
#### pub fn tls_handshake_timeout(self, dur: Duration) -> Self
Available on **crate features `openssl` or `rustls-0_20` or `rustls-0_21`** only. Sets TLS handshake timeout.
Defines a timeout for TLS handshake. If the TLS handshake does not complete within this time, the connection is closed.
By default, the handshake timeout is 3 seconds.
#### pub fn on_connect<CB>(self, f: CB) -> HttpServer<F, I, S, B>where
CB: Fn(&dyn Any, &mut Extensions) + Send + Sync + 'static,
Sets function that will be called once before each connection is handled.
It will receive a `&std::any::Any`, which contains underlying connection type and an Extensions container so that connection data can be accessed in middleware and handlers.
##### Connection Types
* `actix_tls::accept::openssl::TlsStream<actix_web::rt::net::TcpStream>` when using OpenSSL.
* `actix_tls::accept::rustls_0_20::TlsStream<actix_web::rt::net::TcpStream>` when using Rustls v0.20.
* `actix_tls::accept::rustls_0_21::TlsStream<actix_web::rt::net::TcpStream>` when using Rustls v0.21.
* `actix_web::rt::net::TcpStream` when no encryption is used.
See the `on_connect` example for additional details.
#### pub fn server_hostname<T: AsRef<str>>(self, val: T) -> Self
Sets server host name.
The hostname is used by the application router as a hostname for URL generation. Check `ConnectionInfo` docs for more info.
By default, hostname is set to “localhost”.
#### pub fn system_exit(self) -> Self
Flags the `System` to exit after server shutdown.
Does nothing when running under `#[tokio::main]` runtime.
#### pub fn disable_signals(self) -> Self
Disables signal handling.
#### pub fn shutdown_timeout(self, sec: u64) -> Self
Sets timeout for graceful shutdown of workers.
After receiving a stop signal, workers have this much time to finish serving requests.
Workers still alive after the timeout are force dropped.
By default, the shutdown timeout is set to 30 seconds.
#### pub fn addrs(&self) -> Vec<SocketAddr>
Returns addresses of bound sockets.
#### pub fn addrs_with_scheme(&self) -> Vec<(SocketAddr, &str)>
Returns addresses of bound sockets and the scheme for each.
This is useful when the server is bound from different sources with some sockets listening on HTTP and some listening on HTTPS and the user should be presented with an enumeration of which socket requires which protocol.
#### pub fn bind<A: ToSocketAddrs>(self, addrs: A) -> Result<Self>
Resolves socket address(es) and binds server to created listener(s).
##### Hostname Resolution
When `addr` includes a hostname, it is possible for this method to bind to both the IPv4 and IPv6 addresses that result from a DNS lookup. You can test this by passing `localhost:8080`
and noting that the server binds to `127.0.0.1:8080` *and* `[::1]:8080`. To bind additional addresses, call this method multiple times.
Note that, if a DNS lookup is required, resolving hostnames is a blocking operation.
##### Typical Usage
In general, use `127.0.0.1:<port>` when testing locally and `0.0.0.0:<port>` when deploying
(with or without a reverse proxy or load balancer) so that the server is accessible.
##### Errors
Returns an `io::Error` if:
* `addrs` cannot be resolved into one or more socket addresses;
* all the resolved socket addresses are already bound.
##### Example
```
HttpServer::new(|| App::new())
.bind(("127.0.0.1", 8080))?
.bind("[::1]:9000")?
```
#### pub fn bind_auto_h2c<A: ToSocketAddrs>(self, addrs: A) -> Result<Self>
Available on **crate feature `http2`** only. Resolves socket address(es) and binds server to created listener(s) for plaintext HTTP/1.x or HTTP/2 connections.
#### pub fn bind_rustls<A: ToSocketAddrs>(self, addrs: A, config: ServerConfig) -> Result<Self>
Available on **crate feature `rustls-0_20`** only. Resolves socket address(es) and binds server to created listener(s) for TLS connections using Rustls v0.20.
See `bind()` for more details on the `addrs` argument.
ALPN protocols “h2” and “http/1.1” are added to any configured ones.
#### pub fn bind_rustls_021<A: ToSocketAddrs>(self, addrs: A, config: ServerConfig) -> Result<Self>
Available on **crate feature `rustls-0_21`** only. Resolves socket address(es) and binds server to created listener(s) for TLS connections using Rustls v0.21.
See `bind()` for more details on the `addrs` argument.
ALPN protocols “h2” and “http/1.1” are added to any configured ones.
#### pub fn bind_openssl<A: ToSocketAddrs>(self, addrs: A, builder: SslAcceptorBuilder) -> Result<Self>
Available on **crate feature `openssl`** only. Resolves socket address(es) and binds server to created listener(s) for TLS connections using OpenSSL.
See `bind()` for more details on the `addrs` argument.
ALPN protocols “h2” and “http/1.1” are added to any configured ones.
#### pub fn listen(self, lst: TcpListener) -> Result<Self>
Binds to existing listener for accepting incoming connection requests.
No changes are made to `lst`’s configuration. Ensure it is configured properly before passing ownership to `listen()`.
#### pub fn listen_auto_h2c(self, lst: TcpListener) -> Result<Self>
Available on **crate feature `http2`** only. Binds to existing listener for accepting incoming plaintext HTTP/1.x or HTTP/2 connections.
#### pub fn listen_rustls(self, lst: TcpListener, config: ServerConfig) -> Result<Self>
Available on **crate feature `rustls-0_20`** only. Binds to existing listener for accepting incoming TLS connection requests using Rustls v0.20.
See `listen()` for more details on the `lst` argument.
ALPN protocols “h2” and “http/1.1” are added to any configured ones.
#### pub fn listen_rustls_0_21(self, lst: TcpListener, config: ServerConfig) -> Result<Self>
Available on **crate feature `rustls-0_21`** only. Binds to existing listener for accepting incoming TLS connection requests using Rustls v0.21.
See `listen()` for more details on the `lst` argument.
ALPN protocols “h2” and “http/1.1” are added to any configured ones.
#### pub fn listen_openssl(self, lst: TcpListener, builder: SslAcceptorBuilder) -> Result<Self>
Available on **crate feature `openssl`** only. Binds to existing listener for accepting incoming TLS connection requests using OpenSSL.
See `listen()` for more details on the `lst` argument.
ALPN protocols “h2” and “http/1.1” are added to any configured ones.
#### pub fn bind_uds<A: AsRef<Path>>(self, uds_path: A) -> Result<Self>
Available on **Unix** only. Opens a Unix Domain Socket (UDS) at `uds_path` and binds server to the created listener.
#### pub fn listen_uds(self, lst: UnixListener) -> Result<Self>
Available on **Unix** only. Binds to existing Unix Domain Socket (UDS) listener.
### impl<F, I, S, B> HttpServer<F, I, S, B>where
F: Fn() -> I + Send + Clone + 'static,
I: IntoServiceFactory<S, Request>,
S: ServiceFactory<Request, Config = AppConfig>,
S::Error: Into<Error>,
S::InitError: Debug,
S::Response: Into<Response<B>>,
S::Service: 'static,
B: MessageBody,
#### pub fn run(self) -> Server
Start listening for incoming connections.
##### Workers
This method starts a number of HTTP workers in separate threads. The number of workers in a set is defined by `workers()` or, by default, the number of the machine’s physical cores. One worker set is created for each socket address to be bound. For example,
if workers is set to 4, and there are 2 addresses to bind, then 8 worker threads will be spawned.
##### Panics
This method panics if no socket addresses were successfully bound or if no Tokio runtime is set up.
Auto Trait Implementations
---
### impl<F, I, S, B> !RefUnwindSafe for HttpServer<F, I, S, B>
### impl<F, I, S, B> Send for HttpServer<F, I, S, B> where B: Send, S: Send
### impl<F, I, S, B> !Sync for HttpServer<F, I, S, B>
### impl<F, I, S, B> Unpin for HttpServer<F, I, S, B> where B: Unpin, F: Unpin, S: Unpin
### impl<F, I, S, B> !UnwindSafe for HttpServer<F, I, S, B>
Blanket Implementations
---
The usual rustdoc blanket implementations apply: `Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `TryFrom`, `TryInto`, `VZip`, and `WithSubscriber`.
The `Server` returned by `run()` implements `Future` with output `Result<(), std::io::Error>`.
Module actix_web::web
===
Essential helper functions and types for application registration.
Request Extractors
---
* `Data`: Application data item
* `ReqData`: Request-local data item
* `Path`: URL path parameters / dynamic segments
* `Query`: URL query parameters
* `Header`: Typed header
* `Json`: JSON payload
* `Form`: URL-encoded payload
* `Bytes`: Raw payload
Responders
---
* `Json`: JSON response
* `Form`: URL-encoded response
* `Bytes`: Raw bytes response
* `Redirect`: Convenient redirect responses
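A minimal sketch (not part of the original module docs) combining several of the extractors and responders listed above in a single handler; the route and field names are illustrative:
```
use actix_web::{get, web, Responder};
use serde::Deserialize;
#[derive(Deserialize)]
struct Paging { page: u32 }
// `Path` extracts the dynamic `{name}` segment, `Query` parses the
// query string, and `Json` doubles as the responder.
#[get("/greet/{name}")]
async fn greet(name: web::Path<String>, q: web::Query<Paging>) -> impl Responder {
    web::Json(format!("hello {} (page {})", name.into_inner(), q.page))
}
```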
Structs
---
* `Bytes`: A cheaply cloneable and sliceable chunk of contiguous memory.
* `BytesMut`: A unique reference to a contiguous slice of memory.
* `Data`: Application data wrapper and extractor.
* `Form`: URL encoded payload extractor and responder.
* `FormConfig`: `Form` extractor configuration.
* `Header`: Extract typed headers from the request.
* `Json`: JSON extractor and responder.
* `JsonConfig`: `Json` extractor configuration.
* `Path`: Extract typed data from request path segments.
* `PathConfig`: Path extractor configuration.
* `Payload`: Extract a request’s raw payload stream.
* `PayloadConfig`: Configuration for request payloads.
* `Query`: Extract typed information from the request’s query.
* `QueryConfig`: Query extractor configuration.
* `Readlines`: Stream that reads request line by line.
* `Redirect`: An HTTP service for redirecting one path to another path or URL.
* `ReqData`: Request-local data extractor.
* `ServiceConfig`: Enables parts of app configuration to be declared separately from the app itself. Helpful for modularizing large applications.
* `UrlEncoded`: Future that resolves to some `T` when parsed from a URL encoded payload.
Enums
---
* `Either`: Combines two extractor or responder types into a single type.
* `JsonBody`: Future that resolves to some `T` when parsed from a JSON payload.
Traits
---
* `Buf`: Read bytes from a buffer.
* `BufMut`: A trait for values that provide sequential write access to bytes.
Functions
---
* `block`: Executes blocking function on a thread pool, returns future that resolves to result of the function execution.
* `delete`: Creates a new route with `DELETE` method guard.
* `get`: Creates a new route with `GET` method guard.
* `head`: Creates a new route with `HEAD` method guard.
* `method`: Creates a new route with specified method guard.
* `patch`: Creates a new route with `PATCH` method guard.
* `post`: Creates a new route with `POST` method guard.
* `put`: Creates a new route with `PUT` method guard.
* `redirect`: Create a relative or absolute redirect.
* `resource`: Creates a new resource for a specific path.
* `route`: Creates a new un-configured route.
* `scope`: Creates scope for common path prefix.
* `service`: Creates a raw service for a specific path.
* `to`: Creates a new any-method route with handler.
* `trace`: Creates a new route with `TRACE` method guard.
Struct actix_web::HttpRequest
===
```
pub struct HttpRequest { /* private fields */ }
```
An incoming request.
Implementations
---
### impl HttpRequest
#### pub fn head(&self) -> &RequestHead
Returns a reference to the request head.
#### pub fn uri(&self) -> &Uri
Request’s uri.
#### pub fn method(&self) -> &Method
Read the Request method.
#### pub fn version(&self) -> Version
Read the Request Version.
#### pub fn headers(&self) -> &HeaderMap
Returns request’s headers.
#### pub fn path(&self) -> &str
The target path of this request.
#### pub fn query_string(&self) -> &str
The query string in the URL.
Example: `id=10`
#### pub fn match_info(&self) -> &Path<Url>
Returns a reference to the URL parameters container.
A URL parameter is specified in the form `{identifier}`, where the identifier can be used later in a request handler to access the matched value for that parameter.
##### Percent Encoding and URL Parameters
Because each URL parameter is able to capture multiple path segments, none of
`["%2F", "%25", "%2B"]` found in the request URI are decoded into `["/", "%", "+"]` in order to preserve path integrity. If a URL parameter is expected to contain these characters, then it is on the user to decode them or use the `web::Path` extractor which
*will* decode these special sequences.
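A minimal sketch, assuming a route registered with the pattern `/users/{id}`:
```
use actix_web::HttpRequest;
async fn handler(req: HttpRequest) -> String {
    // look up the `{id}` segment captured by the router
    let id = req.match_info().get("id").unwrap_or("unknown");
    format!("id = {}", id)
}
```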
#### pub fn match_pattern(&self) -> Option<String>
The resource definition pattern that matched the path. Useful for logging and metrics.
For example, when a resource with pattern `/user/{id}/profile` is defined and a call is made to `/user/123/profile` this function would return `Some("/user/{id}/profile")`.
Returns `None` when no resource is fully matched, including default services.
#### pub fn match_name(&self) -> Option<&str>
The resource name that matched the path. Useful for logging and metrics.
Returns `None` when no resource is fully matched, including default services.
#### pub fn conn_data<T: 'static>(&self) -> Option<&T>
Returns a reference to a piece of connection data set in an on-connect callback.
```
let opt_t = req.conn_data::<PeerCertificate>();
```
#### pub fn url_for<U, I>(
&self,
name: &str,
elements: U
) -> Result<Url, UrlGenerationError>where
U: IntoIterator<Item = I>,
I: AsRef<str>,
Generates URL for a named resource.
This substitutes in sequence all URL parameters that appear in the resource itself and in parent scopes, if any.
It is worth noting that the characters `['/', '%']` are not escaped and therefore a single URL parameter may expand into multiple path segments and `elements` can be percent-encoded beforehand without worrying about double encoding. Any other character that is not valid in a URL path context is escaped using percent-encoding.
##### Examples
```
fn index(req: HttpRequest) -> HttpResponse {
let url = req.url_for("foo", &["1", "2", "3"]); // <- generate URL for "foo" resource
HttpResponse::Ok().into()
}
let app = App::new()
.service(web::resource("/test/{one}/{two}/{three}")
.name("foo") // <- set resource name so it can be used in `url_for`
.route(web::get().to(|| HttpResponse::Ok()))
);
```
#### pub fn url_for_static(&self, name: &str) -> Result<Url, UrlGenerationError>
Generates URL for a named resource.
This method is similar to `HttpRequest::url_for()` but it can be used for URLs that do not contain variable parts.
#### pub fn resource_map(&self) -> &ResourceMap
Get a reference to a `ResourceMap` of current application.
#### pub fn peer_addr(&self) -> Option<SocketAddr>
Returns peer socket address.
Peer address is the directly connected peer’s socket address. If a proxy is used in front of the Actix Web server, then it would be the address of this proxy.
For expanded client connection information, use `connection_info` instead.
Will only return None when called in unit tests unless `TestRequest::peer_addr` is used.
#### pub fn connection_info(&self) -> Ref<'_, ConnectionInfo>
Returns connection info for the current request.
The return type, `ConnectionInfo`, can also be used as an extractor.
##### Panics
Panics if request’s extensions container is already borrowed.
#### pub fn app_config(&self) -> &AppConfig
Returns a reference to the application’s connection configuration.
#### pub fn app_data<T: 'static>(&self) -> Option<&T>
Retrieves a piece of application state.
Extracts any object stored with `App::app_data()` (or the counterpart methods on `Scope` and
`Resource`) during application configuration.
Since the Actix Web router layers application data, the returned object will reference the
“closest” instance of the type. For example, if an `App` stores a `u32`, a nested `Scope`
also stores a `u32`, and the delegated request handler falls within that `Scope`, then calling `.app_data::<u32>()` on an `HttpRequest` within that handler will return the
`Scope`’s instance. However, using the same router set up and a request that does not get captured by the `Scope`, `.app_data::<u32>()` would return the `App`’s instance.
If the state was stored using the `Data` wrapper, then it must also be retrieved using this same type.
See also the `Data` extractor.
##### Examples
```
let opt_t: Option<&Data<T>> = req.app_data::<Data<T>>();
```
#### pub fn cookies(&self) -> Result<Ref<'_, Vec<Cookie<'static>>>, CookieParseError>
Available on **crate feature `cookies`** only. Loads request cookies.
#### pub fn cookie(&self, name: &str) -> Option<Cookie<'static>>
Available on **crate feature `cookies`** only. Returns request cookie.
Trait Implementations
---
### impl Clone for HttpRequest
#### fn clone(&self) -> HttpRequest
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Drop for HttpRequest
#### fn drop(&mut self)
Executes the destructor for this type.
### impl FromRequest for HttpRequest
It is possible to get `HttpRequest` as an extractor handler parameter.
#### Examples
```
use actix_web::{web, App, HttpRequest};
use serde::Deserialize;
/// extract `Thing` from request async fn index(req: HttpRequest) -> String {
format!("Got thing: {:?}", req)
}
let app = App::new().service(
web::resource("/users/{first}").route(
web::get().to(index))
);
```
#### type Error = Error
The associated error which can be returned.
#### type Future = Ready<Result<HttpRequest, Error>>
Future that resolves to a `Self`.
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
Create a `Self` from request parts asynchronously.
#### fn extract(req: &HttpRequest) -> Self::Future
Create a `Self` from request head asynchronously.
### impl HttpMessage for HttpRequest
#### type Stream = ()
Type of message payload stream.
#### fn headers(&self) -> &HeaderMap
Read the message headers.
#### fn extensions(&self) -> Ref<'_, Extensions>
Returns a reference to the request-local data/extensions container.
#### fn extensions_mut(&self) -> RefMut<'_, Extensions>
Returns a mutable reference to the request-local data/extensions container.
#### fn take_payload(&mut self) -> Payload<Self::Stream>
Message payload stream.
#### fn content_type(&self) -> &str
Read the request content type. If the request did not contain a *Content-Type* header, an empty string is returned.
#### fn encoding(&self) -> Result<&'static Encoding, ContentTypeError>
Get content type encoding.
---
### impl !RefUnwindSafe for HttpRequest
### impl !Send for HttpRequest
### impl !Sync for HttpRequest
### impl Unpin for HttpRequest
### impl !UnwindSafe for HttpRequest
Blanket Implementations
---
The usual rustdoc blanket implementations apply: `Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned` (via `Clone`), `TryFrom`, `TryInto`, `VZip`, and `WithSubscriber`.
Struct actix_web::HttpResponse
===
```
pub struct HttpResponse<B = BoxBody> { /* private fields */ }
```
An outgoing response.
Implementations
---
### impl HttpResponse
#### pub fn Continue() -> HttpResponseBuilder
#### pub fn SwitchingProtocols() -> HttpResponseBuilder
#### pub fn Processing() -> HttpResponseBuilder
#### pub fn Ok() -> HttpResponseBuilder
#### pub fn Created() -> HttpResponseBuilder
#### pub fn Accepted() -> HttpResponseBuilder
#### pub fn NonAuthoritativeInformation() -> HttpResponseBuilder
#### pub fn NoContent() -> HttpResponseBuilder
#### pub fn ResetContent() -> HttpResponseBuilder
#### pub fn PartialContent() -> HttpResponseBuilder
#### pub fn MultiStatus() -> HttpResponseBuilder
#### pub fn AlreadyReported() -> HttpResponseBuilder
#### pub fn ImUsed() -> HttpResponseBuilder
#### pub fn MultipleChoices() -> HttpResponseBuilder
#### pub fn MovedPermanently() -> HttpResponseBuilder
#### pub fn Found() -> HttpResponseBuilder
#### pub fn SeeOther() -> HttpResponseBuilder
#### pub fn NotModified() -> HttpResponseBuilder
#### pub fn UseProxy() -> HttpResponseBuilder
#### pub fn TemporaryRedirect() -> HttpResponseBuilder
#### pub fn PermanentRedirect() -> HttpResponseBuilder
#### pub fn BadRequest() -> HttpResponseBuilder
#### pub fn Unauthorized() -> HttpResponseBuilder
#### pub fn PaymentRequired() -> HttpResponseBuilder
#### pub fn Forbidden() -> HttpResponseBuilder
#### pub fn NotFound() -> HttpResponseBuilder
#### pub fn MethodNotAllowed() -> HttpResponseBuilder
#### pub fn NotAcceptable() -> HttpResponseBuilder
#### pub fn ProxyAuthenticationRequired() -> HttpResponseBuilder
#### pub fn RequestTimeout() -> HttpResponseBuilder
#### pub fn Conflict() -> HttpResponseBuilder
#### pub fn Gone() -> HttpResponseBuilder
#### pub fn LengthRequired() -> HttpResponseBuilder
#### pub fn PreconditionFailed() -> HttpResponseBuilder
#### pub fn PayloadTooLarge() -> HttpResponseBuilder
#### pub fn UriTooLong() -> HttpResponseBuilder
#### pub fn UnsupportedMediaType() -> HttpResponseBuilder
#### pub fn RangeNotSatisfiable() -> HttpResponseBuilder
#### pub fn ExpectationFailed() -> HttpResponseBuilder
#### pub fn ImATeapot() -> HttpResponseBuilder
#### pub fn MisdirectedRequest() -> HttpResponseBuilder
#### pub fn UnprocessableEntity() -> HttpResponseBuilder
#### pub fn Locked() -> HttpResponseBuilder
#### pub fn FailedDependency() -> HttpResponseBuilder
#### pub fn UpgradeRequired() -> HttpResponseBuilder
#### pub fn PreconditionRequired() -> HttpResponseBuilder
#### pub fn TooManyRequests() -> HttpResponseBuilder
#### pub fn RequestHeaderFieldsTooLarge() -> HttpResponseBuilder
#### pub fn UnavailableForLegalReasons() -> HttpResponseBuilder
#### pub fn InternalServerError() -> HttpResponseBuilder
#### pub fn NotImplemented() -> HttpResponseBuilder
#### pub fn BadGateway() -> HttpResponseBuilder
#### pub fn ServiceUnavailable() -> HttpResponseBuilder
#### pub fn GatewayTimeout() -> HttpResponseBuilder
#### pub fn VersionNotSupported() -> HttpResponseBuilder
#### pub fn VariantAlsoNegotiates() -> HttpResponseBuilder
#### pub fn InsufficientStorage() -> HttpResponseBuilder
#### pub fn LoopDetected() -> HttpResponseBuilder
#### pub fn NotExtended() -> HttpResponseBuilder
#### pub fn NetworkAuthenticationRequired() -> HttpResponseBuilder
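Each associated function above returns an `HttpResponseBuilder` preset to the matching status code; a minimal usage sketch:
```
use actix_web::{http::StatusCode, HttpResponse};
let res = HttpResponse::Ok()
    .insert_header(("X-Demo", "1")) // any pair convertible into a header
    .body("hello");
assert_eq!(res.status(), StatusCode::OK);
```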
### impl HttpResponse<BoxBody>
#### pub fn new(status: StatusCode) -> Self
Constructs a response.
#### pub fn build(status: StatusCode) -> HttpResponseBuilder
Constructs a response builder with specific HTTP status.
#### pub fn from_error(error: impl Into<Error>) -> Self
Create an error response.
### impl<B> HttpResponse<B>
#### pub fn with_body(status: StatusCode, body: B) -> Self
Constructs a response with body
#### pub fn head(&self) -> &ResponseHead
Returns a reference to response head.
#### pub fn head_mut(&mut self) -> &mut ResponseHead
Returns a mutable reference to response head.
#### pub fn error(&self) -> Option<&Error>
The source `error` for this response.
#### pub fn status(&self) -> StatusCode
Get the response status code
#### pub fn status_mut(&mut self) -> &mut StatusCode
Set the `StatusCode` for this response
#### pub fn headers(&self) -> &HeaderMap
Get the headers from the response
#### pub fn headers_mut(&mut self) -> &mut HeaderMap
Get a mutable reference to the headers
#### pub fn cookies(&self) -> CookieIter<'_>
Available on **crate feature `cookies`** only. Get an iterator for the cookies set by this response.
#### pub fn add_cookie(&mut self, cookie: &Cookie<'_>) -> Result<(), HttpError>
Available on **crate feature `cookies`** only. Add a cookie to this response.
##### Errors
Returns an error if the cookie results in a malformed `Set-Cookie` header.
#### pub fn add_removal_cookie(&mut self, cookie: &Cookie<'_>) -> Result<(), HttpError>
Available on **crate feature `cookies`** only. Add a “removal” cookie to the response that matches attributes of the given cookie.
This will cause browsers/clients to remove stored cookies with this name.
The `Set-Cookie` header added to the response will have:
* name matching given cookie;
* domain matching given cookie;
* path matching given cookie;
* an empty value;
* a max-age of `0`;
* an expiration date far in the past.
If the cookie you’re trying to remove has an explicit path or domain set, those attributes will need to be included in the cookie passed in here.
##### Errors
Returns an error if the given name results in a malformed `Set-Cookie` header.
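A minimal sketch of issuing a removal cookie; the cookie name is illustrative:
```
use actix_web::{cookie::Cookie, HttpResponse};
let mut res = HttpResponse::Ok().finish();
// adds a Set-Cookie header instructing the client to drop "session"
res.add_removal_cookie(&Cookie::new("session", ""))
    .expect("cookie name produces a valid Set-Cookie header");
```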
#### pub fn del_cookie(&mut self, name: &str) -> usize
Available on **crate feature `cookies`** only. Remove all cookies with the given name from this response.
Returns the number of cookies removed.
This method can *not* cause a browser/client to delete any of its stored cookies. Its only purpose is to delete cookies that were added to this response using `add_cookie`
and `add_removal_cookie`. Use `add_removal_cookie` to send a “removal” cookie.
#### pub fn upgrade(&self) -> bool
Connection upgrade status
#### pub fn keep_alive(&self) -> bool
Keep-alive status for this connection
#### pub fn extensions(&self) -> Ref<'_, Extensions>
Returns reference to the response-local data/extensions container.
#### pub fn extensions_mut(&mut self) -> RefMut<'_, Extensions>
Returns mutable reference to the response-local data/extensions container.
#### pub fn body(&self) -> &B
Returns a reference to this response’s body.
#### pub fn set_body<B2>(self, body: B2) -> HttpResponse<B2>
Sets new body.
#### pub fn into_parts(self) -> (HttpResponse<()>, B)
Returns split head and body.
##### Implementation Notes
Due to internal performance optimizations, the first element of the returned tuple is an
`HttpResponse` as well but only contains the head of the response this was called on.
#### pub fn drop_body(self) -> HttpResponse<()>
Drops body and returns new response.
#### pub fn map_body<F, B2>(self, f: F) -> HttpResponse<B2>where
F: FnOnce(&mut ResponseHead, B) -> B2,
Map the current body type to another using a closure, returning a new response.
Closure receives the response head and the current body type.
#### pub fn map_into_left_body<R>(self) -> HttpResponse<EitherBody<B, R>>
Map the current body type `B` to `EitherBody::Left(B)`.
Useful for middleware which can generate their own responses.
#### pub fn map_into_right_body<L>(self) -> HttpResponse<EitherBody<L, B>>
Map the current body type `B` to `EitherBody::Right(B)`.
Useful for middleware which can generate their own responses.
#### pub fn map_into_boxed_body(self) -> HttpResponse<BoxBody>where
B: MessageBody + 'static,
Map the current body to a type-erased `BoxBody`.
#### pub fn into_body(self) -> B
Returns the response body, dropping all other parts.
Trait Implementations
---
### impl<B> Debug for HttpResponse<B>where
B: MessageBody,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<Error> for HttpResponse
#### fn from(err: Error) -> Self
Converts to this type from the input type.
### impl<B> From<HttpResponse<B>> for Response<B>
#### fn from(res: HttpResponse<B>) -> Self
Converts to this type from the input type.
### impl From<HttpResponseBuilder> for HttpResponse
#### fn from(builder: HttpResponseBuilder) -> Self
Converts to this type from the input type.
### impl<B> From<Response<B>> for HttpResponse<B>
#### fn from(res: Response<B>) -> Self
Converts to this type from the input type.
### impl<B> From<ServiceResponse<B>> for HttpResponse<B>
#### fn from(res: ServiceResponse<B>) -> HttpResponse<B>
Converts to this type from the input type.
### impl<B> Responder for HttpResponse<B> where B: MessageBody + 'static
#### type Body = B
#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>
Convert self to `HttpResponse`.
#### fn customize(self) -> CustomizeResponder<Self> where Self: Sized
Wraps responder to allow alteration of its response.
Auto Trait Implementations
---
### impl<B = BoxBody> !RefUnwindSafe for HttpResponse<B>

### impl<B = BoxBody> !Send for HttpResponse<B>

### impl<B = BoxBody> !Sync for HttpResponse<B>

### impl<B> Unpin for HttpResponse<B>where
    B: Unpin,

### impl<B = BoxBody> !UnwindSafe for HttpResponse<B>

Blanket Implementations
---
### impl<T> Any for Twhere
    T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for Twhere
    T: ?Sized,

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for Twhere
    T: ?Sized,

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

#### fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for Twhere
    U: From<T>,

#### fn into(self) -> U

Calls `U::from(self)`.

That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> Same<T> for T

#### type Output = T

Should always be `Self`

### impl<T, U> TryFrom<U> for Twhere
    U: Into<T>,

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for Twhere
    U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<V, T> VZip<V> for Twhere
    V: MultiLane<T>,

#### fn vzip(self) -> V

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
    S: Into<Dispatch>,

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

#### fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module actix_web::middleware
===
A collection of common middleware.
What Is Middleware?
---
Actix Web’s middleware system allows us to add additional behavior to request/response processing. Middleware can hook into incoming request and outgoing response processes, enabling us to modify requests and responses as well as halt request processing to return a response early.
Typically, middleware is involved in the following actions:
* Pre-process the request (e.g., normalizing paths)
* Post-process a response (e.g., logging)
* Modify application state (through `ServiceRequest`)
* Access external services (e.g., sessions, etc.)
Middleware is registered for each `App`, `Scope`, or
`Resource` and executed in the opposite order from registration. In general, a middleware is a pair of types that implement the `Service` trait and `Transform` trait,
respectively. The `new_transform` and `call` methods must return a `Future`, though it can often be an immediately-ready one.
Ordering
---
```
#[get("/")]
async fn service(a: ExtractorA, b: ExtractorB) -> impl Responder { "Hello, World!" }
let app = App::new()
.wrap(MiddlewareA)
.wrap(MiddlewareB)
.wrap(MiddlewareC)
.service(service);
```
```
Request
⭣
╭────────────────────┼────╮
│ MiddlewareC │ │
│ ╭──────────────────┼───╮│
│ │ MiddlewareB │ ││
│ │ ╭────────────────┼──╮││
│ │ │ MiddlewareA │ │││
│ │ │ ╭──────────────┼─╮│││
│ │ │ │ ExtractorA │ ││││
│ │ │ ├┈┈┈┈┈┈┈┈┈┈┈┈┈┈┼┈┤│││
│ │ │ │ ExtractorB │ ││││
│ │ │ ├┈┈┈┈┈┈┈┈┈┈┈┈┈┈┼┈┤│││
│ │ │ │ service │ ││││
│ │ │ ╰──────────────┼─╯│││
│ │ ╰────────────────┼──╯││
│ ╰──────────────────┼───╯│
╰────────────────────┼────╯
⭣
Response
```
The request *first* gets processed by the middleware specified *last* - `MiddlewareC`. It passes the request (possibly a modified one) to the next middleware - `MiddlewareB` - *or* directly responds to the request (e.g. when the request was invalid or an error occurred). `MiddlewareB`
processes the request as well and passes it to `MiddlewareA`, which then passes it to the
`Service`. In the `Service`, the extractors will run first. They don’t pass the request on,
but only view it (see `FromRequest`). After the `Service` responds to the request, the response is passed back through `MiddlewareA`, `MiddlewareB`, and `MiddlewareC`.
As you register middleware using `wrap` and `wrap_fn`
in the `App` builder, imagine wrapping layers around an inner `App`. The first middleware layer exposed to a Request is the outermost layer (i.e., the *last* registered in the builder chain, in the example above: `MiddlewareC`). Consequently, the *first* middleware registered in the builder chain is the *last* to start executing during request processing (`MiddlewareA`).
Ordering is less obvious when wrapped services also have middleware applied. In this case,
middleware are run in reverse order for `App` *and then* in reverse order for the wrapped service.
Middleware Traits
---
### `Transform<S, Req>`
The `Transform` trait is the builder for the actual `Service`s that handle the requests. All the middleware you pass to the `wrap` methods implement this trait. During construction, each thread assembles a chain of `Service`s by calling `new_transform` and passing the next
`Service` (`S`) in the chain. The created `Service` handles requests of type `Req`.
In the example from the ordering section, the chain would be:
```
MiddlewareCService {
next: MiddlewareBService {
next: MiddlewareAService { ... }
}
}
```
### `Service<Req>`
A `Service` `S` represents an asynchronous operation that turns a request of type `Req` into a response of type `S::Response` or an error of type
`S::Error`. You can think of the service as being roughly:
```
async fn(&self, req: Req) -> Result<S::Response, S::Error>
```
In most cases the `Service` implementation will, at some point, call the wrapped `Service`
in its `call` implementation.
Note that the `Service`s created by `new_transform` don’t need to be `Send` or `Sync`.
Example
---
```
use std::{future::{ready, Ready, Future}, pin::Pin};
use actix_web::{
dev::{forward_ready, Service, ServiceRequest, ServiceResponse, Transform},
web, Error,
};
pub struct SayHi;
// `S` - type of the next service
// `B` - type of response's body
impl<S, B> Transform<S, ServiceRequest> for SayHi
where
S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
S::Future: 'static,
B: 'static,
{
type Response = ServiceResponse<B>;
type Error = Error;
type InitError = ();
type Transform = SayHiMiddleware<S>;
type Future = Ready<Result<Self::Transform, Self::InitError>>;
fn new_transform(&self, service: S) -> Self::Future {
ready(Ok(SayHiMiddleware { service }))
}
}
pub struct SayHiMiddleware<S> {
/// The next service to call
service: S,
}
// This future doesn't have the requirement of being `Send`.
// See: futures_util::future::LocalBoxFuture
type LocalBoxFuture<T> = Pin<Box<dyn Future<Output = T> + 'static>>;
// `S`: type of the wrapped service
// `B`: type of the body - try to be generic over the body where possible
impl<S, B> Service<ServiceRequest> for SayHiMiddleware<S>
where
S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
S::Future: 'static,
B: 'static,
{
type Response = ServiceResponse<B>;
type Error = Error;
type Future = LocalBoxFuture<Result<Self::Response, Self::Error>>;
// This service is ready when its next service is ready
forward_ready!(service);
fn call(&self, req: ServiceRequest) -> Self::Future {
println!("Hi from start. You requested: {}", req.path());
        // A more complex middleware could return an error or an early response here.
let fut = self.service.call(req);
Box::pin(async move {
let res = fut.await?;
println!("Hi from response");
Ok(res)
})
}
}
let app = App::new()
.wrap(SayHi)
.route("/", web::get().to(|| async { "Hello, middleware!" }));
```
Simpler Middleware
---
In many cases, you *can* actually use an async function via a helper that will provide a more natural flow for your behavior.
The experimental `actix_web_lab` crate provides a `from_fn` utility which allows an async fn to be wrapped and used in the same way as other middleware. See the
`from_fn` docs for more info and examples of its use.
While `from_fn` is experimental currently, it’s likely this helper will graduate to Actix Web in some form, so feedback is appreciated.
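As a hedged sketch only (the exact import paths and signatures in `actix_web_lab` may differ between versions), a `from_fn` middleware roughly looks like this:

```
use actix_web::{
    body::MessageBody,
    dev::{ServiceRequest, ServiceResponse},
    App, Error,
};
// Assumed import; check the actix_web_lab docs for the current path.
use actix_web_lab::middleware::{from_fn, Next};

async fn say_hi(
    req: ServiceRequest,
    next: Next<impl MessageBody>,
) -> Result<ServiceResponse<impl MessageBody>, Error> {
    // pre-processing
    println!("Hi from start. You requested: {}", req.path());
    let res = next.call(req).await?;
    // post-processing
    println!("Hi from response");
    Ok(res)
}

let app = App::new().wrap(from_fn(say_hi));
```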
Structs
---
* `Compat`: Middleware for enabling any middleware to be used in `Resource::wrap`, and `Condition`.
* `Compress` (`__compress`): Middleware for compressing response payloads.
* `Condition`: Middleware for conditionally enabling other middleware.
* `DefaultHeaders`: Middleware for setting default response headers.
* `ErrorHandlers`: Middleware for registering custom status code based error handlers.
* `Logger`: Middleware for logging request and response summaries to the terminal.
* `NormalizePath`: Middleware for normalizing a request’s path so that routes can be matched more flexibly.
Enums
---
* `ErrorHandlerResponse`: Return type for `ErrorHandlers` custom handlers.
* `TrailingSlash`: Determines the behavior of the `NormalizePath` middleware.
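A minimal sketch of registering two of these middleware; per the ordering rules above, the later-registered `NormalizePath` forms the outer layer and sees the request first:

```
use actix_web::{middleware, web, App};

let app = App::new()
    // Registered first, so this is the innermost layer.
    .wrap(middleware::Logger::default())
    // Registered last, so this is the outermost layer.
    .wrap(middleware::NormalizePath::trim())
    .route("/", web::get().to(|| async { "Hello" }));
```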
Struct actix_web::error::Error
===
```
pub struct Error { /* private fields */ }
```
General purpose Actix Web error.
An Actix Web error is used to carry errors from `std::error` through actix in a convenient way.
It can be created through converting errors with `into()`.
Whenever an `Error` is created from an external object, a response error is created for it that can be used to create an HTTP response from it. This means that if you have access to an actix `Error`,
you can always get a `ResponseError` reference from it.
Implementations
---
### impl Error
#### pub fn as_response_error(&self) -> &dyn ResponseError
Returns the reference to the underlying `ResponseError`.
#### pub fn as_error<T: ResponseError + 'static>(&self) -> Option<&T>

Similar to `as_response_error` but downcasts.
#### pub fn error_response(&self) -> HttpResponse
Shortcut for creating an `HttpResponse`.
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Error

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Error for Error

#### fn source(&self) -> Option<&(dyn Error + 'static)>

The lower-level source of this error, if any.

#### fn description(&self) -> &str

👎 Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error>

👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

#### fn provide<'a>(&'a self, request: &mut Request<'a>)

🔬 This is a nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports.

### impl From<Error> for Error

#### fn from(err: Error) -> Self

Converts to this type from the input type.

### impl From<Error> for Response<BoxBody>

#### fn from(err: Error) -> Response<BoxBody>

Converts to this type from the input type.

### impl<T: ResponseError + 'static> From<T> for Error

`Error` for any error that implements `ResponseError`

#### fn from(err: T) -> Error

Converts to this type from the input type.

Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl !Send for Error
### impl !Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for Twhere
    T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for Twhere
    T: ?Sized,

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for Twhere
    T: ?Sized,

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

#### fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for Twhere
    U: From<T>,

#### fn into(self) -> U

Calls `U::from(self)`.

That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> Same<T> for T

#### type Output = T

Should always be `Self`

### impl<T> ToString for Twhere
    T: Display + ?Sized,

#### default fn to_string(&self) -> String

Converts the given value to a `String`.

### impl<T, U> TryFrom<U> for Twhere
    U: Into<T>,

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for Twhere
    U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<V, T> VZip<V> for Twhere
    V: MultiLane<T>,

#### fn vzip(self) -> V

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
    S: Into<Dispatch>,

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

#### fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Trait actix_web::error::ResponseError
===
```
pub trait ResponseError: Debug + Display {
// Provided methods
fn status_code(&self) -> StatusCode { ... }
fn error_response(&self) -> HttpResponse<BoxBody> { ... }
}
```
Errors that can generate responses.
Provided Methods
---
#### fn status_code(&self) -> StatusCode
Returns appropriate status code for error.
A 500 Internal Server Error is used by default. If error_response is also implemented and does not call `self.status_code()`, then this will not be used.
#### fn error_response(&self) -> HttpResponse<BoxBody>

Creates full response for error.
By default, the generated response uses a 500 Internal Server Error status code, a
`Content-Type` of `text/plain`, and the body is set to `Self`’s `Display` impl.
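A minimal sketch of implementing this trait for a custom error type (the `MyError` type is hypothetical):

```
use actix_web::{body::BoxBody, http::StatusCode, HttpResponse, ResponseError};
use std::fmt;

#[derive(Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "something went wrong")
    }
}

impl ResponseError for MyError {
    // Override the default 500 status.
    fn status_code(&self) -> StatusCode {
        StatusCode::BAD_REQUEST
    }

    // Build the full response from the status code and the Display impl.
    fn error_response(&self) -> HttpResponse<BoxBody> {
        HttpResponse::build(self.status_code()).body(self.to_string())
    }
}
```

With this in place, handlers can return `Result<_, MyError>` and the error is rendered through these methods.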
Implementations
---
### impl dyn ResponseError + 'static
#### pub fn downcast_ref<T: ResponseError + 'static>(&self) -> Option<&T>

Downcasts generic body to a specific type.

#### pub fn downcast_mut<T: ResponseError + 'static>(&mut self) -> Option<&mut T>

Downcasts a generic body to a mutable specific type.
Implementations on Foreign Types
---
### impl ResponseError for Error
### impl ResponseError for Box<dyn StdError + 'static>

### impl ResponseError for Error

#### fn status_code(&self) -> StatusCode

### impl ResponseError for Infallible

#### fn status_code(&self) -> StatusCode

#### fn error_response(&self) -> HttpResponse<BoxBody>

### impl ResponseError for ProtocolError

### impl ResponseError for Utf8Error

#### fn status_code(&self) -> StatusCode

### impl ResponseError for Error

Available on **crate feature `openssl`** only.

### impl ResponseError for HandshakeError

#### fn error_response(&self) -> HttpResponse<BoxBody>

### impl ResponseError for Error

#### fn status_code(&self) -> StatusCode

### impl ResponseError for Error
Implementors
---
### impl ResponseError for ContentTypeError
### impl ResponseError for JsonPayloadError
### impl ResponseError for ParseError
### impl ResponseError for PathError
Return `BadRequest` for `PathError`
### impl ResponseError for PayloadError
### impl ResponseError for QueryPayloadError
### impl ResponseError for ReadlinesError
### impl ResponseError for UrlGenerationError
### impl ResponseError for UrlencodedError
### impl ResponseError for InvalidHeaderValue
### impl ResponseError for actix_web::http::Error
### impl ResponseError for BlockingError
### impl ResponseError for HttpError
### impl<T> ResponseError for InternalError<T>where
T: Debug + Display,
Module actix_web::body
===
Traits and structures to aid consuming and writing HTTP payloads.
“Body” and “payload” are used somewhat interchangeably in this documentation.
Structs
---
* `BodyLimitExceeded`: Error type returned from `to_bytes_limited` when the body produced exceeds the limit.
* `BodyStream`: Streaming response wrapper.
* `BoxBody`: A boxed message body with boxed errors.
* `None`: Body type for responses that forbid payloads.
* `SizedStream`: Known sized streaming response wrapper.
Enums
---
* `BodySize`: Body size hint.
* `EitherBody`: An “either” type specialized for body types.
Traits
---
* `MessageBody`: An interface for types that can be used as a response body.
Functions
---
* `to_bytes`: Collects all the bytes produced by `body`.
* `to_bytes_limited`: Collects the bytes produced by `body`, up to `limit` bytes.
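A minimal sketch of `to_bytes`, assuming it runs inside an async context:

```
use actix_web::body::{self, BoxBody};

// Any `MessageBody` type works; a boxed string body is used here.
let body = BoxBody::new("hello");
let bytes = body::to_bytes(body).await.unwrap();

assert_eq!(bytes, "hello");
```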
Crate actix_web::cookie
===
Available on **crate feature `cookies`** only.

HTTP cookie parsing and cookie jar management.

This crate provides the `Cookie` type, representing an HTTP cookie, and the `CookieJar` type, which manages a collection of cookies for session management, recording changes as they are made, and optional automatic cookie encryption and signing.
Usage
---
Add the following to the `[dependencies]` section of your `Cargo.toml`:
```
cookie = "0.16"
```
Features
---
This crate exposes several features, all of which are disabled by default:
* **`percent-encode`**
Enables *percent encoding and decoding* of names and values in cookies.
When this feature is enabled, the `Cookie::encoded()` and
`Cookie::parse_encoded()` methods are available. The `encoded` method returns a wrapper around a `Cookie` whose `Display` implementation percent-encodes the name and value of the cookie. The `parse_encoded`
method percent-decodes the name and value of a `Cookie` during parsing.
* **`signed`**
Enables *signed* cookies via `CookieJar::signed()`.
When this feature is enabled, the `CookieJar::signed()` method,
`SignedJar` type, and `Key` type are available. The jar acts as “child jar”; operations on the jar automatically sign and verify cookies as they are added and retrieved from the parent jar.
* **`private`**
Enables *private* (authenticated, encrypted) cookies via
`CookieJar::private()`.
When this feature is enabled, the `CookieJar::private()` method,
`PrivateJar` type, and `Key` type are available. The jar acts as “child jar”; operations on the jar automatically encrypt and decrypt/authenticate cookies as they are added and retrieved from the parent jar.
* **`key-expansion`**
Enables *key expansion* or *key derivation* via `Key::derive_from()`.
When this feature is enabled, and either `signed` or `private` are *also*
enabled, the `Key::derive_from()` method is available. The method can be used to derive a `Key` structure appropriate for use with signed and private jars from cryptographically valid key material that is shorter in length than the full key.
* **`secure`**
A meta-feature that simultaneously enables `signed`, `private`, and
`key-expansion`.
You can enable features via `Cargo.toml`:
```
[dependencies.cookie]
features = ["secure", "percent-encode"]
```
Modules
---
* `time`: Feature flags
Structs
---
* `Cookie`: Representation of an HTTP cookie.
* `CookieBuilder`: Structure that follows the builder pattern for building `Cookie` structs.
* `CookieJar`: A collection of cookies that tracks its modifications.
* `Delta`: Iterator over the changes to a cookie jar.
* `Display`: Wrapper around `Cookie` whose `Display` implementation either percent-encodes the cookie’s name and value, skips displaying the cookie’s parameters (only displaying its name and value), or both.
* `Iter`: Iterator over all of the cookies in a jar.
* `Key`: A cryptographic master key for use with `Signed` and/or `Private` jars.
* `PrivateJar`: A child cookie jar that provides authenticated encryption for its cookies.
* `SignedJar`: A child cookie jar that authenticates its cookies.
Enums
---
* `Expiration`: A cookie’s expiration: either session or a date-time.
* `KeyError`: An error indicating an issue with generating or constructing a key.
* `ParseError`: Enum corresponding to a parsing error.
* `SameSite`: The `SameSite` cookie attribute.
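A minimal sketch of building a cookie and tracking it in a jar (names and values are illustrative):

```
use actix_web::cookie::{Cookie, CookieJar};

let cookie = Cookie::build("name", "value").path("/").finish();

// The jar records this addition as a change (see `Delta`).
let mut jar = CookieJar::new();
jar.add(cookie);

assert_eq!(jar.get("name").map(|c| c.value()), Some("value"));
```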
Module actix_web::dev
===
Lower-level types and re-exports.
Most users will not have to interact with the types in this module, but it is useful for those writing extractors, middleware, libraries, or interacting with the service API directly.
Request Extractors
---
* `ConnectionInfo`: Connection information
* `PeerAddr`: Connection information
Macros
---
* `always_ready`: An implementation of `poll_ready` that always signals readiness.
* `forward_ready`: An implementation of `poll_ready` that forwards readiness checks to a named struct field.
Structs
---
* `AppConfig`: Application connection config.
* `AppService`: Application configuration.
* `ConnectionInfo`: HTTP connection information.
* `Decompress` (`__compress`)
* `Extensions`: A type map for request extensions.
* `Path`: Resource path match information.
* `PeerAddr`: Extractor for peer’s socket address.
* `Readlines`: Stream that reads request line by line.
* `RequestHead`
* `ResourceDef`: Describes the set of paths that match to a resource.
* `ResourceMap`
* `Response`: An HTTP response.
* `ResponseHead`
* `Server`: General purpose TCP server that runs services receiving Tokio `TcpStream`s.
* `ServerHandle`: Server handle.
* `ServiceRequest`: A service level request wrapper.
* `ServiceResponse`: A service level response wrapper.
* `Url`
* `UrlEncoded`: Future that resolves to some `T` when parsed from a URL encoded payload.
* `WebService`
Enums
---
* `JsonBody`: Future that resolves to some `T` when parsed from a JSON payload.
* `Payload`: A streaming payload.
Traits
---
* `HttpServiceFactory`
* `ResourcePath`
* `Service`: An asynchronous operation from `Request` to a `Response`.
* `ServiceFactory`: Factory for creating `Service`s.
* `Transform`: Defines the interface of a service factory that wraps inner service during construction.
Functions
---
* `fn_factory`: Create `ServiceFactory` for a function that can produce services.
* `fn_service`: Create `ServiceFactory` for a function that can act as a `Service`.
Module actix_web::error
===
Error and Result module
Structs
---
* `BlockingError`: An error representing a problem running a blocking task on a thread pool.
* `Error`: General purpose Actix Web error.
* `HttpError`: A generic “error” for HTTP connections.
* `InternalError`: Wraps errors to alter the generated response status code.
Enums
---
* `ContentTypeError`: A set of errors that can occur during parsing content type.
* `DispatchError`: A set of errors that can occur during dispatching HTTP requests.
* `JsonPayloadError`: A set of errors that can occur during parsing json payloads.
* `ParseError`: A set of errors that can occur during parsing HTTP streams.
* `PathError`: A set of errors that can occur during parsing request paths.
* `PayloadError`: A set of errors that can occur during payload parsing.
* `QueryPayloadError`: A set of errors that can occur during parsing query strings.
* `ReadlinesError`: Error type returned when reading body as lines.
* `UrlGenerationError`: Errors which can occur when attempting to generate resource uri.
* `UrlencodedError`: A set of errors that can occur during parsing urlencoded payloads.
Traits
---
* `ResponseError`: Errors that can generate responses.
Functions
---
* `ErrorBadGateway`: Helper function that wraps any error and generates a `BAD_GATEWAY` response.
* `ErrorBadRequest`: Helper function that wraps any error and generates a `BAD_REQUEST` response.
* `ErrorConflict`: Helper function that wraps any error and generates a `CONFLICT` response.
* `ErrorExpectationFailed`: Helper function that wraps any error and generates an `EXPECTATION_FAILED` response.
* `ErrorFailedDependency`: Helper function that wraps any error and generates a `FAILED_DEPENDENCY` response.
* `ErrorForbidden`: Helper function that wraps any error and generates a `FORBIDDEN` response.
* `ErrorGatewayTimeout`: Helper function that wraps any error and generates a `GATEWAY_TIMEOUT` response.
* `ErrorGone`: Helper function that wraps any error and generates a `GONE` response.
* `ErrorHttpVersionNotSupported`: Helper function that wraps any error and generates an `HTTP_VERSION_NOT_SUPPORTED` response.
* `ErrorImATeapot`: Helper function that wraps any error and generates an `IM_A_TEAPOT` response.
* `ErrorInsufficientStorage`: Helper function that wraps any error and generates an `INSUFFICIENT_STORAGE` response.
* `ErrorInternalServerError`: Helper function that wraps any error and generates an `INTERNAL_SERVER_ERROR` response.
* `ErrorLengthRequired`: Helper function that wraps any error and generates a `LENGTH_REQUIRED` response.
* `ErrorLocked`: Helper function that wraps any error and generates a `LOCKED` response.
* `ErrorLoopDetected`: Helper function that wraps any error and generates a `LOOP_DETECTED` response.
* `ErrorMethodNotAllowed`: Helper function that wraps any error and generates a `METHOD_NOT_ALLOWED` response.
* `ErrorMisdirectedRequest`: Helper function that wraps any error and generates a `MISDIRECTED_REQUEST` response.
* `ErrorNetworkAuthenticationRequired`: Helper function that wraps any error and generates a `NETWORK_AUTHENTICATION_REQUIRED` response.
* `ErrorNotAcceptable`: Helper function that wraps any error and generates a `NOT_ACCEPTABLE` response.
* `ErrorNotExtended`: Helper function that wraps any error and generates a `NOT_EXTENDED` response.
* `ErrorNotFound`: Helper function that wraps any error and generates a `NOT_FOUND` response.
* `ErrorNotImplemented`: Helper function that wraps any error and generates a `NOT_IMPLEMENTED` response.
* `ErrorPayloadTooLarge`: Helper function that wraps any error and generates a `PAYLOAD_TOO_LARGE` response.
* `ErrorPaymentRequired`: Helper function that wraps any error and generates a `PAYMENT_REQUIRED` response.
* `ErrorPreconditionFailed`: Helper function that wraps any error and generates a `PRECONDITION_FAILED` response.
* `ErrorPreconditionRequired`: Helper function that wraps any error and generates a `PRECONDITION_REQUIRED` response.
* `ErrorProxyAuthenticationRequired`: Helper function that wraps any error and generates a `PROXY_AUTHENTICATION_REQUIRED` response.
* `ErrorRangeNotSatisfiable`: Helper function that wraps any error and generates a `RANGE_NOT_SATISFIABLE` response.
* `ErrorRequestHeaderFieldsTooLarge`: Helper function that wraps any error and generates a `REQUEST_HEADER_FIELDS_TOO_LARGE` response.
* `ErrorRequestTimeout`: Helper function that wraps any error and generates a `REQUEST_TIMEOUT` response.
* `ErrorServiceUnavailable`: Helper function that wraps any error and generates a `SERVICE_UNAVAILABLE` response.
* `ErrorTooManyRequests`: Helper function that wraps any error and generates a `TOO_MANY_REQUESTS` response.
* `ErrorUnauthorized`: Helper function that wraps any error and generates an `UNAUTHORIZED` response.
* `ErrorUnavailableForLegalReasons`: Helper function that wraps any error and generates an `UNAVAILABLE_FOR_LEGAL_REASONS` response.
* `ErrorUnprocessableEntity`: Helper function that wraps any error and generates an `UNPROCESSABLE_ENTITY` response.
* `ErrorUnsupportedMediaType`: Helper function that wraps any error and generates an `UNSUPPORTED_MEDIA_TYPE` response.
* `ErrorUpgradeRequired`: Helper function that wraps any error and generates an `UPGRADE_REQUIRED` response.
* `ErrorUriTooLong`: Helper function that wraps any error and generates a `URI_TOO_LONG` response.
* `ErrorVariantAlsoNegotiates`: Helper function that wraps any error and generates a `VARIANT_ALSO_NEGOTIATES` response.
Type Aliases
---
* `Result`: A convenience `Result` for Actix Web operations.
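A minimal sketch of one of these helpers wrapping a standard-library error into a 400 response (`parse_id` is a hypothetical function):

```
use actix_web::{error, Result};

fn parse_id(input: &str) -> Result<u64> {
    // `ParseIntError` is Debug + Display, so it can be wrapped directly.
    input.parse().map_err(error::ErrorBadRequest)
}
```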
Module actix_web::guard
===
Route guards.
Guards are used during routing to help select a matching service or handler using some aspect of the request; though guards should not be used for path matching since it is a built-in function of the Actix Web router.
Guards can be used on `Scope`s, `Resource`s, `Route`s, and other custom services.
Fundamentally, a guard is a predicate function that receives a reference to a request context object and returns a boolean; true if the request *should* be handled by the guarded service or handler. This interface is defined by the `Guard` trait.
Commonly-used guards are provided in this module as well as a way of creating a guard from a closure (`fn_guard`). The `Not`, `Any`, and `All` guards are noteworthy, as they can be used to compose other guards in a more flexible and semantic way than calling `.guard(...)` on services multiple times (which might have different combining behavior than you want).
There are shortcuts for routes with method guards in the `web` module:
`web::get()`, `web::post()`, etc. The routes created by the following calls are equivalent:
* `web::get()` (recommended form)
* `web::route().guard(guard::Get())`
Guards can not modify anything about the request. However, it is possible to store extra attributes in the request-local data container obtained with `GuardContext::req_data_mut`.
Guards can prevent resource definitions from overlapping which, when only considering paths,
would result in inaccessible routes. See the `Host` guard for an example of virtual hosting.
Examples
---
In the following code, the `/guarded` resource has one defined route whose handler will only be called if the request method is GET or POST and there is an `x-guarded` request header with value equal to `secret`.
```
use actix_web::{web, http::Method, guard, HttpResponse};
web::resource("/guarded").route(
web::route()
.guard(guard::Any(guard::Get()).or(guard::Post()))
.guard(guard::Header("x-guarded", "secret"))
.to(|| HttpResponse::Ok())
);
```
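A minimal sketch of composing a custom guard with `fn_guard`; the `x-api-key` header check is purely illustrative:

```
use actix_web::{guard, web, HttpResponse};

let resource = web::resource("/internal").route(
    web::route()
        // Only match requests that carry an `x-api-key` header.
        .guard(guard::fn_guard(|ctx| {
            ctx.head().headers().contains_key("x-api-key")
        }))
        .to(|| HttpResponse::Ok()),
);
```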
Re-exports
---
* `pub use self::host::HostGuard;`
Structs
---
* `Acceptable`: A guard that verifies that an `Accept` header is present and it contains a compatible MIME type.
* `AllGuard`: A collection of guards that match if the conjunction of their `check` outcomes is true.
* `AnyGuard`: A collection of guards that match if the disjunction of their `check` outcomes is true.
* `GuardContext`: Provides access to request parts that are useful during routing.
* `Not`: Wraps a guard and inverts the outcome of its `Guard` implementation.
Traits
---
* `Guard`: Interface for routing guards.
Functions
---
* `All`: Creates a guard that matches if all added guards match.
* `Any`: Creates a guard that matches if any added guards match.
* `Connect`: Creates a guard that matches the `CONNECT` request method.
* `Delete`: Creates a guard that matches the `DELETE` request method.
* `Get`: Creates a guard that matches the `GET` request method.
* `Head`: Creates a guard that matches the `HEAD` request method.
* `Header`: Creates a guard that matches if the request contains the given header name and value.
* `Host`: Creates a guard that matches requests targeting a specific host.
* `Method`: Creates a guard that matches a specified HTTP method.
* `Options`: Creates a guard that matches the `OPTIONS` request method.
* `Patch`: Creates a guard that matches the `PATCH` request method.
* `Post`: Creates a guard that matches the `POST` request method.
* `Put`: Creates a guard that matches the `PUT` request method.
* `Trace`: Creates a guard that matches the `TRACE` request method.
* `fn_guard`: Creates a guard using the given function.
Module actix_web::http
===
Various HTTP related types.
Modules
---
* headerA Collection of Header implementations for common HTTP Headers.
* uriURI component of request and response lines
Structs
---
* `Error`
* `Method`: The Request Method (VERB).
* `StatusCode`: An HTTP status code (`status-code` in RFC 7230 et al.).
* `Uri`: The URI component of a request.
* `Version`: Represents a version of the HTTP spec.
Enums
---
* `ConnectionType`: Represents various types of connection.
* `KeepAlive`: Connection keep-alive config.
Module actix_web::rt
===
A selection of re-exports from `tokio` and `actix-rt`.
Actix Web runs on Tokio, providing full[1] compatibility with its huge ecosystem of crates. Each of the server’s workers uses a single-threaded runtime. Read more about the architecture in `actix-rt`’s docs.
Running Actix Web Without Macros
---
```
use actix_web::{middleware, rt, web, App, HttpRequest, HttpServer};
async fn index(req: HttpRequest) -> &'static str {
println!("REQ: {:?}", req);
"Hello world!\r\n"
}
fn main() -> std::io::Result<()> {
rt::System::new().block_on(
HttpServer::new(|| {
App::new().service(web::resource("/").route(web::get().to(index)))
})
.bind(("127.0.0.1", 8080))?
.run()
)
}
```
Running Actix Web Using `#[tokio::main]`
---
If you need to run something that uses Tokio’s work stealing functionality alongside Actix Web,
you can run Actix Web under `#[tokio::main]`. The `Server` object returned from `HttpServer::run` can also be `spawn`ed, if preferred.
Note that `actix` actor support (and therefore WebSocket support through `actix-web-actors`)
still require `#[actix_web::main]` since they require a `System` to be set up.
```
use actix_web::{get, middleware, rt, web, App, HttpRequest, HttpServer};
#[get("/")]
async fn index(req: HttpRequest) -> &'static str {
println!("REQ: {:?}", req);
"Hello world!\r\n"
}
#[tokio::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| {
App::new().service(index)
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}
```
---
1. Crates that use Tokio’s `block_in_place` will not work with Actix Web. Fortunately,
the vast majority of Tokio-based crates do not use it. ↩
Modules
---
* `net`: TCP/UDP/Unix bindings (mostly Tokio re-exports).
* `signal`: Asynchronous signal handling (Tokio re-exports).
* `task`: Task management (Tokio re-exports).
* `time`: Utilities for tracking time (Tokio re-exports).
Macros
---
* `pin`: Pins a value on the stack.
Structs
---
* `Runtime`: A Tokio-based runtime proxy.
* `System`: A manager for a per-thread distributed async runtime.
* `SystemRunner` (non-`io-uring`): Runner that keeps a System’s event loop alive until stop message is received.
Functions
---
* `spawn`: Spawns a future on the current thread as a new task.
Module actix_web::test
===
Various helpers for Actix applications to use during testing.
Creating A Test Service
---
* `init_service`
Off-The-Shelf Test Services
---
* `ok_service`
* `status_service`
Calling Test Service
---
* `TestRequest`
* `call_service`
* `try_call_service`
* `call_and_read_body`
* `call_and_read_body_json`
* `try_call_and_read_body_json`
Reading Response Payloads
---
* `read_body`
* `try_read_body`
* `read_body_json`
* `try_read_body_json`
Re-exports
---
* `pub use self::test_services::default_service;` (deprecated)
* `pub use self::test_services::simple_service;` (deprecated)
* `pub use self::test_utils::read_response;` (deprecated)
* `pub use self::test_utils::read_response_json;` (deprecated)
Structs
---
* `TestBuffer`: Async I/O test buffer.
* `TestRequest`: Test `Request` builder.
Functions
---
* `call_and_read_body`: Helper function that returns a response body of a TestRequest.
* `call_and_read_body_json`: Helper function that returns a deserialized response body of a TestRequest.
* `call_service`: Calls service and waits for response future completion.
* `init_service`: Initialize service from application builder instance.
* `ok_service`: Creates service that always responds with `200 OK` and no body.
* `read_body`: Helper function that returns a response body of a ServiceResponse.
* `read_body_json`: Helper function that returns a deserialized response body of a ServiceResponse.
* `status_service`: Creates service that always responds with given status code and no body.
* `try_call_and_read_body_json`: Fallible version of `call_and_read_body_json` that allows testing service call errors.
* `try_call_service`: Fallible version of `call_service` that allows testing response completion errors.
* `try_read_body`: Fallible version of `read_body` that allows testing MessageBody reading errors.
* `try_read_body_json`: Fallible version of `read_body_json` that allows testing response deserialization errors.
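A minimal sketch tying these helpers together (the route and assertion are illustrative):

```
use actix_web::{http::StatusCode, test, web, App};

#[actix_web::test]
async fn index_works() {
    let app = test::init_service(
        App::new().route("/", web::get().to(|| async { "Hello" })),
    )
    .await;

    let req = test::TestRequest::get().uri("/").to_request();
    let res = test::call_service(&app, req).await;

    assert_eq!(res.status(), StatusCode::OK);
}
```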
Macro actix_web::services
===
```
macro_rules! services {
($($x:expr),+ $(,)?) => { ... };
}
```
Macro to help register different types of services at the same time.
The max number of services that can be grouped together is 12 and all must implement the
`HttpServiceFactory` trait.
Examples
---
```
use actix_web::{services, web, App};
let services = services![
web::resource("/test2").to(|| async { "test2" }),
web::scope("/test3").route("/", web::get().to(|| async { "test3" }))
];
let app = App::new().service(services);
// services macro just converts multiple services to a tuple.
// below would also work without importing the macro.
let app = App::new().service((
web::resource("/test2").to(|| async { "test2" }),
web::scope("/test3").route("/", web::get().to(|| async { "test3" }))
));
```
Struct actix_web::CustomizeResponder
===
```
pub struct CustomizeResponder<R> { /* private fields */ }
```
Allows overriding status code and headers for a `Responder`.
Created by calling the `customize` method on a `Responder` type.
Implementations
---
### impl<R: Responder> CustomizeResponder<R>

#### pub fn with_status(self, status: StatusCode) -> Self
Override a status code for the Responder’s response.
##### Examples
```
use actix_web::{Responder, http::StatusCode, test::TestRequest};
let responder = "Welcome!".customize().with_status(StatusCode::ACCEPTED);
let request = TestRequest::default().to_http_request();
let response = responder.respond_to(&request);
assert_eq!(response.status(), StatusCode::ACCEPTED);
```
#### pub fn insert_header(self, header: impl TryIntoHeaderPair) -> Self
Insert (override) header in the final response.
Overrides other headers with the same name.
See `HeaderMap::insert`.
Headers added with this method will be inserted before those added with `append_header`. As such, header(s) can be overridden with more than one new header by first calling `insert_header` followed by `append_header`.
##### Examples
```
use actix_web::{Responder, test::TestRequest};
let responder = "Hello world!"
.customize()
.insert_header(("x-version", "1.2.3"));
let request = TestRequest::default().to_http_request();
let response = responder.respond_to(&request);
assert_eq!(response.headers().get("x-version").unwrap(), "1.2.3");
```
#### pub fn append_header(self, header: impl TryIntoHeaderPair) -> Self
Append header to the final response.
Unlike `insert_header`, this will not override existing headers.
See `HeaderMap::append`.
Headers added here are appended *after* additions/overrides from `insert_header`.
##### Examples
```
use actix_web::{Responder, test::TestRequest};
let responder = "Hello world!"
.customize()
.append_header(("x-version", "1.2.3"));
let request = TestRequest::default().to_http_request();
let response = responder.respond_to(&request);
assert_eq!(response.headers().get("x-version").unwrap(), "1.2.3");
```
Trait Implementations
---
### impl<T> Responder for CustomizeResponder<T>where
T: Responder,
#### type Body = EitherBody<<T as Responder>::Body, BoxBody>

#### fn respond_to(self, req: &HttpRequest) -> HttpResponse<Self::Body>

Convert self to `HttpResponse`.

#### fn customize(self) -> CustomizeResponder<Self>where
    Self: Sized,

Wraps responder to allow alteration of its response.

Auto Trait Implementations
---
### impl<R> RefUnwindSafe for CustomizeResponder<R>where
R: RefUnwindSafe,
### impl<R> Send for CustomizeResponder<R>where
R: Send,
### impl<R> Sync for CustomizeResponder<R>where
R: Sync,
### impl<R> Unpin for CustomizeResponder<R>where
R: Unpin,
### impl<R> UnwindSafe for CustomizeResponder<R>where
R: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
    T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for Twhere
    T: ?Sized,

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for Twhere
    T: ?Sized,

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

#### fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for Twhere
    U: From<T>,

#### fn into(self) -> U

Calls `U::from(self)`.

That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> Same<T> for T

#### type Output = T

Should always be `Self`

### impl<T, U> TryFrom<U> for Twhere
    U: Into<T>,

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for Twhere
    U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<V, T> VZip<V> for Twhere
    V: MultiLane<T>,

#### fn vzip(self) -> V

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
    S: Into<Dispatch>,

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

#### fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Trait actix_web::Responder
===
```
pub trait Responder {
type Body: MessageBody + 'static;
// Required method
fn respond_to(self, req: &HttpRequest) -> HttpResponse<Self::Body>;
// Provided method
fn customize(self) -> CustomizeResponder<Self>
where Self: Sized { ... }
}
```
Trait implemented by types that can be converted to an HTTP response.
Any types that implement this trait can be used in the return type of a handler. Since handlers will only have one return type, it is idiomatic to use opaque return types `-> impl Responder`.
Implementations
---
It is often not required to implement `Responder` for your own types due to a broad base of built-in implementations:
* `HttpResponse` and `HttpResponseBuilder`
* `Option<R>` where `R: Responder`
* `Result<R, E>` where `R: Responder` and `E: ResponseError`
* `(R, StatusCode)` where `R: Responder`
* `&'static str`, `String`, `&'_ String`, `Cow<'_, str>`, `ByteString`
* `&'static [u8]`, `Vec<u8>`, `Bytes`, `BytesMut`
* `Json<T>` and `Form<T>` where `T: Serialize`
* `Either<L, R>` where `L: Serialize` and `R: Serialize`
* `CustomizeResponder<R>`
* `actix_files::NamedFile`
* Experimental responders from `actix-web-lab`
* Third party integrations may also have implemented `Responder` where appropriate. For example,
HTML templating engines.
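A minimal sketch of a manual implementation for a custom type, assuming `serde` and `serde_json` are available (`MyObj` is hypothetical):

```
use actix_web::{
    body::BoxBody, http::header::ContentType, HttpRequest, HttpResponse, Responder,
};
use serde::Serialize;

#[derive(Serialize)]
struct MyObj {
    name: &'static str,
}

impl Responder for MyObj {
    type Body = BoxBody;

    fn respond_to(self, _req: &HttpRequest) -> HttpResponse<Self::Body> {
        // Serialize the value and return it as a JSON response.
        let body = serde_json::to_string(&self).unwrap();

        HttpResponse::Ok()
            .content_type(ContentType::json())
            .body(body)
    }
}
```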
Customizing Responder Output
---
Calling `.customize()` on any responder type will wrap it in a
`CustomizeResponder` capable of overriding various parts of the response such as the status code and header map.
Required Associated Types
---
#### type Body: MessageBody + 'static
Required Methods
---
#### fn respond_to(self, req: &HttpRequest) -> HttpResponse<Self::Body>

Convert self to `HttpResponse`.
Provided Methods
---
#### fn customize(self) -> CustomizeResponder<Self>where
Self: Sized,
Wraps responder to allow alteration of its response.
See `CustomizeResponder` docs for more details on its capabilities.
##### Examples
```
use actix_web::{Responder, http::StatusCode, test::TestRequest};
let responder = "Hello world!"
.customize()
.with_status(StatusCode::BAD_REQUEST)
.insert_header(("x-hello", "world"));
let request = TestRequest::default().to_http_request();
let response = responder.respond_to(&request);
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
assert_eq!(response.headers().get("x-hello").unwrap(), "world");
```
Implementations on Foreign Types
---
### impl Responder for ResponseBuilder

#### type Body = BoxBody

#### fn respond_to(self, req: &HttpRequest) -> HttpResponse<Self::Body>

### impl Responder for ByteString

#### type Body = ByteString

#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>

### impl Responder for String

#### type Body = String

#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>

### impl Responder for Cow<'_, str>

#### type Body = String

#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>

### impl<R: Responder> Responder for (R, StatusCode)

#### type Body = <R as Responder>::Body

#### fn respond_to(self, req: &HttpRequest) -> HttpResponse<Self::Body>

### impl Responder for &'static str

#### type Body = &'static str

#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>

### impl Responder for Vec<u8>

#### type Body = Vec<u8, Global>

#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>

### impl Responder for &String

#### type Body = String

#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>

### impl<R: Responder> Responder for Option<R>

#### type Body = EitherBody<<R as Responder>::Body, BoxBody>

#### fn respond_to(self, req: &HttpRequest) -> HttpResponse<Self::Body>

### impl Responder for &'static [u8]

#### type Body = &'static [u8]

#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>

### impl<R, E> Responder for Result<R, E>where
    R: Responder,
    E: Into<Error>,

#### type Body = EitherBody<<R as Responder>::Body, BoxBody>

#### fn respond_to(self, req: &HttpRequest) -> HttpResponse<Self::Body>

Implementors
---
### impl Responder for Response<BoxBody>

#### type Body = BoxBody
### impl Responder for HttpResponseBuilder
#### type Body = BoxBody
### impl Responder for Bytes
#### type Body = Bytes
### impl Responder for BytesMut
#### type Body = BytesMut
### impl Responder for Redirect
#### type Body = ()
### impl<B> Responder for HttpResponse<B>where
B: MessageBody + 'static,
#### type Body = B
### impl<L, R> Responder for Either<L, R>where
L: Responder,
R: Responder,
See here for example of usage as a handler return type.
#### type Body = EitherBody<<L as Responder>::Body, <R as Responder>::Body>

### impl<T> Responder for InternalError<T>where
T: Debug + Display + 'static,
#### type Body = BoxBody
### impl<T> Responder for CustomizeResponder<T>where
T: Responder,
#### type Body = EitherBody<<T as Responder>::Body, BoxBody>

### impl<T: Serialize> Responder for Form<T>

See here for example of usage as a handler return type.

#### type Body = EitherBody<String, BoxBody>

### impl<T: Serialize> Responder for Json<T>

Creates response with OK status code, correct content type header, and serialized JSON payload.

If serialization fails, an error response is returned instead.

#### type Body = EitherBody<String, BoxBody>

Struct actix_web::HttpResponseBuilder
===
```
pub struct HttpResponseBuilder { /* private fields */ }
```
An HTTP response builder.
This type can be used to construct an instance of `Response` through a builder-like pattern.
Implementations
---
### impl HttpResponseBuilder
#### pub fn new(status: StatusCode) -> Self
Create response builder
#### pub fn status(&mut self, status: StatusCode) -> &mut Self
Set HTTP status code of this response.
#### pub fn insert_header(&mut self, header: impl TryIntoHeaderPair) -> &mut Self
Insert a header, replacing any that were set with an equivalent field name.
```
use actix_web::{HttpResponse, http::header};
HttpResponse::Ok()
.insert_header(header::ContentType(mime::APPLICATION_JSON))
.insert_header(("X-TEST", "value"))
.finish();
```
#### pub fn append_header(&mut self, header: impl TryIntoHeaderPair) -> &mut Self
Append a header, keeping any that were set with an equivalent field name.
```
use actix_web::{HttpResponse, http::header};
HttpResponse::Ok()
.append_header(header::ContentType(mime::APPLICATION_JSON))
.append_header(("X-TEST", "value1"))
.append_header(("X-TEST", "value2"))
.finish();
```
#### pub fn reason(&mut self, reason: &'static str) -> &mut Self
Set the custom reason for the response.
#### pub fn keep_alive(&mut self) -> &mut Self
Set connection type to KeepAlive
#### pub fn upgrade<V>(&mut self, value: V) -> &mut Selfwhere
V: TryIntoHeaderValue,
Set connection type to Upgrade
#### pub fn force_close(&mut self) -> &mut Self
Force close connection, even if it is marked as keep-alive
#### pub fn no_chunking(&mut self, len: u64) -> &mut Self
Disable chunked transfer encoding for HTTP/1.1 streaming responses.
#### pub fn content_type<V>(&mut self, value: V) -> &mut Selfwhere
V: TryIntoHeaderValue,
Set response content type.
#### pub fn cookie(&mut self, cookie: Cookie<'_>) -> &mut Self
Available on **crate feature `cookies`** only.Add a cookie to the response.
To send a “removal” cookie, call `.make_removal()` on the given cookie. See `HttpResponse::add_removal_cookie()` to learn more.
##### Examples
Send a new cookie:
```
use actix_web::{HttpResponse, cookie::Cookie};
let res = HttpResponse::Ok()
.cookie(
Cookie::build("name", "value")
.domain("www.rust-lang.org")
.path("/")
.secure(true)
.http_only(true)
.finish(),
)
.finish();
```
Send a removal cookie:
```
use actix_web::{HttpResponse, cookie::Cookie};
// the name, domain and path match the cookie created in the previous example
let mut cookie = Cookie::build("name", "value-does-not-matter")
.domain("www.rust-lang.org")
.path("/")
.finish();
cookie.make_removal();
let res = HttpResponse::Ok()
.cookie(cookie)
.finish();
```
#### pub fn extensions(&self) -> Ref<'_, Extensions>

Returns a reference to the response-local data/extensions container.

#### pub fn extensions_mut(&mut self) -> RefMut<'_, Extensions>

Returns a mutable reference to the response-local data/extensions container.
#### pub fn body<B>(&mut self, body: B) -> HttpResponse<BoxBody>where
B: MessageBody + 'static,
Set a body and build the `HttpResponse`.
Unlike `message_body`, errors are converted into error responses immediately.
`HttpResponseBuilder` can not be used after this call.
#### pub fn message_body<B>(&mut self, body: B) -> Result<HttpResponse<B>, Error>

Set a body and build the `HttpResponse`.
`HttpResponseBuilder` can not be used after this call.
#### pub fn streaming<S, E>(&mut self, stream: S) -> HttpResponsewhere
S: Stream<Item = Result<Bytes, E>> + 'static,
E: Into<Box<dyn Error>> + 'static,
Set a streaming body and build the `HttpResponse`.
`HttpResponseBuilder` can not be used after this call.
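A minimal sketch of `streaming`, assuming the `futures-util` crate is available for constructing a stream:

```
use actix_web::{web::Bytes, Error, HttpResponse};
use futures_util::stream;

// A one-shot stream; any `Stream<Item = Result<Bytes, E>>` works.
let body = stream::once(async { Ok::<_, Error>(Bytes::from_static(b"chunk")) });

let res = HttpResponse::Ok().streaming(body);
```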
#### pub fn json(&mut self, value: impl Serialize) -> HttpResponse
Set a JSON body and build the `HttpResponse`.
`HttpResponseBuilder` can not be used after this call.
#### pub fn finish(&mut self) -> HttpResponse
Set an empty body and build the `HttpResponse`.
`HttpResponseBuilder` can not be used after this call.
#### pub fn take(&mut self) -> Self
This method constructs a new `HttpResponseBuilder`.
Trait Implementations
---
### impl From<HttpResponseBuilder> for HttpResponse
#### fn from(builder: HttpResponseBuilder) -> Self
Converts to this type from the input type.

### impl From<HttpResponseBuilder> for Response<BoxBody>

#### fn from(builder: HttpResponseBuilder) -> Self

Converts to this type from the input type.

### impl Future for HttpResponseBuilder

#### type Output = Result<HttpResponse<BoxBody>, Error>

The type of value produced on completion.

#### fn poll(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<Self::Output>

Attempt to resolve the future to a final value, registering the current task for wakeup if the value is not yet available.

### impl Responder for HttpResponseBuilder

#### type Body = BoxBody

#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>

Convert self to `HttpResponse`.

#### fn customize(self) -> CustomizeResponder<Self>where
    Self: Sized,

Wraps responder to allow alteration of its response.

Auto Trait Implementations
---
### impl !RefUnwindSafe for HttpResponseBuilder
### impl !Send for HttpResponseBuilder
### impl !Sync for HttpResponseBuilder
### impl Unpin for HttpResponseBuilder
### impl !UnwindSafe for HttpResponseBuilder
Blanket Implementations
---
### impl<T> Any for Twhere
    T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for Twhere
    T: ?Sized,

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for Twhere
    T: ?Sized,

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> FutureExt for Twhere
    T: Future + ?Sized,

#### fn map<U, F>(self, f: F) -> Map<Self, F>where
    F: FnOnce(Self::Output) -> U,
    Self: Sized,

Map this future’s output to a different type, returning a new future of the resulting type.

#### fn map_into<U>(self) -> MapInto<Self, U>where
    Self::Output: Into<U>,
    Self: Sized,

Map this future’s output to a different type, returning a new future of the resulting type.

#### fn then<Fut, F>(self, f: F) -> Then<Self, Fut, F>where
    F: FnOnce(Self::Output) -> Fut,
    Fut: Future,
    Self: Sized,

Chain on a computation for when a future finished, passing the result of the future to the provided closure `f`.

#### fn left_future<B>(self) -> Either<Self, B>where
    B: Future<Output = Self::Output>,
    Self: Sized,

Wrap this future in an `Either` future, making it the left-hand variant of that `Either`.

#### fn right_future<A>(self) -> Either<A, Self>where
    A: Future<Output = Self::Output>,
    Self: Sized,

Wrap this future in an `Either` future, making it the right-hand variant of that `Either`.

#### fn into_stream(self) -> IntoStream<Self>where
    Self: Sized,

Convert this future into a single element stream.

#### fn flatten(self) -> Flatten<Self>where
    Self::Output: Future,
    Self: Sized,

Flatten the execution of this future when the output of this future is itself another future.

#### fn flatten_stream(self) -> FlattenStream<Self>where
    Self::Output: Stream,
    Self: Sized,

Flatten the execution of this future when the successful result of this future is a stream.

#### fn fuse(self) -> Fuse<Self>where
    Self: Sized,

Fuse a future such that `poll` will never again be called once it has completed. This method can be used to turn any `Future` into a `FusedFuture`.

#### fn inspect<F>(self, f: F) -> Inspect<Self, F>where
    F: FnOnce(&Self::Output),
    Self: Sized,

Do something with the output of a future before passing it on.

#### fn boxed<'a>(self) -> Pin<Box<dyn Future<Output = Self::Output> + Send + 'a, Global>>where
    Self: Sized + Send + 'a,

Available on **crate feature `alloc`** only. Wrap the future in a Box, pinning it.

#### fn boxed_local<'a>(self) -> Pin<Box<dyn Future<Output = Self::Output> + 'a, Global>>where
    Self: Sized + 'a,

Available on **crate feature `alloc`** only. Wrap the future in a Box, pinning it.

#### fn unit_error(self) -> UnitError<Self>where
    Self: Sized,

Turns a `Future<Output = T>` into a `TryFuture<Ok = T, Error = ()>`.

#### fn never_error(self) -> NeverError<Self>where
    Self: Sized,

Turns a `Future<Output = T>` into a `TryFuture<Ok = T, Error = Never>`.

#### fn poll_unpin(&mut self, cx: &mut Context<'_>) -> Poll<Self::Output>where
    Self: Unpin,

A convenience for calling `Future::poll` on `Unpin` future types.

#### fn now_or_never(self) -> Option<Self::Output>where
    Self: Sized,

Evaluates and consumes the future, returning the resulting output if the future is ready after the first call to `Future::poll`.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

#### fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for Twhere
    U: From<T>,

#### fn into(self) -> U

Calls `U::from(self)`.

That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<F> IntoFuture for Fwhere
    F: Future,

#### type Output = <F as Future>::Output

The output that the future will produce on completion.

#### type IntoFuture = F

Which kind of future are we turning this into?

#### fn into_future(self) -> <F as IntoFuture>::IntoFuture

Creates a future from a value.

### impl<T> Same<T> for T

#### type Output = T

Should always be `Self`

### impl<T, U> TryFrom<U> for Twhere
    U: Into<T>,

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<F, T, E> TryFuture for Fwhere
    F: Future<Output = Result<T, E>> + ?Sized,

#### type Ok = T

The type of successful values yielded by this future.

#### type Error = E

The type of failures yielded by this future.

#### fn try_poll(self: Pin<&mut F>, cx: &mut Context<'_>) -> Poll<<F as Future>::Output>

Poll this `TryFuture` as if it were a `Future`.

### impl<Fut> TryFutureExt for Futwhere
    Fut: TryFuture + ?Sized,

#### fn map_ok<T, F>(self, f: F) -> MapOk<Self, F>where
    F: FnOnce(Self::Ok) -> T,
    Self: Sized,

Maps this future’s success value to a different value.

#### fn map_ok_or_else<T, E, F>(self, e: E, f: F) -> MapOkOrElse<Self, F, E>where
    F: FnOnce(Self::Ok) -> T,
    E: FnOnce(Self::Error) -> T,
    Self: Sized,

Maps this future’s success value to a different value, and permits for error handling resulting in the same type.

#### fn map_err<E, F>(self, f: F) -> MapErr<Self, F>where
    F: FnOnce(Self::Error) -> E,
    Self: Sized,

Maps this future’s error value to a different value.

#### fn err_into<E>(self) -> ErrInto<Self, E>where
    Self: Sized,
    Self::Error: Into<E>,

Maps this future’s `Error` to a new error type using the `Into` trait.

#### fn ok_into<U>(self) -> OkInto<Self, U>where
    Self: Sized,
    Self::Ok: Into<U>,

Maps this future’s `Ok` to a new type using the `Into` trait.

#### fn and_then<Fut2, F>(self, f: F) -> AndThen<Self, Fut2, F>where
    F: FnOnce(Self::Ok) -> Fut2,
    Fut2: TryFuture<Error = Self::Error>,
    Self: Sized,

Executes another future after this one resolves successfully. The success value is passed to a closure to create this subsequent future.

#### fn or_else<Fut2, F>(self, f: F) -> OrElse<Self, Fut2, F>where
    F: FnOnce(Self::Error) -> Fut2,
    Fut2: TryFuture<Ok = Self::Ok>,
    Self: Sized,

Executes another future if this one resolves to an error. The error value is passed to a closure to create this subsequent future.

#### fn inspect_ok<F>(self, f: F) -> InspectOk<Self, F>where
    F: FnOnce(&Self::Ok),
    Self: Sized,

Do something with the success value of a future before passing it on.

#### fn inspect_err<F>(self, f: F) -> InspectErr<Self, F>where
    F: FnOnce(&Self::Error),
    Self: Sized,

Do something with the error value of a future before passing it on.

#### fn try_flatten(self) -> TryFlatten<Self, Self::Ok>where
    Self::Ok: TryFuture<Error = Self::Error>,
    Self: Sized,

Flatten the execution of this future when the successful result of this future is another future.

#### fn try_flatten_stream(self) -> TryFlattenStream<Self>where
    Self::Ok: TryStream<Error = Self::Error>,
    Self: Sized,

Flatten the execution of this future when the successful result of this future is a stream.

#### fn unwrap_or_else<F>(self, f: F) -> UnwrapOrElse<Self, F>where
    Self: Sized,
    F: FnOnce(Self::Error) -> Self::Ok,

Unwraps this future’s output, producing a future with this future’s `Ok` type as its `Output` type.

#### fn into_future(self) -> IntoFuture<Self>where
    Self: Sized,

Wraps a `TryFuture` into a type that implements `Future`.

#### fn try_poll_unpin(&mut self, cx: &mut Context<'_>) -> Poll<Result<Self::Ok, Self::Error>>where
    Self: Unpin,

A convenience method for calling `TryFuture::try_poll` on `Unpin` future types.

### impl<T, U> TryInto<U> for Twhere
    U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<V, T> VZip<V> for Twhere
    V: MultiLane<T>,

#### fn vzip(self) -> V

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
    S: Into<Dispatch>,

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

#### fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct actix_web::Resource
===
```
pub struct Resource<T = ResourceEndpoint> { /* private fields */ }
```
A collection of `Route`s that respond to the same path pattern.
A resource has at least one route. A route consists of a handler and a list of guards (objects that implement the `Guard` trait). Resources and routes use a builder-like pattern for configuration. During request handling, the resource iterates through all routes and checks the guards for each route; if the request matches all guards of a route, that route is considered matched and its handler is called.
Examples
---
```
use actix_web::{web, App, HttpResponse};
let app = App::new().service(
web::resource("/")
.get(|| HttpResponse::Ok())
.post(|| async { "Hello World!" })
);
```
If no matching route is found, an empty 405 response is returned which includes an appropriate Allow header. This default behavior can be overridden using
`default_service()`.
Implementations
---
### impl Resource
#### pub fn new<T: IntoPatterns>(path: T) -> Resource
Constructs new resource that matches a `path` pattern.
### impl<T> Resource<T>where
T: ServiceFactory<ServiceRequest, Config = (), Error = Error, InitError = ()>,
#### pub fn name(self, name: &str) -> Self
Set resource name.
Name is used for url generation.
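As a brief illustration (not from the original page; the resource name `user_detail` and the paths are invented), a named resource can be referenced from a handler via `HttpRequest::url_for`:

```
use actix_web::{web, App, HttpRequest, HttpResponse, Responder};

async fn show(req: HttpRequest) -> impl Responder {
    // "user_detail" is the name set below via `.name(...)`;
    // "42" fills the `{id}` segment of that resource's pattern.
    match req.url_for("user_detail", ["42"]) {
        Ok(url) => HttpResponse::Ok().body(url.to_string()),
        Err(_) => HttpResponse::InternalServerError().finish(),
    }
}

let app = App::new()
    .service(web::resource("/show").to(show))
    .service(
        web::resource("/user/{id}")
            .name("user_detail")
            .to(|| async { "user page" }),
    );
```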
#### pub fn guard<G: Guard + 'static>(self, guard: G) -> Self
Add match guard to a resource.
```
use actix_web::{web, guard, App, HttpResponse};
async fn index(data: web::Path<(String, String)>) -> &'static str {
"Welcome!"
}
let app = App::new()
.service(
web::resource("/app")
.guard(guard::Header("content-type", "text/plain"))
.route(web::get().to(index))
)
.service(
web::resource("/app")
.guard(guard::Header("content-type", "text/json"))
.route(web::get().to(|| HttpResponse::MethodNotAllowed()))
);
```
#### pub fn route(self, route: Route) -> Self
Register a new route.
```
use actix_web::{web, guard, App, HttpResponse};
let app = App::new().service(
web::resource("/").route(
web::route()
.guard(guard::Any(guard::Get()).or(guard::Put()))
.guard(guard::Header("Content-Type", "text/plain"))
.to(|| HttpResponse::Ok()))
);
```
Multiple routes can be added to a resource. The resource object uses match guards for route selection.
```
use actix_web::{web, guard, App, HttpResponse};

async fn get_handler() -> HttpResponse { HttpResponse::Ok().finish() }
async fn post_handler() -> HttpResponse { HttpResponse::Ok().finish() }
async fn delete_handler() -> HttpResponse { HttpResponse::Ok().finish() }
let app = App::new().service(
web::resource("/container/")
.route(web::get().to(get_handler))
.route(web::post().to(post_handler))
.route(web::delete().to(delete_handler))
);
```
#### pub fn app_data<U: 'static>(self, data: U) -> Self
Add resource data.
Data of different types from parent contexts will still be accessible. Any `Data<T>` types set here can be extracted in handlers using the `Data<T>` extractor.
##### Examples
```
use std::cell::Cell;
use actix_web::{web, App, HttpRequest, HttpResponse, Responder};
struct MyData {
count: std::cell::Cell<usize>,
}
async fn handler(req: HttpRequest, counter: web::Data<MyData>) -> impl Responder {
// note this cannot use the Data<T> extractor because it was not added with it
let incr = *req.app_data::<usize>().unwrap();
assert_eq!(incr, 3);
// update counter using other value from app data
counter.count.set(counter.count.get() + incr);
HttpResponse::Ok().body(counter.count.get().to_string())
}
let app = App::new().service(
web::resource("/")
.app_data(3usize)
.app_data(web::Data::new(MyData { count: Default::default() }))
.route(web::get().to(handler))
);
```
#### pub fn data<U: 'static>(self, data: U) -> Self
👎Deprecated since 4.0.0: Use `.app_data(Data::new(val))` instead.
Add resource data after wrapping in `Data<T>`.
Deprecated in favor of `app_data`.
#### pub fn to<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Register a new route and add handler. This route matches all requests.
```
use actix_web::{App, HttpRequest, HttpResponse, web};
async fn index(req: HttpRequest) -> HttpResponse {
todo!()
}
App::new().service(web::resource("/").to(index));
```
This is shortcut for:
```
App::new().service(web::resource("/").route(web::route().to(index)));
```
#### pub fn wrap<M, B>(
self,
mw: M
) -> Resource<impl ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse<B>, Error = Error, InitError = ()>>where
M: Transform<T::Service, ServiceRequest, Response = ServiceResponse<B>, Error = Error, InitError = ()> + 'static,
B: MessageBody,
Registers a resource middleware.
`mw` is a middleware component (type) that can modify the request and response across all routes managed by this `Resource`.
See `App::wrap` for more details.
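For instance (a minimal sketch, assuming the built-in `middleware::DefaultHeaders` middleware; the header value is arbitrary), a middleware can be attached so it runs for every route of the resource:

```
use actix_web::{middleware, web, App, HttpResponse};

let app = App::new().service(
    web::resource("/")
        // Runs for every route registered on this resource.
        .wrap(middleware::DefaultHeaders::new().add(("X-Version", "0.2")))
        .route(web::get().to(HttpResponse::Ok)),
);
```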
#### pub fn wrap_fn<F, R, B>(
self,
mw: F
) -> Resource<impl ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse<B>, Error = Error, InitError = ()>>where
F: Fn(ServiceRequest, &T::Service) -> R + Clone + 'static,
R: Future<Output = Result<ServiceResponse<B>, Error>>,
B: MessageBody,
Registers a resource function middleware.
`mw` is a closure that runs during inbound and/or outbound processing in the request life-cycle (request -> response), modifying request/response as necessary, across all requests handled by the `Resource`.
See `App::wrap_fn` for examples and more details.
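A rough sketch of `wrap_fn` on a resource (mirroring the `App::wrap_fn` example; the header it sets is arbitrary):

```
use actix_web::dev::Service as _;
use actix_web::{http::header, web, App, HttpResponse};

let app = App::new().service(
    web::resource("/")
        .wrap_fn(|req, srv| {
            // Code here runs before the wrapped service is called.
            let fut = srv.call(req);
            async {
                let mut res = fut.await?;
                res.headers_mut().insert(
                    header::CONTENT_TYPE,
                    header::HeaderValue::from_static("text/plain"),
                );
                Ok(res)
            }
        })
        .route(web::get().to(HttpResponse::Ok)),
);
```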
#### pub fn default_service<F, U>(self, f: F) -> Selfwhere
F: IntoServiceFactory<U, ServiceRequest>,
U: ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse, Error = Error> + 'static,
U::InitError: Debug,
Sets the default service to be used if no matching route is found.
Unlike `Scope`s, a `Resource` does *not* inherit its parent’s default service. You can use a `Route` as default service.
If a custom default service is not registered, an empty `405 Method Not Allowed` response with an appropriate Allow header will be sent instead.
##### Examples
```
use actix_web::{App, HttpResponse, web};
let resource = web::resource("/test")
.route(web::get().to(HttpResponse::Ok))
.default_service(web::to(|| {
HttpResponse::BadRequest()
}));
App::new().service(resource);
```
### impl<T> Resource<T>where
T: ServiceFactory<ServiceRequest, Config = (), Error = Error, InitError = ()>,
Concise routes for well-known HTTP methods.
#### pub fn get<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Adds a GET route.
Use `route` if you need to add additional guards.
##### Examples
```
web::resource("/")
.get(|| async { "Hello World!" })
```
#### pub fn post<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Adds a POST route.
Use `route` if you need to add additional guards.
##### Examples
```
web::resource("/")
.post(|| async { "Hello World!" })
```
#### pub fn put<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Adds a PUT route.
Use `route` if you need to add additional guards.
##### Examples
```
web::resource("/")
.put(|| async { "Hello World!" })
```
#### pub fn patch<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Adds a PATCH route.
Use `route` if you need to add additional guards.
##### Examples
```
web::resource("/")
.patch(|| async { "Hello World!" })
```
#### pub fn delete<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Adds a DELETE route.
Use `route` if you need to add additional guards.
##### Examples
```
web::resource("/")
.delete(|| async { "Hello World!" })
```
#### pub fn head<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Adds a HEAD route.
Use `route` if you need to add additional guards.
##### Examples
```
web::resource("/")
.head(|| async { "Hello World!" })
```
#### pub fn trace<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Adds a TRACE route.
Use `route` if you need to add additional guards.
##### Examples
```
web::resource("/")
.trace(|| async { "Hello World!" })
```
Trait Implementations
---
### impl<T, B> HttpServiceFactory for Resource<T>where
T: ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse<B>, Error = Error, InitError = ()> + 'static,
B: MessageBody + 'static,
#### fn register(self, config: &mut AppService)
Auto Trait Implementations
---
### impl<T = ResourceEndpoint> !RefUnwindSafe for Resource<T>
### impl<T = ResourceEndpoint> !Send for Resource<T>
### impl<T = ResourceEndpoint> !Sync for Resource<T>
### impl<T> Unpin for Resource<T>where
T: Unpin,
### impl<T = ResourceEndpoint> !UnwindSafe for Resource<T>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct actix_web::Route
===
```
pub struct Route { /* private fields */ }
```
A request handler with guards.
Route uses a builder-like pattern for configuration. If a handler is not set, a `404 Not Found`
handler is used.
Implementations
---
### impl Route
#### pub fn new() -> Route
Create a new route that matches any request.
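In practice a `Route` is usually created through the `web::route()` helper (which calls `Route::new`) and then narrowed with guards; a small sketch (the path and guard are arbitrary):

```
use actix_web::{guard, web, App, HttpResponse};

let app = App::new().service(
    web::resource("/").route(
        // `web::route()` matches any method until a guard narrows it down.
        web::route()
            .guard(guard::Options())
            .to(HttpResponse::NoContent),
    ),
);
```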
#### pub fn wrap<M, B>(self, mw: M) -> Routewhere
M: Transform<BoxService<ServiceRequest, ServiceResponse, Error>, ServiceRequest, Response = ServiceResponse<B>, Error = Error, InitError = ()> + 'static,
B: MessageBody + 'static,
Registers a route middleware.
`mw` is a middleware component (type) that can modify the requests and responses handled by this `Route`.
See `App::wrap` for more details.
### impl Route
#### pub fn method(self, method: Method) -> Self
Add method guard to the route.
##### Examples
```
use actix_web::{guard, http, web, App, HttpRequest, HttpResponse};

App::new().service(web::resource("/path").route(
web::get()
.method(http::Method::CONNECT)
.guard(guard::Header("content-type", "text/plain"))
.to(|req: HttpRequest| HttpResponse::Ok()))
);
```
#### pub fn guard<F: Guard + 'static>(self, f: F) -> Self
Add guard to the route.
##### Examples
```
use actix_web::{guard, web, App, HttpRequest, HttpResponse};

App::new().service(web::resource("/path").route(
web::route()
.guard(guard::Get())
.guard(guard::Header("content-type", "text/plain"))
.to(|req: HttpRequest| HttpResponse::Ok()))
);
```
#### pub fn to<F, Args>(self, handler: F) -> Selfwhere
F: Handler<Args>,
Args: FromRequest + 'static,
F::Output: Responder + 'static,
Set the handler function; request extractors are used for its parameters.
##### Examples
```
use actix_web::{web, http, App};
use serde::Deserialize;
#[derive(Deserialize)]
struct Info {
username: String,
}
/// extract path info using serde
async fn index(info: web::Path<Info>) -> String {
format!("Welcome {}!", info.username)
}
let app = App::new().service(
web::resource("/{username}/index.html") // <- define path parameters
.route(web::get().to(index)) // <- register handler
);
```
It is possible to use multiple extractors for one handler function.
```
use std::collections::HashMap;

use actix_web::{web, App};
use serde::Deserialize;
#[derive(Deserialize)]
struct Info {
username: String,
}
/// extract path info using serde
async fn index(
path: web::Path<Info>,
query: web::Query<HashMap<String, String>>,
body: web::Json<Info>
) -> String {
format!("Welcome {}!", path.username)
}
let app = App::new().service(
web::resource("/{username}/index.html") // <- define path parameters
.route(web::get().to(index))
);
```
#### pub fn service<S, E>(self, service_factory: S) -> Selfwhere
S: ServiceFactory<ServiceRequest, Response = ServiceResponse, Error = E, InitError = (), Config = ()> + 'static,
E: Into<Error> + 'static,
Set raw service to be constructed and called as the request handler.
##### Examples
```
use std::convert::Infallible;

use actix_web::dev::{self, fn_factory, Service, ServiceRequest, ServiceResponse};
use actix_web::http::header;
use actix_web::{web, App, HttpResponse};
use futures_util::future::LocalBoxFuture;

struct HelloWorld;
impl Service<ServiceRequest> for HelloWorld {
type Response = ServiceResponse;
type Error = Infallible;
type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;
dev::always_ready!();
fn call(&self, req: ServiceRequest) -> Self::Future {
let (req, _) = req.into_parts();
let res = HttpResponse::Ok()
.insert_header(header::ContentType::plaintext())
.body("Hello world!");
Box::pin(async move { Ok(ServiceResponse::new(req, res)) })
}
}
App::new().route(
"/",
web::get().service(fn_factory(|| async { Ok(HelloWorld) })),
);
```
Trait Implementations
---
### impl ServiceFactory<ServiceRequest> for Route
#### type Response = ServiceResponse<BoxBody>
Responses given by the created services.
#### type Error = Error
Errors produced by the created services.
#### type Config = ()
Service factory configuration.
#### type Service = RouteService
The kind of `Service` created by this factory.
#### type InitError = ()
Errors potentially raised while building a service.
#### type Future = Pin<Box<dyn Future<Output = Result<<Route as ServiceFactory<ServiceRequest>>::Service, <Route as ServiceFactory<ServiceRequest>>::InitError>>, Global>>
The future of the `Service` instance.
#### fn new_service(&self, _: ()) -> Self::Future
Create and return a new service asynchronously.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Route
### impl !Send for Route
### impl !Sync for Route
### impl Unpin for Route
### impl !UnwindSafe for Route
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<SF, Req> IntoServiceFactory<SF, Req> for SFwhere
SF: ServiceFactory<Req>,
#### fn into_factory(self) -> SF
Convert `Self` to a `ServiceFactory`.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<SF, Req> ServiceFactoryExt<Req> for SFwhere
SF: ServiceFactory<Req>,
#### fn map<F, R>(self, f: F) -> MapServiceFactory<Self, F, Req, R>where
Self: Sized,
F: FnMut(Self::Response) -> R + Clone,
Map this service’s output to a different type, returning a new service of the resulting type.
#### fn map_err<F, E>(self, f: F) -> MapErrServiceFactory<Self, Req, F, E>where
Self: Sized,
F: Fn(Self::Error) -> E + Clone,
Map this service’s error to a different error, returning a new service.
#### fn map_init_err<F, E>(self, f: F) -> MapInitErr<Self, F, Req, E>where
Self: Sized,
F: Fn(Self::InitError) -> E + Clone,
Map this factory’s init error to a different error, returning a new service.
#### fn and_then<I, SF1>(self, factory: I) -> AndThenServiceFactory<Self, SF1, Req>where
Self: Sized,
Self::Config: Clone,
I: IntoServiceFactory<SF1, Self::Response>,
SF1: ServiceFactory<Self::Response, Config = Self::Config, Error = Self::Error, InitError = Self::InitError>,
Call another service after call to this one has resolved successfully.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct actix_web::Scope
===
```
pub struct Scope<T = ScopeEndpoint> { /* private fields */ }
```
A collection of `Route`s, `Resource`s, or other services that share a common path prefix.
The `Scope`’s path can contain dynamic segments. The dynamic segments can be extracted from requests using the `Path` extractor or with `HttpRequest::match_info()`.
Avoid Trailing Slashes
---
Avoid using trailing slashes in the scope prefix (e.g., `web::scope("/scope/")`). It will almost certainly not have the expected behavior. See the documentation on resource definitions to understand why this is the case and how to correctly construct scope/prefix definitions.
Examples
---
```
use actix_web::{web, App, HttpResponse};
let app = App::new().service(
web::scope("/{project_id}")
.service(web::resource("/path1").to(|| async { "OK" }))
.service(web::resource("/path2").route(web::get().to(|| HttpResponse::Ok())))
.service(web::resource("/path3").route(web::head().to(HttpResponse::MethodNotAllowed)))
);
```
In the above example three routes get registered:
* /{project_id}/path1 - responds to all HTTP methods
* /{project_id}/path2 - responds to `GET` requests
* /{project_id}/path3 - responds to `HEAD` requests
Implementations
---
### impl Scope
#### pub fn new(path: &str) -> Scope
Create a new scope
### impl<T> Scope<T>where
T: ServiceFactory<ServiceRequest, Config = (), Error = Error, InitError = ()>,
#### pub fn guard<G: Guard + 'static>(self, guard: G) -> Self
Add match guard to a scope.
```
use actix_web::{web, guard, App, HttpRequest, HttpResponse};
async fn index(data: web::Path<(String, String)>) -> &'static str {
"Welcome!"
}
let app = App::new().service(
web::scope("/app")
.guard(guard::Header("content-type", "text/plain"))
.route("/test1", web::get().to(index))
.route("/test2", web::post().to(|r: HttpRequest| {
HttpResponse::MethodNotAllowed()
}))
);
```
#### pub fn app_data<U: 'static>(self, data: U) -> Self
Add scope data.
Data of different types from parent contexts will still be accessible. Any `Data<T>` types set here can be extracted in handlers using the `Data<T>` extractor.
##### Examples
```
use std::cell::Cell;
use actix_web::{web, App, HttpRequest, HttpResponse, Responder};
struct MyData {
count: std::cell::Cell<usize>,
}
async fn handler(req: HttpRequest, counter: web::Data<MyData>) -> impl Responder {
// note this cannot use the Data<T> extractor because it was not added with it
let incr = *req.app_data::<usize>().unwrap();
assert_eq!(incr, 3);
// update counter using other value from app data
counter.count.set(counter.count.get() + incr);
HttpResponse::Ok().body(counter.count.get().to_string())
}
let app = App::new().service(
web::scope("/app")
.app_data(3usize)
.app_data(web::Data::new(MyData { count: Default::default() }))
.route("/", web::get().to(handler))
);
```
#### pub fn data<U: 'static>(self, data: U) -> Self
👎Deprecated since 4.0.0: Use `.app_data(Data::new(val))` instead.
Add scope data after wrapping in `Data<T>`.
Deprecated in favor of `app_data`.
#### pub fn configure<F>(self, cfg_fn: F) -> Selfwhere
F: FnOnce(&mut ServiceConfig),
Run external configuration as part of the scope building process.
This function is useful for moving parts of configuration to a different module or library.
For example, some of the resource’s configuration could be moved to a different module.
```
use actix_web::{web, middleware, App, HttpResponse};
// this function could be located in a different module
fn config(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("/test")
.route(web::get().to(|| HttpResponse::Ok()))
.route(web::head().to(|| HttpResponse::MethodNotAllowed()))
);
}
let app = App::new()
.wrap(middleware::Logger::default())
.service(
web::scope("/api")
.configure(config)
)
.route("/index.html", web::get().to(|| HttpResponse::Ok()));
```
#### pub fn service<F>(self, factory: F) -> Selfwhere
F: HttpServiceFactory + 'static,
Register HTTP service.
This is similar to `App`’s service registration.
Actix Web provides several service implementations:
* *Resource* is an entry in the resource table which corresponds to a requested URL.
* *Scope* is a set of resources with a common root path.
* *StaticFiles* is a service for static file support.
```
use actix_web::{web, App, HttpRequest};
struct AppState;
async fn index(req: HttpRequest) -> &'static str {
"Welcome!"
}
let app = App::new().service(
web::scope("/app").service(
web::scope("/v1")
.service(web::resource("/test1").to(index)))
);
```
#### pub fn route(self, path: &str, route: Route) -> Self
Configure route for a specific path.
This is a simplified version of the `Scope::service()` method.
This method can be called multiple times; in that case, multiple resources with one route each are registered for the same resource path.
```
use actix_web::{web, App, HttpResponse};
async fn index(data: web::Path<(String, String)>) -> &'static str {
"Welcome!"
}
let app = App::new().service(
web::scope("/app")
.route("/test1", web::get().to(index))
.route("/test2", web::post().to(|| HttpResponse::MethodNotAllowed()))
);
```
#### pub fn default_service<F, U>(self, f: F) -> Selfwhere
F: IntoServiceFactory<U, ServiceRequest>,
U: ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse, Error = Error> + 'static,
U::InitError: Debug,
Default service to be used if no matching resource could be found.
If a default service is not registered, it will fall back to the default service of the parent `App` (see `App::default_service`).
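As an illustrative sketch (the paths are invented), a scope-level fallback can turn unmatched requests under a prefix into a 404 without affecting the rest of the app:

```
use actix_web::{web, App, HttpResponse};

let app = App::new().service(
    web::scope("/api")
        .service(web::resource("/ping").to(|| async { "pong" }))
        // Handles any /api request that matched no resource above.
        .default_service(web::to(HttpResponse::NotFound)),
);
```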
#### pub fn wrap<M, B>(
self,
mw: M
) -> Scope<impl ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse<B>, Error = Error, InitError = ()>>where
M: Transform<T::Service, ServiceRequest, Response = ServiceResponse<B>, Error = Error, InitError = ()> + 'static,
B: MessageBody,
Registers a scope-wide middleware.
`mw` is a middleware component (type) that can modify the request and response across all sub-resources managed by this `Scope`.
See `App::wrap` for more details.
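For example (a minimal sketch using the built-in `Logger` middleware), wrapping a scope applies the middleware to every request under its prefix:

```
use actix_web::{middleware, web, App};

let app = App::new().service(
    web::scope("/api")
        // Logs each request handled under the /api prefix.
        .wrap(middleware::Logger::default())
        .route("/ping", web::get().to(|| async { "pong" })),
);
```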
#### pub fn wrap_fn<F, R, B>(
self,
mw: F
) -> Scope<impl ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse<B>, Error = Error, InitError = ()>>where
F: Fn(ServiceRequest, &T::Service) -> R + Clone + 'static,
R: Future<Output = Result<ServiceResponse<B>, Error>>,
B: MessageBody,
Registers a scope-wide function middleware.
`mw` is a closure that runs during inbound and/or outbound processing in the request life-cycle (request -> response), modifying request/response as necessary, across all requests handled by the `Scope`.
See `App::wrap_fn` for examples and more details.
Trait Implementations
---
### impl<T, B> HttpServiceFactory for Scope<T>where
T: ServiceFactory<ServiceRequest, Config = (), Response = ServiceResponse<B>, Error = Error, InitError = ()> + 'static,
B: MessageBody + 'static,
#### fn register(self, config: &mut AppService)
Auto Trait Implementations
---
### impl<T = ScopeEndpoint> !RefUnwindSafe for Scope<T>
### impl<T = ScopeEndpoint> !Send for Scope<T>
### impl<T = ScopeEndpoint> !Sync for Scope<T>
### impl<T> Unpin for Scope<T>where
T: Unpin,
### impl<T = ScopeEndpoint> !UnwindSafe for Scope<T>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Enum actix_web::Either
===
```
pub enum Either<L, R> {
Left(L),
Right(R),
}
```
Combines two extractor or responder types into a single type.
Extractor
---
Provides a mechanism for trying two extractors: a primary and a fallback. Useful for
“polymorphic payloads” where, for example, a form might be JSON or URL encoded.
It is important to note that this extractor, by necessity, buffers the entire request payload as part of its implementation. It does, however, respect any `PayloadConfig` maximum size limits.
```
use actix_web::{post, web, Either};
use serde::Deserialize;
#[derive(Deserialize)]
struct Info {
name: String,
}
// handler that accepts form as JSON or form-urlencoded.
#[post("/")]
async fn index(form: Either<web::Json<Info>, web::Form<Info>>) -> String {
let name: String = match form {
Either::Left(json) => json.name.to_owned(),
Either::Right(form) => form.name.to_owned(),
};
format!("Welcome {}!", name)
}
```
Responder
---
It may be desirable to use a concrete type for a response with multiple branches. As long as both types implement `Responder`, so will the `Either` type, enabling it to be used as a handler’s return type.
All properties of a response are determined by the Responder branch returned.
```
use actix_web::{get, Either, Error, HttpResponse};
#[get("/")]
async fn index() -> Either<&'static str, Result<HttpResponse, Error>> {
if 1 == 2 {
// respond with Left variant
Either::Left("Bad data")
} else {
// respond with Right variant
Either::Right(
Ok(HttpResponse::Ok()
.content_type(mime::TEXT_HTML)
.body("<p>Hello!</p>"))
)
}
}
```
Variants
---
### Left(L)
A value of type `L`.
### Right(R)
A value of type `R`.
Implementations
---
### impl<T> Either<Form<T>, Json<T>>
#### pub fn into_inner(self) -> T
### impl<T> Either<Json<T>, Form<T>>
#### pub fn into_inner(self) -> T
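A short sketch (not from the original page) of these `into_inner` helpers, which erase which branch matched and return the common inner value:

```
use actix_web::{post, web, Either};
use serde::Deserialize;

#[derive(Deserialize)]
struct Info {
    name: String,
}

// Accept the payload as either form-urlencoded or JSON.
#[post("/")]
async fn index(body: Either<web::Form<Info>, web::Json<Info>>) -> String {
    // Both branches hold an `Info`, so we can collapse to it directly.
    let info = body.into_inner();
    format!("Welcome {}!", info.name)
}
```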
Trait Implementations
---
### impl<L: Debug, R: Debug> Debug for Either<L, R>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<L, R> FromRequest for Either<L, R>where
L: FromRequest + 'static,
R: FromRequest + 'static,
See here for example of usage as an extractor.
#### type Error = EitherExtractError<<L as FromRequest>::Error, <R as FromRequest>::Error>
The associated error which can be returned.
#### type Future = EitherExtractFut<L, R>
Future that resolves to a `Self`.
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
Create a `Self` from request parts asynchronously.
#### fn extract(req: &HttpRequest) -> Self::Future
Create a `Self` from request head asynchronously.
### impl<L: PartialEq, R: PartialEq> PartialEq<Either<L, R>> for Either<L, R>
#### fn eq(&self, other: &Either<L, R>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<L, R> Responder for Either<L, R>where
L: Responder,
R: Responder,
See here for example of usage as a handler return type.
#### type Body = EitherBody<<L as Responder>::Body, <R as Responder>::Body>
#### fn respond_to(self, req: &HttpRequest) -> HttpResponse<Self::Body>
Convert self to `HttpResponse`.
#### fn customize(self) -> CustomizeResponder<Self>where
Self: Sized,
Wraps responder to allow alteration of its response.
Auto Trait Implementations
---
### impl<L, R> RefUnwindSafe for Either<L, R>where
L: RefUnwindSafe,
R: RefUnwindSafe,
### impl<L, R> Send for Either<L, R>where
L: Send,
R: Send,
### impl<L, R> Sync for Either<L, R>where
L: Sync,
R: Sync,
### impl<L, R> Unpin for Either<L, R>where
L: Unpin,
R: Unpin,
### impl<L, R> UnwindSafe for Either<L, R>where
L: UnwindSafe,
R: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<Q, K> Equivalent<K> for Qwhere
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Trait actix_web::FromRequest
===
```
pub trait FromRequest: Sized {
type Error: Into<Error>;
type Future: Future<Output = Result<Self, Self::Error>>;
// Required method
fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future;
// Provided method
fn extract(req: &HttpRequest) -> Self::Future { ... }
}
```
A type that implements `FromRequest` is called an **extractor** and can extract data from the request. Some types that implement this trait are: `Json`, `Header`, and `Path`.
Check out `ServiceRequest::extract` if you want to leverage extractors when implementing middlewares.
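A rough sketch of that approach (assuming actix-web 4’s `ServiceRequest::extract`; the extracted type is arbitrary):

```
use actix_web::dev::ServiceRequest;
use actix_web::web;

// Somewhere inside a middleware: run an extractor without a handler.
async fn inspect(req: &mut ServiceRequest) {
    if let Ok(path) = req.extract::<web::Path<(String,)>>().await {
        println!("first path segment: {}", path.into_inner().0);
    }
}
```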
Configuration
---
An extractor can be customized by injecting the corresponding configuration with one of:
* `App::app_data()`
* `Scope::app_data()`
* `Resource::app_data()`
Here are some built-in extractors and their corresponding configuration types; a configuration sketch follows the table.
Please refer to the respective documentation for details.
| Extractor | Configuration |
| --- | --- |
| `Header` | *None* |
| `Path` | `PathConfig` |
| `Json` | `JsonConfig` |
| `Form` | `FormConfig` |
| `Query` | `QueryConfig` |
| `Bytes` | `PayloadConfig` |
| `String` | `PayloadConfig` |
| `Payload` | `PayloadConfig` |
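As a sketch of how such configuration is injected (the values here are arbitrary), the `Json` extractor can be tuned through a `JsonConfig` stored as app data:

```
use actix_web::{error, web, App, HttpResponse};

// Cap JSON payloads at 4 KiB and turn deserialization failures
// into plain 400 responses for every handler in this app.
let json_cfg = web::JsonConfig::default()
    .limit(4096)
    .error_handler(|err, _req| {
        error::InternalError::from_response(err, HttpResponse::BadRequest().finish()).into()
    });

let app = App::new().app_data(json_cfg);
```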
Implementing An Extractor
---
To reduce duplicate code in handlers where extracting certain parts of a request has a common structure, you can implement `FromRequest` for your own types.
Note that the request payload can only be consumed by one extractor.
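For instance, a minimal custom extractor might pull a value out of a request header; this sketch is hypothetical (the `ClientId` type and header name are invented) and does not touch the payload:

```
use actix_web::{dev::Payload, error::ErrorBadRequest, Error, FromRequest, HttpRequest};
use futures_util::future::{ready, Ready};

struct ClientId(String);

impl FromRequest for ClientId {
    type Error = Error;
    type Future = Ready<Result<Self, Self::Error>>;

    fn from_request(req: &HttpRequest, _payload: &mut Payload) -> Self::Future {
        // Look up the header synchronously; no body access is needed.
        let id = req
            .headers()
            .get("X-Client-Id")
            .and_then(|v| v.to_str().ok())
            .map(|s| ClientId(s.to_owned()))
            .ok_or_else(|| ErrorBadRequest("missing X-Client-Id"));
        ready(id)
    }
}
```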
Required Associated Types
---
#### type Error: Into<Error>
The associated error which can be returned.
#### type Future: Future<Output = Result<Self, Self::Error>>
Future that resolves to a `Self`.
To use an async function or block, the futures must be boxed. The following snippet will be common when creating async/await extractors (that do not consume the body).
```
type Future = Pin<Box<dyn Future<Output = Result<Self, Self::Error>>>>;
// or
type Future = futures_util::future::LocalBoxFuture<'static, Result<Self, Self::Error>>;

fn from_request(req: &HttpRequest, ...) -> Self::Future {
let req = req.clone();
Box::pin(async move {
...
})
}
```
Required Methods
---
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
Create a `Self` from request parts asynchronously.
Provided Methods
---
#### fn extract(req: &HttpRequest) -> Self::Future
Create a `Self` from request head asynchronously.
This method is short for `T::from_request(req, &mut Payload::None)`.
Implementations on Foreign Types
---
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static, I: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H, I)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest9<A, B, C, D, E, F, G, H, I>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest8<A, B, C, D, E, F, G, H>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest7<A, B, C, D, E, F, G>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<T, E> FromRequest for Result<T, E>where
T: FromRequest,
T::Error: Into<E>,
Extract from the request, passing error type through to handler.
If the inner `T::from_request` returns an error, allow handler to receive the error rather than immediately returning an error response.
#### Examples
```
use actix_web::{web, dev, App, Result, Error, HttpRequest, FromRequest};
use actix_web::error::ErrorBadRequest;
use futures_util::future::{ok, err, Ready};
use serde::Deserialize;
use rand;
#[derive(Debug, Deserialize)]
struct Thing {
name: String
}
impl FromRequest for Thing {
type Error = Error;
type Future = Ready<Result<Thing, Error>>;
fn from_request(req: &HttpRequest, payload: &mut dev::Payload) -> Self::Future {
if rand::random() {
ok(Thing { name: "thingy".into() })
} else {
err(ErrorBadRequest("no luck"))
}
}
}
/// extract `Thing` from request
async fn index(supplied_thing: Result<Thing>) -> String {
match supplied_thing {
Ok(thing) => format!("Got thing: {:?}", thing),
Err(e) => format!("Error extracting thing: {}", e)
}
}
let app = App::new().service(
web::resource("/users/:first").route(web::post().to(index))
);
```
#### type Error = Infallible
#### type Future = FromRequestResFuture<<T as FromRequest>::Future, E>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static, I: FromRequest + 'static, J: FromRequest + 'static, K: FromRequest + 'static, L: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H, I, J, K, L)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest12<A, B, C, D, E, F, G, H, I, J, K, L>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static> FromRequest for (A, B, C, D, E)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest5<A, B, C, D, E>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static> FromRequest for (A, B)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest2<A, B>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static, I: FromRequest + 'static, J: FromRequest + 'static, K: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H, I, J, K)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest11<A, B, C, D, E, F, G, H, I, J, K>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static> FromRequest for (A, B, C, D, E, F)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest6<A, B, C, D, E, F>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl FromRequest for String
Extract text information from a request’s body.
The text extractor automatically decodes the body according to the request’s charset.
Use `PayloadConfig` to configure extraction process.
#### Examples
```
use actix_web::{post, web, FromRequest};
// extract text data from request
#[post("/")]
async fn index(text: String) -> String {
format!("Body {}!", text)
}
```
#### type Error = Error
#### type Future = Either<StringExtractFut, Ready<Result<String, Error>>>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static, I: FromRequest + 'static, J: FromRequest + 'static, K: FromRequest + 'static, L: FromRequest + 'static, M: FromRequest + 'static, N: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H, I, J, K, L, M, N)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest14<A, B, C, D, E, F, G, H, I, J, K, L, M, N>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static, I: FromRequest + 'static, J: FromRequest + 'static, K: FromRequest + 'static, L: FromRequest + 'static, M: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H, I, J, K, L, M)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest13<A, B, C, D, E, F, G, H, I, J, K, L, M>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static> FromRequest for (A, B, C, D)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest4<A, B, C, D>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static, I: FromRequest + 'static, J: FromRequest + 'static, K: FromRequest + 'static, L: FromRequest + 'static, M: FromRequest + 'static, N: FromRequest + 'static, O: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H, I, J, K, L, M, N, O)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest15<A, B, C, D, E, F, G, H, I, J, K, L, M, N, O>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static, I: FromRequest + 'static, J: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H, I, J)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest10<A, B, C, D, E, F, G, H, I, J>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<T> FromRequest for Option<T>where
T: FromRequest,
Optionally extract from the request.
If the inner `T::from_request` returns an error, the handler will receive `None` instead.
#### Examples
```
use actix_web::{web, dev, App, Error, HttpRequest, FromRequest};
use actix_web::error::ErrorBadRequest;
use futures_util::future::{ok, err, Ready};
use serde::Deserialize;
use rand;
#[derive(Debug, Deserialize)]
struct Thing {
name: String
}
impl FromRequest for Thing {
type Error = Error;
type Future = Ready<Result<Self, Self::Error>>;
fn from_request(req: &HttpRequest, payload: &mut dev::Payload) -> Self::Future {
if rand::random() {
ok(Thing { name: "thingy".into() })
} else {
err(ErrorBadRequest("no luck"))
}
}
}
/// extract `Thing` from request
async fn index(supplied_thing: Option<Thing>) -> String {
match supplied_thing {
// Puns not intended
Some(thing) => format!("Got something: {:?}", thing),
None => format!("No thing!")
}
}
let app = App::new().service(
web::resource("/users/:first").route(
web::post().to(index))
);
```
#### type Error = Infallible
#### type Future = FromRequestOptFuture<<T as FromRequest>::Future>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl FromRequest for ()
#### type Error = Infallible
#### type Future = Ready<Result<(), <() as FromRequest>::Error>>
#### fn from_request(_: &HttpRequest, _: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static, D: FromRequest + 'static, E: FromRequest + 'static, F: FromRequest + 'static, G: FromRequest + 'static, H: FromRequest + 'static, I: FromRequest + 'static, J: FromRequest + 'static, K: FromRequest + 'static, L: FromRequest + 'static, M: FromRequest + 'static, N: FromRequest + 'static, O: FromRequest + 'static, P: FromRequest + 'static> FromRequest for (A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest16<A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static, B: FromRequest + 'static, C: FromRequest + 'static> FromRequest for (A, B, C)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest3<A, B, C>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
### impl<A: FromRequest + 'static> FromRequest for (A,)
FromRequest implementation for tuple
#### type Error = Error
#### type Future = TupleFromRequest1<A>
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
Implementors
---
### impl FromRequest for ConnectionInfo
#### type Error = Infallible
#### type Future = Ready<Result<ConnectionInfo, <ConnectionInfo as FromRequest>::Error>>
### impl FromRequest for PeerAddr
#### type Error = MissingPeerAddr
#### type Future = Ready<Result<PeerAddr, <PeerAddr as FromRequest>::Error>>
### impl FromRequest for Method
Extract the request’s method.
#### Examples
```
use actix_web::{http::Method, web, App, Responder};
async fn handler(method: Method) -> impl Responder {
format!("Request method: {}", method)
}
let app = App::new().default_service(web::to(handler));
```
#### type Error = Infallible
#### type Future = Ready<Result<Method, <Method as FromRequest>::Error>>
### impl FromRequest for Uri
Extract the request’s URI.
#### Examples
```
use actix_web::{http::Uri, web, App, Responder};
async fn handler(uri: Uri) -> impl Responder {
format!("Requested path: {}", uri.path())
}
let app = App::new().default_service(web::to(handler));
```
#### type Error = Infallible
#### type Future = Ready<Result<Uri, <Uri as FromRequest>::Error>>
### impl FromRequest for HttpRequest
It is possible to get `HttpRequest` as an extractor handler parameter
#### Examples
```
use actix_web::{web, App, HttpRequest};
use serde::Deserialize;
/// extract `Thing` from request
async fn index(req: HttpRequest) -> String {
format!("Got thing: {:?}", req)
}
let app = App::new().service(
web::resource("/users/{first}").route(
web::get().to(index))
);
```
#### type Error = Error
#### type Future = Ready<Result<HttpRequest, Error>>
### impl FromRequest for Bytes
Extract binary data from a request’s payload.
Collects request payload stream into a Bytes instance.
Use `PayloadConfig` to configure extraction process.
#### Examples
```
use actix_web::{post, web};
/// extract binary data from request
#[post("/")]
async fn index(body: web::Bytes) -> String {
format!("Body {:?}!", body)
}
```
#### type Error = Error
#### type Future = Either<BytesExtractFut, Ready<Result<Bytes, Error>>>
### impl FromRequest for Payload
See here for example of usage as an extractor.
#### type Error = Error
#### type Future = Ready<Result<Payload, <Payload as FromRequest>::Error>>
### impl<L, R> FromRequest for Either<L, R>where
L: FromRequest + 'static,
R: FromRequest + 'static,
See here for example of usage as an extractor.
#### type Error = EitherExtractError<<L as FromRequest>::Error, <R as FromRequest>::Error>
#### type Future = EitherExtractFut<L, R>
### impl<T> FromRequest for Form<T>where
T: DeserializeOwned + 'static,
See here for example of usage as an extractor.
#### type Error = Error
#### type Future = FormExtractFut<T>
### impl<T> FromRequest for Header<T>where
T: ParseHeader,
#### type Error = ParseError
#### type Future = Ready<Result<Header<T>, <Header<T> as FromRequest>::Error>>
### impl<T> FromRequest for Path<T>where
T: DeserializeOwned,
See here for example of usage as an extractor.
#### type Error = Error
#### type Future = Ready<Result<Path<T>, <Path<T> as FromRequest>::Error>>
### impl<T: Clone + 'static> FromRequest for ReqData<T>
#### type Error = Error
#### type Future = Ready<Result<ReqData<T>, Error>>
### impl<T: DeserializeOwned> FromRequest for Json<T>
See here for example of usage as an extractor.
#### type Error = Error
#### type Future = JsonExtractFut<T>
### impl<T: DeserializeOwned> FromRequest for Query<T>
See here for example of usage as an extractor.
#### type Error = Error
#### type Future = Ready<Result<Query<T>, Error>>
### impl<T: ?Sized + 'static> FromRequest for Data<T>
#### type Error = Error
#### type Future = Ready<Result<Data<T>, Error>>
Struct actix_web::web::Json
===
```
pub struct Json<T>(pub T);
```
JSON extractor and responder.
`Json` has two uses: JSON responses, and extracting typed data from JSON request payloads.
Extractor
---
To extract typed data from a request body, the inner type `T` must implement the
`serde::Deserialize` trait.
Use `JsonConfig` to configure extraction options.
```
use actix_web::{post, web, App};
use serde::Deserialize;
#[derive(Deserialize)]
struct Info {
username: String,
}
/// deserialize `Info` from request's body
#[post("/")]
async fn index(info: web::Json<Info>) -> String {
format!("Welcome {}!", info.username)
}
```
Responder
---
The `Json` type is also used for JSON formatted responses. A handler may return a value of type
`Json<T>` where `T` is the type of a structure to serialize into JSON. The type `T` must implement `serde::Serialize`.
```
use actix_web::{post, web, HttpRequest};
use serde::Serialize;
#[derive(Serialize)]
struct Info {
name: String,
}
#[post("/{name}")]
async fn index(req: HttpRequest) -> web::Json<Info> {
web::Json(Info {
name: req.match_info().get("name").unwrap().to_owned(),
})
}
```
Tuple Fields
---
`0: T`
Implementations
---
### impl<T> Json<T>
#### pub fn into_inner(self) -> T
Unwrap into inner `T` value.
Trait Implementations
---
### impl<T: Debug> Debug for Json<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<T> Deref for Json<T>
#### type Target = T
The resulting type after dereferencing.
#### fn deref(&self) -> &T
Dereferences the value.
### impl<T> DerefMut for Json<T>
#### fn deref_mut(&mut self) -> &mut T
Mutably dereferences the value.
### impl<T: Display> Display for Json<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<T: DeserializeOwned> FromRequest for Json<T>
See here for example of usage as an extractor.
#### type Error = Error
The associated error which can be returned.
#### type Future = JsonExtractFut<T>
Future that resolves to a `Self`.
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
Create a `Self` from request parts asynchronously.
#### fn extract(req: &HttpRequest) -> Self::Future
Create a `Self` from request head asynchronously.
### impl<T: Serialize> Responder for Json<T>
Errors if serialization fails.
#### type Body = EitherBody<String, BoxBody>
#### fn respond_to(self, _: &HttpRequest) -> HttpResponse<Self::Body>
Convert self to `HttpResponse`.
#### fn customize(self) -> CustomizeResponder<Self>where
Self: Sized,
Wraps responder to allow alteration of its response.
### impl<T: Serialize> Serialize for Json<T>
#### fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>where
S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for Json<T>where
T: RefUnwindSafe,
### impl<T> Send for Json<T>where
T: Send,
### impl<T> Sync for Json<T>where
T: Sync,
### impl<T> Unpin for Json<T>where
T: Unpin,
### impl<T> UnwindSafe for Json<T>where
T: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, P> Resource for Twhere
T: DerefMut<Target = Path<P>>,
P: ResourcePath,
#### type Path = P
Type of resource’s path returned in `resource_path`.
#### fn resource_path(&mut self) -> &mut Path<<T as Resource>::Path>
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToString for Twhere
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> Formattable for Twhere
T: Deref,
<T as Deref>::Target: Formattable,
### impl<T> Parsable for Twhere
T: Deref,
<T as Deref>::Target: Parsable,
Struct actix_web::web::Header
===
```
pub struct Header<T>(pub T);
```
Extract typed headers from the request.
To extract a header, the inner type `T` must implement the
`Header` trait.
Examples
---
```
use actix_web::{get, web, http::header};
#[get("/")]
async fn index(date: web::Header<header::Date>) -> String {
format!("Request was sent at {}", date.to_string())
}
```
Tuple Fields
---
`0: T`
Implementations
---
### impl<T> Header<T>
#### pub fn into_inner(self) -> T
Unwrap into the inner `T` value.
Trait Implementations
---
### impl<T: Clone> Clone for Header<T>
#### fn clone(&self) -> Header<T>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl<T: Debug> Debug for Header<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<T> Deref for Header<T>
#### type Target = T
The resulting type after dereferencing.
#### fn deref(&self) -> &T
Dereferences the value.
### impl<T> DerefMut for Header<T>
#### fn deref_mut(&mut self) -> &mut T
Mutably dereferences the value.
### impl<T> Display for Header<T>where
T: Display,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<T> FromRequest for Header<T>where
T: ParseHeader,
#### type Error = ParseError
The associated error which can be returned.
#### type Future = Ready<Result<Header<T>, <Header<T> as FromRequest>::Error>>
Future that resolves to a `Self`.
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
Create a `Self` from request parts asynchronously.
#### fn extract(req: &HttpRequest) -> Self::Future
Create a `Self` from request head asynchronously.
### impl<T: Ord> Ord for Header<T>
#### fn cmp(&self, other: &Header<T>) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
### impl<T: PartialEq> PartialEq<Header<T>> for Header<T>
#### fn eq(&self, other: &Header<T>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<T: PartialOrd> PartialOrd<Header<T>> for Header<T>
#### fn partial_cmp(&self, other: &Header<T>) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for Header<T>where
T: RefUnwindSafe,
### impl<T> Send for Header<T>where
T: Send,
### impl<T> Sync for Header<T>where
T: Sync,
### impl<T> Unpin for Header<T>where
T: Unpin,
### impl<T> UnwindSafe for Header<T>where
T: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<Q, K> Equivalent<K> for Qwhere
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, P> Resource for Twhere
T: DerefMut<Target = Path<P>>,
P: ResourcePath,
#### type Path = P
Type of resource’s path returned in `resource_path`.
#### fn resource_path(&mut self) -> &mut Path<<T as Resource>::Path>
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for Twhere
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> Formattable for Twhere
T: Deref,
<T as Deref>::Target: Formattable,
### impl<T> Parsable for Twhere
T: Deref,
<T as Deref>::Target: Parsable,
Struct actix_web::web::Path
===
```
pub struct Path<T>(_);
```
Extract typed data from request path segments.
Use `PathConfig` to configure extraction options.
Unlike `HttpRequest::match_info`, this extractor will fully percent-decode dynamic segments,
including `/`, `%`, and `+`.
Examples
---
```
use actix_web::{get, web};
// extract path info from "/{name}/{count}/index.html" into tuple
// {name} - deserialize a String
// {count} - deserialize a u32
#[get("/{name}/{count}/index.html")]
async fn index(path: web::Path<(String, u32)>) -> String {
let (name, count) = path.into_inner();
format!("Welcome {}! {}", name, count)
}
```
Path segments can also be deserialized into any type that implements `serde::Deserialize`.
Path segment labels will be matched with struct field names.
```
use actix_web::{get, web};
use serde::Deserialize;
#[derive(Deserialize)]
struct Info {
name: String,
}
// extract `Info` from a path using serde
#[get("/{name}")]
async fn index(info: web::Path<Info>) -> String {
format!("Welcome {}!", info.name)
}
```
Implementations
---
### impl<T> Path<T>
#### pub fn into_inner(self) -> T
Unwrap into inner `T` value.
Trait Implementations
---
### impl<T> AsRef<T> for Path<T>
#### fn as_ref(&self) -> &T
Converts this type into a shared reference of the (usually inferred) input type.
### impl<T: Debug> Debug for Path<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<T> Deref for Path<T>
#### type Target = T
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl<T> DerefMut for Path<T>
#### fn deref_mut(&mut self) -> &mut Self::Target
Mutably dereferences the value.
### impl<T> Display for Path<T> where T: Display
#### fn fmt(&self, _derive_more_display_formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<T> FromRequest for Path<T> where T: DeserializeOwned
See here for an example of usage as an extractor.
#### type Error = Error
The associated error which can be returned.
#### type Future = Ready<Result<Path<T>, <Path<T> as FromRequest>::Error>>
Future that resolves to a `Self`.
#### fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future
Create a `Self` from request parts asynchronously.
#### fn extract(req: &HttpRequest) -> Self::Future
Create a `Self` from request head asynchronously.
### impl<T: Ord> Ord for Path<T>
#### fn cmp(&self, other: &Path<T>) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self where Self: Sized
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where Self: Sized
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>
Restrict a value to a certain interval.
### impl<T: PartialEq> PartialEq<Path<T>> for Path<T>
#### fn eq(&self, other: &Path<T>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<T: PartialOrd> PartialOrd<Path<T>> for Path<T>
#### fn partial_cmp(&self, other: &Path<T>) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for Path<T> where T: RefUnwindSafe
### impl<T> Send for Path<T> where T: Send
### impl<T> Sync for Path<T> where T: Sync
### impl<T> Unpin for Path<T> where T: Unpin
### impl<T> UnwindSafe for Path<T> where T: UnwindSafe
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<T> From<!> for T
#### fn from(t: !) -> T
Converts to this type from the input type.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, P> Resource for T where T: DerefMut<Target = Path<P>>, P: ResourcePath
#### type Path = P
Type of resource’s path returned in `resource_path`.
#### fn resource_path(&mut self) -> &mut Path<<T as Resource>::Path>
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> Formattable for T where T: Deref, <T as Deref>::Target: Formattable
### impl<T> Parsable for T where T: Deref, <T as Deref>::Target: Parsable
Trait actix_web::Handler
===
```
pub trait Handler<Args>: Clone + 'static {
type Output;
type Future: Future<Output = Self::Output>;
// Required method
fn call(&self, args: Args) -> Self::Future;
}
```
The interface for request handlers.
What Is A Request Handler
---
In short, a handler is just an async function that receives request-based arguments, in any order, and returns something that can be converted to a response.
In particular, a request handler has three requirements:
1. It is an async function (or a function/closure that returns an appropriate future);
2. The function parameters (up to 12) implement `FromRequest`;
3. The async function (or future) resolves to a type that can be converted into an
`HttpResponse` (i.e., it implements the `Responder` trait).
Compiler Errors
---
If you get the error `the trait Handler<_> is not implemented`, then your handler does not fulfill the *first* of the above requirements. Missing other requirements manifest as errors on implementing `FromRequest` and `Responder`, respectively.
How Do Handlers Receive Variable Numbers Of Arguments
---
Rest assured there is no macro magic here; it’s just traits.
The first thing to note is that `FromRequest` is implemented for tuples (up to 12 in length).
Secondly, the `Handler` trait is implemented for functions (up to an arity of 12) in a way that aligns their parameter positions with a corresponding tuple of types (becoming the `Args`
type parameter for this trait).
Thanks to Rust’s type system, Actix Web can infer the function parameter types. During the extraction step, the parameter types are described as a tuple type, `from_request` is run on that tuple, and the `Handler::call` implementation for that particular function arity destructures the tuple into its component types and calls your handler function with them.
In pseudo-code the process looks something like this:
```
async fn my_handler(body: String, state: web::Data<MyState>) -> impl Responder {
...
}
// the function params above described as a tuple, names do not matter, only position
type InferredMyHandlerArgs = (String, web::Data<MyState>);
// create tuple of arguments to be passed to handler
let args = InferredMyHandlerArgs::from_request(&request, &payload).await;
// call handler with argument tuple
let response = Handler::call(&my_handler, args).await;
// which is effectively...
let (body, state) = args;
let response = my_handler(body, state).await;
```
This is the source code for the 2-parameter implementation of `Handler` to help illustrate the bounds of the handler call after argument extraction:
```
impl<Func, Arg1, Arg2, Fut> Handler<(Arg1, Arg2)> for Func where
Func: Fn(Arg1, Arg2) -> Fut + Clone + 'static,
Fut: Future,
{
type Output = Fut::Output;
type Future = Fut;
fn call(&self, (arg1, arg2): (Arg1, Arg2)) -> Self::Future {
(self)(arg1, arg2)
}
}
```
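As a concrete sketch (not part of the upstream docs; the handler, route, and app data below are made up), any async function whose parameters implement `FromRequest` can be registered via `Route::to`, which accepts any `Handler`:
```
use actix_web::{web, App, HttpServer, Responder};

// This async fn takes two extractor arguments, so it automatically
// implements `Handler<(web::Path<String>, web::Data<u32>)>`.
async fn greet(name: web::Path<String>, counter: web::Data<u32>) -> impl Responder {
    format!("Hello {}! (app data: {})", name, counter.get_ref())
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .app_data(web::Data::new(42u32))
            // `to` accepts any `Handler`, regardless of arity
            .route("/{name}", web::get().to(greet))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```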
Required Associated Types
---
#### type Output
#### type Future: Future<Output = Self::Output>
Required Methods
---
#### fn call(&self, args: Args) -> Self::Future
Implementors
---
### impl<Func, Fut> Handler<()> for Func where Func: Fn() -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A> Handler<(A,)> for Func where Func: Fn(A) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B> Handler<(A, B)> for Func where Func: Fn(A, B) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C> Handler<(A, B, C)> for Func where Func: Fn(A, B, C) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D> Handler<(A, B, C, D)> for Func where Func: Fn(A, B, C, D) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E> Handler<(A, B, C, D, E)> for Func where Func: Fn(A, B, C, D, E) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F> Handler<(A, B, C, D, E, F)> for Func where Func: Fn(A, B, C, D, E, F) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G> Handler<(A, B, C, D, E, F, G)> for Func where Func: Fn(A, B, C, D, E, F, G) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H> Handler<(A, B, C, D, E, F, G, H)> for Func where Func: Fn(A, B, C, D, E, F, G, H) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H, I> Handler<(A, B, C, D, E, F, G, H, I)> for Func where Func: Fn(A, B, C, D, E, F, G, H, I) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H, I, J> Handler<(A, B, C, D, E, F, G, H, I, J)> for Func where Func: Fn(A, B, C, D, E, F, G, H, I, J) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H, I, J, K> Handler<(A, B, C, D, E, F, G, H, I, J, K)> for Func where Func: Fn(A, B, C, D, E, F, G, H, I, J, K) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H, I, J, K, L> Handler<(A, B, C, D, E, F, G, H, I, J, K, L)> for Func where Func: Fn(A, B, C, D, E, F, G, H, I, J, K, L) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H, I, J, K, L, M> Handler<(A, B, C, D, E, F, G, H, I, J, K, L, M)> for Func where Func: Fn(A, B, C, D, E, F, G, H, I, J, K, L, M) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H, I, J, K, L, M, N> Handler<(A, B, C, D, E, F, G, H, I, J, K, L, M, N)> for Func where Func: Fn(A, B, C, D, E, F, G, H, I, J, K, L, M, N) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O> Handler<(A, B, C, D, E, F, G, H, I, J, K, L, M, N, O)> for Func where Func: Fn(A, B, C, D, E, F, G, H, I, J, K, L, M, N, O) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
### impl<Func, Fut, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P> Handler<(A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P)> for Func where Func: Fn(A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P) -> Fut + Clone + 'static, Fut: Future
#### type Output = <Fut as Future>::Output
#### type Future = Fut
Trait actix_web::HttpMessage
===
```
pub trait HttpMessage: Sized {
type Stream;
// Required methods
fn headers(&self) -> &HeaderMap;
fn take_payload(&mut self) -> Payload<Self::Stream>;
fn extensions(&self) -> Ref<'_, Extensions>;
fn extensions_mut(&self) -> RefMut<'_, Extensions>;
// Provided methods
fn content_type(&self) -> &str { ... }
fn encoding(&self) -> Result<&'static Encoding, ContentTypeError> { ... }
fn mime_type(&self) -> Result<Option<Mime>, ContentTypeError> { ... }
fn chunked(&self) -> Result<bool, ParseError> { ... }
}
```
Trait that implements general purpose operations on HTTP messages.
Required Associated Types
---
#### type Stream
Type of message payload stream
Required Methods
---
#### fn headers(&self) -> &HeaderMap
Read the message headers.
#### fn take_payload(&mut self) -> Payload<Self::Stream>
Message payload stream.
#### fn extensions(&self) -> Ref<'_, Extensions>
Returns a reference to the request-local data/extensions container.
#### fn extensions_mut(&self) -> RefMut<'_, Extensions>
Returns a mutable reference to the request-local data/extensions container.
Provided Methods
---
#### fn content_type(&self) -> &str
Read the request content type. If the request did not contain a *Content-Type* header, an empty string is returned.
#### fn encoding(&self) -> Result<&'static Encoding, ContentTypeError>
Get the content type encoding. UTF-8 is used by default if the request charset is not set.
#### fn mime_type(&self) -> Result<Option<Mime>, ContentTypeError>
Convert the request content type to a known mime type.
#### fn chunked(&self) -> Result<bool, ParseError>
Check if the request has chunked transfer encoding.
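A short usage sketch (illustrative only; the route and handler names are made up): `HttpRequest` implements `HttpMessage`, so the provided methods can be called on it once the trait is in scope:
```
use actix_web::{get, HttpMessage, HttpRequest, Responder};

#[get("/inspect")]
async fn inspect(req: HttpRequest) -> impl Responder {
    // Empty string when no Content-Type header was sent.
    let ct = req.content_type().to_owned();
    // `mime_type` parses the Content-Type header into a `mime::Mime`, if any.
    let mime = req.mime_type().ok().flatten();
    format!("content-type: {:?}, parsed mime: {:?}", ct, mime)
}
```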
Implementations on Foreign Types
---
### impl<'a, T> HttpMessage for &'a mut T where T: HttpMessage
#### fn take_payload(&mut self) -> Payload<<&'a mut T as HttpMessage>::Stream>
Message payload stream.
#### fn extensions(&self) -> Ref<'_, Extensions>
Request’s extensions container.
#### fn extensions_mut(&self) -> RefMut<'_, Extensions>
Mutable reference to the request’s extensions container.
#### type Stream = <T as HttpMessage>::Stream
#### fn headers(&self) -> &HeaderMap
Implementors
---
### impl HttpMessage for ServiceRequest
#### type Stream = Pin<Box<dyn Stream<Item = Result<Bytes, PayloadError>>, Global>>
### impl HttpMessage for HttpRequest
#### type Stream = ()
### impl<P> HttpMessage for Request<P>
#### type Stream = P
Type Alias actix_web::Result
===
```
pub type Result<T, E = Error> = Result<T, E>;
```
A convenience `Result` for Actix Web operations.
This type alias is generally used to avoid writing out `actix_http::Error` directly.
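A typical use (illustrative sketch, not from the upstream docs): returning `Result<impl Responder>` lets `?` propagate any error that converts into `actix_web::Error`:
```
use actix_web::{get, web, Responder, Result};

#[get("/users/{id}")]
async fn user(id: web::Path<u32>) -> Result<impl Responder> {
    // Any error type implementing `ResponseError` converts into `actix_web::Error`,
    // so the `?` operator works directly inside the handler.
    Ok(format!("user #{}", id))
}
```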
Attribute Macro actix_web::connect
===
```
#[connect]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Connect`.
Syntax
---
```
#[connect("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[connect("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::delete
===
```
#[delete]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Delete`.
Syntax
---
```
#[delete("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[delete("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::get
===
```
#[get]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Get`.
Syntax
---
```
#[get("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[get("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::head
===
```
#[head]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Head`.
Syntax
---
```
#[head("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[head("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::main
===
```
#[main]
```
Available on **crate feature `macros`** only.
Marks async main function as the Actix Web system entry-point.
Note that Actix Web also works under `#[tokio::main]` since version 4.0. However, this macro is still necessary for actor support (since actors use a `System`). Read more in the
`actix_web::rt` module docs.
Examples
---
```
#[actix_web::main]
async fn main() {
async { println!("Hello world"); }.await
}
```
Attribute Macro actix_web::options
===
```
#[options]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Options`.
Syntax
---
```
#[options("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[options("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::patch
===
```
#[patch]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Patch`.
Syntax
---
```
#[patch("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[patch("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::post
===
```
#[post]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Post`.
Syntax
---
```
#[post("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[post("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::put
===
```
#[put]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Put`.
Syntax
---
```
#[put("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[put("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::route
===
```
#[route]
```
Available on **crate feature `macros`** only.
Creates resource handler, allowing multiple HTTP method guards.
Syntax
---
```
#[route("path", method="HTTP_METHOD"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `method = "HTTP_METHOD"`: Registers HTTP method to provide guard for. Upper-case string,
“GET”, “POST” for example.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[route("/test", method = "GET", method = "HEAD", method = "CUSTOM")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
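The `guard` and `wrap` attributes compose with `method`; a minimal sketch (the guard function and header name here are made up, not from the upstream docs):
```
use actix_web::{guard::GuardContext, route, HttpResponse};

// Hypothetical guard; `guard = "..."` registers it via `actix_web::guard::fn_guard`.
fn has_api_key(ctx: &GuardContext<'_>) -> bool {
    ctx.head().headers().contains_key("x-api-key")
}

#[route("/data", method = "GET", method = "HEAD", guard = "has_api_key")]
async fn data() -> HttpResponse {
    HttpResponse::Ok().finish()
}
```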
Attribute Macro actix_web::routes
===
```
#[routes]
```
Available on **crate feature `macros`** only.
Creates resource handler, allowing multiple HTTP methods and paths.
Syntax
---
```
#[routes]
#[<method>("path", ...)]
#[<method>("path", ...)]
...
```
Attributes
---
The `routes` macro itself has no parameters, but allows specifying attribute macros for multiple paths and/or methods, e.g. `GET` and `POST`.
These helper attributes take the same parameters as the single method handlers.
Examples
---
```
#[routes]
#[get("/test")]
#[get("/test2")]
#[delete("/test")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
```
Attribute Macro actix_web::test
===
```
#[test]
```
Available on **crate feature `macros`** only.
Marks async test functions to use the Actix Web system entry-point.
Examples
---
```
#[actix_web::test]
async fn test() {
assert_eq!(async { "Hello world" }.await, "Hello world");
}
```
Attribute Macro actix_web::trace
===
```
#[trace]
```
Available on **crate feature `macros`** only.
Creates route handler with `actix_web::guard::Trace`.
Syntax
---
```
#[trace("path"[, attributes])]
```
Attributes
---
* `"path"`: Raw literal string with path for which to register handler.
* `name = "resource_name"`: Specifies resource name for the handler. If not set, the function name of handler is used.
* `guard = "function_name"`: Registers function as guard using `actix_web::guard::fn_guard`.
* `wrap = "Middleware"`: Registers a resource middleware.
Notes
---
Function name can be specified as any expression that is going to be accessible to the generated code, e.g. `my_guard` or `my_module::my_guard`.
Examples
---
```
#[trace("/")]
async fn example() -> HttpResponse {
HttpResponse::Ok().finish()
}
``` |
MortCast | cran | R | Package ‘MortCast’
October 12, 2022
Type Package
Title Estimation and Projection of Age-Specific Mortality Rates
Version 2.7-0
Date 2022-03-31
Author Hana Sevcikova, Nan Li and Patrick Gerland
Maintainer Hana Sevcikova <<EMAIL>>
Description Age-specific mortality rates are estimated and projected using
the Kannisto, Lee-Carter and related methods as described in
Sevcikova et al. (2016) <doi:10.1007/978-3-319-26603-9_15>.
License GPL (>= 2)
Depends R (>= 3.5.0), wpp2017
RoxygenNote 7.1.1
LazyLoad True
LazyData True
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-03-31 21:30:02 UTC
R topics documented:
MortCast-package
cokannisto
cokannisto.estimate
kannisto
kannisto.estimate
kannisto.predict
leecarter.estimate
life.table
lileecarter.estimate
logquad
LQcoef
mlt
MLTlookup
mortcast
mortcast.blend
pmd
PMDadjcoef
PMDrho
rotate.leecarter
MortCast-package MortCast: Estimation and Projection of Age-Specific Mortality Rates
Description
Age-specific mortality rates are estimated and projected using the Kannisto, Lee-Carter and related
methods as described in Sevcikova et al. (2016) <doi:10.1007/978-3-319-26603-9_15>.
Details
The package implements methodology described in Sevcikova et al. (2016) that is related to estimating
and predicting age-specific mortality rates. The main functions are:
• cokannisto: Extrapolates given mortality rates into higher ages using the Coherent Kannisto
method. The original Kannisto method (with sex-independent extrapolation) is available in the
function kannisto.
• lileecarter.estimate: Estimates the coherent Lee-Carter parameters for male and female
mortality rates (Li and Lee 2005), i.e. sex-independent parameters $a_x$ and $k_t$, and the coherent
parameter $b_x$. In addition, it computes the ultimate $b_x^u$ for rotation (Li et al. 2013). The
underlying sex-independent estimation is implemented in the function leecarter.estimate.
• mortcast: Using estimated coherent Lee-Carter parameters and given future sex-specific life
expectancies, it projects age-specific mortality rates, while (by default) rotating the $b_x$ parameter
as described in Li et al. (2013).
Functions contained in the package can be used to apply Algorithm 2 in Sevcikova et al. (2016) as
shown in the Example below. It can be used for both 5-year and 1-year age groups.
Other methods for forecasting mortality rates are available:
• pmd: pattern of mortality decline
• mlt: model life tables
• logquad: log-quadratic mortality model
• mortcast.blend: combining two different methods
A life table can be constructed using the life.table function.
Author(s)
Hana Sevcikova, Nan Li and Patrick Gerland
References
Li, N. and Lee, R. (2005). Coherent mortality forecasts for a group of populations: An extension
of the Lee-Carter method. Demography, 42, 575-594.
Li, N., Lee, R. and Gerland, P. (2013). Extending the Lee-Carter method to model the rotation
of age patterns of mortality decline for long-term projections. Demography, 50, 2037-2051.
Sevcikova, H., Li, N., Kantorova, V., Gerland, P. and Raftery, A.E. (2016). Age-Specific Mortality and
Fertility Rates for Probabilistic Population Projections. In: Schoen, R. (ed.) Dynamic Demographic
Analysis. The Springer Series on Demographic Methods and Population Analysis, vol 39. Springer,
Cham.
Examples
# This example applies Algorithm 2 in Sevcikova et al. (2016)
# on data from WPP2017 for China
#
data(mxM, mxF, e0Fproj, e0Mproj, package = "wpp2017")
country <- "China"
# extract observed mortality rates for male and female
mxm <- subset(mxM, name == country)[,4:16]
mxf <- subset(mxF, name == country)[,4:16]
rownames(mxm) <- rownames(mxf) <- c(0,1, seq(5, 100, by=5))
# Step 1: extrapolate from 100+ to 130+ using Coherent Kannisto
mx130 <- cokannisto(mxm, mxf)
# Steps 2-5: estimate coherent Lee-Carter parameters
# (here ax is computed from the last observed period
# and smoothed over ages)
lc.est <- lileecarter.estimate(mx130$male, mx130$female,
ax.index = ncol(mx130$male), ax.smooth = TRUE)
# Steps 6-9: project future mortality rates based on future
# life expectancies from WPP2017
e0f <- as.numeric(subset(e0Fproj, name == country)[-(1:2)])
e0m <- as.numeric(subset(e0Mproj, name == country)[-(1:2)])
names(e0f) <- names(e0m) <- colnames(e0Fproj)[-(1:2)]
pred <- mortcast(e0m, e0f, lc.est)
# plot projection for the first and last future time period
plot(pred$female$mx[,"2015-2020"], type="l", log="y",
ylim=range(pred$female$mx, pred$male$mx), xaxt="n",
ylab="mx", xlab="Age", main=country, col="red")
axis(1, at=1:nrow(pred$female$mx),
labels=rownames(pred$female$mx))
lines(pred$male$mx[,"2015-2020"], col="blue")
lines(pred$female$mx[,"2095-2100"], col="red", lty=2)
lines(pred$male$mx[,"2095-2100"], col="blue", lty=2)
legend("topleft", legend=c("male 2015-2020", "female 2015-2020",
"male 2095-2100", "female 2095-2100"), bty="n",
col=rep(c("blue", "red"),2), lty=c(1,1,2,2))
cokannisto Coherent Kannisto Method
Description
Extrapolate given mortality rates into higher ages using the Coherent Kannisto method as described
in Sevcikova et al. (2016).
Usage
cokannisto(
mxM,
mxF,
est.ages = seq(80, 95, by = 5),
proj.ages = seq(100, 130, by = 5)
)
Arguments
mxM A vector or matrix of male mortality rates. If it is a matrix, rows correspond
to age groups with rownames identifying the corresponding age as integers. By
default, five-year age groups are assigned to rows if rownames are not given.
mxF A vector or matrix of female mortality rates. Its length or dimension should be
the same as mxM.
est.ages A vector of integers identifying the ages to be used for estimation. It should be
a subset of rownames of mxM. Change the defaults if 1-year age groups are used
(see Example in kannisto).
proj.ages A vector of integers identifying the age groups for which mortality rates are to
be projected. Change the defaults if 1-year age groups are used (see Example in
kannisto).
Details
The function first estimates the coherent Kannisto parameters by passing mortality rates for age
groups est.ages into the cokannisto.estimate function. The estimated parameters are then
passed to the projection function kannisto.predict to extrapolate into ages proj.ages. Lastly,
the input mortality objects are extended by results for the extrapolated ages. If proj.ages contains
age groups that are included in mxM and mxF, values for those age groups are overwritten.
Value
A list of two vectors or matrices (for male and female) containing the input mortality objects
extended by the extrapolated age groups.
References
Sevcikova, H., Li, N., Kantorova, V., Gerland, P. and Raftery, A.E. (2016). Age-Specific Mortality and
Fertility Rates for Probabilistic Population Projections. In: Schoen, R. (ed.) Dynamic Demographic
Analysis. The Springer Series on Demographic Methods and Population Analysis, vol 39. Springer,
Cham.
See Also
cokannisto.estimate, kannisto.predict
Examples
data(mxM, mxF, package = "wpp2017")
country <- "South Africa"
mxm <- subset(mxM, name == country)[,-(1:3)]
mxf <- subset(mxF, name == country)[,-(1:3)]
rownames(mxm) <- rownames(mxf) <- c(0,1, seq(5, 100, by=5))
mxnew <- cokannisto(mxm, mxf)
ages <- as.integer(rownames(mxnew$male))
plot(ages, mxnew$male[,"2095-2100"], type="l", log="y",
xlab="age", ylab="mx", col="blue", main=country)
lines(ages, mxnew$female[,"2095-2100"], col="red")
lines(ages, mxnew$male[,"2010-2015"], lty=2, col="blue")
lines(ages, mxnew$female[,"2010-2015"], lty=2, col="red")
legend("bottomright", legend=c("male 2010-2015", "female 2010-2015",
"male 2095-2100", "female 2095-2100"), bty="n",
col=rep(c("blue", "red"),2), lty=c(2,2,1,1))
cokannisto.estimate Coherent Kannisto Estimation
Description
Estimate the coherent Kannisto parameters as described in Sevcikova et al. (2016).
Usage
cokannisto.estimate(mxM, mxF, ages, fitted = TRUE)
Arguments
mxM A vector of male mortality rates.
mxF A vector of female mortality rates.
ages A vector of ages corresponding to mxM and mxF.
fitted Logical. If TRUE the fitted values and residuals are returned.
Details
Given the Kannisto equation $\mathrm{logit}(m_x) = \log(c) + dx$, the Coherent Kannisto method estimates
the $d$ parameter jointly for male and female data, in order to prevent mortality crossovers in higher
ages.
Value
List of two lists, one for male and one for female. Each of the two lists contains the following
components:
coefficients: named vector with the coherent Kannisto coefficients c and d. The d values are the
same in both lists.
fitted.values: the fitted values (not included if fitted is FALSE)
residuals: input rates minus the fitted values (not included if fitted is FALSE)
References
Sevcikova, H., Li, N., Kantorova, V., Gerland, P. and Raftery, A.E. (2016). Age-Specific Mortality and
Fertility Rates for Probabilistic Population Projections. In: Schoen, R. (ed.) Dynamic Demographic
Analysis. The Springer Series on Demographic Methods and Population Analysis, vol 39. Springer,
Cham.
See Also
cokannisto, kannisto.predict, kannisto
Examples
data(mxM, mxF, package = "wpp2017")
country <- "Brazil"
mxm <- subset(mxM, name == country)[,"2010-2015"]
mxf <- subset(mxF, name == country)[,"2010-2015"]
cokannisto.estimate(mxm[18:21], mxf[18:21], ages = 18:21)
kannisto Kannisto Method
Description
Extrapolate given mortality rates using the original Kannisto method.
Usage
kannisto(mx, est.ages = seq(80, 95, by = 5), proj.ages = seq(100, 130, by = 5))
Arguments
mx A vector or matrix of mortality rates. If it is a matrix, rows correspond to age
groups with rownames identifying the corresponding age as integers. By default,
five-year age groups are assigned to rows if rownames are not given.
est.ages A vector of integers identifying the ages to be used for estimation. It should be
a subset of rownames of mx. Change the defaults if 1-year age groups are used
(see Example).
proj.ages A vector of integers identifying the age groups for which mortality rates are to
be projected. Change the defaults if 1-year age groups are used (see Example).
Details
The function first estimates the original Kannisto parameters by passing mortality rates for age
groups est.ages into the kannisto.estimate function. The estimated parameters are then passed
to the projection function kannisto.predict to extrapolate into ages proj.ages. Lastly, the input
mortality object is extended by results for the extrapolated ages. If proj.ages contains age groups
that are included in mx, values for those age groups are overwritten.
Value
A vector or matrix containing the input mortality object mx extended by the extrapolated age groups.
References
Thatcher, A.R., Kannisto, V. and Vaupel, J.W. (1998). The Force of Mortality at Ages 80 to
120, volume 5 of Odense Monographs on Population Aging Series. Odense, Denmark: Odense
University Press.
See Also
kannisto.estimate, kannisto.predict, cokannisto
Examples
data(mxM, package = "wpp2017")
mx <- subset(mxM, name == "<NAME>")[,-(1:3)]
rownames(mx) <- c(0,1, seq(5, 100, by=5))
mxnew <- kannisto(mx)
ages <- as.integer(rownames(mxnew))
plot(ages, mxnew[,"2095-2100"], type="l", log="y",
xlab="age", ylab="mx", col="red")
lines(ages, mxnew[,"2010-2015"])
# Kannisto for 1-year age groups
# derive toy 1-year mx using model life tables at e0 of 70
mx1y <- mlt(70, sex = "male", nx = 1)
# Pretend we only observed mx for ages 0:100.
# Use 90-99 for estimation and extend mx from 100 to 140
mx1ynew <- kannisto(mx1y[1:100, , drop = FALSE], est.ages = 90:99, proj.ages = 100:140)
# Plot the new mx for old ages
plot(80:140, mx1ynew[81:141], type = "l", xlab="age", ylab="mx", col="red")
# Check how it compares to the original mx that was not used in the estimation
lines(100:130, mx1y[101:nrow(mx1y)])
kannisto.estimate Kannisto Estimation
Description
Estimate the Kannisto parameters (Thatcher et al. 1998).
Usage
kannisto.estimate(mx, ages)
Arguments
mx A vector of mortality rates.
ages A vector of ages corresponding to mx. These can be indices of age groups or raw
ages.
Details
Given the Kannisto equation $\mathrm{logit}(m_x) = \log(c) + dx$, the function estimates the $c$ and $d$ parameters
using the values of ages as the covariate $x$.
Value
List with the following components:
coefficients: named vector with Kannisto coefficients c and d.
fitted.values: the fitted values
residuals: input rates minus the fitted values
References
Thatcher, A.R., Kannisto, V. and Vaupel, J.W. (1998). The Force of Mortality at Ages 80 to
120, volume 5 of Odense Monographs on Population Aging Series. Odense, Denmark: Odense
University Press.
See Also
kannisto.predict, kannisto, cokannisto.estimate
Examples
data(mxM, package = "wpp2017")
mx <- subset(mxM, name == "Canada")[,"2010-2015"]
kannisto.estimate(mx[18:21], ages = 18:21)
kannisto.predict Kannisto Prediction
Description
Given estimated Kannisto parameters (coherent or original), it predicts mortality rates for given
ages.
Usage
kannisto.predict(pars, ages)
Arguments
pars A named vector with Kannisto coefficients c and d (e.g. result of kannisto.estimate
or cokannisto.estimate).
ages A vector of ages to make prediction for. These can be indices of age groups or
raw ages, but on the same scale as used in the estimation.
Details
Given parameters $c$ and $d$ in pars, the function uses the Kannisto equation $\mathrm{logit}(m_x) = \log(c) + dx$
to predict mortality rates for age groups $x$ given by ages.
Value
Vector of predicted mortality rates.
References
Thatcher, A.R., Kannisto, V. and Vaupel, J.W. (1998). The Force of Mortality at Ages 80 to
120, volume 5 of Odense Monographs on Population Aging Series. Odense, Denmark: Odense
University Press.
See Also
cokannisto, kannisto.estimate, cokannisto.estimate
Examples
data(mxM, mxF, package = "wpp2017")
mxm <- subset(mxM, name == "Germany")[,"2010-2015"]
ages <- c(0, 1, seq(5, 130, by=5))
# using original Kannisto parameters
pars <- kannisto.estimate(mxm[18:21], ages = ages[18:21])
mxm.pred <- kannisto.predict(pars$coefficients, ages = ages[22:28])
plot(ages, c(mxm[1:21], mxm.pred), type="l", log="y",
xlab="age", ylab="mx")
# Coherent Kannisto
mxf <- subset(mxF, name == "Germany")[,"2010-2015"]
copars <- cokannisto.estimate(
mxm[18:21], mxf[18:21], ages = ages[18:21])
cmxm.pred <- kannisto.predict(copars[["male"]]$coefficients, ages = ages[22:28])
cmxf.pred <- kannisto.predict(copars[["female"]]$coefficients, ages = ages[22:28])
plot(ages, c(mxm[1:21], cmxm.pred), type="l", log="y",
xlab="age", ylab="mx", col="blue")
lines(ages, c(mxf[1:21], cmxf.pred), col="red")
leecarter.estimate Lee-Carter Estimation
Description
Estimate Lee-Carter parameters (Lee and Carter 1992).
Usage
leecarter.estimate(
mx,
ax.index = NULL,
ax.smooth = FALSE,
ax.smooth.df = NULL,
bx.postprocess = TRUE,
nx = 5
)
Arguments
mx A matrix of age-specific mortality rates where rows correspond to age groups
and columns correspond to time periods. Rownames define the starting ages of
the age groups.
ax.index A vector of column indices of mx to be used to estimate the $a_x$ parameter. By
default all time periods are used.
ax.smooth Logical allowing to smooth the $a_x$ over ages.
ax.smooth.df Degree of freedom for smoothing if ax.smooth is TRUE. Default is half the
length of $a_x$.
bx.postprocess Logical determining if numerical anomalies in $b_x$ should be dealt with.
nx Size of age groups. By default ages are determined by rownames of mx. This
argument is only used if mx has no rownames. If nx is 5, the age groups are
interpreted as 0, 1, 5, 10, .... For nx equal to 1, the age groups are interpreted as
0, 1, 2, 3, ....
Details
The function estimates the parameters of $\log(m_x(t)) = a_x + b_x k(t) + \epsilon_x(t)$ (Lee and Carter 1992).
The argument ax.index determines which time periods to use to estimate the $a_x$ parameter, while
ax.smooth controls if the resulting $a_x$ should be smoothed over ages (see Sevcikova et al. 2016
for details).
Value
List with elements ax, bx and kt corresponding to the estimated parameters.
References
Lee, R.D. and Carter, L.R. (1992). Modeling and forecasting the time series of US mortality. Journal
of the American Statistical Association, 87, 659-671.
Sevcikova, H., Li, N., Kantorova, V., Gerland, P. and Raftery, A.E. (2016). Age-Specific Mortality and
Fertility Rates for Probabilistic Population Projections. In: Schoen, R. (ed.) Dynamic Demographic
Analysis. The Springer Series on Demographic Methods and Population Analysis, vol 39. Springer,
Cham.
See Also
mortcast, lileecarter.estimate
Examples
data(mxM, package = "wpp2017")
mx <- subset(mxM, name == "Netherlands")[,4:16]
rownames(mx) <- c(0,1, seq(5, 100, by=5))
lc.ax.avg <- leecarter.estimate(mx)
lc.ax.last <- leecarter.estimate(mx, ax.index=ncol(mx))
plot(lc.ax.avg$ax, type="l")
lines(lc.ax.last$ax, col="blue")
life.table Life Table Function
Description
Function for obtaining life table quantities from mortality rates.
Usage
life.table(
mx,
sex = c("male", "female", "total"),
abridged = TRUE,
a0rule = c("ak", "cd"),
radix = 1,
open.age = 130
)
Arguments
mx Vector of age-specific mortality rates ${}_nm_x$. If abridged is TRUE (default), the
elements correspond to ${}_1m_0$, ${}_4m_1$, ${}_5m_5$, ${}_5m_{10}$, .... If abridged is FALSE, they
correspond to ${}_1m_0$, ${}_1m_1$, ${}_1m_2$, ${}_1m_3$, ....
sex Which sex the mortality rates correspond to.
abridged Is it an abridged life table (TRUE, default) or not (FALSE). In the former case, the
mx vector is interpreted as corresponding to age groups 0, 1-4, 5-9, 10-14, . . . .
If FALSE, the mx vector is interpreted as corresponding to one-year age groups,
i.e. 0, 1, 2, 3, . . . .
a0rule Rule for approximation of a0. "ak" (default) uses the Andreev-Kingkade method
(Andreev and Kingkade, 2015), "cd" uses the Coale-Demeny method.
radix Base of the life table.
open.age Open age group. If smaller than the last age group of mx, the life table is trun-
cated. It does not have any effect if larger than the last age group.
Details
Computes a life table corresponding to given mortality rates for either 5- or 1-year age groups. The
implementation follows Preston et al. (2001).
Value
Data frame with rows corresponding to age groups and the following columns:
age Starting year of the age group.
mx Age-specific mortality rates as passed into the mx argument.
qx Probability of dying between ages x and x+n.
lx Number of survivors at age x.
dx Number of deaths between ages x and x+n.
Lx Person-years lived between ages x and x+n.
sx Survival rate from age x to x+n. Note that in an abridged life table, sx always refers to 5-year
intervals. Here, sx in the first row is the survival from births to the second age group, sx in
the second row is the survival from age 0-4 to age 5-9, third row has the survival from 5-9 to
10-14 etc.
Tx Person-years lived after age x.
ex Life expectancy at age x.
ax Average person-years lived in the interval by those dying in the interval. For young ages, it
follows Preston et al. (2001), Table 3.3 on page 48. If a0rule is "ak" (default) the Andreev-
Kingkade method is used for a0. For compatibility with computations done at the UN, we set
ax for ages 5 and 10 in the abridged version to 2.5. For an unabridged life table, ax is set to
0.5 for all but first and last age groups.
References
Preston, S.H., Heuveline, P. and Guillot, M. (2001). Demography: Measuring and Modeling Population
Processes. Oxford: Blackwell Publishers Ltd.
Andreev, E.M. and Kingkade, W.W. (2015). Average age at death in infancy and infant mortality
level: Reconsidering the Coale-Demeny formulas at current levels of low mortality. Demographic
Research, 33(13), p. 363-390. DOI: 10.4054/DemRes.2015.33.13
Examples
data(mxF, e0Fproj, package = "wpp2017")
# get female mortality of Mexico for the current year
country <- "Mexico"
mxf <- subset(mxF, name == country)[,"2010-2015"]
life.table(mxf, sex = "female")
lileecarter.estimate Coherent Lee-Carter Estimation
Description
Estimate coherent Lee-Carter parameters (Li and Lee 2005).
Usage
lileecarter.estimate(mxM, mxF, nx = 5, ...)
Arguments
mxM A matrix of male age-specific mortality rates where rows correspond to age
groups and columns correspond to time periods. For 5-year age groups, the first
and second rows correspond to the 0-1 and 1-5 age groups, respectively. Rownames
define the starting ages of the respective groups.
mxF A matrix of female mortality rates of the same shape as mxM.
nx Size of age groups. Should be either 5 or 1.
... Additional arguments passed to leecarter.estimate.
Details
The coherent Lee-Carter parameters for male and female mortality rates share the same $b_x$, which is
the average of the sex-specific $b_x$ parameters.
The function in addition computes the ultimate $b_x^u$ as defined in Li et al. (2013), based on the
coherent $b_x$.
Value
List containing elements bx (coherent $b_x$ parameter), ultimate.bx (ultimate $b_x^u$ parameter), ages
(age groups), nx (age group interval), and lists female and male, each with the Lee-Carter parameters.
References
Li, N. and Lee, R. (2005). Coherent mortality forecasts for a group of populations: An extension
of the Lee-Carter method. Demography, 42, 575-594.
Li, N., Lee, R. and Gerland, P. (2013). Extending the Lee-Carter method to model the rotation
of age patterns of mortality decline for long-term projections. Demography, 50, 2037-2051.
Examples
data(mxM, mxF, package = "wpp2017")
country <- "Germany"
mxm <- subset(mxM, name == country)[,4:16]
mxf <- subset(mxF, name == country)[,4:16]
rownames(mxm) <- rownames(mxf) <- c(0,1, seq(5, 100, by=5))
lc <- lileecarter.estimate(mxm, mxf)
plot(lc$bx, type="l")
lines(lc$ultimate.bx, lty=2)
logquad Log-Quadratic Mortality Model
Description
Predict age-specific mortality rates using the Log-Quadratic Mortality Model (Wilmoth et al. 2012).
Usage
logquad(
e0,
sex = c("male", "female", "total"),
my.coefs = NULL,
q5ranges = c(1e-04, 0.9),
k = 0,
keep.lt = FALSE,
...
)
logquadj(e0m, e0f, ...)
Arguments
e0 Vector of target life expectancies.
sex Which sex the given e0 corresponds to.
my.coefs Data frame with columns “sex”, “age”, “ax”, “bx”, “cx”, “vx”. The “sex” col-
umn should contain values “female”, “male” and/or “total”. The “age” column
must be sorted so that it assures that rows correspond to ages in increasing or-
der. Any NAs are internally converted to zeros. If not given, the dataset LQcoef
is used.
q5ranges A vector of size two, giving the min and max of ${}_5q_0$ used in the bisection
method.
k Value of the k parameter.
keep.lt Logical. If TRUE additional life table columns are kept in the resulting object.
... Additional arguments passed to the underlying function.
e0m A time series of target male life expectancy.
e0f A time series of target female life expectancy.
Details
The LogQuad method in this implementation projects mortality rates using the equation
$\log(m_x) = a_x + b_x h + c_x h^2 + v_x k$,
where $a_x$, $b_x$, $c_x$ and $v_x$ are age-specific coefficients, $h = \log({}_5q_0)$ (i.e. it reflects child mortality),
and $k$ should be chosen to match ${}_{45}q_{15}$ (adult mortality) or set to 0 (default). The coefficients
can be passed as inputs, or taken from the package default dataset LQcoef, which is taken from
https://u.demog.berkeley.edu/~jrw/LogQuad/.
For the given inputs and values of life expectancy e0, the function finds the values of $h$ that best match
e0, using life tables and the bisection method. It returns the corresponding mortality schedule for
each value of e0.
Function logquad is for one sex, while logquadj can be used for both sexes.
Value
Function logquad returns a list with the following elements: a matrix mx with the predicted mortal-
ity rates. If keep.lt is TRUE, it also contains matrices sr (survival rates), and life table quantities
Lx and lx. Function logquadj returns a list of objects, one for each sex.
References
Wilmoth, J., Zureick, S., Canudas-Romo, V., Inoue, M. and Sawyer, C. (2012). A Flexible Two-Dimensional
Mortality Model for Use in Indirect Estimation. Population Studies, 66(1), 1-28.
doi: 10.1080/00324728.2011.611411
See Also
LQcoef, mortcast.blend, mortcast, pmd, mlt
Examples
data(e0Mproj, package = "wpp2017")
country <- "Brazil"
# get target e0
e0m <- as.numeric(subset(e0Mproj, name == country)[-(1:2)])
# project into future
pred <- logquad(e0m, sex = "male")
# plot first projection in black and the remaining ones in heat colors
plot(pred$mx[,1], type = "l", log = "y", ylim = range(pred$mx),
ylab = "male mx", xlab = "Age", main = country)
for(i in 2:ncol(pred$mx)) lines(pred$mx[,i],
col = heat.colors(20)[i])
LQcoef Coefficients for the Log-Quadratic Mortality Model
Description
Data object containing a table of coefficients to be used in the Log-Quadratic Model as implemented
in the logquad function.
Usage
data(LQcoef)
Format
Data frame containing columns “sex”, “age”, “ax”, “bx”, “cx”, “vx”. Rows correspond to sex and
age groups.
Source
https://u.demog.berkeley.edu/~jrw/LogQuad/
References
Wilmoth, J., Zureick, S., Canudas-Romo, V., Inoue, M. and Sawyer, C. (2012). A Flexible Two-Dimensional
Mortality Model for Use in Indirect Estimation. Population Studies, 66(1), 1-28.
doi: 10.1080/00324728.2011.611411
See Also
logquad
Examples
data(LQcoef)
head(LQcoef)
mlt Model Life Tables Mortality Patterns
Description
Predict age-specific mortality rates using Coale-Demeny and UN model life tables.
Usage
mlt(e0, sex = c("male", "female"), type = "CD_West", nx = 5, ...)
mltj(e0m, e0f, ..., nx = 5)
Arguments
e0 A time series of target life expectancy.
sex Either "male" or "female".
type Type of the model life table. Available options are “CD_East”, “CD_North”,
“CD_South”, “CD_West”, “UN_Chilean”, “UN_Far_Eastern”, “UN_General”,
“UN_Latin_American”, “UN_South_Asian”.
nx Size of age groups. Should be either 5 or 1.
... Additional arguments passed to the underlying function.
e0m A time series of target male life expectancy.
e0f A time series of target female life expectancy.
Details
Given a level of life expectancy (e0), sex and a type of model life table, the function extracts the
corresponding mortality pattern from MLTlookup (for abridged LT) or MLT1Ylookup (for 1-year
LT), while interpolating between neighboring e0 groups. Function mlt is for one sex, while mltj
can be used for both sexes.
Value
Function mlt returns a matrix with the predicted mortality rates. Columns correspond to the values
in the e0 vector and rows correspond to age groups. Function mltj returns a list of such matrices,
one for each sex.
References
https://www.un.org/development/desa/pd/data/extended-model-life-tables
Coale, A., Demeny, P. and Vaughan, B. (1983). Regional model life tables and stable populations. 2nd
ed. New York: Academic Press.
See Also
mortcast, mortcast.blend, pmd, MLTlookup
Examples
data(e0Fproj, package = "wpp2017")
country <- "Uganda"
# get target e0
e0f <- subset(e0Fproj, name == country)[-(1:2)]
# project into future using the Coale-Demeny North model life table
mx <- mlt(e0f, sex = "female", type = "CD_North")
# plot first projection in black and the remaining ones in grey
plot(mx[,1], type = "l", log = "y", ylim = range(mx),
ylab = "female mx", xlab = "Age",
main = paste(country, "5-year age groups"))
for(i in 2:ncol(mx)) lines(mx[,i], col = "grey")
# MLT for 1-year age groups
mx1y <- mlt(e0f, sex = "female", type = "CD_North", nx = 1)
plot(mx1y[,1], type = "l", log = "y", ylim = range(mx1y),
ylab = "female mx", xlab = "Age",
main = paste(country, "1-year age groups"))
for(i in 2:ncol(mx1y)) lines(mx1y[,i], col = "grey")
MLTlookup Model Life Tables Lookup
Description
Lookup tables containing values for various model life tables, including Coale-Demeny and UN life
tables.
Usage
data(MLTlookup)
data(MLT1Ylookup)
Format
Data frame with the following columns:
type Type of the model life table. Available options are “CD_East”, “CD_North”, “CD_South”,
“CD_West”, “UN_Chilean”, “UN_Far_Eastern”, “UN_General”, “UN_Latin_American”, “UN_South_Asian”.
For the CD types, see Coale et al. (1983). For the UN types, see the link in References below.
sex Code for distinguishing sexes. 1 is for male, 2 is for female.
age Starting age of an age group. In MLTlookup these are 0, 1, 5, 10, ... 130. The MLT1Ylookup
table contains 1-year ages ranging from 0 to 130.
e0 Level of life expectancy, starting at 20 and going by steps of 2.5 up to 115.
mx Mortality rates.
lx, Lx, sx Other life table columns.
Source
An updated version of these datasets were provided by Sara Hertog, United Nations Population
Division, in October 2021 (package version >= 2.6-0). For previous version of the tables, install
MortCast 2.5-0: ‘devtools::install_github("PPgp/MortCast@v2.5-0")‘
References
Coale, A., Demeny, P. and Vaughan, B. (1983). Regional model life tables and stable populations. 2nd
ed. New York: Academic Press.
https://www.un.org/development/desa/pd/data/extended-model-life-tables
See Also
mlt
Examples
data(MLTlookup)
str(MLTlookup)
# CD West life table for male at e0 of 80
subset(MLTlookup, type == "CD_West" & sex == 1 & e0 == 80)
mortcast Coherent Rotated Lee-Carter Prediction
Description
Predict age-specific mortality rates using the coherent rotated Lee-Carter method.
Usage
mortcast(
e0m,
e0f,
lc.pars,
rotate = TRUE,
keep.lt = FALSE,
constrain.all.ages = FALSE,
...
)
Arguments
e0m A time series of future male life expectancy.
e0f A time series of future female life expectancy.
lc.pars A list of coherent Lee-Carter parameters with elements bx, ultimate.bx, ages,
nx, female and male as returned by lileecarter.estimate. The female and
male objects are again lists that should contain a vector ax and optionally a
matrix axt if the ax parameter needs to be defined as time dependent. In such
a case, rows are age groups and columns are time periods corresponding to the
length of the e0f and e0m vectors.
rotate If TRUE, the rotation method for $b_x$ is used as described in Li et al. (2013).
keep.lt Logical. If TRUE additional life table columns are kept in the resulting object.
constrain.all.ages
By default the method constrains the male mortality to be above female mortality
for old ages if the male life expectancy is below the female life expectancy.
Setting this argument to TRUE causes this constraint to be applied to all ages.
... Additional life table arguments.
Details
This function implements Steps 6-9 of Algorithm 2 in Sevcikova et al. (2016). It uses the abridged
or unabridged life table function to find the level of mortality that corresponds to the given life
expectancy. Thus, it can be used for mortality in both 5-year and 1-year age groups.
Value
List with elements female and male, each of which contains a matrix mx with the predicted mortality
rates. If keep.lt is TRUE, it also contains matrices sr (survival rates), and life table quantities Lx
and lx.
References
<NAME>., <NAME>. and <NAME>. (2013). Extending the Lee-Carter method to model the rotation
of age patterns of mortality decline for long-term projections. Demography, 50, 2037-2051.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2016). Age-Specific Mortality and
Fertility Rates for Probabilistic Population Projections. In: Schoen R. (eds) Dynamic Demographic
Analysis. The Springer Series on Demographic Methods and Population Analysis, vol 39. Springer,
Cham
See Also
rotate.leecarter, leecarter.estimate, lileecarter.estimate, mortcast.blend
Examples
# estimate parameters from historical mortality data (5-year age groups)
data(mxM, mxF, e0Fproj, e0Mproj, package = "wpp2017")
country <- "Brazil"
mxm <- subset(mxM, name == country)[,4:16]
mxf <- subset(mxF, name == country)[,4:16]
rownames(mxm) <- rownames(mxf) <- c(0,1, seq(5, 100, by=5))
lc <- lileecarter.estimate(mxm, mxf)
# project into future for given levels of life expectancy
e0f <- as.numeric(subset(e0Fproj, name == country)[-(1:2)])
e0m <- as.numeric(subset(e0Mproj, name == country)[-(1:2)])
pred <- mortcast(e0m, e0f, lc)
# plot first projection in black and the remaining ones in grey
plot(lc$ages, pred$female$mx[,1], type="b", log="y", ylim=range(pred$female$mx),
ylab="female mx", xlab="Age", main=paste(country, "(5-year age groups)"), cex=0.5)
for(i in 2:ncol(pred$female$mx)) lines(lc$ages, pred$female$mx[,i], col="grey")
# similarly for 1-year age groups
# derive toy 1-year mx using model life tables at given level of e0
mxm1y <- mlt(seq(65, 71, length = 4), sex = "male", nx = 1)
mxf1y <- mlt(seq(73, 78, length = 4), sex = "female", nx = 1)
# estimate parameters
lc1y <- lileecarter.estimate(mxm1y, mxf1y, nx = 1)
# project into the future
pred1y <- mortcast(e0m, e0f, lc1y)
# plot first projection in black and the remaining ones in grey
plot(lc1y$ages, pred1y$female$mx[,1], type="b", log="y", ylim=range(pred1y$female$mx),
ylab="female mx", xlab="Age", main="1-year age groups", cex=0.5)
for(i in 2:ncol(pred1y$female$mx)) lines(lc1y$ages, pred1y$female$mx[,i], col="grey")
mortcast.blend Mortality Prediction by Method Blending
Description
Predict age-specific mortality rates using a blend of two different methods (Coherent Lee-Carter,
Coherent Pattern Mortality Decline, Log-Quadratic model, or Model Life Tables). Weights can be
applied to fine-tune the blending mix.
Usage
mortcast.blend(
e0m,
e0f,
meth1 = "lc",
meth2 = "mlt",
weights = c(1, 0.5),
nx = 5,
apply.kannisto = TRUE,
min.age.groups = 28,
match.e0 = TRUE,
keep.lt = FALSE,
meth1.args = NULL,
meth2.args = NULL,
kannisto.args = NULL,
...
)
Arguments
e0m A time series of future male life expectancy.
e0f A time series of future female life expectancy.
meth1 Character string giving the name of the first method to blend. It is one of “lc”,
“pmd”, “mlt” or “logquad”, corresponding to Coherent Lee-Carter (function
mortcast), Pattern Mortality Decline (function copmd), Log-Quadratic model
(function logquadj), and Model Life Tables (function mltj), respectively. The
“logquad” method can only be used with 5-year age groups.
meth2 Character string giving the name of the second method to blend. One of the
same choices as meth1.
weights Numeric vector with values between 0 and 1 giving the weight of meth1. If it is
a single value, the same weight is applied for all time periods. If it is a vector
of size two, it is assumed these are weights for the first and the last time period.
Remaining weights will be interpolated. Note that meth2 is weighted by 1 -
weights.
nx Size of age groups. Should be either 5 or 1.
apply.kannisto Logical. If TRUE and if any of the methods results in less than min.age.groups
age categories, the coherent Kannisto method (cokannisto) is applied to extend
the age groups into old ages.
min.age.groups Minimum number of age groups. Triggers the application of Kannisto, see
above. Change the default value if 1-year age groups are used (see Example).
match.e0 Logical. If TRUE the blended mx is scaled so that it matches the input e0.
keep.lt Logical. If TRUE additional life table columns are kept in the resulting object.
Only used if match.e0 is TRUE.
meth1.args List of arguments passed to the function that corresponds to meth1.
meth2.args List of arguments passed to the function that corresponds to meth2.
kannisto.args List of arguments passed to the cokannisto function if Kannisto is applied. If
1-year age groups are used various defaults in the Kannisto function need to be
changed (see Example).
... Additional life table arguments.
Details
The function allows two different methods to be combined using given weights. The weights can change
over time: by default they are interpolated from the starting weight to the end weight. As the
blended mortality rates do not necessarily match the target life expectancy, scaling is applied to
improve the match, controlled by the match.e0 argument. The projection is done for both sexes, so
that coherent methods can be applied.
Value
List with elements female and male, each of which contains a matrix mx with the predicted mortality
rates. If the result has been scaled (match.e0 is TRUE), the element mx.rawblend contains the mx
before scaling. Also in such a case, if keep.lt is TRUE, it also contains matrices sr (survival rates),
and life table quantities Lx and lx. In addition, the return object contains elements meth1res and
meth2res which contain the results of the functions corresponding to the two methods. Elements
meth1 and meth2 contain the names of the methods. A vector weights contains the final (possibly
interpolated) weights.
See Also
mortcast, copmd, mltj, logquad, cokannisto
Examples
data(mxM, mxF, e0Fproj, e0Mproj, package = "wpp2017")
country <- "Brazil"
# estimate parameters from historical mortality data
mxm <- subset(mxM, name == country)[,4:16]
mxf <- subset(mxF, name == country)[,4:16]
rownames(mxm) <- rownames(mxf) <- c(0,1, seq(5, 100, by=5))
lcest <- lileecarter.estimate(mxm, mxf)
# project into future
e0f <- subset(e0Fproj, name == country)[-(1:2)]
e0m <- subset(e0Mproj, name == country)[-(1:2)]
# Blend LC and MLT
pred1 <- mortcast.blend(e0m, e0f, meth1 = "lc", meth2 = "mlt",
meth1.args = list(lc.pars = lcest),
meth2.args = list(type = "CD_North"),
weights = c(1,0.25))
# Blend PMD and MLT
pred2 <- mortcast.blend(e0m, e0f, meth1 = "pmd", meth2 = "mlt",
meth1.args = list(mxm0 = mxm[, "2010-2015"],
mxf0 = mxf[, "2010-2015"]))
# plot projection by time
plotmx <- function(pred, iage, main)
with(pred, {
# blended projections
plot(female$mx[iage,], type="l",
ylim=range(meth1res$female$mx[iage,],
meth2res$female$mx[iage,]),
ylab="female mx", xlab="Time", main=main, col = "red")
lines(meth1res$female$mx[iage,], lty = 2)
lines(meth2res$female$mx[iage,], lty = 3)
legend("topright", legend=c("blend", meth1, meth2),
lty = 1:3, col = c("red", "black", "black"), bty = "n")
})
age.group <- 3 # 5-9 years old
par(mfrow=c(1,2))
plotmx(pred1, age.group, "LC-MLT (age 5-9)")
plotmx(pred2, age.group, "PMD-MLT (age 5-9)")
# Blend LC and MLT for 1-year age groups
#########################################
# First interpolate e0 to get 1-year life expectancies (for first five years)
e0m1y <- approx(as.double(e0m[,1:2]), n = 5)$y
e0f1y <- approx(as.double(e0f[,1:2]), n = 5)$y
# derive toy mx in order to get some LC parameters
mxm1y <- mlt(seq(70, 72, length = 4), sex = "male", nx = 1)
mxf1y <- mlt(seq(78, 79, length = 4), sex = "female", nx = 1)
lcest1y <- lileecarter.estimate(mxm1y, mxf1y, nx = 1)
# projections
pred3 <- mortcast.blend(e0m1y, e0f1y, meth1 = "lc", meth2 = "mlt",
weights = c(1,0.25), min.age.groups = 131, nx = 1,
meth1.args = list(lc.pars = lcest1y),
kannisto.args = list(est.ages = 90:99, proj.ages = 100:130))
# plot results
par(mfrow=c(1,1))
plot(0:130, pred3$female$mx[,5], log = "y", type = "l", col = "red")
lines(0:130, pred3$male$mx[,5], col = "blue")
pmd Pattern of Mortality Decline Prediction
Description
Predict age-specific mortality rates using the Pattern of mortality decline (PMD) method (Andreev
et al. 2013).
Usage
pmd(
e0,
mx0,
sex = c("male", "female"),
nx = 5,
interp.rho = FALSE,
kranges = c(0, 25),
keep.lt = FALSE,
keep.rho = FALSE,
...
)
modpmd(
e0,
mx0,
sex = c("male", "female"),
nx = 5,
interp.rho = FALSE,
kranges = c(0, 25),
ax.index = NULL,
ax.smooth = FALSE,
ax.smooth.df = NULL,
keep.lt = FALSE,
keep.rho = FALSE,
...
)
copmd(
e0m,
e0f,
mxm0,
mxf0,
nx = 5,
interp.rho = FALSE,
keep.rho = FALSE,
use.modpmd = FALSE,
...
)
Arguments
e0 A vector of target life expectancy, one element for each predicted time point.
mx0 A vector with starting age-specific mortality rates. In case of modpmd it can be
a matrix where rows correspond to age groups and columns correspond to time
periods. Rownames define the starting ages of the age groups.
sex Either "male" or "female".
nx Size of age groups. Should be either 5 or 1.
interp.rho Logical controlling if the ρ coefficients should be interpolated (TRUE) or if the
raw (binned) version should be used (FALSE), as stored in the dataset PMDrho.
kranges A vector of size two, giving the min and max of the k parameter which is estimated
to match the target e0 using the bisection method.
keep.lt Logical. If TRUE additional life table columns are kept in the resulting object.
keep.rho Logical. If TRUE the ρ coefficients are included in the resulting object.
... Additional arguments passed to the underlying functions. For copmd, in addi-
tion to kranges and keep.lt, it can be sexratio.adjust which is a logical
controlling if a sex-ratio adjustment should be applied to prevent crossovers
between male and female mx. In such a case it uses coefficients from the
PMDadjcoef dataset. However, if the argument adjust.with.mxf is set to TRUE
(in addition to sexratio.adjust), the adjustment is done using the female mor-
tality rates as the lower constraint for male mortality rates. If the argument
adjust.sr.if.needed is set to TRUE, a sex-ratio adjustment is performed dy-
namically, using the sex ratio in the previous time point. In such a case, an
adjustment in time t is applied only if there was a drop of sex ratio below one at
time t-1. Other arguments passed here in copmd can be ax.index, ax.smooth
and ax.smooth.df which control the estimation of the initial mx if use.modpmd
is TRUE.
ax.index A vector of column indices of mx to be used to estimate the ax = E[log(mx(t0))]
parameter. By default it is estimated as the average over all observed time
periods, but this argument can restrict the time periods to use.
ax.smooth Logical allowing to smooth the ax over ages.
ax.smooth.df Degrees of freedom for smoothing if ax.smooth is TRUE. Default is half the
length of ax.
e0m A time series of target male life expectancy.
e0f A time series of target female life expectancy.
mxm0, mxf0 A vector with starting age-specific male/female mortality rates. If use.modpmd
is TRUE, this can be a matrix of historical mx (age x time) from which the starting
values are estimated.
use.modpmd Logical determining if the modified version of PMD (modpmd) should be used. In
such a case the starting values of mortality rates are estimated similarly to ax in
leecarter.estimate, possibly from more than one time period. In addition,
smoothing can be applied.
Details
These functions implement the PMD method introduced in Andreev et al. (2013) and its
modifications. It assumes that the future decline in age-specific mortality will follow a certain
pattern with the increase in life expectancy at birth (e0):
log[mx(t)] = log[mx(t − 1)] − k(t)ρx(t)
Here, ρx(t) is the age-specific pattern of mortality decline between t − 1 and t. Such patterns
for each sex and various levels of e0 are stored in the dataset PMDrho. The pmd function can be
instructed to interpolate between neighboring levels of e0 by setting the argument interp.rho to
TRUE. The k parameter is estimated to match the e0 level using the bisection method.
Function pmd evaluates the method for a single sex, while copmd does it coherently for both sexes.
In the latter case, the same ρx (namely the average over the sex-specific ρx) is used for both male
and female.
Function modpmd implements a modified version of pmd where the initial log[mx(t0)] is replaced
by an ax estimated as in leecarter.estimate, i.e. using possibly multiple years of historical
mx and optionally smoothed. Arguments ax.index, ax.smooth and ax.smooth.df determine the
estimation years and the parameters of the smoothing.
Value
Functions pmd and modpmd return a list with the following elements: a matrix mx with the predicted
mortality rates. If keep.lt is TRUE, it also contains matrices sr (survival rates), and life table
quantities Lx and lx. If keep.rho is TRUE, it contains a matrix rho where columns correspond to the
values in the e0 vector and rows correspond to age groups.
Function copmd returns a list with one element for each sex (male and female) where each of
them is a list as described above. In addition if keep.rho is TRUE, element rho.sex gives the
sex-dependent (i.e. not averaged) ρx coefficient.
References
<NAME>., <NAME>., <NAME>. (2013). Age Patterns of Mortality Improvement by Level of
Life Expectancy at Birth with Applications to Mortality Projections. Paper presented at the Annual
Meeting of the Population Association of America, New Orleans, LA.
https://paa2013.princeton.edu/papers/132554
<NAME>., <NAME>., <NAME>. (2017). Projecting Age-sex-specific Mortality: A Comparison of the
Modified Lee-Carter and Pattern of Mortality Decline Methods, UN Population Division, Technical
Paper No. 6. New York: United Nations.
https://population.un.org/wpp/Publications/Files/WPP2017_TechnicalPaperNo6.pdf
See Also
mortcast, mortcast.blend, PMDrho
Examples
data(mxF, e0Fproj, package = "wpp2017")
country <- "Hungary"
# get initial mortality for the current year
mxf <- subset(mxF, name == country)[,"2010-2015"]
names(mxf) <- c(0,1, seq(5, 100, by=5))
# get target e0
e0f <- subset(e0Fproj, name == country)[-(1:2)]
# project into future
pred <- pmd(e0f, mxf, sex = "female")
# plot first projection in black and the remaining ones in grey
plot(pred$mx[,1], type = "l", log = "y", ylim = range(pred$mx),
ylab = "female mx", xlab = "Age", main = country)
for(i in 2:ncol(pred$mx)) lines(pred$mx[,i], col = "grey")
PMDadjcoef Coefficients for Sex Ratio Adjustments in the PMD Method
Description
Data object containing a table of coefficients to be used to adjust the sex ratio in the coherent
Pattern Mortality Decline method as implemented in the copmd function. To invoke the adjustment,
argument sexratio.adjust should be set to TRUE.
Usage
data(PMDadjcoef)
Format
Data frame containing columns “age”, “intercept”, “lmxf”, “e0f”, “e0f2”, and “gap”. Rows
correspond to age groups. The values are estimates of the following regression
log10(mxM) = β0 + β1 log10(mxF) + β2 e0F + β3 (e0F)^2 + β4 (e0F − e0M)
The order of the columns starting with intercept corresponds to the order of the coefficients in the
above equation.
Source
The coefficients were estimated and provided by Danan Gu, UN Population Division.
References
<NAME>., <NAME>. and <NAME>. (2017). Projecting Age-sex-specific Mortality: A Comparison
of the Modified Lee-Carter and Pattern of Mortality Decline Methods, UN Population Division,
Technical Paper No. 6. New York: United Nations.
https://population.un.org/wpp/Publications/Files/WPP2017_TechnicalPaperNo6.pdf
See Also
copmd
Examples
data(PMDadjcoef)
PMDadjcoef
PMDrho Pattern Mortality Decline Lookup Tables
Description
Data object containing two tables with ρ coefficients for the Pattern Mortality Decline method as
implemented in the pmd function.
Usage
data(PMDrho)
Format
Using data(PMDrho) loads two objects into memory: RhoFemales and RhoMales. They both are
data frames with 22 rows corresponding to age groups, and 17 columns corresponding to different
levels of life expectancy in 5-year intervals (from 50 to 135). The names of the columns reflect the
middle of the respective interval.
References
<NAME>., <NAME>., <NAME>. (2013). Age Patterns of Mortality Improvement by Level of
Life Expectancy at Birth with Applications to Mortality Projections. Paper presented at the Annual
Meeting of the Population Association of America, New Orleans, LA.
https://paa2013.princeton.edu/papers/132554
<NAME>., <NAME>. and <NAME>. (2017). Projecting Age-sex-specific Mortality: A Comparison
of the Modified Lee-Carter and Pattern of Mortality Decline Methods, UN Population Division,
Technical Paper No. 6. New York: United Nations.
https://population.un.org/wpp/Publications/Files/WPP2017_TechnicalPaperNo6.pdf
See Also
pmd
Examples
data(PMDrho)
head(RhoFemales)
head(RhoMales)
# plot a few male patterns
e0lev <- colnames(RhoMales)[c(1, 5, 9, 13, 17)]
plot(RhoMales[, e0lev[1]], type="l", log="y", ylim=range(RhoMales[,e0lev]),
ylab="male rho", xlab="Age")
for(i in 2:length(e0lev)) lines(RhoMales[,e0lev[i]], lty = i)
legend("bottomleft", legend = e0lev, lty = 1:length(e0lev), bty= "n")
rotate.leecarter Rotated Lee-Carter
Description
Rotate the Lee-Carter parameter bx over time to reach an ultimate bx^u, as described in Li et al.
(2013).
Usage
rotate.leecarter(bx, ultimate.bx, e0, e0l = 80, e0u = 102, p = 0.5)
ultimate.bx(bx)
Arguments
bx A vector of the Lee-Carter bx parameter (from e.g. lileecarter.estimate or
leecarter.estimate).
ultimate.bx A vector of the ultimate bx^u parameter as defined in Li, Lee, Gerland (2013)
(obtained using lileecarter.estimate or ultimate.bx).
e0 A time series of life expectancies.
e0l Level of life expectancy at which the rotation starts.
e0u Level of life expectancy at which the rotation finishes.
p Exponent of the smooth function.
Value
Function rotate.leecarter returns a matrix of rotated Bx(t) where rows correspond to age
groups and columns correspond to time periods (given by the vector e0).
Function ultimate.bx returns a vector of the ultimate bx^u.
References
<NAME>., <NAME>. and <NAME>. (2013). Extending the Lee-Carter method to model the rotation
of age patterns of mortality decline for long-term projections. Demography, 50, 2037-2051.
Examples
data(mxF, mxM, e0Fproj, e0Mproj, package = "wpp2017")
country <- "Japan"
mxm <- subset(mxM, name == country)[,4:16]
mxf <- subset(mxF, name == country)[,4:16]
e0f <- as.numeric(subset(e0Fproj, name == country)[-(1:2)])
e0m <- as.numeric(subset(e0Mproj, name == country)[-(1:2)])
rownames(mxm) <- rownames(mxf) <- c(0,1, seq(5, 100, by=5))
lc <- lileecarter.estimate(mxm, mxf)
rotlc <- rotate.leecarter(lc$bx, lc$ultimate.bx, (e0f + e0m)/2)
plot(lc$bx, type="l")
lines(lc$ultimate.bx, col="red")
for(i in 1:ncol(rotlc)) lines(rotlc[,i], col="grey")

scikit-dsp-comm 2.0.1 documentation
Welcome to scikit-dsp-comm’s documentation!
===
Readme
---
### scikit-dsp-comm
[pypi](https://pypi.python.org/pypi/scikit-dsp-comm)
[Anaconda-Server Badge](https://anaconda.org/conda-forge/scikit-dsp-comm)
[Docs](http://scikit-dsp-comm.readthedocs.io/en/latest/?badge=latest)
#### Background
The origin of this package comes from the writing of the book Signals and Systems for Dummies, published by Wiley in 2013. The original module for this book is named `ssd.py`. In `scikit-dsp-comm` this module is renamed to `sigsys.py` to better reflect the fact that signal processing and communications theory is founded in signals and systems, a traditional subject in electrical engineering curricula.
#### Package High Level Overview
This package is a collection of functions and classes to support signal processing and communications theory teaching and research. The foundation for this package is `scipy.signal`. The code currently requires Python `>=3.5`.
**There are presently ten modules that make up scikit-dsp-comm:**
1. `sigsys.py` for basic signals and systems functions both continuous-time and discrete-time, including graphical display tools such as pole-zero plots, up-sampling and down-sampling.
2. `digitalcom.py` for digital modulation theory components, including asynchronous resampling and variable time delay functions, both useful in advanced modem testing.
3. `synchronization.py` which contains phase-locked loop simulation functions and functions for carrier and phase synchronization of digital communications waveforms.
4. `fec_conv.py` for the generation of rate one-half and one-third convolutional codes and soft decision Viterbi algorithm decoding, including soft and hard decisions, trellis and trellis-traceback display functions, and puncturing.
5. `fir_design_helper.py` for easy design of lowpass, highpass, bandpass, and bandstop filters using the Kaiser window and equal-ripple designs; also includes a list plotting function for easily comparing magnitude, phase, and group delay frequency responses.
6. `iir_design_helper.py` for easy design of lowpass, highpass, bandpass, and bandstop filters using scipy.signal Butterworth, Chebyshev I and II, and elliptical designs, including the use of the cascade of second-order sections (SOS) topology from scipy.signal; also includes a list plotting function for easily comparing magnitude, phase, and group delay frequency responses.
7. `multirate.py`, which encapsulates digital filters into objects for filtering, interpolation by an integer factor, and decimation by an integer factor.
8. `coeff2header.py` for writing `C/C++` header files for FIR and IIR filters implemented in `C/C++`, using the cascade of second-order sections representation for the IIR case. This last module finds use in real-time signal processing on embedded systems, but can be used for simulation models in `C/C++`.
Presently the collection of modules contains about 125 functions and classes. The authors/maintainers are working to get more detailed documentation in place.
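To give a flavor of what the helper modules streamline, the short sketch below performs the kind of integer-factor decimation and interpolation that `multirate.py` wraps into objects. Note this is a plain `scipy.signal` sketch for orientation only, not the package's own API:

```
# Plain scipy.signal multirate operations, illustrative only
import numpy as np
import scipy.signal as signal

fs = 8000                         # assumed original sampling rate in Hz
t = np.arange(0, 1, 1/fs)
x = np.cos(2*np.pi*400*t)         # a 400 Hz test tone

x_dec = signal.decimate(x, 4)           # decimate by 4 -> fs/4
x_int = signal.resample_poly(x, 4, 1)   # interpolate by 4 -> 4*fs
print(len(x), len(x_dec), len(x_int))   # 8000 2000 32000
```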
#### Documentation
Documentation is now housed on `readthedocs` which you can get to by clicking the docs badge near the top of this `README`. Example notebooks can be viewed on [GitHub pages](https://mwickert.github.io/scikit-dsp-comm/). In time more notebook postings will be extracted from [Dr. Wickert’s Info Center](http://www.eas.uccs.edu/~mwickert/).
#### Getting Set-up on Your System
The best way to use this package is to clone this repository and then install it.
```
git clone https://github.com/mwickert/scikit-dsp-comm.git
```
Some modules have additional package dependencies that you may want to avoid, specifically whenever hardware interfacing is involved. Specific hardware and software configuration details are discussed in [wiki pages](https://github.com/mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm/wiki).
For Windows users `pip` install takes care of almost everything. I assume below you have Python on your path, so for example with [Anaconda](https://www.anaconda.com/download/#macos), I suggest letting the installer set these paths up for you.
##### Editable Install with Dependencies
With the terminal in the root directory of the cloned repo perform an editable `pip` install using
```
pip install -e .
```
##### Why an Editable Install?
The advantage of the editable `pip` install is that it is very easy to keep `scikit-dsp-comm` up to date. If you know that updates have been pushed to the master branch, you simply go to your local repo folder and
```
git pull origin master
```
This will update your local repo and automatically update the Python install without the need to run `pip` again. **Note**: If you have any Python kernels running, such as a Jupyter Notebook, you will need to restart the kernel to ensure any module changes get reloaded.
Examples
---
* [SciPy 2017 Tutorial](https://github.com/mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm)
### Jupyter Notebook Examples
```
[1]:
```
```
%pylab inline
import sk_dsp_comm.sigsys as ss
import scipy.signal as signal
from IPython.display import Image, SVG
```
```
Populating the interactive namespace from numpy and matplotlib
```
```
[2]:
```
```
pylab.rcParams['savefig.dpi'] = 100 # default 72
%config InlineBackend.figure_formats=['svg'] # SVG inline viewing
```
#### Introduction to Python and the Jupyter Notebook
```
[3]:
```
```
t = arange(-4,4,.01)
x = cos(2*pi*t)
plot(t,x)
grid()
```
#### Rectangle and Triangle Pulses Defined
Before showing more examples, consider some familiar signal primitives in your signals and systems background.
To see these defined in the text, see in particular Appendix F.5 (p. 727) in the table of Fourier transform pairs.
**Rectangle**
\begin{align}
\Pi\Big(\frac{t}{\tau}\Big) &= \begin{cases}
1, & |t| \leq \tau/2 \\
0, & \text{otherwise}
\end{cases}
\end{align}

**Triangle**
\begin{align}
\Lambda\Big(\frac{t}{\tau}\Big) &= \begin{cases}
1-|t/\tau|, & |t|\leq \tau \\
0, & \text{otherwise}
\end{cases}
\end{align}

To more readily play with these functions, represent them numerically in Python. The module `ss.py` has some waveform primitives to help.
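For reference, here are minimal NumPy sketches that mirror the two definitions above; `my_rect` and `my_tri` are hypothetical names for illustration only, while the cells below use the library versions `ss.rect()` and `ss.tri()`:

```
# Minimal NumPy sketches of the pulse primitives (illustrative only)
import numpy as np

def my_rect(t, tau):
    # 1 for |t| <= tau/2, 0 otherwise
    return np.where(np.abs(t) <= tau/2, 1.0, 0.0)

def my_tri(t, tau):
    # 1 - |t/tau| for |t| <= tau, 0 otherwise
    return np.where(np.abs(t) <= tau, 1 - np.abs(t/tau), 0.0)
```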
```
[4]:
```
```
t = arange(-5,5,.01)
x_rect = ss.rect(t-3,2)
x_tri = ss.tri(t+2,1.5)
subplot(211)
plot(t,x_rect)
grid()
ylabel(r'$\Pi((t-3)/2)$');
subplot(212)
plot(t,x_tri)
grid()
xlabel(r'Time (s)')
ylabel(r'$\Lambda((t+2)/1.5)$');
tight_layout()
```
* Consider an interactive version of the above:
```
[5]:
```
```
# Make an interactive version of the above
from ipywidgets import interact, interactive
def pulses_plot(D1,D2,W1,W2):
t = arange(-5,5,.01)
x_rect = ss.rect(t-D1,W1)
x_tri = ss.tri(t-D2,W2)
subplot(211)
plot(t,x_rect)
grid()
ylabel(r'$\Pi((t-3)/2)$');
subplot(212)
plot(t,x_tri)
grid()
xlabel(r'Time (s)')
ylabel(r'$\Lambda((t+2)/1.5)$');
tight_layout()
interactive_plot = interactive(pulses_plot,D1 = (-3,3,.5), D2 = (-3,3,.5), W1 = (0.5,2,.25), W2 = (0.5,2,.25));
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
```
##### More Signal Plotting
The basic pulse shapes (primitives) defined in the module `ssd.py` are very useful for working text problems 2.13a & d, but there are also times when you need a custom piecewise function.
###### Simple Cases:
Consider plotting
* \(x_1(t) = \sin(2\pi\cdot 5t) \Pi((t-2)/2)\) for \(0\leq t \leq 10\)
* \(x_2(t) = \sum_{n=-\infty}^\infty = \Pi((t-5n)/1)\) for \(-10 \leq t \leq 10\)
```
[6]:
```
```
t1 = arange(0,10+.01,.01) # arange stops one step size less than the upper limit
x1 = sin(2*pi*5*t1)* ss.rect(t1-2,2)
subplot(211)
plot(t1,x1)
xlabel(r'Time (s)')
ylabel(r'$x_1(t)$')
grid()
t2 = arange(-10,10,.01)
# Tweak mod() to take on negative values
x2 = ss.rect(mod(t2+2.5,5)-2.5,1)
subplot(212)
plot(t2,x2)
xlabel(r'Time (s)')
ylabel(r'$x_2(t)$')
grid()
tight_layout()
```
###### Custom Piecewise:
A custom piecewise function is a direct, to-the-point way of getting a more complex function plotted. Consider plotting:
\begin{align}
x_3(t) = \begin{cases}
1 + t^2, & 0\leq t \leq 3 \\
\cos(2\pi\cdot5\cdot t), & 3 < t \leq 5 \\
0, & \text{otherwise}
\end{cases}
\end{align}

for \(-2\leq t \leq 6\).
```
[7]:
```
```
def x3_func(t):
"""
Create a piecewise function for plotting x3
"""
x3 = zeros_like(t)
for k,tk in enumerate(t):
if tk >= 0 and tk <= 3:
x3[k] = 1 + tk**2
elif tk > 3 and tk <= 5:
x3[k] = cos(2*pi*5*tk)
return x3
```
```
[8]:
```
```
t3 = arange(-2,6+.01,.01)
x3 = x3_func(t3)
plot(t3,x3)
xlabel(r'Time (s)')
ylabel(r'$x_3(t)$')
xlim([-2,6])
grid()
```
```
[9]:
```
```
26/2
```
```
[9]:
```
```
13.0
```
#### Energy and Power Signals
The general definitions are:
\begin{align}
E &\overset{\Delta}{=} \lim_{T\rightarrow\infty} \int_{-T}^T |x(t)|^2\, dt = \int_{-\infty}^\infty |x(t)|^2\, dt \\
P &\overset{\Delta}{=} \lim_{T\rightarrow\infty}\frac{1}{2T} \int_{-T}^T |x(t)|^2\, dt
\end{align}

For the case of a periodic signal, you can take the definition of \(P\) above and reduce the calculation down to
\begin{align}
P = \frac{1}{T} \int_{t_0}^{t_0+T} |x(t)|^2\, dt
\end{align}

where \(t_0\) can be any convenient value.
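As a quick sanity check of the one-period form, the sketch below (plain NumPy, nothing package-specific) integrates \(|x(t)|^2\) over a single period of a cosine and compares with the known \(A^2/2\) result:

```
# Rectangular-rule check of P = A**2/2 for x(t) = A*cos(2*pi*f0*t)
import numpy as np

A, f0 = 3.0, 2.0
T0 = 1/f0
dt = 1e-4
t = np.arange(0, T0, dt)             # exactly one period
x = A*np.cos(2*np.pi*f0*t)
P = (1/T0)*np.sum(np.abs(x)**2)*dt   # rectangular partitions
print('P = %.4f W (theory %.4f W)' % (P, A**2/2))
```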
Consider the waveform of Text problem 2.14b
\begin{align}
x_2(t) = \sum_{n=-\infty}^\infty \Lambda\Big(\frac{t-3n}{2}\Big)
\end{align}

You can create an approximation to the waveform over a finite number of periods by doing a little programming:
```
[10]:
```
```
def periodic_tri(t,tau,T,N):
"""
Approximate x2(t) by running the sum index from -N to +N.
The period is set by T and tau is the tri pulse width
parameter (base width is 2*tau).
<NAME> January 2015
"""
x = zeros_like(t)
for n in arange(-N,N+1):
x += ss.tri(t-T*n,tau)
return x
```
```
[11]:
```
```
t = arange(-10,10,.001)
x = periodic_tri(t,2,6,10)
plot(t,x)
plot(t,abs(x)**2)
grid()
#xlim([-5,5])
xlabel(r'Time (s)')
ylabel(r'$x_2(t)$ and $x_2^2(t)$');
```
For the power calculation create a time array that runs over exactly one period. Below is the case for the original problem statement.
```
[12]:
```
```
T0 = 6
tp = arange(-T0/2,T0/2+.001,.001)
xp = periodic_tri(tp,2,T0,5)
plot(tp,xp)
plot(tp,abs(xp)**2)
legend((r'$x(t)$', r'$|x(t)|^2$'),loc='best',shadow=True)
grid();
xlim([-T0/2,T0/2])
xlabel(r'Time (s)')
ylabel(r'$x_2(t)$ and $x_2^2(t)$');
```
A simple numerical approximation to the integral
\begin{align}
P = \frac{1}{T}\int_0^T |x_b(t)|^2\, dt
\end{align}is shown below:
```
[13]:
```
```
# Power calculation
Px2 = (1/T0)*sum(xp**2)*.001 # rectangular partitions for integral
print('Power estimate via numerical integration: %2.4f W' % Px2)
```
```
Power estimate via numerical integration: 0.2222 W
```
##### Power in the Sum of Two Sinusoids
The problem is: what is the power in the signal
\begin{align}
x(t) = A_1 \cos(\omega_1 t +\phi_1) + A_2 \cos(\omega_2 t + \phi_2),\ -\infty < t < \infty
\end{align}

Since we are not certain that \(x(t)\) is periodic, the power calculation requires that we form
\begin{align}
P_x = \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2\, dt = \langle |x(t)|^2\rangle
\end{align}

* Rather than just jumping in and making a mess, consider first the expansion of \(|x(t)|^2 = x^2(t)\):
\begin{align}
x^2(t) &= \frac{A_1^2}{2}\big[1+\cos(2\omega_1 t + 2\phi_1)\big] + \frac{A_2^2}{2}\big[1+\cos(2\omega_2 t + 2\phi_2)\big] \\
&\quad + 2\frac{A_1 A_2}{2}\Big\{\cos\big[(\omega_1 + \omega_2)t + (\phi_1+\phi_2)\big] + \cos\big[(\omega_1 - \omega_2)t + (\phi_1-\phi_2)\big]\Big\}
\end{align}
* The time average operator is linear, so we consider \(\langle\ \ \rangle\) operating on each term of the above independently
* For \(\omega_1 \neq \omega_2\), the first two terms yield \(A_1^2/2\) and \(A_2^2/2\) respectively
* The last term requires some thinking, but as long as \(\omega_1 \neq \omega_2\) the time averages of \(\cos[(\omega_1 + \omega_2)t + (\phi_1+\phi_2)]\) and \(\cos[(\omega_1 - \omega_2)t + (\phi_1-\phi_2)]\) are each zero!
* Finally,
\begin{align}
P_x = \frac{A_1^2}{2} + \frac{A_2^2}{2}
\end{align}
* When the frequencies are equal, then you can combine the terms using trig identities (recall the phasor addition formula from ECE 2610)
\begin{align}
x(t) = A\cos(\omega t + \phi)
\end{align}

where \(\omega = \omega_1 = \omega_2\) and
\begin{align}
Ae^{j\phi} = A_1e^{j\phi_1} + A_2 e^{j\phi_2}
\end{align}
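The equal-frequency case is easy to verify numerically; the sketch below (plain NumPy, with made-up amplitudes and phases) performs the phasor addition and reports the combined amplitude, phase, and power:

```
# Phasor addition check: A*exp(j*phi) = A1*exp(j*phi1) + A2*exp(j*phi2)
import numpy as np

A1, phi1 = 4.0, 0.0
A2, phi2 = 3.0, np.pi/4
phasor = A1*np.exp(1j*phi1) + A2*np.exp(1j*phi2)
A, phi = np.abs(phasor), np.angle(phasor)
print('A = %.4f, phi = %.4f rad, P = A^2/2 = %.4f W' % (A, phi, A**2/2))
```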
```
[14]:
```
```
t = arange(-10,10,.001)
x1 = 4*cos(2*pi*10*t)
x2 = 3*cos(2*pi*3.45*t+pi/9)
plot(t,x1)
plot(t,x2)
plot(t,x1+x2)
grid()
xlabel(r'Time (s)')
ylabel(r'Amplitude')
legend((r'$x_1(t)$', r'$x_2(t)$', r'$x_1(t)+x_2(t)$'),loc='best',shadow=True)
xlim([-.1,.1]);
```
```
[15]:
```
```
print('Power calculations: %3.2f, %3.2f, %3.2f' \
% (var(x1),var(x2),var(x1+x2)))
```
```
Power calculations: 8.00, 4.50, 12.50
```
```
[16]:
```
```
print('Theory: %3.2f, %3.2f, %3.2f' \
% (4**2/2,3**2/2,4**2/2+3**2/2))
```
```
Theory: 8.00, 4.50, 12.50
```
#### Fourier Series and Line Spectra Plotting
Being able to easily plot the line spectra of periodic signals will hopefully enhance your understanding. The module `ss.py` contains the function `ss.line_spectra()` for this purpose. The function assumes that the Fourier coefficients, \(X_n\), are available for a real signal \(x(t)\). The function plots line spectra as:

* The two-sided magnitude spectra
* The two-sided magnitude spectra in dB with an adjustable floor level in dB
* The two-sided phase spectra in radians
* The one-sided line spectra corresponding to the three cases listed immediately above

Examples are given below for the case of a simple pulse train and then for a trapezoidal pulse train. In the case of the trapezoidal pulse train the underlying Fourier coefficients are obtained numerically using the FFT as described in the course notes.

A fundamental requirement in using `ss.line_spectra()` is to be able to supply the coefficients starting with the DC term coefficient \(X_0\) and moving up to the \(N\)th harmonic. Before plotting the pulse train line spectra I first describe a *helper* function for visualizing the pulse train waveform.
##### Pulse Train
```
[17]:
```
```
def pulse_train(Np,fs,tau,t0):
"""
Generate a discrete-time approximation to a continuous-time
pulse train signal. Amplitude values are [0,1]. Scale and offset
later if needed.
Inputs
---
Np = number of periods to generate
fs = samples per period
tau = duty cycle
t0 = pulse delay time relative to first rising edge at t = 0
Return
---
t = time axis array
x = waveform
<NAME>, January 2015
"""
t = arange(0,Np*fs+1,1)/fs #time is normalized to make period T0 = 1.0
x = zeros_like(t)
# Using a brute force approach, just fill x with the sample values
for k,tk in enumerate(t):
if mod(tk-t0,1) <= tau and mod(tk-t0,1) >= 0:
x[k] = 1
return t,x
```
```
[18]:
```
```
tau = 1/8; fs = 8*16; t0 = 0 # note t0 = tau/2
subplot(211)
t,x = pulse_train(4,fs,tau,t0)
plot(t,x) # Just a plot of xa(t)
ylim([-.1,1.1])
grid()
ylabel(r'$x_a(t)$')
title(r'Pulse Train Signal: (top) $x_a(t)$, (bot) $x_b(t) = 1-x_a(t)$');
subplot(212)
t,x = pulse_train(4,fs,tau,t0)
plot(t,1-x) # Note here y(t) = 1 - x(t), a special case of
ylim([-.1,1.1]) # y(t) = A + B*x(t) in the notes
grid()
xlabel(r'Time ($t/T_0$)')
ylabel(r'$x_b(t)$');
```
##### Example: Pulse Train Line Spectra
For the case of pulse train having the initial pulse starting at \(t=0\), i.e.,
\begin{align}
x(t) = \sum_{k=-\infty}^\infty A\cdot \Pi\left(\frac{t-\tau/2-kT_0}{\tau}\right),
\end{align}

the Fourier coefficients are given by
\begin{align}
X_n = A\cdot\frac{\tau}{T_0}\cdot\text{sinc}(nf_0\tau)\cdot\exp(-j2\pi n f_0t_0)
\end{align}

where \(f_0 = 1/T_0\) is the fundamental frequency and here \(t_0 = \tau/2\).
Line spectra plotting is shown below for this case. If the pulse train should be shifted in time to some other orientation, then the phase plot will change, as the included \(\exp(j2\pi n f_0 t_0)\) term will be different.
**Note:** The pulse train function defined above is slightly different from the pulse train defined in the book and shown in mathematical form as \(x(t)\) just above in this cell. The function `pulse_train()` has the first pulse starting exactly at \(t=0\). To move the pulse train right or left on the time axis, you can use the function parameter `t0`.
```
[19]:
```
```
n = arange(0,25+1) # Get 0 through 25 harmonics
tau = 0.125; f0 = 1; A = 1;
Xn = A*tau*f0*sinc(n*f0*tau)*exp(-1j*2*pi*n*f0*tau/2)
# Xn = -Xn # Convert the coefficients from xa(t) to xb(t)
# Xn[0] += 1
figure(figsize=(6,2))
f = n # Assume a fundamental frequency of 1 Hz so f = n
ss.line_spectra(f,Xn,mode='mag',sides=2,fsize=(6,2))
xlim([-25,25]);
#ylim([-50,10])
figure(figsize=(6,2))
ss.line_spectra(f,Xn,mode='phase',fsize=(6,2))
xlim([-25,25]);
```
```
<Figure size 432x144 with 0 Axes>
```
```
<Figure size 432x144 with 0 Axes>
```
##### Example: Trapezoidal Pulse
The line spectra of a finite rise and fall time pulse train are of practical interest. The function `trap_pulse()` allows you to first visualize one period of the trapezoidal pulse train, and then use this waveform in obtaining numerically the Fourier coefficients of this signal. Plotting the corresponding line spectra follows.
A point to be made is that by slowing down the edges (rise time/fall time) of the pulse train, the amplitude of the harmonics falls off more rapidly. When considering the clock speed in today's PCs this can be a good thing, as harmonic emission is an issue.
```
[20]:
```
```
def trap_pulse(N,tau,tr):
"""
xp = trap_pulse(N,tau,tr)
<NAME>, January 2015
"""
n = arange(0,N)
t = n/N
xp = zeros(len(t))
# Assume tr and tf are equal
T1 = tau + tr
# Create one period of the trapezoidal pulse waveform
for k in n:
if t[k] <= tr:
xp[k] = t[k]/tr
elif (t[k] > tr and t[k] <= tau):
xp[k] = 1
elif (t[k] > tau and t[k] < T1):
xp[k] = -t[k]/tr + 1 + tau/tr;
else:
xp[k] = 0
return xp, t
```
Let \(\tau = 1/8\) and \(t_r = 1/20\):
```
[21]:
```
```
# tau = 1/8, tr = 1/20
N = 1024
xp,t = trap_pulse(N,1/8,1/20)
Xp = fft.fft(xp)
figure(figsize=(6,2))
plot(t,xp)
grid()
title(r'Spectra of Finite Risetime Pulse Train: $\tau = 1/8$ $t_r = 1/20$')
ylabel(r'$x(t)$')
xlabel('Time (s)')
f = arange(0,N/2)
ss.line_spectra(f[0:25],Xp[0:25]/N,'magdB',floor_dB=-80,fsize=(6,2))
ylabel(r'$|X_n| = |X(f_n)|$ (dB)');
# tau = 1/8, tr = 1/10
xp,t = trap_pulse(N,1/8,1/10)
Xp = fft.fft(xp)
figure(figsize=(6,2))
plot(t,xp)
grid()
title(r'Spectra of Finite Risetime Pulse Train: $\tau = 1/8$ $t_r = 1/10$')
ylabel(r'$x(t)$')
xlabel('Time (s)')
ss.line_spectra(f[0:25],Xp[0:25]/N,'magdB',floor_dB=-80,fsize=(6,2))
ylabel(r'$|X_n| = |X(f_n)|$ (dB)');
```
With the edge speed slowed down it is clear that the harmonics drop off faster.
#### Fourier Transforms
The Fourier transform definition is:
\begin{align}
X(f) &= \int_{-\infty}^\infty x(t)\ e^{-j2\pi ft}\, dt \\
x(t) &= \int_{-\infty}^\infty X(f)\, e^{j2\pi ft}\, df
\end{align}

A numerical approximation to the Fourier transform is possible using the FFT, or more conveniently using the function `freqz()` from the package `scipy.signal`. A helper function to abstract some of the digital signal processing details is `f, X = FT_approx(x,dt,Nfft)`. The function is now part of `sigsys.py` with name change to `ft_approx()`:
```
[22]:
```
```
def FT_approx(x,t,Nfft):
'''
Approximate the Fourier transform of a finite duration
signal using scipy.signal.freqz()
Inputs
---
x = input signal array
t = time array used to create x(t)
Nfft = the number of frequency domain points used to
approximate X(f) on the interval [-fs/2,fs/2], where
fs = 1/Dt. Dt being the time spacing in array t
Return
---
f = frequency axis array in Hz
X = the Fourier transform approximation (complex)
<NAME>, January 2015
'''
fs = 1/(t[1] - t[0])
t0 = (t[-1]+t[0])/2 # time delay at center
N0 = len(t)/2 # FFT center in samples
f = arange(-1/2,1/2,1/Nfft)
w, X = signal.freqz(x,1,2*pi*f)
X /= fs # account for dt = 1/fs in integral
X *= exp(-1j*2*pi*f*fs*t0)# time interval correction
X *= exp(1j*2*pi*f*N0)# FFT time interval is [0,Nfft-1]
F = f*fs
return F, X
```
##### Example: Rectangular Pulse
As a simple starting point example, consider \(x(t) = \Pi(t/\tau)\). The well-known result for the Fourier transform (FT) is:
\begin{align}
X(f) = \mathcal{F}\left\{\Pi\left(\frac{t}{\tau}\right)\right\} = \tau\,\text{sinc}(f\tau)
\end{align}

We now use the above defined `FT_approx()` to obtain a numerical approximation to the FT of the rectangular pulse.
**Tips:**

* Make sure the signal is well contained on the time interval used to generate \(x(t)\)
* Make sure the sampling rate, one over the sample spacing, is adequate to represent the signal spectrum
* From sampling theory, the range of frequencies represented by the spectrum estimate will be \(-f_s/2 \leq f < f_s/2\)
```
[23]:
```
```
fs = 100 # sampling rate in Hz
tau = 1
t = arange(-5,5,1/fs)
x0 = ss.rect(t-.5,tau)
figure(figsize=(6,5))
subplot(311)
plot(t,x0)
grid()
ylim([-0.1,1.1])
xlim([-2,2])
title(r'Exact Waveform')
xlabel(r'Time (s)')
ylabel(r'$x_0(t)$');
# FT Exact Plot
fe = arange(-10,10,.01)
X0e = tau*sinc(fe*tau)
subplot(312)
plot(fe,abs(X0e))
#plot(f,angle(X0))
grid()
xlim([-10,10])
title(r'Exact Spectrum Magnitude')
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_0e(f)|$');
# FT Approximation Plot
f,X0 = ss.ft_approx(x0,t,4096)
subplot(313)
plot(f,abs(X0))
#plot(f,angle(X0))
grid()
xlim([-10,10])
title(r'Approximation Spectrum Magnitude')
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_0(f)|$');
tight_layout()
```
##### Example: Text Problem 2.31a Drill Down
In this problem you are given
\begin{align}
x_1(t) = \Pi\left(\frac{t+1/2}{1}\right) - \Pi\left(\frac{t-1/2}{1}\right)
\end{align}

The Fourier transform of this signal can be found using *linearity* and the *time delay* theorems.
\begin{align}
X_1(f) &= \mathcal{F}\left\{\Pi\left(\frac{t+1/2}{1}\right) - \Pi\left(\frac{t-1/2}{1}\right)\right\} \\
&= \text{sinc}(f)\cdot\left[e^{j2\pi f\cdot 1/2} - e^{-j2\pi f\cdot 1/2}\right]\times\frac{2j}{2j} \\
&= 2j\ \text{sinc}(f)\cdot\sin(\pi f)
\end{align}
```
[24]:
```
```
fs = 100
t = arange(-5,5,1/fs)
x1 = ss.rect(t+1/2,1)-ss.rect(t-1/2,1)
subplot(211)
plot(t,x1)
grid()
ylim([-1.1,1.1])
xlim([-2,2])
xlabel(r'Time (s)')
ylabel(r'$x_1(t)$');
fe = arange(-10,10,.01)
X1e = 2*1j*sinc(fe)*sin(pi*fe)
f,X1 = ss.ft_approx(x1,t,4096)
subplot(212)
plot(f,abs(X1))
plot(fe,abs(X1e))
#plot(f,angle(X1))
legend((r'Num Approx',r'Exact'),loc='best')
grid()
xlim([-10,10])
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_1(f)|$');
tight_layout()
```
* Notice the numerical approximation and exact spectral plots overlay one another
##### Example: Modulation Theorem
Consider the modulation theorem, which is extremely important to communications theory:
\begin{align}
y(t) &= x(t)\cdot\cos(2\pi f_0 t) \\
Y(f) &= \frac{1}{2}\left[X(f-f_0) + X(f+f_0)\right]
\end{align}

Here we will use a triangle pulse for \(x(t)\):
```
[25]:
```
```
fs = 100 # sampling rate in Hz
tau = 1
t = arange(-5,5,1/fs)
x3 = ss.tri(t,tau)
y = x3*cos(2*pi*10*t)
subplot(211)
plot(t,x3)
plot(t,y)
grid()
ylim([-1.1,1.1])
xlim([-2,2])
legend((r'$x_3(t)$', r'$y(t)$'),loc='lower right',shadow=True)
title(r'Time Domain: $x_3(t)$ and $y(t)=x_3(t)\cos(2\pi\cdot 5\cdot t)$')
xlabel(r'Time (s)')
ylabel(r'$y(t)$');
f,Y = ss.ft_approx(y,t,4096)
subplot(212)
plot(f,abs(Y))
#plot(f,angle(X0))
grid()
title(r'Frequency Domain: $Y(f)$')
xlim([-15,15])
xlabel(r'Frequency (Hz)')
ylabel(r'$|Y(f)|$');
tight_layout()
```
##### Example: Representing a Bandlimited Signal
We know that in theory a bandlimited signal can only be generated from a signal having infinite duration. Specifically, a signal with rectangular spectrum has the Fourier transform pair:
\begin{align}
x(t) = 2W\text{sinc}(2Wt) \overset{\mathcal{F}}{\Leftrightarrow} \Pi\left(\frac{f}{2W}\right) = X(f)
\end{align}

In a simulation we expect to have trouble modeling the finite duration aspects of the signal.
```
[26]:
```
```
fs = 100 # sampling rate in Hz
W = 5
t = arange(-5,5,1/fs)
x4 = 2*W*sinc(2*W*t)
figure(figsize=(6,2))
plot(t,x4)
grid()
#ylim([-1.1,1.1])
xlim([-2,2])
title(r'Time Domain: $x_4(t),\ W = 5$ Hz')
xlabel(r'Time (s)')
ylabel(r'$x_4(t)$');
f,X4 = ss.ft_approx(x4,t,4096)
figure(figsize=(6,2))
plot(f,abs(X4))
grid()
title(r'Frequency Domain: $X_4(f)$')
xlim([-10,10])
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_4(f)|$');
figure(figsize=(6,2))
plot(f,20*log10(abs(X4)))
grid()
title(r'Frequency Domain: $X_4(f)$ in dB')
ylim([-50,5])
xlim([-10,10])
xlabel(r'Frequency (Hz)')
ylabel(r'$|X_4(f)|$ (dB)');
```
**Note:** The dB version (last plot) reveals that the first sidelobes of the spectrum are only down ~21 dB. Increasing the length of the time window will not help. The spectral side lobes will become more tightly packed, but the first sidelobe will still be down only 21 dB. With other pulse shapes in the time domain, i.e., not simply a truncated \(\text{sinc}()\) function, reduced sidelobes can be obtained.
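As a quick illustration of that remark, the sketch below tapers the truncated \(\text{sinc}()\) with a Hann window before estimating the spectrum. It reuses the `x4`, `t`, and `X4` arrays from the cell above; the Hann taper is just one example choice, not something prescribed by the text:

```
# Taper the truncated sinc to reduce spectral sidelobes (illustrative)
w = hanning(len(x4))            # numpy.hanning via the %pylab namespace
f, X4w = ss.ft_approx(x4*w, t, 4096)
figure(figsize=(6,2))
plot(f, 20*log10(abs(X4)/abs(X4).max()))
plot(f, 20*log10(abs(X4w)/abs(X4w).max()))
grid()
ylim([-80, 5])
xlim([-10, 10])
legend((r'Rectangular truncation', r'Hann-tapered truncation'), loc='best')
xlabel(r'Frequency (Hz)')
ylabel(r'Normalized $|X_4(f)|$ (dB)');
```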
#### Convolution
* The convolution of two signals \(x_1(t)\) and \(x_2(t)\) is defined as
\begin{align}
x(t) &= x_1(t)\ast x_2(t) = \int_{-\infty}^\infty x_1(\lambda)x_2(t-\lambda)\, d\lambda \\
&= x_2(t)\ast x_1(t) = \int_{-\infty}^\infty x_2(\lambda)x_1(t-\lambda)\, d\lambda
\end{align}
* A special convolution case is \(\delta(t-t_0)\)
\begin{align}
\delta(t-t_0)\ast x(t) &= \int_{-\infty}^\infty \delta(\lambda-t_0)x(t-\lambda)\, d\lambda \\
&= x(t-\lambda)\big|_{\lambda=t_0} = x(t-t_0)
\end{align}

You can experiment with the convolution integral numerically using `ssd.conv_integral()` found in the module `ssd.py`.
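As a warm-up, here is a quick discrete-time sanity check of the sifting property: convolving with a delayed unit sample simply delays the sequence. This is plain NumPy, separate from `conv_integral()`:

```
# Sifting property check: delta[n-3] * x[n] = x[n-3] (discrete-time)
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
d3 = np.zeros(6)
d3[3] = 1.0                 # unit sample delayed by 3
print(np.convolve(d3, x))   # -> [0. 0. 0. 1. 2. 3. 4. 0. 0.]
```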
```
[27]:
```
```
t = arange(-2,2.001,.001)
p1 = ss.rect(t,1)
p2 = ss.rect(t,3)
y,ty = ss.conv_integral(p1,t,p2,t)
plot(ty,y)
ylim([-.01,1.01])
grid()
xlabel(r'Time (s)')
ylabel(r'$y(t)$');
```
For convolutions involving semi-infinite signals, such as \(u(t)\), you can tell `ssd.conv_integral()` about this via the optional extent argument. See the function help using
```
ss.conv_integral?
```
```
[28]:
```
```
# Consider a pulse convolved with an exponential ('r' type extent)
tx = arange(-1,8,.01)
x = ss.rect(tx-2,4) # pulse starts at t = 0
h = 4*exp(-4*tx)*ss.step(tx)
y,ty = ss.conv_integral(x,tx,h,tx,extent=('f','r')) # note extents set
plot(ty,y) # expect a pulse charge and discharge waveform
grid()
title(r'$\Pi((t-2)/4)\ast 4 e^{-4t} u(t)$')
xlabel(r'Time (s)')
ylabel(r'$y(t)$');
```
#### Spectrum of PN Sequence (exact)
The cell below is a copy of the earlier pulse train line spectra example. Use this as a template to create the solution to the PN code problem of HW 3.
```
[29]:
```
```
n = arange(0,25+1) # Get 0 through 25 harmonics
tau = 0.125; f0 = 1; A = 1;
Xn = A*tau*f0*sinc(n*f0*tau)*exp(-1j*2*pi*n*f0*tau/2)
# Xn = -Xn # Convert the coefficients from xa(t) to xb(t)
# Xn[0] += 1
figure(figsize=(6,2))
f = n # Assume a fundamental frequency of 1 Hz so f = n
ss.line_spectra(f,Xn,mode='mag',sides=2,fsize=(6,2))
xlim([-25,25]);
#ylim([-50,10])
figure(figsize=(6,2))
ss.line_spectra(f,Xn,mode='phase',fsize=(6,2))
xlim([-25,25]);
```
```
<Figure size 432x144 with 0 Axes>
```
```
<Figure size 432x144 with 0 Axes>
```
#### Spectrum of PN Sequence (approx)
The code below approximates the PSD of the PN code using a numerical approximation to the Fourier coefficients, \(X_n\). This development may be useful for the lab, as you can easily change the waveform level without having to rework the theory.
The approach taken here is to create one period of the PN waveform at 10 samples per bit. The line containing the function `ss.upsample()` converts the bit sequence into a waveform by upsampling and filtering with a rectangular pulse shape (`ones(10)`). The function `ss.fs_coeff()` numerically calculates the \(X_n\)’s. To plot the PSD from the Fourier coefficients we use
\[S_x(f) = \sum_{n=-\infty}^\infty |X_n|^2 \delta(f-nf_0)\]
where \(f_0\) in this case is \(1/(MT_0)\) with \(T_0\) being the bit period and \(M\) the code period in bits.
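Since the \(X_n\) come from one period of the waveform, a quick Parseval check confirms the bookkeeping: summing \(|X_n|^2\) recovers the average power. The sketch below is plain NumPy, with a toy rectangular waveform standing in for the PN code:

```
# Parseval check: P = sum_n |Xn|**2 with Xn = fft(one period)/N
import numpy as np

N = 150                              # samples in one period
xper = np.zeros(N)
xper[:50] = 1.0                      # toy waveform, duty cycle 1/3
Xn = np.fft.fft(xper)/N              # Fourier coefficients X_0..X_{N-1}
print(np.sum(np.abs(Xn)**2), np.mean(xper**2))   # both equal 1/3
```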
```
[30]:
```
```
x_PN4 = ss.m_seq(4)
x = signal.lfilter(ones(10),1,ss.upsample(x_PN4,10))
t = arange(0,len(x))/10
figure(figsize=(6,2))
plot(t,x);
title(r'Time Domain and PSD of $M=15$ PN Code with $T = 1$')
xlabel(r'Time (s)')
ylabel(r'x(t)')
axis([0,15,-0.1,1.1]);
grid()
# 10 samples/bit so 150 samples/period
# harmonics spaced by 1/(15*T) = 1/15
Xk,fk = ss.fs_coeff(x,45,1/15)
ss.line_spectra(fk,Xk,'magdB',lwidth=2.0,floor_dB=-50,fsize=(6,2))
xlim([-3,3])
ylabel(r'$|X_n| = |X(f_n)|$ (dB)');
```
```
[31]:
```
```
# Line spacing 1/15
```
```
[31]:
```
```
0.06666666666666667
```
```
[32]:
```
```
import sk_dsp_comm.digitalcom as dc
y_PN5_bits = ss.pn_gen(10000,5)
# Convert to waveform level shifted to +/-1 amplitude
y = 2*signal.lfilter(ones(10),1,ss.upsample(y_PN5_bits,10))-1
# Find the time averaged autocorrelation function normalized
# to have a peak amplitude of 1
Ry,tau = dc.xcorr(y,y,400)
# We know Ry is real so strip small imag parts from FFT-based calc
Ry = Ry.real
```
```
[33]:
```
```
fs = 10
t = arange(len(y))/fs
plot(t[:500],y[:500])
title(r'PN Waveform for 5 Stages (Period $2^5 -1 = 31$ bits)')
ylabel(r'Amplitude')
xlabel(r'Bits (10 samples/bit)')
grid();
```
```
[34]:
```
```
tau_s = tau/10
figure(figsize=(6,2))
plot(tau_s,Ry)
title(r'Autocorrelation and PSD Estimates for $M=31$ with $T = 1$')
xlabel(r'Autocorrelation Lag $\tau$ (s)')
ylabel(r'$R_y(\tau)$')
grid();
figure(figsize=(6,2))
psd(y,2**12,10)
xlabel(r'Frequency (Hz)')
ylabel(r'$S_y(f)$ (dB)')
#xlim([0,.002]);
ylim([-30,20]);
```
In Lab 2 of ECE 4670 a C/C++ version of a PN generator is implemented to run on the ARM `mbed` LPC 1768 microcontroller (<https://www.sparkfun.com/products/9564>). At the heart of this code is:
```
// Globals defined as unsigned int
tap1 -= 1;
tap2 -= 1;
mask1 = 0x1 << (tap1);
mask2 = 0x1 << (tap2);
bit = 0x0;
sync = 0x0;
void gen_PN() {
my_pin5 = bit;
my_pin6 = synch_bit;
led2 = bit;
led3 = synch_bit;
if (clk_state == 0x1)
{
// Advance m-sequence generator by one bit
// XOR tap1 and tap2 SR values and feedback to input
fb = ((sr & mask1)>> tap1) ^ ((sr & mask2) >> tap2);
sr = (sr << 1) + fb;
bit = sr & 0x1;
// Use random number generator in place of m-sequence bits
if (DIP_sw4)
{
bit = rand_int() & 0x1;
}
clk_state = 0x0;
// See if all 1's condition exists in SR
if ((sr & synch) == synch) {
synch_bit = 0x1;
}
else
{
synch_bit = 0x0;
}
}
else
{
if (DIP_sw1) bit = !bit;
clk_state = 0x1;
}
}
```
The data type is `unsigned int`, which on the mbed is `uint16_t`, an unsigned 16-bit integer. A single unsigned integer is used as a 16-bit shift register with the LSB, the furthest bit to the right, used to represent the first register stage. The shift register is advanced using a left shift `<<` bitwise operation. We can code this in Python almost directly, as shown below.
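To see the bit-twiddling in isolation, one update step on a concrete 3-stage register (taps at stages 3 and 2, starting value chosen to match the 3-stage example further below) looks like this:

```
# One LFSR update step in isolation (3 stages, taps 3 and 2)
sr = 0b001                                        # current register contents
fb = ((sr & 0b100) >> 2) ^ ((sr & 0b010) >> 1)    # XOR of the two tap bits
sr = ((sr << 1) + fb) & 0b111                     # shift left, insert feedback, mask to 3 bits
print(bin(sr))                                    # -> 0b10, i.e., sr = 0b010
```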
```
[35]:
```
```
class bitwise_PN(object):
"""
Implement a PN generator using bitwise manipulation for
the shift register. The LSB holds b0 and bits are shifted left.
+---+---+---+---+---+---+---+
sr = |bM-1| .. |bM-k| .. | b2 | b1 | b0 |
+---+---+---+---+---+---+---+
| |
Feedback:(tap1-1) (tap2-1) Shift left using <<
<NAME> February 2017
"""
def __init__(self,tap1,tap2,Nstage,sr_initialize):
"""
Initialize the PN generator object
"""
self.tap1 = tap1 - 1
self.tap2 = tap2 - 1
self.mask1 = 0x1 << (tap1 - 1) # to select bit of interest
self.mask2 = 0x1 << (tap2 - 1) # to select bit of interest
self.Nstage = Nstage
self.period = 2**Nstage - 1
self.sr = sr_initialize
self.bit = 0
self.sync_bit = 0
def clock_PN(self):
'''
Method to advance m-sequence generator by one bit
XOR tap1 and tap2 SR values and feedback to input
'''
fb = ((self.sr & self.mask1)>> self.tap1) ^ \
((self.sr & self.mask2) >> self.tap2)
self.sr = (self.sr << 1) + fb
self.sr = self.sr & self.period # set MSBs > Nstage to 0
self.bit = self.sr & 0x1 # output LSB from SR
        # See if all 1's condition exists in SR, if so output a synch pulse
if ((self.sr & self.period) == self.period):
self.sync_bit = 0x1
else:
self.sync_bit = 0x0
print('output = %d, sr contents = %s, sync bit = %d' \
% (self.bit, binary(self.sr, self.Nstage), self.sync_bit))
```
```
[36]:
```
```
# A simple binary format display function which shows
# leading zeros to a fixed bit width
def binary(num, length=8):
return format(num, '#0{}b'.format(length + 2))
```
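A quick check of the helper, displaying the value 5 in a four-bit field:

```
print(binary(5, 4))   # -> 0b0101
```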
```
[37]:
```
```
PN1 = bitwise_PN(10,7,10,0x1)
```
```
[38]:
```
```
PN1.clock_PN()
```
```
output = 0, sr contents = 0b0000000010, sync bit = 0
```
```
[39]:
```
```
# sr initial condition
sr = 0b1
```
```
[40]:
```
```
Nout = 20
x_out = zeros(Nout)
s_out = zeros(Nout)
PN1 = bitwise_PN(3,2,3,0x1)
for k in range(Nout):
PN1.clock_PN()
x_out[k] = PN1.bit
s_out[k] = PN1.sync_bit
```
```
output = 0, sr contents = 0b010, sync bit = 0
output = 1, sr contents = 0b101, sync bit = 0
output = 1, sr contents = 0b011, sync bit = 0
output = 1, sr contents = 0b111, sync bit = 1
output = 0, sr contents = 0b110, sync bit = 0
output = 0, sr contents = 0b100, sync bit = 0
output = 1, sr contents = 0b001, sync bit = 0
output = 0, sr contents = 0b010, sync bit = 0
output = 1, sr contents = 0b101, sync bit = 0
output = 1, sr contents = 0b011, sync bit = 0
output = 1, sr contents = 0b111, sync bit = 1
output = 0, sr contents = 0b110, sync bit = 0
output = 0, sr contents = 0b100, sync bit = 0
output = 1, sr contents = 0b001, sync bit = 0
output = 0, sr contents = 0b010, sync bit = 0
output = 1, sr contents = 0b101, sync bit = 0
output = 1, sr contents = 0b011, sync bit = 0
output = 1, sr contents = 0b111, sync bit = 1
output = 0, sr contents = 0b110, sync bit = 0
output = 0, sr contents = 0b100, sync bit = 0
```
```
[41]:
```
```
stem(x_out)
stem(0.2*s_out,markerfmt = 'ro')
ylim([0,1.1])
```
```
[41]:
```
```
(0.0, 1.1)
```
##### Cross Correlation and Signal Delay
The idea of the autocorrelation function can be extended to the cross correlation, that is the correlation or likeness between two signals, say \(x(t)\) and \(y(t)\). Define
\begin{align}
R_{xy}(\tau) = \langle x(t)y(t+\tau)\rangle = \lim_{T\rightarrow\infty} \frac{1}{2T}\int_{-T}^T x(t)y(t+\tau)\, dt
\end{align}

Consider a simulation example using `dc.xcorr(x,t,lags)`:
```
[42]:
```
```
import sk_dsp_comm.digitalcom as dc
x_PN4_bits = ss.pn_gen(10000,6)
# Convert to waveform level shifted to +/-1 amplitude
x_s = 2*signal.lfilter(ones(10),1,ss.upsample(x_PN4_bits,10))-1
# Form a delayed version of x_s
T_D = 35 # 35 sample delay
y_s = signal.lfilter(concatenate((zeros(T_D),array([1]))),1,x_s)
figure(figsize=(6,2))
plot(x_s[:200])
plot(y_s[:200])
ylim([-1.1,1.1])
title(r'Delayed and Undelayed Signals for $T_D = 35$ Samples')
xlabel(r'Samples (10/PN bit)')
ylabel(r'$x_s(t)$ and $y_s(t)$')
grid();
# Find the time averaged autocorrelation function normalized
# to have a peak amplitude of 1
Ryx,tau = dc.xcorr(y_s,x_s,200) # note order change
# We know Ryx is real
Ryx = Ryx.real
tau_s = tau/10
figure(figsize=(6,2))
plot(tau_s,Ryx)
title(r'Cross Correlation for $M=4$ with $T = 1$ and Delay 35 Samples')
xlabel(r'Autocorrelation Lag $\tau$ (s)')
ylabel(r'$R_{yx}(\tau)$')
grid();
```
#### Spectral Containment Bandwidth (text problem 2.55)
In text problem 2.55 you are asked to find the 90% energy containment bandwidth of a signal \(x_i(t)\). Specifically you are to find the lowpass or one-sided bandwidth of a baseband signal such that 90% of the total signal energy is contained in the bandwidth, \(B_{90}\). You obtain \(B_{90}\) by solving the following equation
\begin{align}
0.9 = \frac{0.9 E_\text{total}}{E_\text{total}} = \frac{\int_{-B_{90}}^{B_{90}} G(f) df}{\int_{-\infty}^\infty G(f) df} = \frac{2\int_0^{B_{90}} G(f) df}{2\int_0^\infty G(f) df} = \frac{\int_0^{B_{90}} G(f) df}{\int_0^\infty G(f) df},
\end{align}

where \(G(f) = |X_i(f)|^2\) is the energy spectral density of \(x_i(t)\).
For parts (c) and (d) the problem states you need to perform numerical integration.
##### Example:
In an earlier example found in this notebook I found the Fourier transform of
\begin{align}
x(t) = \Pi\left(\frac{t-\tau/2}{\tau}\right) - \Pi\left(\frac{t+\tau/2}{\tau}\right)
\end{align}

to be
\begin{align}
X(f) &= 2j\ \text{sinc}(f\tau)\cdot\sin(\pi f\tau)
\end{align}

Note I have modified the problem to now have pulse width \(\tau\) to better match the homework problem where \(\tau\) is a variable.
The energy spectral density is
\begin{align}
G(f) = 4\, \text{sinc}^2(f\tau)\cdot\sin^2(\pi f\tau)
\end{align}

A convenient way to numerically integrate \(G(f)\) is using simple rectangular partitions, but making sure that \(\Delta f\) is small relative to the changes in \(G(f)\). Since you do not know the value of \(\tau\), you consider a *normalized frequency* variable \(f_n = f\tau\) in the analysis. The rest of the steps are:
1. Sweep \(G(f_n)\) using an array `fn` running from zero to an \(f_n\) large enough to ensure that \(G(f_n)\) is very small relative to its largest value. In Python this is just filling an array `Gn` with the functional values.
2. Form a new array which contains the cumulative sum of the values in `Gn`, say `Gn_cumsum = cumsum(Gn)`. Also form the sum of the array values, i.e., `Gn_tot = sum(Gn)`.
3. Plot the ratio of `Gn_cumsum/Gn_tot` versus `fn`. The curve should start at zero and climb to one as \(f_n\) becomes large. The value of \(f_n\) where the curve crosses through 0.9 is the 90% containment bandwidth.
**Note:** You might notice that \(\Delta f\), which is needed in the rectangular integration formula, was never used. Why? In the calculation of the cumulative sum and the calculation of the total, both should include \(\Delta f\), hence in the ratio the values cancel out. Nice!
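Written out for a rectangular-rule grid \(f_k = k\,\Delta f\), the cancellation is immediate:
\begin{align}
\frac{\int_0^{B_{90}} G(f)\, df}{\int_0^\infty G(f)\, df} \approx \frac{\Delta f\sum_{k=0}^{K} G(f_k)}{\Delta f\sum_{k=0}^{K_\text{max}} G(f_k)} = \frac{\sum_{k=0}^{K} G(f_k)}{\sum_{k=0}^{K_\text{max}} G(f_k)},
\end{align}
which is exactly the quantity `Gn_cumsum/Gn_tot` evaluated at the index corresponding to \(B_{90}\).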
```
[43]:
```
```
fn = arange(0,10,.001)
Gn = 4*sinc(fn)**2 * sin(pi*fn)**2
Gn_cumsum = cumsum(Gn)
Gn_tot = sum(Gn)
plot(fn,Gn_cumsum/Gn_tot)
grid()
xlabel(r'Normalized Frequency $f\tau$')
ylabel('Fractional Power Containment');
```
```
[44]:
```
```
fn_idx = np.nonzero(np.ravel(abs(Gn_cumsum/Gn_tot - 0.9)< 0.0005))[0]
fn_idx
```
```
[44]:
```
```
array([1446, 1447, 1448, 1449, 1450])
```
```
[45]:
```
```
print('The normalized 90 percent containment bandwidth is %2.2f Hz-s.' \
% fn[1448])
```
```
The normalized 90 percent containment bandwidth is 1.45 Hz-s.
```
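The threshold search can be wrapped in a small helper; below is a sketch (the function name `containment_bw` is hypothetical, not part of `sk_dsp_comm`) using `searchsorted` on the monotone cumulative ratio:

```
import numpy as np

def containment_bw(fn, Gn, frac=0.9):
    # Hypothetical helper: smallest fn where the cumulative energy
    # ratio first reaches frac (rectangular-rule integration)
    ratio = np.cumsum(Gn)/np.sum(Gn)
    return fn[np.searchsorted(ratio, frac)]

fn = np.arange(0, 10, .001)
Gn = 4*np.sinc(fn)**2 * np.sin(np.pi*fn)**2
print('B90 = %2.2f Hz-s' % containment_bw(fn, Gn))  # ~1.45
```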
#### Filter Analysis[¶](#Filter-Analysis)
To facilitate the performance analysis of both discrete-time and continuous-time filters, the functions `freqz_resp()` and `freqs_resp()` are available (definitions below, respectively). With these functions you can quickly move from *z*-domain or *s*-domain rational system function coefficients to visualization of the filter frequency response:

* Magnitude
* Magnitude in dB
* Phase in radians
* Group delay in samples or seconds (digital filter)
* Group delay in seconds (analog filter)
```
[46]:
```
```
def freqz_resp(b,a=[1],mode = 'dB',fs=1.0,Npts = 1024,fsize=(6,4)):
"""
A method for displaying digital filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib
freq_resp(self,mode = 'dB',Npts = 1024)
A method for displaying the filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib
freqs_resp(b,a=[1],Dmin=1,Dmax=5,mode = 'dB',Npts = 1024,fsize=(6,4))
b = ndarray of numerator coefficients
a = ndarray of denominator coefficents
Dmin = start frequency as 10**Dmin
Dmax = stop frequency as 10**Dmax
mode = display mode: 'dB' magnitude, 'phase' in radians, or
'groupdelay_s' in samples and 'groupdelay_t' in sec,
all versus frequency in Hz
Npts = number of points to plot; defult is 1024
fsize = figure size; defult is (6,4) inches
<NAME>, January 2015
"""
f = np.arange(0,Npts)/(2.0*Npts)
w,H = signal.freqz(b,a,2*np.pi*f)
plt.figure(figsize=fsize)
if mode.lower() == 'db':
plt.plot(f*fs,20*np.log10(np.abs(H)))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Gain (dB)')
plt.title('Frequency Response - Magnitude')
elif mode.lower() == 'phase':
plt.plot(f*fs,np.angle(H))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Phase (rad)')
plt.title('Frequency Response - Phase')
elif (mode.lower() == 'groupdelay_s') or (mode.lower() == 'groupdelay_t'):
"""
Notes
---
Since this calculation involves finding the derivative of the
phase response, care must be taken at phase wrapping points
and when the phase jumps by +/-pi, which occurs when the
amplitude response changes sign. Since the amplitude response
is zero when the sign changes, the jumps do not alter the group
delay results.
"""
theta = np.unwrap(np.angle(H))
# Since theta for an FIR filter is likely to have many pi phase
# jumps too, we unwrap a second time 2*theta and divide by 2
theta2 = np.unwrap(2*theta)/2.
theta_dif = np.diff(theta2)
f_diff = np.diff(f)
Tg = -np.diff(theta2)/np.diff(w)
max_Tg = np.max(Tg)
#print(max_Tg)
if mode.lower() == 'groupdelay_t':
max_Tg /= fs
plt.plot(f[:-1]*fs,Tg/fs)
plt.ylim([0,1.2*max_Tg])
else:
plt.plot(f[:-1]*fs,Tg)
plt.ylim([0,1.2*max_Tg])
plt.xlabel('Frequency (Hz)')
if mode.lower() == 'groupdelay_t':
plt.ylabel('Group Delay (s)')
else:
plt.ylabel('Group Delay (samples)')
plt.title('Frequency Response - Group Delay')
else:
        s1 = 'Error, mode must be "dB", "phase", '
        s2 = '"groupdelay_s", or "groupdelay_t"'
print(s1 + s2)
```
```
[47]:
```
```
def freqs_resp(b,a=[1],Dmin=1,Dmax=5,mode = 'dB',Npts = 1024,fsize=(6,4)):
"""
A method for displaying analog filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib
freqs_resp(b,a=[1],Dmin=1,Dmax=5,mode='dB',Npts=1024,fsize=(6,4))
b = ndarray of numerator coefficients
    a = ndarray of denominator coefficients
Dmin = start frequency as 10**Dmin
Dmax = stop frequency as 10**Dmax
mode = display mode: 'dB' magnitude, 'phase' in radians, or
'groupdelay', all versus log frequency in Hz
    Npts = number of points to plot; default is 1024
    fsize = figure size; default is (6,4) inches
<NAME>, January 2015
"""
f = np.logspace(Dmin,Dmax,Npts)
w,H = signal.freqs(b,a,2*np.pi*f)
plt.figure(figsize=fsize)
if mode.lower() == 'db':
plt.semilogx(f,20*np.log10(np.abs(H)))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Gain (dB)')
plt.title('Frequency Response - Magnitude')
elif mode.lower() == 'phase':
plt.semilogx(f,np.angle(H))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Phase (rad)')
plt.title('Frequency Response - Phase')
elif mode.lower() == 'groupdelay':
"""
Notes
---
See freqz_resp() for calculation details.
"""
theta = np.unwrap(np.angle(H))
# Since theta for an FIR filter is likely to have many pi phase
# jumps too, we unwrap a second time 2*theta and divide by 2
theta2 = np.unwrap(2*theta)/2.
theta_dif = np.diff(theta2)
f_diff = np.diff(f)
Tg = -np.diff(theta2)/np.diff(w)
max_Tg = np.max(Tg)
#print(max_Tg)
plt.semilogx(f[:-1],Tg)
plt.ylim([0,1.2*max_Tg])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Group Delay (s)')
plt.title('Frequency Response - Group Delay')
else:
        print('Error, mode must be "dB", "phase", or "groupdelay"')
```
##### Example: Discrete-Time Chebyshev Type I Bandpass Filter[¶](#Example:-Discrete-Time-Chebyshev-Type-I-Bandpass-Filter)
```
[48]:
```
```
import sk_dsp_comm.iir_design_helper as iird
import sk_dsp_comm.fir_design_helper as fird
```
```
[49]:
```
```
b1,a1,sos1 = iird.IIR_bpf(200,250,300,350,0.1,60.0,1000,'butter')
b2,a2,sos2 = iird.IIR_bpf(200,250,300,350,0.1,60.0,1000,'cheby1')
```
```
[50]:
```
```
figure()
iird.freqz_resp_cas_list([sos1,sos2],'dB',1000)
ylim([-70,0])
grid();
figure()
iird.freqz_resp_cas_list([sos1,sos2],'groupdelay_t',1000)
grid();
figure()
iird.sos_zplane(sos2)
```
```
[50]:
```
```
(12, 12)
```
```
[51]:
```
```
b,a = signal.cheby1(5,.1,2*array([250,300])/1000,btype='bandpass')
```
```
[52]:
```
```
freqz_resp(b,a,mode='dB',fs=1000,fsize=(6,2))
grid()
ylim([-80,5]);
xlim([100,400]);
freqz_resp(b,a,mode='groupdelay_s',fs=1000,fsize=(6,2))
grid()
xlim([100,400]);
```
##### Example: Continuous-Time Bessel Bandpass Filter[¶](#Example:-Continuous-Time-Bessel-Bandpass-Filter)
```
[53]:
```
```
bc,ac = signal.bessel(7,2*pi*array([10.0,50.0])*1e6,btype='bandpass',analog=True)
```
```
[54]:
```
```
freqs_resp(bc,ac,6,9,mode='dB',fsize=(6,2))
grid()
ylim([-80,5]);
freqs_resp(bc,ac,6,9,mode='groupdelay',fsize=(6,2))
grid()
```
##### Third-Order Butterworth Lowpass Response[¶](#Third-Order-Butterworth-Lowpass-Response)
Consider a 3rd-order analog Butterworth in the \(s\)-domain having transfer function \(H(s)\). Using the `scipy.signal` function `butter()` we find the coefficients of the rational transfer function of the form:
\begin{align}
H(s) = \frac{\sum_{n=0}^M b_n s^n}{\sum_{n=0}^N a_n s^n}
\end{align}
```
[55]:
```
```
b3,a3 = signal.butter(3,2*pi*1,analog=True)
freqs_resp(b3,a3,-1,2,mode='dB',fsize=(6,2))
grid()
ylim([-80,5]);
freqs_resp(b3,a3,-1,2,mode='groupdelay',fsize=(6,2))
grid()
```
###### Obtaining the Step Response via Simulation[¶](#Obtaining-the-Step-Response-via-Simulation)
Time domain simulation of a continuous-time system can be performed using the `signal.lsim()` function. You have to make sure the time step is sufficiently small relative to the filter bandwidth.
```
[56]:
```
```
t = arange(0,2,.0001)
xs = ss.step(t)
tout,ys,x_state = signal.lsim((b3,a3),xs,t)
plot(t,ys)
title(r'Third-Order Butterworth Step Response for $f_3 = 1$ Hz')
ylabel(r'Step Response')
xlabel(r'Time (s)')
grid();
```
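As a cross-check on the `lsim()` approach, `scipy.signal.step()` computes the step response directly from the coefficients; a short sketch comparing the two:

```
import numpy as np
import scipy.signal as signal

b3, a3 = signal.butter(3, 2*np.pi*1, analog=True)
t = np.arange(0, 2, .0001)
# Step response computed directly from the (b,a) system
tout, ys_direct = signal.step((b3, a3), T=t)
# Step response via lsim() driven by a unit-step input
_, ys_lsim, _ = signal.lsim((b3, a3), np.ones_like(t), t)
print(np.max(np.abs(ys_direct - ys_lsim)))  # small numerical difference
```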
```
[1]:
```
```
%pylab inline
import sk_dsp_comm.sigsys as ss
import sk_dsp_comm.fir_design_helper as fir_d
import sk_dsp_comm.iir_design_helper as iir_d
import sk_dsp_comm.multirate_helper as mrh
import scipy.signal as signal
from IPython.display import Audio, display
from IPython.display import Image, SVG
```
```
Populating the interactive namespace from numpy and matplotlib
```
```
[2]:
```
```
%config InlineBackend.figure_formats=['svg'] # SVG inline viewing
```
#### Filter Design Using the Helper Modules[¶](#Filter-Design-Using-the-Helper-Modules)
The Scipy package *signal* assists with the design of many digital filter types. As an alternative, here we explore the use of the filter design modules found in `scikit-dsp-comm` (<https://github.com/mwickert/scikit-dsp-comm>).
In this note we briefly explore the use of `sk_dsp_comm.fir_design_helper` and `sk_dsp_comm.iir_design_helper`. In the examples that follow we assume the import of these modules is made as follows:
```
import sk_dsp_comm.fir_design_helper as fir_d
import sk_dsp_comm.iir_design_helper as iir_d
```
The functions in these modules provide an easier and more consistent interface for both finite impulse response (FIR) (linear phase) and infinite impulse response (IIR) classical designs. Functions inside these modules *wrap* `scipy.signal` functions and also incorporate new functionality.
#### Design From Amplitude Response Requirements[¶](#Design-From-Amplitude-Response-Requirements)
With both `fir_design_helper` and `iir_design_helper` a design starts with amplitude response requirements, that is the filter passband critical frequencies, stopband critical frequencies, passband ripple, and stopband attenuation. The number of taps/coefficients (FIR case) or the filter order (IIR case) needed to meet these requirements is then determined and the filter coefficients are returned as an ndarray `b` for FIR, and for IIR both `b` and `a` arrays along with a second-order sections 2D array `sos`, whose rows contain the corresponding cascade of second-order sections topology for IIR filters.
For the FIR case we have in the \(z\)-domain
\[H_\text{FIR}(z) = \sum_{k=0}^N b_k z^{-k}\]
with ndarray `b` = \([b_0, b_1, \ldots, b_N]\). For the IIR case we have in the \(z\)-domain
\[\begin{split}\begin{align}
H_\text{IIR}(z) &= \frac{\sum_{k=0}^M b_k z^{-k}}{1 + \sum_{k=1}^N a_k z^{-k}} \\
&= \prod_{k=0}^{N_s-1} \frac{b_{k0} + b_{k1} z^{-1} + b_{k2} z^{-2}}{1 + a_{k1} z^{-1} + a_{k2} z^{-2}} = \prod_{k=0}^{N_s-1} H_k(z)
\end{align}\end{split}\]
where \(N_s = \lfloor(N+1)/2\rfloor\). For the `b/a` form the coefficients are arranged as
```
b = [b0, b1, ..., bM-1], the numerator filter coefficients
a = [a0, a1, ..., aN-1], the denominator filter coefficients
```
For the `sos` form each row of the 2D `sos` array corresponds to the coefficients of \(H_k(z)\), as follows:
```
SOS_mat = [[b00, b01, b02, 1, a01, a02], #biquad 0
[b10, b11, b12, 1, a11, a12], #biquad 1
.
.
[bNs-10, bNs-11, bNs-12, 1, aNs-11, aNs-12]] #biquad Ns-1
```
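The correspondence between the `b/a` and `sos` forms can be checked with `scipy.signal.tf2sos`; a quick sketch:

```
import numpy as np
import scipy.signal as signal

b, a = signal.butter(5, 0.3)   # any b/a design will do
sos = signal.tf2sos(b, a)
print(sos.shape)   # (3, 6): Ns = floor((5+1)/2) rows of [b0 b1 b2 1 a1 a2]
print(sos[:, 3])   # the a0 column of every biquad is normalized to 1
```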
#### Linear Phase FIR Filter Design[¶](#Linear-Phase-FIR-Filter-Design)
The primary focus of this module is adding the ability to design linear phase FIR filters from user friendly amplitude response requirements.
Most digital filter design is motivated by the desire to approach an ideal filter. Recall an ideal filter will pass signals of a certain range of frequencies and block others. For both analog and digital filters the designer can choose from a variety of approximation techniques. For digital filters the approximation techniques fall into the categories of IIR or FIR. In the design of FIR filters two popular techniques are truncating the ideal filter impulse response and applying a window, and optimum equiripple approximations [Oppenheim2010](https://www.amazon.com/Discrete-Time-Signal-Processing-3rd-Prentice-Hall/dp/0131988425/ref=sr_1_1?ie=UTF8&qid=1519940790&sr=8-1&keywords=oppenheim+discrete+time+signal+processing&dpID=51v48p99JjL&preST=_SX218_BO1,204,203,200_QL40_&dpSrc=srch). Frequency sampling based approaches are also popular, but will not be considered here, even though `scipy.signal` supports all three. Filter design generally begins with a specification of the desired frequency response. The filter frequency response may be stated in several ways, but amplitude response is the most common, e.g., state how \(H_c(j\Omega)\) or \(H(e^{j\omega}) = H(e^{j2\pi f/f_s})\) should behave. A completed design consists of the number of coefficients (taps) required and the coefficients themselves (double precision float or `float64` in Numpy, and `float64_t` in C). Figure 1, below, shows amplitude response requirements in terms of filter gain and critical frequencies for lowpass, highpass, bandpass, and bandstop filters. The critical frequencies are given here in terms of analog requirements in Hz. The sampling frequency is assumed to be in Hz. The passband ripple and stopband attenuation values are in dB. Note in dB terms attenuation is the negative of gain, e.g., -60 dB of stopband gain is equivalent to 60 dB of stopband attenuation.
```
[3]:
```
```
Image('300ppi/[email protected]',width='90%')
```
```
[3]:
```
There are 10 filter design functions and one plotting function available in `fir_design_helper.py`: four functions for designing Kaiser window based FIR filters and four functions for designing equiripple based FIR filters, all of which take in amplitude response requirements and return a coefficients array. Two of the 10 filter functions are simply wrappers around the `scipy.signal` function `signal.firwin()` for designing filters of a specific order when one (lowpass) or two (bandpass) critical frequencies are given. The wrapper functions fix the window type to the `firwin` default of hann (hanning). The remaining eight are described below in Table 1. The plotting function provides an easy means to compare the resulting frequency response of one or more designs on a single plot. Display modes allow gain in dB, phase in radians, group delay in samples, and group delay in seconds for a given sampling rate. This function, `freqz_resp_list()`, works for both FIR and IIR designs. Table 1 provides the interface details to the eight design functions, where d_stop and d_pass are positive dB values and the critical frequencies have the same unit as the sampling frequency \(f_s\). These functions do not create perfect results, so some tuning of the design parameters may be needed, in addition to bumping the filter order up or down via `N_bump`.
```
[4]:
```
```
Image('300ppi/[email protected]',width='80%')
```
```
[4]:
```
##### Design Examples[¶](#Design-Examples)
###### Example 1: Lowpass with \(f_s = 1\) Hz[¶](#Example-1:-Lowpass-with-f_s-=-1-Hz)
For this 31 tap filter we choose the cutoff frequency to be \(F_c = F_s/8\), or in normalized form \(f_c = 1/8\).
```
[5]:
```
```
b_k = fir_d.firwin_kaiser_lpf(1/8,1/6,50,1.0)
b_r = fir_d.fir_remez_lpf(1/8,1/6,0.2,50,1.0)
```
```
[6]:
```
```
fir_d.freqz_resp_list([b_k,b_r],[[1],[1]],'dB',fs=1)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Lowpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency (Hz)')
legend((r'Kaiser: %d taps' % len(b_k),r'Remez: %d taps' % len(b_r)),loc='best')
grid();
```
```
[7]:
```
```
b_k_hp = fir_d.firwin_kaiser_hpf(1/8,1/6,50,1.0)
b_r_hp = fir_d.fir_remez_hpf(1/8,1/6,0.2,50,1.0)
```
```
[8]:
```
```
fir_d.freqz_resp_list([b_k_hp,b_r_hp],[[1],[1]],'dB',fs=1)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Highpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency (Hz)')
legend((r'Kaiser: %d taps' % len(b_k_hp),r'Remez: %d taps' % len(b_r_hp)),loc='best')
grid();
```
```
[9]:
```
```
b_k_bp = fir_d.firwin_kaiser_bpf(7000,8000,14000,15000,50,48000)
b_r_bp = fir_d.fir_remez_bpf(7000,8000,14000,15000,0.2,50,48000)
```
```
[10]:
```
```
fir_d.freqz_resp_list([b_k_bp,b_r_bp],[[1],[1]],'dB',fs=48)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Bandpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Kaiser: %d taps' % len(b_k_bp),
r'Remez: %d taps' % len(b_r_bp)),
loc='lower right')
grid();
```
##### A Design Example Useful for Interpolation or Decimation[¶](#A-Design-Example-Useful-for-Interpolation-or-Decimation)
Here we consider a lowpass design that needs to pass frequencies from [0, 4000] Hz with a sampling rate of 96000 Hz. This scenario arises when building an interpolator using the classes of the `scikit-dsp-comm` module `multirate_helper.py` to increase the sampling rate from 8000 Hz to 96000 Hz, an interpolation factor of \(L = 12\). Note at the top of this notebook we also have the import
```
import sk_dsp_comm.multirate_helper as mrh
```
so that some of the functionality can be accessed. For more details on the use of `multirate_helper` [see](https://mwickert.github.io/scikit-dsp-comm/example_notebooks/multirate_helper/Multirate_Processing.html).
Start with an equiripple design having transition band centered on 4000 Hz with passband ripple of 0.5 dB and stopband attenuation of 60 dB.
```
[11]:
```
```
b_up = fir_d.fir_remez_lpf(3300,4300,0.5,60,96000)
```
```
[12]:
```
```
mr_up = mrh.multirate_FIR(b_up)
```
* Consider the pole-zero configuration for this high-order filter
```
[13]:
```
```
# Take a look at the pole-zero configuration of this very
# high-order (many taps) linear phase FIR
mr_up.zplane()
```
* Check out the passband and stopband gains
```
[14]:
```
```
# Verify the passband and stopband gains are as expected
mr_up.freq_resp('db',96000)
```
* See that the group delay is the expected value of \((N_\text{taps} - 1)/2 = 98\) samples
```
[15]:
```
```
(len(b_up)-1)/2
```
```
[15]:
```
```
98.0
```
```
[16]:
```
```
# Verify that the FIR design has constant group delay (N_taps - 1)/2 samples
mr_up.freq_resp('groupdelay_s',96000,[0,100])
```
The object `mr_up` can now be used for interpolation or decimation with a rate change factor of 12.
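For example, a round-trip sketch using the `up()` and `dn()` methods of Table 2 (the `dn()` call here is assumed to mirror the `multirate_IIR` decimator example later in this notebook):

```
# Round trip sketch: interpolate a 1 kHz tone by 12, then decimate
# back by 12 using the same multirate object
n = arange(20000)
x = cos(2*pi*1000/8000*n)
y_96k = mr_up.up(x,12)       # 8 ksps -> 96 ksps
x_back = mr_up.dn(y_96k,12)  # 96 ksps -> 8 ksps (dn() assumed per Table 2)
```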
#### Traditional IIR Filter Design using the Bilinear Transform[¶](#Traditional-IIR-Filter-Design-using-the-Bilinear-Transform)
The scipy.signal package fully supports the design of IIR digital filters from analog prototypes. IIR filters, like FIR filters, are typically designed with amplitude response requirements in mind. A collection of design functions are available directly from `scipy.signal` for this purpose, in particular the function `scipy.signal.iirdesign()`. To make the design of lowpass, highpass, bandpass, and bandstop filters consistent with the module `fir_design_helper.py`, the module `iir_design_helper.py` was written. Figure 2, below, details how the amplitude response parameters are defined graphically.
```
[17]:
```
```
Image('300ppi/[email protected]',width='90%')
```
```
[17]:
```
Within `iir_design_helper.py` there are four filter design functions and a collection of supporting functions available. The four filter design functions are used for designing lowpass, highpass, bandpass, and bandstop filters, utilizing Butterworth, Chebyshev type 1, Chebyshev type 2, and elliptical filter prototypes. See
[Oppenheim2010](https://www.amazon.com/Discrete-Time-Signal-Processing-3rd-Prentice-Hall/dp/0131988425/ref=sr_1_1?ie=UTF8&qid=1519940790&sr=8-1&keywords=oppenheim+discrete+time+signal+processing&dpID=51v48p99JjL&preST=_SX218_BO1,204,203,200_QL40_&dpSrc=srch) and [ECE 5650 notes Chapter 9](http://www.eas.uccs.edu/~mwickert/ece5650/notes/N5650_9.pdf) for detailed design information. The function interfaces are described in Table 2.
```
[18]:
```
```
Image('300ppi/[email protected]',width='80%')
```
```
[18]:
```
The filter functions return the filter coefficients in two formats:
1. Traditional transfer function form as numerator coefficients `b` and denominator `a` coefficients arrays, and
2. Cascade of biquadratic sections form using the previously introduced `sos` 2D array or matrix.
Both are provided to allow further analysis with either a direct form topology or the sos form. The underlying `signal.iirdesign()` function also provides a third option: a list of poles and zeros. The `sos` form is desirable for high precision filters, as it is more robust to coefficient quantization, in spite of using double precision coefficients in the `b` and `a` arrays.
Of the remaining support functions, four are also described in Table 2, above. The most significant functions are `freqz_resp_cas_list`, available for graphically comparing the frequency response over several designs, and `sos_zplane`, a function for plotting the pole-zero pattern. Both operate using the `sos` matrix. A transfer function form (`b/a`) for frequency response plotting, `freqz_resp_list`, is also present in the module. This function was first introduced in the FIR design section. The frequency response plotting functions offer modes for gain in dB, phase in radians, group delay in samples, and group delay in seconds, all for a given sampling rate in Hz. The pole-zero plotting function locates poles and zeros more accurately than `sk_dsp_comm.sigsys.zplane`, since the numpy function `roots()` only has to solve quadratic polynomials. Also, repeated roots can be displayed as theoretically expected, and are noted in the graphical display by superscripts next to the pole and zero markers.
##### IIR Design Based on the Bilinear Transformation[¶](#IIR-Design-Based-on-the-Bilinear-Transformation)
There are multiple ways of designing IIR filters based on amplitude response requirements. When the desire is to have the filter approximation follow an analog prototype such as Butterworth, Chebyshev, etc., the standard approach is the bilinear transformation. The function `signal.iirdesign()` described above does exactly this.
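For reference, the bilinear transformation is the substitution \(s = \frac{2}{T}\frac{1 - z^{-1}}{1 + z^{-1}}\); a minimal sketch applying it directly with `scipy.signal.bilinear` (frequency prewarping not shown):

```
import numpy as np
import scipy.signal as signal

fs = 48000
# 3rd-order analog Butterworth prototype with a 5 kHz cutoff
b_s, a_s = signal.butter(3, 2*np.pi*5000, analog=True)
# Map the s-domain coefficients to the z-domain
b_z, a_z = signal.bilinear(b_s, a_s, fs)
print(len(a_z) - 1)   # still a 3rd-order filter after the mapping
```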
In the example below we consider lowpass amplitude response requirements and see how the filter order changes when we choose different analog prototypes.
###### Example: Lowpass Design Comparison[¶](#Example:-Lowpass-Design-Comparison)
The lowpass amplitude response requirements given \(f_s = 48\) kHz are:

1. \(f_\text{pass} = 5\) kHz
2. \(f_\text{stop} = 8\) kHz
3. Passband ripple of 0.5 dB
4. Stopband attenuation of 60 dB
Design four filters to meet the same requirements: `butter`, `cheby1`, `cheby2`, and `ellip`:
```
[19]:
```
```
fs = 48000
f_pass = 5000
f_stop = 8000
b_but,a_but,sos_but = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'butter')
b_cheb1,a_cheb1,sos_cheb1 = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'cheby1')
b_cheb2,a_cheb2,sos_cheb2 = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'cheby2')
b_elli,a_elli,sos_elli = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'ellip')
```
####### Frequency Response Comparison[¶](#Frequency-Response-Comparison)
Here we compare the magnitude response in dB using the `sos` form of each filter as the input. The elliptic is the most efficient, and actually overachieves by reaching the stopband requirement at less than 8 kHz.
```
[20]:
```
```
iir_d.freqz_resp_cas_list([sos_but,sos_cheb1,sos_cheb2,sos_elli],'dB',fs=48)
ylim([-80,5])
title(r'IIR Lowpass Compare')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Butter order: %d' % (len(a_but)-1),
r'Cheby1 order: %d' % (len(a_cheb1)-1),
r'Cheby2 order: %d' % (len(a_cheb2)-1),
r'Elliptic order: %d' % (len(a_elli)-1)),loc='best')
grid();
```
Next plot the pole-zero configuration of just the Butterworth design. Here we use a special version of `ss.zplane` that works with the `sos` 2D array.
```
[21]:
```
```
iir_d.sos_zplane(sos_but)
```
```
[21]:
```
```
(15, 15)
```
Note the two plots above can also be obtained using the transfer function form via `iir_d.freqz_resp_list([b],[a],'dB',fs=48)` and `ss.zplane(b,a)`, respectively. The `sos` form will yield more accurate results, as it is less sensitive to coefficient quantization. This is particularly true for the pole-zero plot, as rooting a 15th degree polynomial is far more subject to errors than rooting a simple quadratic.
For the 15th-order Butterworth the bilinear transformation maps the expected 15 s-domain zeros at infinity to \(z=-1\). If you use `sk_dsp_comm.sigsys.zplane()` you will find that the 15 zeros are in a tight circle around \(z=-1\), indicating polynomial rooting errors. Likewise the frequency response will be more accurate.
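A quick numerical illustration of that rooting sensitivity, using a plain `scipy.signal` design:

```
import numpy as np
import scipy.signal as signal

b, a = signal.butter(15, 0.25)
# Analytically all 15 numerator zeros sit at z = -1, but rooting the
# full 15th-degree polynomial scatters them around that point
z = np.roots(b)
print(np.max(np.abs(z + 1)))  # on the order of 1e-1, far from machine eps
```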
Signal filtering of an ndarray `x` using the filter designs is done using functions from `scipy.signal`:
1. For transfer function form `y = signal.lfilter(b,a,x)`
2. For sos form `y = signal.sosfilt(sos,x)`
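Both forms produce the same output to within small numerical error; a self-contained sketch:

```
import numpy as np
import scipy.signal as signal

b, a = signal.butter(9, 0.2)
sos = signal.tf2sos(b, a)
x = np.random.randn(1000)
y_tf = signal.lfilter(b, a, x)   # transfer function form
y_sos = signal.sosfilt(sos, x)   # cascade of biquads form
print(np.max(np.abs(y_tf - y_sos)))  # agreement to near machine precision
```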
##### A Half-Band Filter Design to Pass up to \(W/2\) when \(f_s = 8\) kHz[¶](#A-Half-Band-Filter-Design-to-Pass-up-to-W/2-when-f_s-=-8-kHz)
Here we consider a lowpass design that needs to pass frequencies up to \(f_s/4\). Specifically when \(f_s = 8000\) Hz, the filter passband becomes [0, 2000] Hz. Once the coefficients are found a `mrh.multirate` object is created to allow further study of the filter, and ultimately implement filtering of a white noise signal.
Start with an elliptical design having transition band centered on 2000 Hz with passband ripple of 0.5 dB and stopband attenuation of 80 dB. The transition bandwidth is set to 100 Hz, with 50 Hz on either side of 2000 Hz.
```
[22]:
```
```
# Elliptic IIR Lowpass
b_lp,a_lp,sos_lp = iir_d.IIR_lpf(1950,2050,0.5,80,8000.,'ellip')
mr_lp = mrh.multirate_IIR(sos_lp)
```
```
[23]:
```
```
mr_lp.freq_resp('db',8000)
```
Pass Gaussian white noise of variance \(\sigma_x^2 = 1\) through the filter. Use a lot of samples so the spectral estimate can accurately form \(S_y(f) = \sigma_x^2\cdot |H(e^{j2\pi f/f_s})|^2 = |H(e^{j2\pi f/f_s})|^2\).
```
[24]:
```
```
x = randn(1000000)
y = mr_lp.filter(x)
psd(x,2**10,8000);
psd(y,2**10,8000);
title(r'Filtering White Noise Having $\sigma_x^2 = 1$')
legend(('Input PSD','Output PSD'),loc='best')
ylim([-130,-30])
```
```
[24]:
```
```
(-130.0, -30.0)
```
```
[25]:
```
```
fs = 8000
print('Expected PSD of %2.3f dB/Hz' % (0-10*log10(fs),))
```
```
Expected PSD of -39.031 dB/Hz
```
##### Amplitude Response Bandpass Design[¶](#Amplitude-Response-Bandpass-Design)
Here we consider FIR and IIR bandpass designs for use in an SSB demodulator to remove potential adjacent channel signals sitting either side of a frequency band running from 23 kHz to 24 kHz.
```
[26]:
```
```
b_rec_bpf1 = fir_d.fir_remez_bpf(23000,24000,28000,29000,0.5,70,96000,8)
fir_d.freqz_resp_list([b_rec_bpf1],[1],mode='dB',fs=96000)
ylim([-80, 5])
grid();
```
The group delay is flat (constant) by virtue of the design having linear phase.
```
[27]:
```
```
b_rec_bpf1 = fir_d.fir_remez_bpf(23000,24000,28000,29000,0.5,70,96000,8)
fir_d.freqz_resp_list([b_rec_bpf1],[1],mode='groupdelay_s',fs=96000)
grid();
```
Compare the FIR design with an elliptical design:
```
[28]:
```
```
b_rec_bpf2,a_rec_bpf2,sos_rec_bpf2 = iir_d.IIR_bpf(23000,24000,28000,29000,
0.5,70,96000,'ellip')
with np.errstate(divide='ignore'):
iir_d.freqz_resp_cas_list([sos_rec_bpf2],mode='dB',fs=96000)
ylim([-80, 5])
grid();
```
This high order elliptic has a nice tight amplitude response for minimal coefficients, but the group delay is terrible:
```
[29]:
```
```
with np.errstate(divide='ignore', invalid='ignore'): #manage singularity warnings
iir_d.freqz_resp_cas_list([sos_rec_bpf2],mode='groupdelay_s',fs=96000)
#ylim([-80, 5])
grid();
```
```
[1]:
```
```
%pylab inline
import sk_dsp_comm.sigsys as ss
import sk_dsp_comm.fir_design_helper as fir_d
import sk_dsp_comm.iir_design_helper as iir_d
import sk_dsp_comm.multirate_helper as mrh
import scipy.signal as signal
from IPython.display import Audio, display
from IPython.display import Image, SVG
```
```
Populating the interactive namespace from numpy and matplotlib
```
```
[2]:
```
```
%config InlineBackend.figure_formats=['svg'] # SVG inline viewing
```
#### Multirate Signal Processing Using `multirate_helper`[¶](#Multirate-Signal-Processing-Using-multirate_helper)
In this section the classes `multirate_FIR` and `multirate_IIR`, found in the module `sk_dsp_comm.multirate_helper`, are discussed with the aim of seeing how they can be used to filter, interpolate (upsample and filter), and decimate (filter and downsample) discrete time signals. Fundamentally the processing consists of two elements: (1) an upsampler or downsampler and (2) a lowpass filter.
Fundamentally this module provides classes to change the sampling rate by an integer factor, either up (*interpolation*) or down (*decimation*), with integrated filtering to suppress spectral images or aliases, respectively. The top level block diagrams of the interpolator and decimator are given in the following two figures. The frequencies given in the figures assume that the interpolator is rate changing from 8 ksps to 96 ksps (\(L=12\)) and the decimator is rate changing from 96 ksps to 8 ksps (\(M=12\)). This is for example purposes only. The FIR/IIR filter cutoff frequency will in general be \(f_c = f_\text{s,out}/(2L)\) for the interpolator and \(f_c = f_\text{s,in}/(2M)\) for the decimator. The primitives to implement the classes are available in `sk_dsp_comm.sigsys` and `scipy.signal`.
```
[3]:
```
```
Image('300ppi/[email protected]',width='60%')
```
```
[3]:
```
```
[4]:
```
```
Image('300ppi/[email protected]',width='60%')
```
```
[4]:
```
The upsample block, shown above with arrow pointing up and integer \(L=12\) next to the arrow, takes the input sequence and produces the output sequence by inserting \(L-1\) (as shown here 11) zero samples between each input sample. The downsample block, shown above with arrow pointing down and integer \(M=12\) next to the arrow, takes the input sequence and retains at the output sequence every \(M\)th (as shown here 12th) sample.
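In code the two primitives are one-liners; a minimal NumPy sketch is shown below (the library's own `ss.upsample`/`ss.downsample` behave equivalently):

```
import numpy as np

def upsample(x, L):
    # insert L-1 zeros between the input samples
    y = np.zeros(L*len(x))
    y[::L] = x
    return y

def downsample(x, M):
    # retain every Mth input sample
    return x[::M]

print(upsample(np.array([1., 2., 3.]), 4))  # [1 0 0 0 2 0 0 0 3 0 0 0]
print(downsample(np.arange(12), 4))         # [0 4 8]
```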
The impact of these blocks in the frequency domain is a little harder to explain. In words, the spectrum at the output of the upsampler is compressed by the factor \(L\), such that it will contain \(L\) spectral images, including the fundamental image centered at \(f = 0\), evenly spaced up to the sampling rate \(f_s\). Overall the spectrum of \(x_\text{up}[n]\) is of course periodic with respect to the sampling rate. The lowpass filter interpolates signal sample values from the non-zero samples where the zero samples reside. It is this interpolation that effectively removes or suppresses the spectral images in the region \(|f| > f_s/(2L)\).
For the downsampler the input spectrum is stretched along the frequency axis by the factor \(M\), with aliasing from frequency bands outside \(|f| < f_s/(2M)\). To avoid aliasing the lowpass filter blocks input signals for \(f > f_s/(2M)\).
To get started using the module you will need an `import` similar to:
```
import sk_dsp_comm.multirate_helper as mrh
```
##### The `rate_change` Class[¶](#The-rate_change-Class)
We start with the description of a third class, `mrh.rate_change`, which is simplistic, offering little user interaction, but automatically designs the required lowpass filter you see in the above block diagrams. Below is a table which describes this class:
```
[5]:
```
```
Image('300ppi/[email protected]',width='85%')
```
```
[5]:
```
This class is used in the analog modulation demos for the [ECE 4625/5625 Chapter 3 Jupyter notebook](http://www.eas.uccs.edu/~mwickert/ece5625/lecture_notes/5625_Chapter_3_IPYNB.zip). Using this class you can quickly create an interpolation or decimation block with the necessary lowpass filter automatically designed and implemented. Fine tuning of the filter is limited to choosing the filter order and the cutoff frequency as a fraction of the signal bandwidth given the rate change integer, \(L\) or \(M\). The filter type is also limited to Butterworth or Chebyshev type 1 having passband ripple of 0.05 dB.
##### A Simple Example[¶](#A-Simple-Example)
Pass a sinusoidal signal through an \(L=4\) interpolator. Verify that spectral images occur in the zero-inserted signal and are suppressed by the interpolation lowpass filter.
```
[6]:
```
```
fs_in = 8000
M = 4
fs_out = M*fs_in
rc1 = mrh.rate_change(M) # Rate change by 4
n = arange(0,1000)
x = cos(2*pi*1000/fs_in*n)
x_up = ss.upsample(x,4)
y = rc1.up(x)
```
###### Time Domain[¶](#Time-Domain)
```
[7]:
```
```
subplot(211)
stem(n[500:550],x_up[500:550]);
ylabel(r'$x_{up}[n]$')
title(r'Upsample by $L=4$ Output')
#ylim(-100,-10)
subplot(212)
stem(n[500:550],y[500:550]);
ylabel(r'$y[n]$')
xlabel(r'')
title(r'Interpolate by $L=4$ Output')
#ylim(-100,-10)
tight_layout()
```
* Clearly the lowpass interpolation filter has done a good job of filling in values for the zero samples
###### Frequency Domain[¶](#Frequency-Domain)
```
[8]:
```
```
subplot(211)
psd(x_up,2**10,fs_out);
ylabel(r'PSD (dB)')
title(r'Upsample by $L=4$ Output')
ylim(-100,-10)
subplot(212)
psd(y,2**10,fs_out);
ylabel(r'PSD (dB)')
title(r'Interpolate by $L=4$ Output')
ylim(-100,-10)
tight_layout()
```
* The filtering action of the LPF does its best to suppress the images at 7000, 9000, and 15000 Hz.
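Those image frequencies follow directly from the zero insertion; a small sketch enumerating them for this \(L=4\), 1 kHz example:

```
import numpy as np

f0, fs_in, L = 1000, 8000, 4
fs_out = L*fs_in
# Images of the 1 kHz tone at k*fs_in +/- f0, up to the new folding
# frequency fs_out/2 = 16 kHz
images = sorted({abs(k*fs_in + s*f0) for k in range(L//2 + 1)
                 for s in (+1, -1) if abs(k*fs_in + s*f0) <= fs_out//2})
print(images)  # [1000, 7000, 9000, 15000]
```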
##### The `multirate_FIR` Class[¶](#The-multirate_FIR-Class)
With this class you implement an object that can filter, interpolate, or decimate a signal. Additionally, support methods drill into the characteristics of the lowpass filter at the heart of the processing block. To use this class the user must supply FIR filter coefficients that implement a lowpass filter with cutoff frequency appropriate for the desired interpolation or decimation factor. The module `sk_dsp_comm.fir_design_helper` is capable of delivering the needed filter coefficients array.
See [FIR design helper notes](https://mwickert.github.io/scikit-dsp-comm/example_notebooks/FIR_IIR_design_helper/FIR_and_IIR_Filter_Design.html) for multirate filter design examples.
With FIR coefficients in hand it is an easy matter to create a multirate FIR object capable of filtering, interpolation, or decimation. The details of the class interface are given in Table 2 below.
```
[9]:
```
```
Image('300ppi/[email protected]',width='85%')
```
```
[9]:
```
Notice that the class also provides a means to obtain frequency response plots and pole-zero plots directly from the instantiated multirate objects.
##### FIR Interpolator Design Example[¶](#FIR-Interpolator-Design-Example)
Here we take the earlier lowpass filter designed to interpolate a signal being upsampled from \(f_{s1} = 8\) kHz to \(f_{s2} = 96\) kHz. The upsampling factor is \(L = f_{s2}/f_{s1} = 12\). The ideal interpolation filter should cut off at \(f_{s1}/2 = f_{s2}/(2\cdot 12) = 8000/2 = 4000\) Hz.
Recall the upsampler (`y = ss.upsample(x, L)`) inserts \(L-1\) samples between each input sample. In the frequency domain the zero insertion replicates the input spectrum on \([0,f_{s1}/2]\) \(L\) times over the interval \([0,f_{s2}]\) (equivalently \(L/2\) times on the interval \([0,f_{s2}/2]\)). The lowpass interpolation filter serves to remove the images above \(f_{s2}/(2L)\) in the frequency domain, and in so doing fills in the zero samples with waveform interpolants in the time domain.
```
[10]:
```
```
# Design the filter core for an interpolator used in changing the
# sampling rate from 8000 Hz to 96000 Hz
b_up = fir_d.fir_remez_lpf(3300,4300,0.5,60,96000)
# Create the multirate object
mrh_up = mrh.multirate_FIR(b_up)
```
As an input consider a sinusoid at 1 kHz and observe the interpolator output spectrum compared with the input spectrum.
```
[11]:
```
```
# Sinusoidal test signal
n = arange(10000)
x = cos(2*pi*1000/8000*n)
# Interpolate by 12 (upsample by 12 followed by lowpass filter)
y = mrh_up.up(x,12)
```
```
[12]:
```
```
# Plot the results
subplot(211)
psd(x,2**12,8000);
title(r'1 KHz Sinusoid Input to $L=12$ Interpolator')
ylabel(r'PSD (dB)')
ylim([-100,0])
subplot(212)
psd(y,2**12,12*8000)
title(r'1 KHz Sinusoid Output from $L=12$ Interpolator')
ylabel(r'PSD (dB)')
ylim([-100,0])
tight_layout()
```
In the above spectrum plots notice that images of the input 1 kHz sinusoid are down \(\simeq 60\) dB, which is precisely the stop band attenuation provided by the interpolation filter. The variation is due to the stopband ripple.
##### The `multirate_IIR` Class[¶](#The-multirate_IIR-Class)
With this class, as with `multirate_FIR`, you implement an object that can filter, interpolate, or decimate a signal. The filter in this case is a user supplied IIR filter in second-order sections (`sos`) form. Additionally, support methods drill into the characteristics of the lowpass filter at the heart of the processing block. The module `sk_dsp_comm.iir_design_helper` is capable of delivering the needed filter coefficients array. See [IIR design helper notes](https://mwickert.github.io/scikit-dsp-comm/example_notebooks/FIR_IIR_design_helper/FIR_and_IIR_Filter_Design.html) for multirate filter design examples.
With IIR coefficients in hand it is an easy matter to create a multirate IIR object capable of filtering, interpolation, or decimation. The details of the class interface are given in Table 3 below.
```
[13]:
```
```
Image('300ppi/[email protected]',width='85%')
```
```
[13]:
```
##### IIR Decimator Design Example[¶](#IIR-Decimator-Design-Example)
When a signal is decimated the signal is first lowpass filtered then downsampled. The lowpass filter serves to prevent aliasing as the sampling rate is reduced. Downsampling by \(M\) (`y = ss.downsample(x, M)`) removes \(M-1\) samples for every \(M\) input samples, or equivalently retains one sample out of \(M\). The lowpass prefilter has cutoff frequency equal to the folding frequency of the output sampling rate, i.e., \(f_c = f_{s1}/2\). Note, to avoid confusion with the project requirements, where the decimator is needed to take a rate \(f_{s2}\) signal back to \(f_{s1}\), let the input sampling rate be \(f_{s2} = 96000\) Hz and the output sampling rate be \(f_{s1} = 8000\) Hz. The input sampling rate is \(M\) times the output rate, i.e., \(f_{s2} = Mf_{s1}\), so you design the lowpass filter to have cutoff \(f_c = f_{s2}/(2\cdot M)\).
**ECE 5625 Important Observation**: In the coherent SSB demodulator of Project 1, the decimator can be conveniently integrated with the lowpass filter that serves to remove the double frequency term.
In the example that follows a Chebyshev type 1 lowpass filter is designed to have cutoff around 4000 Hz. A sinusoid is used as a test input signal at sampling rate 96000 Hz.
```
[14]:
```
```
# Design the filter core for a decimator used in changing the
# sampling rate from 96000 Hz to 8000 Hz
b_dn, a_dn, sos_dn = iir_d.IIR_lpf(3300,4300,0.5,60,96000,'cheby1')
# Create the multirate object
mrh_dn = mrh.multirate_IIR(sos_dn)
mrh_dn.freq_resp('dB',96000)
title(r'Decimation Filter Frequency Response - Magnitude');
```
* Note the Chebyshev lowpass filter design above is very efficient compared with the 196-tap FIR lowpass designed for use in the interpolator. It is perhaps a better overall choice. The FIR has linear phase and the IIR filter does not, but for the project this is not really an issue.
As an input consider a sinusoid at 1 kHz and observe the decimator output spectrum compared with the input spectrum.
```
[15]:
```
```
# Sinusoidal test signal
n = arange(100000)
x = cos(2*pi*1000/96000*n)
# Decimate by 12 (lowpass filter followed by downsample by 12)
y = mrh_dn.dn(x,12)
```
```
[16]:
```
```
# Plot the results
subplot(211)
psd(x,2**12,96000);
title(r'1 KHz Sinusoid Input to $M=12$ Decimator')
ylabel(r'PSD (dB)')
ylim([-100,0])
subplot(212)
psd(y,2**12,8000)
title(r'1 KHz Sinusoid Output from $M=12$ Decimator')
ylabel(r'PSD (dB)')
ylim([-100,0])
tight_layout()
```
```
[1]:
```
```
%pylab inline
#%matplotlib qt
import sk_dsp_comm.sigsys as ss
import scipy.signal as signal
from IPython.display import Audio, display
from IPython.display import Image, SVG
```
```
Populating the interactive namespace from numpy and matplotlib
```
```
[2]:
```
```
pylab.rcParams['savefig.dpi'] = 100 # default 72
#pylab.rcParams['figure.figsize'] = (6.0, 4.0) # default (6,4)
#%config InlineBackend.figure_formats=['png'] # default for inline viewing
%config InlineBackend.figure_formats=['svg'] # SVG inline viewing
#%config InlineBackend.figure_formats=['pdf'] # render pdf figs for LaTeX
```
```
[3]:
```
```
import scipy.special as special
import sk_dsp_comm.digitalcom as dc
import sk_dsp_comm.fec_conv as fec
```
#### Convolutional Coding[¶](#Convolutional-Coding)
##### Rate 1/2[¶](#Rate-1/2)
A convolutional encoder object can be created with `fec.FECConv`. The rate of the object will be determined by the number of generator polynomials used. Right now, only rate 1/2 and rate 1/3 are supported, so two or three generator polynomials can be used. The following table shows ideal rate 1/2 generator polynomials. These are also included in the docstring.
**Table 1: Weight spectra \(c_k\) for bounding the coded rate 1/2 BEP**.
| CL | Polynomials | \(d_{free}\) | \(d_f\) | \(d_f+1\) | \(d_f+2\) | \(d_f+3\) | \(d_f+4\) | \(d_f+5\) | \(d_f+6\) | \(d_f+7\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | (5,7) = (‘101’,’111’) | 5 | 1 | 4 | 12 | 32 | 80 | 192 | 488 | 1024 |
| 4 | (15,17) = (‘1101’,’1111’) | 6 | 2 | 7 | 18 | 49 | 130 | 333 | 836 | 2069 |
| 5 | (23,35) = (‘10011’,’11101’) | 7 | 4 | 12 | 20 | 72 | 225 | 500 | 1324 | 3680 |
| 6 | (53,75) = (‘101011’,’111101’) | 8 | 2 | 36 | 32 | 62 | 332 | 701 | 2342 | 5503 |
| 7 | (133,171) = (‘1011011’,’1111001’) | 10 | 36 | 0 | 211 | 0 | 1404 | 0 | 11633 | 0 |
In addition to the generator polynomials, you can specify a decision depth for the object. This will determine how many state transitions will be used for the traceback. The following shows how to create a rate 1/2 `fec_conv` object with constraint length 3 and decision depth 10.
```
[4]:
```
```
cc1 = fec.FECConv(('111','101'),10)
```
The `trellis_plot()` method can be used to see the state transitions of the `fec_conv` object.
```
[5]:
```
```
cc1.trellis_plot()
```
###### Rate 1/2 Hard Decision Decoding[¶](#Rate-1/2-Hard-Decision-Decoding)
Now we would like to know the theoretical bit error probability bounds of our convolutional encoding/decoding setup. We can do this using the `fec.conv_Pb_bound` method. The method takes the rate, degrees of freedom, \(c_k\) values, SNR, hard or soft decisions, and order M for an MPSK modulation scheme as arguments. It returns the BEP. The following shows theoretical bounds for a rate 1/2 encoding/decoding BPSK system. Compare with Ziemer pg 667.
####### Weight Structure Bounds BEP[¶](#Weight-Structure-Bounds-BEP)
```
[6]:
```
```
SNRdB = arange(0,12,.1)
Pb_uc = fec.conv_Pb_bound(1/2,7,[4, 12, 20, 72, 225],SNRdB,2)
Pb_s_half_3_hard = fec.conv_Pb_bound(1/2,5,[1, 4, 12, 32, 80, 192, 448, 1024],SNRdB,0)
Pb_s_half_5_hard = fec.conv_Pb_bound(1/2,7,[4, 12, 20, 72, 225, 500, 1324, 3680],SNRdB,0)
Pb_s_half_7_hard = fec.conv_Pb_bound(1/2,10,[36, 0, 211, 0, 1404, 0, 11633, 0],SNRdB,0)
Pb_s_half_9_hard = fec.conv_Pb_bound(1/2,12,[33, 0, 281, 0, 2179, 0, 15035, 0],SNRdB,0)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc)
semilogy(SNRdB,Pb_s_half_3_hard,'--')
semilogy(SNRdB,Pb_s_half_5_hard,'--')
semilogy(SNRdB,Pb_s_half_7_hard,'--')
semilogy(SNRdB,Pb_s_half_9_hard,'--')
axis([0,12,1e-7,1e0])
title(r'Hard Decision Rate 1/2 Coding Theory Bounds')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Symbol Error Probability')
legend(('Uncoded BPSK','R=1/2, K=3, Hard',\
'R=1/2, K=5, Hard', 'R=1/2, K=7, Hard',\
'R=1/2, K=9, Hard'),loc='upper right')
grid();
```
####### BEP Simulation[¶](#BEP-Simulation)
Now that we can determine our BEP bounds, we can test the actual encoder/decoder using dummy binary data. The following code creates a rate 1/2 fec_conv object. It then generates dummy binary data and encodes the data using the `conv_encoder` method. This method takes an array of binary values and an initial state as the input and returns the encoded bits and states. We then add noise to the encoded data according to the set \(E_b/N_0\) to simulate a noisy channel. The data is then decoded using the `viterbi_decoder` method. This method takes the array of noisy data and a decision metric. If the hard decision metric is selected, then we expect binary input values from around 0 to around 1. The method then returns the decoded binary values. Then the bit errors are counted. Once at least 100 bit errors are counted, the bit error probability is calculated.
```
[7]:
```
```
N_bits_per_frame = 10000
EbN0 = 4
total_bit_errors = 0
total_bit_count = 0
cc1 = fec.FECConv(('11101','10011'),25)
# Encode with shift register starting state of '0000'
state = '0000'
while total_bit_errors < 100:
    # Create 10000 random 0/1 bits
x = randint(0,2,N_bits_per_frame)
y,state = cc1.conv_encoder(x,state)
# Add channel noise to bits, include antipodal level shift to [-1,1]
yn_soft = dc.cpx_awgn(2*y-1,EbN0-3,1) # Channel SNR is 3 dB less for rate 1/2
yn_hard = ((sign(yn_soft.real)+1)/2).astype(int)
z = cc1.viterbi_decoder(yn_hard,'hard')
# Count bit errors
bit_count, bit_errors = dc.bit_errors(x,z)
total_bit_errors += bit_errors
total_bit_count += bit_count
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
```
```
Bits Received = 9976, Bit errors = 60, BEP = 6.01e-03
Bits Received = 19952, Bit errors = 137, BEP = 6.87e-03
*****************************************************
Bits Received = 19952, Bit errors = 137, BEP = 6.87e-03
```
```
[8]:
```
```
y[:100].astype(int)
```
```
[8]:
```
```
array([0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1,
1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0])
```
The simulated BEP can then be compared to the theoretical bounds that were shown earlier. Some values were simulated for the constraint length 3 and constraint length 5 cases.
```
[9]:
```
```
SNRdB = arange(0,12,.1)
Pb_uc = fec.conv_Pb_bound(1/2,7,[4, 12, 20, 72, 225],SNRdB,2)
Pb_s_half_3_hard = fec.conv_Pb_bound(1/2,5,[1, 4, 12, 32, 80, 192, 448, 1024],SNRdB,0)
Pb_s_half_5_hard = fec.conv_Pb_bound(1/2,7,[4, 12, 20, 72, 225, 500, 1324, 3680],SNRdB,0)
Pb_s_half_7_hard = fec.conv_Pb_bound(1/2,10,[36, 0, 211, 0, 1404, 0, 11633, 0],SNRdB,0)
Pb_s_half_9_hard = fec.conv_Pb_bound(1/2,12,[33, 0, 281, 0, 2179, 0, 15035, 0],SNRdB,0)
Pb_s_half_5_hard_sim = array([3.36e-2,1.04e-2,1.39e-3,1.56e-04,1.24e-05])
Pb_s_half_3_hard_sim = array([2.59e-02,1.35e-02,2.71e-03,6.39e-04,9.73e-05,7.71e-06])
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc)
semilogy(SNRdB,Pb_s_half_3_hard,'y--')
semilogy(SNRdB,Pb_s_half_5_hard,'g--')
semilogy(SNRdB,Pb_s_half_7_hard,'--')
semilogy(SNRdB,Pb_s_half_9_hard,'--')
semilogy([3,4,5,6,7,8],Pb_s_half_3_hard_sim,'ys')
semilogy([3,4,5,6,7],Pb_s_half_5_hard_sim,'gs')
axis([0,12,1e-7,1e0])
title(r'Hard Decision Rate 1/2 Coding Measurements')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Symbol Error Probability')
legend(('Uncoded BPSK','R=1/2, K=3, Hard',\
'R=1/2, K=5, Hard', 'R=1/2, K=7, Hard',\
'R=1/2, K=9, Hard', 'R=1/2, K=3, Simulation',\
'R=1/2, K=5, Simulation'),loc='lower left')
grid();
```
We can look at the surviving paths using the `traceback_plot` method.
```
[10]:
```
```
cc1.traceback_plot()
```
###### Soft Decision Decoding BEP Simulation[¶](#Soft-Decision-Decoding-BEP-Simulation)
Soft decision decoding can also be done. In order to simulate the soft decision decoder, we can use the same setup as before, but now we specify ‘soft’ in the `viterbi_decoder` method. We also have to pick a quantization level when we do this. If we want 3-bit quantization we specify quant_level=3. When we use soft decisions we have to scale our noisy received values to values on \([0,2^{n}-1]\). So for three-bit quantization, we would scale to values on \([0,7]\).
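That scaling step can be written as a standalone helper; a sketch (the name `soft_quantize` is hypothetical, not part of `fec_conv`):

```
import numpy as np

def soft_quantize(yn, n_bits=3):
    # Hypothetical helper: map noisy antipodal samples (roughly on
    # [-1, 1]) to the integer levels [0, 2**n_bits - 1], clipping
    # outliers pushed outside the range by noise
    levels = 2**n_bits - 1
    y = (np.real(yn) + 1)/2*levels
    return np.clip(np.round(y), 0, levels).astype(int)

print(soft_quantize(np.array([-1.2, -0.3, 0.0, 0.4, 1.1])))
# -> [0 2 4 5 7]
```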
This helps the system to get better distance metrics for all possible paths in the decoder, thus improving the BEP. The following shows how to simulate soft decisions.
```
[11]:
```
```
N_bits_per_frame = 10000
EbN0 = 2
total_bit_errors = 0
total_bit_count = 0
cc1 = fec.FECConv(('11101','10011'),25)
# Encode with shift register starting state of '0000'
state = '0000'
while total_bit_errors < 100:
    # Create 10000 random 0/1 bits
x = randint(0,2,N_bits_per_frame)
y,state = cc1.conv_encoder(x,state)
# Add channel noise to bits, include antipodal level shift to [-1,1]
yn = dc.cpx_awgn(2*y-1,EbN0-3,1) # Channel SNR is 3dB less for rate 1/2
# Scale & level shift to three-bit quantization levels [0,7]
yn = (yn.real+1)/2*7
z = cc1.viterbi_decoder(yn.real,'soft',quant_level=3)
# Count bit errors
bit_count, bit_errors = dc.bit_errors(x,z)
total_bit_errors += bit_errors
total_bit_count += bit_count
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
```
```
Bits Received = 9976, Bit errors = 108, BEP = 1.08e-02
*****************************************************
Bits Received = 9976, Bit errors = 108, BEP = 1.08e-02
```
```
[12]:
```
```
SNRdB = arange(0,12,.1)
Pb_uc = fec.conv_Pb_bound(1/3,7,[4, 12, 20, 72, 225],SNRdB,2)
Pb_s_third_3 = fec.conv_Pb_bound(1/3,8,[3, 0, 15],SNRdB,1)
Pb_s_third_4 = fec.conv_Pb_bound(1/3,10,[6, 0, 6, 0],SNRdB,1)
Pb_s_third_5 = fec.conv_Pb_bound(1/3,12,[12, 0, 12, 0, 56],SNRdB,1)
Pb_s_third_6 = fec.conv_Pb_bound(1/3,13,[1, 8, 26, 20, 19, 62],SNRdB,1)
Pb_s_third_7 = fec.conv_Pb_bound(1/3,14,[1, 0, 20, 0, 53, 0, 184],SNRdB,1)
Pb_s_third_8 = fec.conv_Pb_bound(1/3,16,[1, 0, 24, 0, 113, 0, 287, 0],SNRdB,1)
Pb_s_half = fec.conv_Pb_bound(1/2,7,[4, 12, 20, 72, 225],SNRdB,1)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc)
semilogy(SNRdB,Pb_s_third_3,'--')
semilogy(SNRdB,Pb_s_third_4,'--')
semilogy(SNRdB,Pb_s_third_5,'g')
semilogy(SNRdB,Pb_s_third_6,'--')
semilogy(SNRdB,Pb_s_third_7,'--')
semilogy(SNRdB,Pb_s_third_8,'--')
#semilogy(SNRdB,Pb_s_half,'--')
semilogy([0,1,2,3,4,5],[9.08e-02,2.73e-02,6.52e-03,\
8.94e-04,8.54e-05,5e-6],'gs')
axis([0,12,1e-7,1e0])
title(r'Soft Decision Rate 1/2 Coding Measurements')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Symbol Error Probability')
legend(('Uncoded BPSK','R=1/3, K=3, Soft',\
'R=1/3, K=4, Soft','R=1/3, K=5, Soft',\
'R=1/3, K=6, Soft','R=1/3, K=7, Soft',\
'R=1/3, K=8, Soft','R=1/3, K=5, Sim', \
'Simulation'),loc='upper right')
grid();
```
The decoder can also do unquantized soft decisions. This is done by specifying ‘unquant’ for the metric type. The system will then expect floating point numbers on \([0,1]\) at the decoder input.
##### Rate 1/3[¶](#Rate-1/3)
Rate 1/3 convolutional encoding/decoding can be done very similarly to the rate 1/2 code. The difference when instantiating is that the rate 1/3 uses 3 generator polynomials instead of 2. The following table shows ideal generator polynomials at different constraint lengths for rate 1/3 convolutional codes.
**Table 2: Weight spectra \(c_k\) for bounding the coded rate 1/3 BEP**.
| CL | Polynomials | \(d_{free}\) | \(d_f\) | \(d_f+1\) | \(d_f+2\) | \(d_f+3\) | \(d_f+4\) | \(d_f+5\) | \(d_f+6\) | \(d_f+7\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | (7,7,5) = (‘111’,’111’,’101’) | 8 | 3 | 0 | 15 | 0 | 58 | 0 | 201 | 0 |
| 4 | (15,13,11) = (‘1111’,’1101’,’1011’) | 10 | 6 | 0 | 6 | 0 | 58 | 0 | 118 | 0 |
| 5 | (31,27,21) = (‘11111’,’11011’,’10101’) | 12 | 12 | 0 | 12 | 0 | 56 | 0 | 320 | 0 |
| 6 | (61,43,39) = (‘111101’,’101011’,’100111’) | 13 | 1 | 8 | 26 | 20 | 19 | 62 | 86 | 204 |
| 7 | (121,101,91) = (‘1111001’,’1100101’,’1011011’) | 14 | 1 | 0 | 20 | 0 | 53 | 0 | 184 | 0 |
| 8 | (247,217,149) = (‘11110111’,’11011001’,’10010101’) | 16 | 1 | 0 | 24 | 0 | 113 | 0 | 287 | 0 |
```
[13]:
```
```
cc2 = fec.FECConv(('111','111','101'),10)
cc2.trellis_plot()
```
###### Rate 1/3 Hard Decision Decoding[¶](#Rate-1/3-Hard-Decision-Decoding)
####### Weight Structure Bounds BEP[¶](#id1)
Compare with Ziemer pg 668.
```
[14]:
```
```
SNRdB = arange(0,12,.1)
Pb_uc = fec.conv_Pb_bound(1/3,7,[4, 12, 20, 72, 225],SNRdB,2)
Pb_s_third_3_hard = fec.conv_Pb_bound(1/3,8,[3, 0, 15, 0, 58, 0, 201, 0],SNRdB,0)
Pb_s_third_4_hard = fec.conv_Pb_bound(1/3,10,[6, 0, 6, 0, 58, 0, 118, 0],SNRdB,0)
Pb_s_third_5_hard = fec.conv_Pb_bound(1/3,12,[12, 0, 12, 0, 56, 0, 320, 0],SNRdB,0)
Pb_s_third_6_hard = fec.conv_Pb_bound(1/3,13,[1, 8, 26, 20, 19, 62, 86, 204],SNRdB,0)
Pb_s_third_7_hard = fec.conv_Pb_bound(1/3,14,[1, 0, 20, 0, 53, 0, 184],SNRdB,0)
Pb_s_third_8_hard = fec.conv_Pb_bound(1/3,16,[1, 0, 24, 0, 113, 0, 287, 0],SNRdB,0)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc)
semilogy(SNRdB,Pb_s_third_3_hard,'--')
#semilogy(SNRdB,Pb_s_third_4_hard,'--')
semilogy(SNRdB,Pb_s_third_5_hard,'--')
#semilogy(SNRdB,Pb_s_third_6_hard,'--')
semilogy(SNRdB,Pb_s_third_7_hard,'--')
#semilogy(SNRdB,Pb_s_third_8_hard,'--')
axis([0,12,1e-7,1e0])
title(r'Hard Decision Rate 1/3 Coding Theory Bounds')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Symbol Error Probability')
legend(('Uncoded BPSK','R=1/3, K=3, Hard',\
        'R=1/3, K=5, Hard','R=1/3, K=7, Hard'),\
       loc='upper right')
grid();
```
####### BEP Simulation[¶](#id2)
```
[15]:
```
```
N_bits_per_frame = 10000
EbN0 = 3
total_bit_errors = 0
total_bit_count = 0
cc1 = fec.FECConv(('11111','11011','10101'),25)
# Encode with shift register starting state of '0000'
state = '0000'
while total_bit_errors < 100:
    # Create N_bits_per_frame random 0/1 bits
    x = randint(0,2,N_bits_per_frame)
    y,state = cc1.conv_encoder(x,state)
    # Add channel noise to bits, include antipodal level shift to [-1,1]
    yn_soft = dc.cpx_awgn(2*y-1,EbN0-10*log10(3),1) # Channel SNR is 10*log10(3) dB less
    yn_hard = ((sign(yn_soft.real)+1)/2).astype(int)
    z = cc1.viterbi_decoder(yn_hard.real,'hard')
    # Count bit errors
    bit_count, bit_errors = dc.bit_errors(x,z)
    total_bit_errors += bit_errors
    total_bit_count += bit_count
    print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
          (total_bit_count, total_bit_errors,\
           total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
```
```
Bits Received = 9976, Bit errors = 181, BEP = 1.81e-02
*****************************************************
Bits Received = 9976, Bit errors = 181, BEP = 1.81e-02
```
```
[16]:
```
```
SNRdB = arange(0,12,.1)
Pb_uc = fec.conv_Pb_bound(1/3,7,[4, 12, 20, 72, 225],SNRdB,2)
Pb_s_third_3_hard = fec.conv_Pb_bound(1/3,8,[3, 0, 15, 0, 58, 0, 201, 0],SNRdB,0)
Pb_s_third_5_hard = fec.conv_Pb_bound(1/3,12,[12, 0, 12, 0, 56, 0, 320, 0],SNRdB,0)
Pb_s_third_7_hard = fec.conv_Pb_bound(1/3,14,[1, 0, 20, 0, 53, 0, 184],SNRdB,0)
Pb_s_third_5_hard_sim = array([8.94e-04,1.11e-04,8.73e-06])
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc)
semilogy(SNRdB,Pb_s_third_3_hard,'r--')
semilogy(SNRdB,Pb_s_third_5_hard,'g--')
semilogy(SNRdB,Pb_s_third_7_hard,'k--')
semilogy(array([5,6,7]),Pb_s_third_5_hard_sim,'sg')
axis([0,12,1e-7,1e0])
title(r'Hard Decision Rate 1/3 Coding Measurements')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Symbol Error Probability')
legend(('Uncoded BPSK','R=1/3, K=3, Hard',\
        'R=1/3, K=5, Hard','R=1/3, K=7, Hard',\
        'R=1/3, K=5, Sim'),loc='upper right')
grid();
```
```
[17]:
```
```
cc1.traceback_plot()
```
###### Soft Decision Decoding BEP Simulation[¶](#id3)
Here we use 3-bit quantization soft decoding.
```
[18]:
```
```
N_bits_per_frame = 10000
EbN0 = 2
total_bit_errors = 0
total_bit_count = 0
cc1 = fec.FECConv(('11111','11011','10101'),25)
# Encode with shift register starting state of '0000'
state = '0000'
while total_bit_errors < 100:
    # Create N_bits_per_frame random 0/1 bits
    x = randint(0,2,N_bits_per_frame)
    y,state = cc1.conv_encoder(x,state)
    # Add channel noise to bits, include antipodal level shift to [-1,1]
    yn = dc.cpx_awgn(2*y-1,EbN0-10*log10(3),1) # Channel SNR is 10*log10(3) dB less
    # Translate to [0,7]
    yn = (yn.real+1)/2*7
    z = cc1.viterbi_decoder(yn,'soft',quant_level=3)
    # Count bit errors
    bit_count, bit_errors = dc.bit_errors(x,z)
    total_bit_errors += bit_errors
    total_bit_count += bit_count
    print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
          (total_bit_count, total_bit_errors,\
           total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
```
```
Bits Received = 9976, Bit errors = 56, BEP = 5.61e-03
Bits Received = 19952, Bit errors = 87, BEP = 4.36e-03
Bits Received = 29928, Bit errors = 170, BEP = 5.68e-03
*****************************************************
Bits Received = 29928, Bit errors = 170, BEP = 5.68e-03
```
```
[19]:
```
```
SNRdB = arange(0,12,.1)
Pb_uc = fec.conv_Pb_bound(1/3,7,[4, 12, 20, 72, 225],SNRdB,2)
Pb_s_third_3 = fec.conv_Pb_bound(1/3,8,[3, 0, 15, 0, 58, 0, 201, 0],SNRdB,1)
#Pb_s_third_4 = fec.conv_Pb_bound(1/3,10,[6, 0, 6, 0, 58, 0, 118, 0],SNRdB,1)
Pb_s_third_5 = fec.conv_Pb_bound(1/3,12,[12, 0, 12, 0, 56, 0, 320, 0],SNRdB,1)
#Pb_s_third_6 = fec.conv_Pb_bound(1/3,13,[1, 8, 26, 20, 19, 62, 86, 204],SNRdB,1)
Pb_s_third_7 = fec.conv_Pb_bound(1/3,14,[1, 0, 20, 0, 53, 0, 184, 0],SNRdB,1)
#Pb_s_third_8 = fec.conv_Pb_bound(1/3,16,[1, 0, 24, 0, 113, 0, 287, 0],SNRdB,1)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc)
semilogy(SNRdB,Pb_s_third_3,'--')
#semilogy(SNRdB,Pb_s_third_4,'--')
semilogy(SNRdB,Pb_s_third_5,'g')
#semilogy(SNRdB,Pb_s_third_6,'--')
semilogy(SNRdB,Pb_s_third_7,'r--')
#semilogy(SNRdB,Pb_s_third_8,'--')
#semilogy(SNRdB,Pb_s_half,'--')
semilogy([0,1,2,3,4,5],[9.08e-02,2.73e-02,6.52e-03,\
8.94e-04,8.54e-05,5e-6],'gs')
axis([0,12,1e-7,1e0])
title(r'Soft Decision Rate 1/3 Coding Measurements')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Symbol Error Probability')
legend(('Uncoded BPSK','R=1/3, K=3, Soft',\
#'R=1/3, K=4, Soft','R=1/3, K=5, Soft',\
'R=1/3, K=5, Soft','R=1/3, K=7, Soft',\
#'R=1/3, K=8, Soft','R=1/2, K=5, Soft', \
'R=1/3, K=5, Simulation'),loc='upper right')
grid();
```
```
[1]:
```
```
%pylab inline
#%matplotlib qt
import sk_dsp_comm.sigsys as ss
import scipy.signal as signal
from IPython.display import Audio, display
from IPython.display import Image, SVG
```
```
Populating the interactive namespace from numpy and matplotlib
```
```
[2]:
```
```
pylab.rcParams['savefig.dpi'] = 100 # default 72
#pylab.rcParams['figure.figsize'] = (6.0, 4.0) # default (6,4)
#%config InlineBackend.figure_formats=['png'] # default for inline viewing
%config InlineBackend.figure_formats=['svg'] # SVG inline viewing
#%config InlineBackend.figure_formats=['pdf'] # render pdf figs for LaTeX
```
```
[3]:
```
```
import scipy.special as special
import sk_dsp_comm.digitalcom as dc
import sk_dsp_comm.fec_block as block
```
#### Block Codes[¶](#Block-Codes)
Block codes take serial source symbols and group them into k-symbol blocks, then append n-k check symbols to form codewords of length n > k. The code is denoted (n,k). The following shows a general block diagram of a block encoder.
The block encoder takes k source bits and encodes them into a length n codeword. A block decoder then works in reverse: the length n channel symbol codewords are decoded back into the original length k source bits.
##### Single Error Correction Block Codes[¶](#Single-Error-Correction-Block-Codes)
Several block codes are able to correct only one error per block. Two common single error correction codes are cyclic codes and Hamming codes. In `scikit-dsp-comm` there is a module called `fec_block.py`. This module contains two classes so far: `FECCyclic` for cyclic codes and `FECHamming` for Hamming codes. Each class has methods for encoding, decoding, and plotting theoretical bit error probability bounds.
###### Cyclic Codes[¶](#Cyclic-Codes)
A (n,k) cyclic code can easily be generated with an n-k stage shift register with appropriate feedback according to Ziemer and Tranter pgs 646 and 647. The following shows a block diagram for a cyclic encoder.
This block diagram can be expanded to larger codes as well. A generator polynomial can be used to determine the position of the binary adders. The previous example uses a generator polynomial of ‘1011’. This means that there is a binary adder after the input, after second shift register, and after the third shift register.
The source symbol length and the channel symbol length can be determined from the number of shift registers \(j\). The length of the generator polynomial is always \(1+j\). In this case we have 3 shift registers, so \(j=3\). We have \(k=4\) source bits and \(n=7\) channel bits. For other shift register lengths, we can use the equations \(n=2^j-1\) and \(k = n-j\) (see the quick check after the table). The following table (from Ziemer and Peterson pg 429) shows the source symbol length, channel symbol length, and the code rate for various shift register lengths for single error correction codes.
| j | k | n | R=k/n |
| --- | --- | --- | --- |
| 3 | 4 | 7 | 0.57 |
| 4 | 11 | 15 | 0.73 |
| 5 | 26 | 31 | 0.84 |
| 6 | 57 | 63 | 0.90 |
| 7 | 120 | 127 | 0.94 |
| 8 | 247 | 255 | 0.97 |
| 9 | 502 | 511 | 0.98 |
| 10 | 1013 | 1023 | 0.99 |
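As a quick check, the table rows follow directly from \(n=2^j-1\) and \(k=n-j\) (pure arithmetic, no library calls):

```
# Reproduce the (j, k, n, R) table entries
for j in range(3,11):
    n = 2**j - 1
    k = n - j
    print('j = %2d, k = %4d, n = %4d, R = %0.2f' % (j, k, n, k/n))
```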
The following block diagram shows a block decoder (from Ziemer and Tranter page 647). The block decoder takes in a codeword of channel symbol length n and decodes it to the original source bits of length k.
The `fec_cyclic` class can be used to generate a cyclic code object. The cyclic code object can be initialized by a generator polynomial. The length of the generator determines the source symbol length, the channel symbol length, and the rate. The following shows the generator polynomial ‘1011’ considered in the two example block diagrams.
```
[4]:
```
```
cc1 = block.FECCyclic('1011')
```
After the cyclic code object `cc1` is created, the `cc1.cyclic_encoder` method can be used to encode source data bits. In the following example, we generate 16 distinct source symbols to get 16 distinct channel symbol codewords using the `cyclic_encoder` method. The `cyclic_encoder` method takes an array of source bits as a parameter. The array of source bits must have a length that is a multiple of \(k\). Otherwise, the method will throw an error.
```
[5]:
```
```
# Generate 16 distinct codewords
codewords = zeros((16,7),dtype=int)
x = zeros((16,4))
for i in range(0,16):
    xbin = block.binary(i,4)
    xbin = array(list(xbin)).astype(int)
    x[i,:] = xbin
x = reshape(x,size(x)).astype(int)
codewords = cc1.cyclic_encoder(x)
print(reshape(codewords,(16,7)))
```
```
[[0 0 0 0 0 0 0]
[0 0 0 1 0 1 1]
[0 0 1 0 1 1 0]
[0 0 1 1 1 0 1]
[0 1 0 0 1 1 1]
[0 1 0 1 1 0 0]
[0 1 1 0 0 0 1]
[0 1 1 1 0 1 0]
[1 0 0 0 1 0 1]
[1 0 0 1 1 1 0]
[1 0 1 0 0 1 1]
[1 0 1 1 0 0 0]
[1 1 0 0 0 1 0]
[1 1 0 1 0 0 1]
[1 1 1 0 1 0 0]
[1 1 1 1 1 1 1]]
```
Now, a bit error is introduced into each of the codewords. Then, the codewords with the errors are decoded using the `cyclic_decoder` method. The `cyclic_decoder` method takes an array of codewords of length \(n\) as a parameter and returns an array of source bits. Even with 1 error introduced into each codeword, all of the original source bits are still decoded properly.
```
[6]:
```
```
# Introduce 1 bit error into each codeword and decode
codewords = reshape(codewords,(16,7))
for i in range(16):
    error_pos = i % 6
    codewords[i,error_pos] = (codewords[i,error_pos] + 1) % 2
codewords = reshape(codewords,size(codewords))
decoded_blocks = cc1.cyclic_decoder(codewords)
print(reshape(decoded_blocks,(16,4)))
```
```
[[0 0 0 0]
[0 0 0 1]
[0 0 1 0]
[0 0 1 1]
[0 1 0 0]
[0 1 0 1]
[0 1 1 0]
[0 1 1 1]
[1 0 0 0]
[1 0 0 1]
[1 0 1 0]
[1 0 1 1]
[1 1 0 0]
[1 1 0 1]
[1 1 1 0]
[1 1 1 1]]
```
The following example generates many random source symbols and encodes them using the cyclic encoder. It then simulates a channel by adding noise, makes hard decisions on each of the incoming bits, and passes the received noisy bits to the cyclic decoder. Decoded source bits are returned and errors are counted until 100 bit errors are received, at which point the bit error probability is calculated. This code can be run at a variety of SNRs and with various code rates.
```
[7]:
```
```
cc1 = block.FECCyclic('101001')
N_blocks_per_frame = 2000
N_bits_per_frame = N_blocks_per_frame*cc1.k
EbN0 = 6
total_bit_errors = 0
total_bit_count = 0
while total_bit_errors < 100:
    # Create random 0/1 bits
    x = randint(0,2,N_bits_per_frame)
    y = cc1.cyclic_encoder(x)
    # Add channel noise to bits and scale to +/- 1
    yn = dc.cpx_awgn(2*y-1,EbN0-10*log10(cc1.n/cc1.k),1) # Channel SNR is 10*log10(n/k) dB less
    # Scale back to 0 and 1
    yn = ((sign(yn.real)+1)/2).astype(int)
    z = cc1.cyclic_decoder(yn)
    # Count bit errors
    bit_count, bit_errors = dc.bit_errors(x,z)
    total_bit_errors += bit_errors
    total_bit_count += bit_count
    print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
          (total_bit_count, total_bit_errors,\
           total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
```
```
Bits Received = 52000, Bit errors = 27, BEP = 5.19e-04
Bits Received = 104000, Bit errors = 83, BEP = 7.98e-04
Bits Received = 156000, Bit errors = 141, BEP = 9.04e-04
*****************************************************
Bits Received = 156000, Bit errors = 141, BEP = 9.04e-04
```
There is a function in the `fec_block` module called `block_single_error_Pb_bound` that can be used to generate the theoretical bit error probability bounds for single error correction block codes. Measured bit error probabilities from the previous example were recorded to compare to the bounds.
```
[8]:
```
```
SNRdB = arange(0,12,.1)
#SNRdB = arange(9.4,9.6,0.1)
Pb_uc = block.block_single_error_Pb_bound(3,SNRdB,False)
Pb_c_3 = block.block_single_error_Pb_bound(3,SNRdB)
Pb_c_4 = block.block_single_error_Pb_bound(4,SNRdB)
Pb_c_5 = block.block_single_error_Pb_bound(5,SNRdB)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc,'k-')
semilogy(SNRdB,Pb_c_3,'c--')
semilogy(SNRdB,Pb_c_4,'m--')
semilogy(SNRdB,Pb_c_5,'g--')
semilogy([4,5,6,7,8,9],[1.44e-2,5.45e-3,2.37e-3,6.63e-4,1.33e-4,1.31e-5],'cs')
semilogy([5,6,7,8],[4.86e-3,1.16e-3,2.32e-4,2.73e-5],'ms')
semilogy([5,6,7,8],[4.31e-3,9.42e-4,1.38e-4,1.15e-5],'gs')
axis([0,12,1e-10,1e0])
title('Cyclic code BEP')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Bit Error Probability')
legend(('Uncoded BPSK','(7,4), hard',\
'(15,11), hard', '(31,26), hard',\
'(7,4) sim', '(15,11) sim', \
'(31,26) sim'),loc='lower left')
grid();
```
These plots show that the simulated bit error probability is very close to the theoretical bit error probabilities.
###### Hamming Code[¶](#Hamming-Code)
Hamming codes are another form of single error correction block codes. Hamming codes use parity checks in order to generate and decode block codes. The code rates of Hamming codes are computed the same way as for cyclic codes. In this case a number of parity checks \(j\) is chosen, and n and k are calculated by \(n=2^j-1\) and \(k=n-j\). Hamming codes are generated first by defining a parity-check matrix \(H\). The parity-check matrix is a j x n matrix containing the binary numbers from 1 to n as its columns. For a \(j=3\) (\(k=4\), \(n=7\)) Hamming code, the parity-check matrix starts out as the following:
\begin{equation}
\mathbf{H} = \left[\begin{array}
{rrrrrrr}
0 & 0 & 0 & 1 & 1 & 1 & 1\\
0 & 1 & 1 & 0 & 0 & 1 & 1\\
1 & 0 & 1 & 0 & 1 & 0 & 1
\end{array}\right]
\end{equation}
The parity-check matrix can be reordered to provide a systematic code by interchanging the columns to create an identity matrix on the right side of the matrix. In this case, this is done by interchanging columns 1 and 7, columns 2 and 6, and columns 4 and 5. The resulting parity-check matrix is the following.
\begin{equation}
\mathbf{H} = \left[\begin{array}
{rrrrrrr}
1 & 1 & 0 & 1 & 1 & 0 & 0\\
1 & 1 & 1 & 0 & 0 & 1 & 0\\
1 & 0 & 1 & 1 & 0 & 0 & 1
\end{array}\right]
\end{equation}
Next, a generator matrix \(G\) is created by restructuring the parity-check matrix. The \(G\) matrix is obtained from the \(H\) matrix through the following relationship.
\begin{equation}
\mathbf{G} = \left[\begin{array}
{cc}
I_k & H_p
\end{array}\right]
\end{equation}
where \(H_p\) is defined as the transpose of the first k columns of H. For this example we arrive at the following \(G\) matrix. G always ends up being a k x n matrix.
\begin{equation}
\mathbf{G} = \left[\begin{array}
{rrrrrrr}
1 & 0 & 0 & 0 & 1 & 1 & 1\\
0 & 1 & 0 & 0 & 1 & 1 & 0\\
0 & 0 & 1 & 0 & 0 & 1 & 1\\
0 & 0 & 0 & 1 & 1 & 0 & 1
\end{array}\right]
\end{equation}
Codewords can be generated by multiplying a source symbol vector by the generator matrix.
\begin{equation}
codeword = xG
\end{equation}
where the codeword is a vector of length \(n\) and x is a source vector of length \(k\). This is the basic operation of the encoder. The decoder is slightly more complicated. The decoder starts by taking the parity-check matrix \(H\) and multiplying it by the codeword column vector. This gives the “syndrome” of the block. The syndrome tells us whether or not there is an error in the codeword. If no errors are present, the syndrome will be 0. If there is an error in the codeword, the syndrome will tell us which bit has the error.
\begin{equation}
S = H \cdot codeword
\end{equation}
If the syndrome is nonzero, then it can be used to correct the error bit in the codeword. After that, the original source blocks can be decoded from the codewords by the following equation.
\begin{equation}
source = R \cdot codeword
\end{equation}
where \(R\) is a k x n matrix made up of a k x k identity matrix and a k x (n-k) matrix of zeros. Again, the Hamming code is only capable of correcting one error per block, so if more than one error is present in the block, the syndrome cannot be used to correct the error.
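To make the encode/syndrome/correct cycle concrete, here is a minimal NumPy sketch built from the \(G\) and \(H\) of this example (using the `%pylab` namespace from the cells above; it illustrates the math and is independent of the `fec_block` classes):

```
# Encode one block, flip one bit, and correct it via the syndrome
G = array([[1,0,0,0,1,1,1],
           [0,1,0,0,1,1,0],
           [0,0,1,0,0,1,1],
           [0,0,0,1,1,0,1]])
H = array([[1,1,0,1,1,0,0],
           [1,1,1,0,0,1,0],
           [1,0,1,1,0,0,1]])
x = array([1,0,1,1])        # source block of length k = 4
c = dot(x,G) % 2            # codeword = xG (mod 2)
r = c.copy()
r[2] ^= 1                   # channel flips one bit
s = dot(H,r) % 2            # syndrome S = H.codeword
err_pos = where((H.T == s).all(axis=1))[0][0]  # column of H matching S
r[err_pos] ^= 1             # correct the flipped bit
print(all(r[:4] == x))      # systematic code: source = first k bits
```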
The Hamming code class can be found in the `fec_block` module as `FECHamming`. Hamming codes are sometimes generated using generator polynomials, just like cyclic codes. This is not strictly necessary, however, if the previously described process is used: one simply chooses a number of parity bits, and systematic single-error-correcting Hamming codes are generated automatically. The following will go through an example of a \(j=3\) (\(k=4\), \(n=7\)) Hamming code.
Hamming Block Code Class Definition:
```
[9]:
```
```
hh1 = block.FECHamming(3)
```
\(k\) and \(n\) are calculated from the number of parity checks \(j\) and can be accessed by `hh1.k` and `hh1.n`. The \(j\) x \(n\) parity-check matrix \(H\) and the \(k\) x \(n\) generator matrix \(G\) can be accessed by `hh1.H` and `hh1.G`. These are exactly as described previously.
```
[10]:
```
```
print('k = ' + str(hh1.k))
print('n = ' + str(hh1.n))
print('H = \n' + str(hh1.H))
print('G = \n' + str(hh1.G))
```
```
k = 4
n = 7
H =
[[1 1 0 1 1 0 0]
[1 1 1 0 0 1 0]
[1 0 1 1 0 0 1]]
G =
[[1 0 0 0 1 1 1]
[0 1 0 0 1 1 0]
[0 0 1 0 0 1 1]
[0 0 0 1 1 0 1]]
```
The `FECHamming` class has an encoder method called `hamm_encoder`. This method works the same way as the cyclic encoder. It takes an array of source bits with a length that is a multiple of \(k\) and returns an array of codewords. The class has another method called `hamm_decoder` which can decode an array of codewords; the array of codewords must have a length that is a multiple of \(n\). The following example generates random source bits, encodes them using a Hamming encoder, simulates transmitting them over a channel, uses hard decisions after the receiver to get a received array of codewords, and decodes the codewords using the Hamming decoder. It runs until it counts 100 bit errors and then calculates the bit error probability. This can be used to simulate Hamming codes with different rates (different numbers of parity checks) at different SNRs.
```
[11]:
```
```
hh1 = block.FECHamming(5)
N_blocks_per_frame = 20000
N_bits_per_frame = N_blocks_per_frame*hh1.k
EbN0 = 8
total_bit_errors = 0
total_bit_count = 0
while total_bit_errors < 100:
    # Create random 0/1 bits
    x = randint(0,2,N_bits_per_frame)
    y = hh1.hamm_encoder(x)
    # Add channel noise to bits and scale to +/- 1
    yn = dc.cpx_awgn(2*y-1,EbN0-10*log10(hh1.n/hh1.k),1) # Channel SNR is 10*log10(n/k) dB less
    # Scale back to 0 and 1
    yn = ((sign(yn.real)+1)/2).astype(int)
    z = hh1.hamm_decoder(yn)
    # Count bit errors
    bit_count, bit_errors = dc.bit_errors(x,z)
    total_bit_errors += bit_errors
    total_bit_count += bit_count
    print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
          (total_bit_count, total_bit_errors,\
           total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
```
```
Bits Received = 520000, Bit errors = 8, BEP = 1.54e-05
Bits Received = 1040000, Bit errors = 20, BEP = 1.92e-05
Bits Received = 1560000, Bit errors = 26, BEP = 1.67e-05
Bits Received = 2080000, Bit errors = 26, BEP = 1.25e-05
Bits Received = 2600000, Bit errors = 34, BEP = 1.31e-05
Bits Received = 3120000, Bit errors = 42, BEP = 1.35e-05
Bits Received = 3640000, Bit errors = 48, BEP = 1.32e-05
Bits Received = 4160000, Bit errors = 50, BEP = 1.20e-05
Bits Received = 4680000, Bit errors = 56, BEP = 1.20e-05
Bits Received = 5200000, Bit errors = 63, BEP = 1.21e-05
Bits Received = 5720000, Bit errors = 66, BEP = 1.15e-05
Bits Received = 6240000, Bit errors = 76, BEP = 1.22e-05
Bits Received = 6760000, Bit errors = 77, BEP = 1.14e-05
Bits Received = 7280000, Bit errors = 86, BEP = 1.18e-05
Bits Received = 7800000, Bit errors = 96, BEP = 1.23e-05
Bits Received = 8320000, Bit errors = 107, BEP = 1.29e-05
*****************************************************
Bits Received = 8320000, Bit errors = 107, BEP = 1.29e-05
```
The `fec_block.block_single_error_Pb_bound` function can also be used to generate the bit error probability bounds for hamming codes. The following example generates theoretical bit error probability bounds for hamming codes and compares it with simulated bit error probabilities from the previous examples.
```
[12]:
```
```
SNRdB = arange(0,12,.1)
Pb_uc = block.block_single_error_Pb_bound(3,SNRdB,False)
Pb_c_3 = block.block_single_error_Pb_bound(3,SNRdB)
Pb_c_4 = block.block_single_error_Pb_bound(4,SNRdB)
Pb_c_5 = block.block_single_error_Pb_bound(5,SNRdB)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc,'k-')
semilogy(SNRdB,Pb_c_3,'c--')
semilogy(SNRdB,Pb_c_4,'m--')
semilogy(SNRdB,Pb_c_5,'g--')
semilogy([5,6,7,8,9,10],[6.64e-3,2.32e-3,5.25e-4,1.16e-4,1.46e-5,1.19e-6],'cs')
semilogy([5,6,7,8,9],[4.68e-3,1.19e-3,2.48e-4,3.6e-5,1.76e-6],'ms')
semilogy([5,6,7,8,9],[4.42e-3,1.11e-3,1.41e-4,1.43e-5,6.73e-7],'gs')
axis([0,12,1e-10,1e0])
title('Hamming code BEP')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Bit Error Probability')
legend(('Uncoded BPSK','(7,4), hard',\
'(15,11), hard', '(31,26), hard',\
'(7,4) sim', '(15,11) sim', \
'(31,26) sim'),loc='lower left')
grid();
```
##### Multiple Error Correction Block Codes[¶](#Multiple-Error-Correction-Block-Codes)
Other block codes are capable of correcting multiple errors per block. Golay codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, and Reed-Solomon codes are all capable of correcting multiple errors. These codes have not been developed yet, but they will be the next codes to be added to the `fec_block` module.
###### Golay Code[¶](#Golay-Code)
Golay codes are capable of correcting three errors in a block of 23 symbols. Golay codes are one of the few known “perfect” codes, where all error patterns with Hamming weight \(t\) or less and no error patterns with weight greater than \(t\) are correctable using a minimum-distance maximum-likelihood decoder. Golay codes are discussed in detail in Ziemer and Peterson pgs 448-450.
###### Bose-Chaudhuri-Hocquenghem (BCH) Codes[¶](#Bose-Chaudhuri-Hocquenghem-(BCH)-Codes)
BCH codes are very important because they exist for a wide range of rates, can achieve significant coding gain, and decoders can be implemented even at high speeds. BCH codes are described in detail in Ziemer and Peterson pgs 436-444.
###### Reed-Solomon Codes[¶](#Reed-Solomon-Codes)
RS codes are nonbinary BCH codes that use input and output alphabets having \(2^m\) symbols, {\(0,1,2,...,2^m-1\)}. Block length is \(n=2^m-1\) and can be extended to \(n=2^m\) or \(n=2^m+1\). Reed-Solomon codes are useful in burst communications. Reed-Solomon codes are discussed in detail in Ziemer and Peterson pgs 444-447.
### coeff2header[¶](#module-sk_dsp_comm.coeff2header)
Digital Filter Coefficient Conversion to C Header Files
Copyright (c) March 2017, <NAME> All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies,
either expressed or implied, of the FreeBSD Project.
sk_dsp_comm.coeff2header.ca_code_header(*fname_out*, *Nca*)[[source]](_modules/sk_dsp_comm/coeff2header.html#ca_code_header)[¶](#sk_dsp_comm.coeff2header.ca_code_header)
Write 1023 bit CA (Gold) Code Header Files
<NAME> February 2015
sk_dsp_comm.coeff2header.fir_fix_header(*fname_out*, *h*)[[source]](_modules/sk_dsp_comm/coeff2header.html#fir_fix_header)[¶](#sk_dsp_comm.coeff2header.fir_fix_header)
Write FIR Fixed-Point Filter Header Files
<NAME> February 2015
sk_dsp_comm.coeff2header.fir_header(*fname_out*, *h*)[[source]](_modules/sk_dsp_comm/coeff2header.html#fir_header)[¶](#sk_dsp_comm.coeff2header.fir_header)
Write FIR Filter Header Files
<NAME> February 2015
sk_dsp_comm.coeff2header.freqz_resp_list(*b*, *a=array([1])*, *mode='dB'*, *fs=1.0*, *n_pts=1024*, *fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/coeff2header.html#freqz_resp_list)[¶](#sk_dsp_comm.coeff2header.freqz_resp_list)
A method for displaying digital filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib
Parameters
**b**ndarray of numerator coefficients
**a**ndarray of denominator coefficients
**mode**display mode: ‘dB’ magnitude, ‘phase’ in radians, or ‘groupdelay_s’ in samples and ‘groupdelay_t’ in sec,
all versus frequency in Hz
**n_pts**number of points to plot; default is 1024
**fsize**figure size; default is (6,4) inches
**<NAME>, January 2015**
sk_dsp_comm.coeff2header.iir_sos_header(*fname_out*, *SOS_mat*)[[source]](_modules/sk_dsp_comm/coeff2header.html#iir_sos_header)[¶](#sk_dsp_comm.coeff2header.iir_sos_header)
Write IIR SOS Header Files. The file format is compatible with CMSIS-DSP IIR
Direct Form II Filter Functions
<NAME> March 2015-October 2016
### digitalcom[¶](#module-sk_dsp_comm.digitalcom)
Digital Communications Function Module
Copyright (c) March 2017, <NAME> All rights reserved.
sk_dsp_comm.digitalcom.awgn_channel(*x_bits*, *eb_n0_dB*)[[source]](_modules/sk_dsp_comm/digitalcom.html#awgn_channel)[¶](#sk_dsp_comm.digitalcom.awgn_channel)
Parameters
**x_bits**serial bit stream of 0/1 values.
**eb_n0_dB**Energy per bit to noise power density ratio in dB of the serial bit stream sent through the AWGN channel. Frequently we equate EBN0 to SNR in link budget calculations.
Returns
**y_bits**Received serial bit stream following hard decisions. This bit will have bit errors. To check the estimated bit error probability use `BPSK_BEP()` or simply:
```
>>> Pe_est = sum(x_bits ^ y_bits)/len(x_bits)
```
<NAME>, March 2015
sk_dsp_comm.digitalcom.bin2gray(*d_word*, *b_width*)[[source]](_modules/sk_dsp_comm/digitalcom.html#bin2gray)[¶](#sk_dsp_comm.digitalcom.bin2gray)
Convert integer bit words to gray encoded binary words via Gray coding starting from the MSB to the LSB
<NAME> November 2018
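A short round-trip sketch (assuming, consistent with the description above, that a single integer word is converted at a time):

```
>>> from sk_dsp_comm import digitalcom as dc
>>> g = dc.bin2gray(5, 3)  # Gray-code the 3-bit word 101
>>> dc.gray2bin(g, 3)      # Gray decoding recovers 5
```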
sk_dsp_comm.digitalcom.bit_errors(*tx_data*, *rx_data*, *n_corr=1024*, *n_transient=0*)[[source]](_modules/sk_dsp_comm/digitalcom.html#bit_errors)[¶](#sk_dsp_comm.digitalcom.bit_errors)
Count bit errors between a transmitted and received BPSK signal.
Time delay between streams is detected as well as ambiguity resolution due to carrier phase lock offsets of \(k*\pi\), k=0,1.
sk_dsp_comm.digitalcom.bpsk_bep(*tx_data*, *rx_data*, *n_corr=1024*, *n_transient=0*)[[source]](_modules/sk_dsp_comm/digitalcom.html#bpsk_bep)[¶](#sk_dsp_comm.digitalcom.bpsk_bep)
Count bit errors between a transmitted and received BPSK signal.
Time delay between streams is detected as well as ambiguity resolution due to carrier phase lock offsets of \(k*\pi\), k=0,1.
The ndarray tx_data is Tx +/-1 symbols as real numbers I.
The ndarray rx_data is Rx +/-1 symbols as real numbers I.
Note: Ncorr needs to be even
sk_dsp_comm.digitalcom.bpsk_tx(*n_bits*, *ns*, *ach_fc=2.0*, *ach_lvl_dB=- 100*, *pulse='rect'*, *alpha=0.25*, *m=6*)[[source]](_modules/sk_dsp_comm/digitalcom.html#bpsk_tx)[¶](#sk_dsp_comm.digitalcom.bpsk_tx)
Generates biphase shift keyed (BPSK) transmitter with adjacent channel interference.
Generates three BPSK signals with rectangular or square root raised cosine (SRC)
pulse shaping of duration N_bits and Ns samples per bit. The desired signal is centered on f = 0, while the adjacent channel signals to the left and right are generated at a dB level, set by ach_lvl_dB, relative to the desired signal. Used in the
digital communications Case Study supplement.
Parameters
**n_bits**the number of bits to simulate
**ns**the number of samples per bit
**ach_fc**the frequency offset of the adjacent channel signals (default 2.0)
**ach_lvl_dB**the level of the adjacent channel signals in dB (default -100)
**pulse**the pulse shape ‘rect’ or ‘src’
**alpha**square root raised cosine pulse shape factor (default = 0.25)
**m**square root raised cosine pulse truncation factor (default = 6)
Returns
**x**ndarray of the composite signal x0 + ach_lvl*(x1p + x1m)
**b**the transmit pulse shape
**data0**the data bits used to form the desired signal; used for error checking
Examples
```
>>> x,b,data0 = bpsk_tx(1000,10,pulse='src')
```
sk_dsp_comm.digitalcom.chan_est_equalize(*z*, *npbp*, *alpha*, *ht=None*)[[source]](_modules/sk_dsp_comm/digitalcom.html#chan_est_equalize)[¶](#sk_dsp_comm.digitalcom.chan_est_equalize)
This is a helper function for `OFDM_rx()` to unpack pilot blocks from the entire set of received OFDM symbols (the Nf of N filled carriers only); then estimate the channel array H recursively,
and finally apply H_hat to Y, i.e., X_hat = Y/H_hat carrier-by-carrier. Note if Np = -1, then H_hat = H, the true channel.
Parameters
**z**Input N_OFDM x Nf 2D array containing pilot blocks and OFDM data symbols.
**npbp**The pilot block period; if -1 use the known channel impulse response input to ht.
**alpha**The forgetting factor used to recursively estimate H_hat
**ht**The theoretical channel frequency response to allow ideal equalization provided Ncp is adequate.
Returns
**zz_out**The input z with the pilot blocks removed and one-tap equalization applied to each of the Nf carriers.
**H**The channel estimate in the frequency domain; an array of length Nf; will return Ht if provided as an input.
Examples
```
>>> from sk_dsp_comm.digitalcom import chan_est_equalize
>>> zz_out,H = chan_est_equalize(z,npbp,alpha,ht=None)
```
sk_dsp_comm.digitalcom.eye_plot(*x*, *l*, *s=0*)[[source]](_modules/sk_dsp_comm/digitalcom.html#eye_plot)[¶](#sk_dsp_comm.digitalcom.eye_plot)
Eye pattern plot of a baseband digital communications waveform.
The signal must be real, but can be multivalued in terms of the underlying modulation scheme. Used for BPSK eye plots in the Case Study article.
Parameters
**x**ndarray of the real input data vector/array
**l**display length in samples (usually two symbols)
**s**start index
Returns
**None**A plot window opens containing the eye plot
Notes
Increase S to eliminate filter transients.
Examples
1000 bits at 10 samples per bit with ‘rc’ shaping.
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import digitalcom as dc
>>> x,b, data = dc.nrz_bits(1000,10,'rc')
>>> dc.eye_plot(x,20,60)
>>> plt.show()
```
([Source code](.//digitalcom-1.py))
sk_dsp_comm.digitalcom.farrow_resample(*x*, *fs_old*, *fs_new*)[[source]](_modules/sk_dsp_comm/digitalcom.html#farrow_resample)[¶](#sk_dsp_comm.digitalcom.farrow_resample)
Parameters
**x**Input list representing a signal vector needing resampling.
**fs_old**Starting/old sampling frequency.
**fs_new**New sampling frequency.
Returns
**y**List representing the signal vector resampled at the new frequency.
Notes
A cubic interpolator using a Farrow structure is used to resample the input data at a new sampling rate that may be an irrational multiple of the input sampling rate.
Time alignment can be found for an integer value M, found with the following:
\[f_{s,out} = f_{s,in} (M - 1) / M\]
The filter coefficients used here and a more comprehensive listing can be found in <NAME>, <NAME>, & <NAME>, “Digital Communication
Receivers,” Wiley, 1998, Chapter 9, pp. 521-523.
Another good paper on variable interpolators is: <NAME>, <NAME>, &
<NAME>, “Interpolation in Digital Modems–Part II: Implementation and Performance,” IEEE Comm. Trans., June 1993, pp. 998-1008.
A founding paper on the subject of interpolators is: <NAME>, “A Continuously variable Digital Delay Element,” Proceedings of the IEEE Intern. Symp. on Circuits Syst., pp. 2641-2645, June 1988.
<NAME> April 2003, recoded to Python November 2013
Examples
The following example uses a QPSK signal with rc pulse shaping, and time alignment at M = 15.
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm import digitalcom as dc
>>> Ns = 8
>>> Rs = 1.
>>> fsin = Ns*Rs
>>> Tsin = 1 / fsin
>>> N = 200
>>> ts = 1
>>> x, b, data = dc.mpsk_bb(N+12, Ns, 4, 'rc')
>>> x = x[12*Ns:]
>>> xxI = x.real
>>> M = 15
>>> fsout = fsin * (M-1) / M
>>> Tsout = 1. / fsout
>>> xI = dc.farrow_resample(xxI, fsin, fsin)
>>> tx = arange(0, len(xI)) / fsin
>>> yI = dc.farrow_resample(xxI, fsin, fsout)
>>> ty = arange(0, len(yI)) / fsout
>>> plt.plot(tx - Tsin, xI)
>>> plt.plot(tx[ts::Ns] - Tsin, xI[ts::Ns], 'r.')
>>> plt.plot(ty[ts::Ns] - Tsout, yI[ts::Ns], 'g.')
>>> plt.title(r'Impact of Asynchronous Sampling')
>>> plt.ylabel(r'Real Signal Amplitude')
>>> plt.xlabel(r'Symbol Rate Normalized Time')
>>> plt.xlim([0, 20])
>>> plt.grid()
>>> plt.show()
```
([Source code](.//digitalcom-2.py))
sk_dsp_comm.digitalcom.from_bin(*bin_array*)[[source]](_modules/sk_dsp_comm/digitalcom.html#from_bin)[¶](#sk_dsp_comm.digitalcom.from_bin)
Convert a binary array back to a nonnegative integer. The array length is the bit width. The first input index holds the MSB and the last holds the LSB.
sk_dsp_comm.digitalcom.gmsk_bb(*n_bits*, *ns*, *msk=0*, *bt=0.35*)[[source]](_modules/sk_dsp_comm/digitalcom.html#gmsk_bb)[¶](#sk_dsp_comm.digitalcom.gmsk_bb)
MSK/GMSK Complex Baseband Modulation
x,data = gmsk_bb(N_bits, Ns, BT = 0.35, MSK = 0)
Parameters
**n_bits**number of symbols processed
**ns**the number of samples per bit
**msk**0 for no shaping, which is standard MSK; msk != 0 generates GMSK.
**bt**premodulation Bb*T product which sets the bandwidth of the Gaussian lowpass filter
**<NAME> Python version November 2014**
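A minimal usage sketch, assuming the returns are the complex baseband signal and the underlying data bits as the call form above suggests:

```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import digitalcom as dc
>>> x, data = dc.gmsk_bb(500, 8)                   # MSK (no Gaussian shaping)
>>> xg, datag = dc.gmsk_bb(500, 8, msk=1, bt=0.35) # GMSK with BT = 0.35
>>> plt.psd(x, 2**10, 8);
>>> plt.psd(xg, 2**10, 8);
>>> plt.show()
```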
sk_dsp_comm.digitalcom.gray2bin(*d_word*, *b_width*)[[source]](_modules/sk_dsp_comm/digitalcom.html#gray2bin)[¶](#sk_dsp_comm.digitalcom.gray2bin)
Convert gray encoded binary words to integer bit words via Gray decoding starting from the MSB to the LSB
<NAME> November 2018
sk_dsp_comm.digitalcom.mpsk_bb(*n_symb*, *ns*, *mod*, *pulse='rect'*, *alpha=0.25*, *m=6*)[[source]](_modules/sk_dsp_comm/digitalcom.html#mpsk_bb)[¶](#sk_dsp_comm.digitalcom.mpsk_bb)
Generate a complex baseband MPSK signal with pulse shaping.
Parameters
**n_symb**number of MPSK symbols to produce
**ns**the number of samples per bit,
**mod**MPSK modulation order, e.g., 4, 8, 16, …
**pulse**‘rect’ , ‘rc’, ‘src’ (default ‘rect’)
**alpha**excess bandwidth factor(default 0.25)
**m**single sided pulse duration (default = 6)
Returns
**x**ndarray of the MPSK signal values
**b**ndarray of the pulse shape
**data**ndarray of the underlying data bits
Notes
Pulse shapes include ‘rect’ (rectangular), ‘rc’ (raised cosine),
‘src’ (root raised cosine). The actual pulse length is 2*M+1 samples.
This function is used by BPSK_tx in the Case Study article.
Examples
```
>>> from sk_dsp_comm import digitalcom as dc
>>> import scipy.signal as signal
>>> import matplotlib.pyplot as plt
>>> x,b,data = dc.mpsk_bb(500,10,8,'src',0.35)
>>> # Matched filter received signal x
>>> y = signal.lfilter(b,1,x)
>>> plt.plot(y.real[12*10:],y.imag[12*10:])
>>> plt.xlabel('In-Phase')
>>> plt.ylabel('Quadrature')
>>> plt.axis('equal')
>>> # Sample once per symbol
>>> plt.plot(y.real[12*10::10],y.imag[12*10::10],'r.')
>>> plt.show()
```
([Source code](.//digitalcom-3.py))
sk_dsp_comm.digitalcom.mpsk_bep_thy(*snr_dB*, *mod*, *eb_n0_mode=True*)[[source]](_modules/sk_dsp_comm/digitalcom.html#mpsk_bep_thy)[¶](#sk_dsp_comm.digitalcom.mpsk_bep_thy)
Approximate the bit error probability of MPSK assuming Gray encoding
<NAME> November 2018
sk_dsp_comm.digitalcom.mpsk_gray_decode(*x_hat*, *mod=4*)[[source]](_modules/sk_dsp_comm/digitalcom.html#mpsk_gray_decode)[¶](#sk_dsp_comm.digitalcom.mpsk_gray_decode)
Decode MPSK IQ symbols to a serial bit stream using gray2bin decoding
Parameters
**x_hat**symbol spaced samples of the MPSK waveform taken at the maximum eye opening. Normally this is following the matched filter
**mod**Modulation scheme
**<NAME> November 2018**
sk_dsp_comm.digitalcom.mpsk_gray_encode_bb(*n_symb*, *ns*, *mod=4*, *pulse='rect'*, *alpha=0.35*, *m_span=6*, *ext_data=None*)[[source]](_modules/sk_dsp_comm/digitalcom.html#mpsk_gray_encode_bb)[¶](#sk_dsp_comm.digitalcom.mpsk_gray_encode_bb)
MPSK_gray_bb: A gray code mapped MPSK complex baseband transmitter x,b,tx_data = MPSK_gray_bb(K,Ns,M)
Parameters
**n_symb**the number of symbols to process
**ns**number of samples per symbol
**mod**modulation order: 2, 4, 8, 16 MPSK
**alpha**squareroot raised cosine excess bandwidth factor. Can range over 0 < alpha < 1.
**pulse**‘rect’, ‘src’, or ‘rc’
Returns
**x**complex baseband digital modulation
**b**transmitter shaping filter, rectangle or SRC
**tx_data**xI+1j*xQ = inphase symbol sequence + 1j*quadrature symbol sequence <NAME> November 2018
sk_dsp_comm.digitalcom.mux_pilot_blocks(*iq_data*, *npb*)[[source]](_modules/sk_dsp_comm/digitalcom.html#mux_pilot_blocks)[¶](#sk_dsp_comm.digitalcom.mux_pilot_blocks)
Parameters
**iq_data**a 2D array of input QAM symbols with the columns representing the NF carrier frequencies and each row the QAM symbols used to form an OFDM symbol
**npb**the period of the pilot blocks; e.g., a pilot block is inserted every Np OFDM symbols (Np-1 OFDM data symbols of width Nf are inserted in between the pilot blocks).
Returns
**IQ_datap**IQ_data with pilot blocks inserted
See also
`OFDM_tx`
Notes
A helper function called by `OFDM_tx()` that inserts pilot block for use in channel estimation when a delay spread channel is present.
sk_dsp_comm.digitalcom.my_psd(*x*, *NFFT=1024*, *Fs=1*)[[source]](_modules/sk_dsp_comm/digitalcom.html#my_psd)[¶](#sk_dsp_comm.digitalcom.my_psd)
A local version of matplotlib’s PSD function that returns the plot arrays.
A mlab.psd wrapper function that returns two ndarrays;
makes no attempt to auto plot anything.
Parameters
**x**ndarray input signal
**NFFT**a power of two, e.g., 2**10 = 1024
**Fs**the sampling rate in Hz
Returns
**Px**ndarray of the power spectrum estimate
**f**ndarray of frequency values
Notes
This function makes it easier to overlay spectrum plots because you have better control over the axis scaling than when using psd()
in the autoscale mode.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import digitalcom as dc
>>> from numpy import log10
>>> x,b, data = dc.nrz_bits(10000,10)
>>> Px,f = dc.my_psd(x,2**10,10)
>>> plt.plot(f, 10*log10(Px))
>>> plt.show()
```
([Source code](.//digitalcom-4.py))
sk_dsp_comm.digitalcom.ofdm_rx(*x*, *nf*, *nc*, *npb=0*, *cp=False*, *ncp=0*, *alpha=0.95*, *ht=None*)[[source]](_modules/sk_dsp_comm/digitalcom.html#ofdm_rx)[¶](#sk_dsp_comm.digitalcom.ofdm_rx)
Parameters
**x**Received complex baseband OFDM signal
**nf**Number of filled carriers, must be even and Nf < N
**nc**Total number of carriers; generally a power 2, e.g., 64, 1024, etc
**npb**Period of pilot code blocks; 0 <=> no pilots; -1 <=> use the ht impulse response input to equalize the OFDM symbols; note equalization still requires Ncp > 0 to work on a delay spread channel.
**cp**False/True <=> if False assume no CP is present
**ncp**The length of the cyclic prefix
**alpha**The filter forgetting factor in the channel estimator. Typically alpha is 0.9 to 0.99.
**ht**Input the known theoretical channel impulse response
Returns
**z_out**Recovered complex baseband QAM symbols as a serial stream; as appropriate channel estimation has been applied.
**H**channel estimate (in the frequency domain at each subcarrier)
See also
`OFDM_tx`
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import digitalcom as dc
>>> from scipy import signal
>>> from numpy import array
>>> hc = array([1.0, 0.1, -0.05, 0.15, 0.2, 0.05]) # impulse response spanning five symbols
>>> # Quick example using the above channel with no cyclic prefix
>>> x1,b1,IQ_data1 = dc.qam_bb(50000,1,'16qam')
>>> x_out = dc.ofdm_tx(IQ_data1,32,64,0,True,0)
>>> c_out = signal.lfilter(hc,1,x_out) # Apply channel distortion
>>> r_out = dc.cpx_awgn(c_out,100,64/32) # Es/N0 = 100 dB
>>> z_out,H = dc.ofdm_rx(r_out,32,64,-1,True,0,alpha=0.95,ht=hc)
>>> plt.plot(z_out[200:].real,z_out[200:].imag,'.')
>>> plt.xlabel('In-Phase')
>>> plt.ylabel('Quadrature')
>>> plt.axis('equal')
>>> plt.grid()
>>> plt.show()
```
Another example with noise using a 10 symbol cyclic prefix and channel estimation:
```
>>> x_out = dc.ofdm_tx(IQ_data1,32,64,100,True,10)
>>> c_out = signal.lfilter(hc,1,x_out) # Apply channel distortion
>>> r_out = dc.cpx_awgn(c_out,25,64/32) # Es/N0 = 25 dB
>>> z_out,H = dc.ofdm_rx(r_out,32,64,100,True,10,alpha=0.95,ht=hc);
>>> plt.figure() # if channel estimation is turned on need this
>>> plt.plot(z_out[-2000:].real,z_out[-2000:].imag,'.') # allow settling time
>>> plt.xlabel('In-Phase')
>>> plt.ylabel('Quadrature')
>>> plt.axis('equal')
>>> plt.grid()
>>> plt.show()
```
([Source code](.//digitalcom-5.py))
sk_dsp_comm.digitalcom.ofdm_tx(*iq_data*, *nf*, *nc*, *npb=0*, *cp=False*, *ncp=0*)[[source]](_modules/sk_dsp_comm/digitalcom.html#ofdm_tx)[¶](#sk_dsp_comm.digitalcom.ofdm_tx)
Parameters
**iq_data**+/-1, +/-3, etc complex QAM symbol sample inputs
**nf**number of filled carriers, must be even and Nf < N
**nc**total number of carriers; generally a power 2, e.g., 64, 1024, etc
**npb**Period of pilot code blocks; 0 <=> no pilots
**cp**False/True <=> bypass cp insertion entirely if False
**ncp**the length of the cyclic prefix
Returns
**x_out**complex baseband OFDM waveform output after P/S and CP insertion
See also
`OFDM_rx`
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import digitalcom as dc
>>> x1,b1,IQ_data1 = dc.qam_bb(50000,1,'16qam')
>>> x_out = dc.ofdm_tx(IQ_data1,32,64)
>>> plt.psd(x_out,2**10,1);
>>> plt.xlabel(r'Normalized Frequency ($\omega/(2\pi)=f/f_s$)')
>>> plt.ylim([-40,0])
>>> plt.xlim([-.5,.5])
>>> plt.show()
```
([Source code](.//digitalcom-6.py))
sk_dsp_comm.digitalcom.pcm_decode(*x_bits*, *n_bits*)[[source]](_modules/sk_dsp_comm/digitalcom.html#pcm_decode)[¶](#sk_dsp_comm.digitalcom.pcm_decode)
Parameters
**x_bits**serial bit stream of 0/1 values. The length ofx_bits must be a multiple of N_bits
**n_bits**bit precision of PCM samples
Returns
**xhat**decoded PCM signal samples
<NAME>, March 2015
sk_dsp_comm.digitalcom.pcm_encode(*x*, *n_bits*)[[source]](_modules/sk_dsp_comm/digitalcom.html#pcm_encode)[¶](#sk_dsp_comm.digitalcom.pcm_encode)
Parameters
**x**signal samples to be PCM encoded
**n_bits**bit precision of PCM samples
Returns
**x_bits**encoded serial bit stream of 0/1 values. MSB first.
<NAME>, March 2015
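A round-trip sketch (assuming the input samples are scaled to roughly [-1, 1]):

```
>>> from numpy import arange, cos, pi
>>> from sk_dsp_comm import digitalcom as dc
>>> n = arange(0, 1000)
>>> x = cos(2*pi*n/64)               # test samples on [-1, 1]
>>> x_bits = dc.pcm_encode(x, 8)     # 8-bit PCM as a serial bit stream, MSB first
>>> x_hat = dc.pcm_decode(x_bits, 8) # quantized reconstruction of x
```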
sk_dsp_comm.digitalcom.q_fctn(*x*)[[source]](_modules/sk_dsp_comm/digitalcom.html#q_fctn)[¶](#sk_dsp_comm.digitalcom.q_fctn)
Gaussian Q-function
sk_dsp_comm.digitalcom.qam_bb(*n_symb*, *ns*, *mod='16qam'*, *pulse='rect'*, *alpha=0.35*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qam_bb)[¶](#sk_dsp_comm.digitalcom.qam_bb)
A complex baseband transmitter
Parameters
**n_symb**the number of symbols to process
**ns**number of samples per symbol
**mod**modulation type: qpsk, 16qam, 64qam, or 256qam
**alpha**square root raised cosine pulse shape bandwidth factor. For DOCSIS alpha = 0.12 to 0.18. In general alpha can range over 0 < alpha < 1.
**pulse: pulse shapes: src, rc, rect**
Returns
**x**complex baseband digital modulation
**b**transmitter shaping filter, rectangle or SRC
**tx_data**xI+1j*xQ = inphase symbol sequence +1j*quadrature symbol sequence
<NAME> November 2014
sk_dsp_comm.digitalcom.qam_bep_thy(*snr_dB*, *mod*, *eb_n0_mode=True*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qam_bep_thy)[¶](#sk_dsp_comm.digitalcom.qam_bep_thy)
Approximate the bit error probability of QAM assuming Gray encoding
<NAME> November 2018
sk_dsp_comm.digitalcom.qam_gray_decode(*x_hat*, *mod=4*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qam_gray_decode)[¶](#sk_dsp_comm.digitalcom.qam_gray_decode)
Decode MQAM IQ symbols to a serial bit stream using gray2bin decoding
x_hat = symbol spaced samples of the QAM waveform taken at the maximum eye opening. Normally this is following the matched filter
<NAME> April 2018
sk_dsp_comm.digitalcom.qam_gray_encode_bb(*n_symb*, *ns*, *mod=4*, *pulse='rect'*, *alpha=0.35*, *m_span=6*, *ext_data=None*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qam_gray_encode_bb)[¶](#sk_dsp_comm.digitalcom.qam_gray_encode_bb)
QAM_gray_bb: A gray code mapped QAM complex baseband transmitter x,b,tx_data = QAM_gray_bb(K,Ns,M)
Parameters
**n_symb**The number of symbols to process
**ns**Number of samples per symbol
**mod**Modulation order: 2, 4, 16, 64, 256 QAM. Note 2 <=> BPSK, 4 <=> QPSK
**alpha**Square root raised cosine excess bandwidth factor.For DOCSIS alpha = 0.12 to 0.18. In general alpha can range over 0 < alpha < 1.
**pulse**‘rect’, ‘src’, or ‘rc’
Returns
**x**Complex baseband digital modulation
**b**Transmitter shaping filter, rectangle or SRC
**tx_data**xI+1j*xQ = inphase symbol sequence + 1j*quadrature symbol sequence
See also
`QAM_gray_decode`
sk_dsp_comm.digitalcom.qam_sep(*tx_data*, *rx_data*, *mod_type*, *Ncorr=1024*, *Ntransient=0*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qam_sep)[¶](#sk_dsp_comm.digitalcom.qam_sep)
Count symbol errors between a transmitted and received QAM signal.
The received symbols are assumed to be soft values on a unit square.
Time delay between streams is detected.
The ndarray tx_data is Tx complex symbols.
The ndarray rx_data is Rx complex symbols.
Note: Ncorr needs to be even
sk_dsp_comm.digitalcom.qpsk_bb(*n_symb*, *ns*, *lfsr_len=5*, *pulse='src'*, *alpha=0.25*, *m=6*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qpsk_bb)[¶](#sk_dsp_comm.digitalcom.qpsk_bb)
sk_dsp_comm.digitalcom.qpsk_bep(*tx_data*, *rx_data*, *n_corr=1024*, *n_transient=0*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qpsk_bep)[¶](#sk_dsp_comm.digitalcom.qpsk_bep)
Count bit errors between a transmitted and received QPSK signal.
Time delay between streams is detected as well as ambiguity resolution due to carrier phase lock offsets of \(k*\frac{\pi}{4}\), k=0,1,2,3.
The ndarray sdata is Tx +/-1 symbols as complex numbers I + j*Q.
The ndarray data is Rx +/-1 symbols as complex numbers I + j*Q.
Note: Ncorr needs to be even
sk_dsp_comm.digitalcom.qpsk_rx(*fc*, *n_symb*, *rs*, *es_n0=100*, *fs=125*, *lfsr_len=10*, *phase=0*, *pulse='src'*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qpsk_rx)[¶](#sk_dsp_comm.digitalcom.qpsk_rx)
This function generates
sk_dsp_comm.digitalcom.qpsk_tx(*fc*, *n_symb*, *rs*, *fs=125*, *lfsr_len=10*, *pulse='src'*)[[source]](_modules/sk_dsp_comm/digitalcom.html#qpsk_tx)[¶](#sk_dsp_comm.digitalcom.qpsk_tx)
sk_dsp_comm.digitalcom.rc_imp(*ns*, *alpha*, *m=6*)[[source]](_modules/sk_dsp_comm/digitalcom.html#rc_imp)[¶](#sk_dsp_comm.digitalcom.rc_imp)
A truncated raised cosine pulse used in digital communications.
The pulse shaping factor \(0 < \alpha < 1\) is required as well as the truncation factor M which sets the pulse duration to be \(2*M*T_{symbol}\).
Parameters
**ns**number of samples per symbol
**alpha**excess bandwidth factor on (0, 1), e.g., 0.35
**m**equals RC one-sided symbol truncation factor
Returns
**b**ndarray containing the pulse shape
See also
[`sqrt_rc_imp`](#sk_dsp_comm.digitalcom.sqrt_rc_imp)
Notes
The pulse shape b is typically used as the FIR filter coefficients when forming a pulse shaped digital communications waveform.
Examples
Ten samples per symbol and \(\alpha = 0.35\).
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm.digitalcom import rc_imp
>>> from numpy import arange
>>> b = rc_imp(10,0.35)
>>> n = arange(-10*6,10*6+1)
>>> plt.stem(n,b)
>>> plt.show()
```
([Source code](.//digitalcom-7.py))
sk_dsp_comm.digitalcom.rz_bits(*n_bits*, *ns*, *pulse='rect'*, *alpha=0.25*, *m=6*)[[source]](_modules/sk_dsp_comm/digitalcom.html#rz_bits)[¶](#sk_dsp_comm.digitalcom.rz_bits)
Generate return-to-zero (RZ) data bits with pulse shaping.
A baseband digital data signal using +/-1 amplitude signal values and including pulse shaping.
Parameters
**n_bits**number of RZ {0,1} data bits to produce
**ns**the number of samples per bit,
**pulse**‘rect’ , ‘rc’, ‘src’ (default ‘rect’)
**alpha**excess bandwidth factor(default 0.25)
**m**single sided pulse duration (default = 6)
Returns
**x**ndarray of the RZ signal values
**b**ndarray of the pulse shape
**data**ndarray of the underlying data bits
Notes
Pulse shapes include ‘rect’ (rectangular), ‘rc’ (raised cosine),
‘src’ (root raised cosine). The actual pulse length is 2*M+1 samples.
This function is used by BPSK_tx in the Case Study article.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.digitalcom import rz_bits
>>> x,b,data = rz_bits(100,10)
>>> t = arange(len(x))
>>> plt.plot(t,x)
>>> plt.ylim([-0.01, 1.01])
>>> plt.show()
```
([Source code](.//digitalcom-8.py))
sk_dsp_comm.digitalcom.scatter(*x*, *ns*, *start*)[[source]](_modules/sk_dsp_comm/digitalcom.html#scatter)[¶](#sk_dsp_comm.digitalcom.scatter)
Sample a baseband digital communications waveform at the symbol spacing.
Parameters
**x**ndarray of the input digital comm signal
**ns**number of samples per symbol (bit)
**start**the array index to start the sampling
Returns
**xI**ndarray of the real part of x following sampling
**xQ**ndarray of the imaginary part of x following sampling
Notes
Normally the signal is complex, so the scatter plot contains
clusters at points in the complex plane. For a binary signal
such as BPSK, the point centers are nominally +/-1 on the real axis. Start is used to eliminate transients from the FIR pulse shaping filters from appearing in the scatter plot.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import digitalcom as dc
>>> x,b, data = dc.nrz_bits(1000,10,'rc')
```
Add some noise so points are now scattered about +/-1.
```
>>> y = dc.cpx_awgn(x,20,10)
>>> yI,yQ = dc.scatter(y,10,60)
>>> plt.plot(yI,yQ,'.')
>>> plt.grid()
>>> plt.xlabel('In-Phase')
>>> plt.ylabel('Quadrature')
>>> plt.axis('equal')
>>> plt.show()
```
([Source code](.//digitalcom-9.py))
sk_dsp_comm.digitalcom.sqrt_rc_imp(*ns*, *alpha*, *m=6*)[[source]](_modules/sk_dsp_comm/digitalcom.html#sqrt_rc_imp)[¶](#sk_dsp_comm.digitalcom.sqrt_rc_imp)
A truncated square root raised cosine pulse used in digital communications.
The pulse shaping factor \(0 < \alpha < 1\) is required as well as the truncation factor M which sets the pulse duration to be \(2*M*T_{symbol}\).
Parameters
**ns**number of samples per symbol
**alpha**excess bandwidth factor on (0, 1), e.g., 0.35
**m**equals RC one-sided symbol truncation factor
Returns
**b**ndarray containing the pulse shape
Notes
The pulse shape b is typically used as the FIR filter coefficients when forming a pulse shaped digital communications waveform. When
square root raised cosine (SRC) pulse is used to generate Tx signals and at the receiver used as a matched filter (receiver FIR filter), the
received signal is now raised cosine shaped, thus having zero intersymbol interference and the optimum removal of additive white
noise if present at the receiver input.
Examples
Ten samples per symbol and \(\alpha = 0.35\).
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.digitalcom import sqrt_rc_imp
>>> b = sqrt_rc_imp(10,0.35)
>>> n = arange(-10*6,10*6+1)
>>> plt.stem(n,b)
>>> plt.show()
```
([Source code](.//digitalcom-10.py))
sk_dsp_comm.digitalcom.strips(*x*, *nx*, *fig_size=(6, 4)*)[[source]](_modules/sk_dsp_comm/digitalcom.html#strips)[¶](#sk_dsp_comm.digitalcom.strips)
Plots the contents of real ndarray x as a vertical stacking of strips, each of length Nx. The default figure size is (6,4) inches.
The yaxis tick labels are the starting index of each strip. The red dashed lines correspond to zero amplitude in each strip.
strips(x,Nx,my_figsize=(6,4))
<NAME> April 2014
sk_dsp_comm.digitalcom.time_delay(*x*, *d*, *n=4*)[[source]](_modules/sk_dsp_comm/digitalcom.html#time_delay)[¶](#sk_dsp_comm.digitalcom.time_delay)
A time varying time delay which takes advantage of the Farrow structure for cubic interpolation:
y = time_delay(x,D,N = 3)
Note that D is an array of the same length as the input signal x. This allows you to make the delay a function of time. If you want a constant
delay just use D*zeros(len(x)). The minimum delay allowable is one sample or D = 1.0. This is due to the causal system nature of the Farrow
structure.
A founding paper on the subject of interpolators is: <NAME>, “A Continuously variable Digital Delay Element,” Proceedings of the IEEE Intern. Symp. on Circuits Syst., pp. 2641-2645, June 1988.
<NAME>, February 2014
sk_dsp_comm.digitalcom.to_bin(*data*, *width*)[[source]](_modules/sk_dsp_comm/digitalcom.html#to_bin)[¶](#sk_dsp_comm.digitalcom.to_bin)
Convert an unsigned integer to a numpy binary array with the first element the MSB and the last element the LSB.
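`to_bin` and `from_bin` are inverses; a quick sketch:

```
>>> from sk_dsp_comm import digitalcom as dc
>>> b = dc.to_bin(10, 4)  # array([1, 0, 1, 0]), MSB first
>>> dc.from_bin(b)        # recovers 10
```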
sk_dsp_comm.digitalcom.xcorr(*x1*, *x2*, *n_lags*)[[source]](_modules/sk_dsp_comm/digitalcom.html#xcorr)[¶](#sk_dsp_comm.digitalcom.xcorr)
r12, k = xcorr(x1,x2,Nlags), where r12 and k are ndarrays. Compute the energy normalized cross correlation between the sequences x1 and x2. If x1 = x2 the cross correlation is the autocorrelation.
The number of lags sets how many lags to return centered about zero.
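A brief sketch using the NRZ bit generator from this module to form an autocorrelation estimate:
```
>>> from sk_dsp_comm import digitalcom as dc
>>> x,b,data = dc.nrz_bits(1000,10)
>>> r11,k = dc.xcorr(x,x,100) # autocorrelation over lags -100..100
```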
### fec_conv[¶](#module-sk_dsp_comm.fec_conv)
Convolutional Encoding and Decoding
Copyright (c) March 2017, <NAME> All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies,
either expressed or implied, of the FreeBSD Project.
A forward error correcting coding (FEC) class which defines methods
for performing convolutional encoding and decoding. Arbitrary
polynomials are supported, but the rate is presently limited to r = 1/n,
where n = 2 or 3, as the class examples below illustrate. Punctured (perforated) convolutional codes are also supported.
The puncturing pattern (matrix) is arbitrary.
Two popular encoder polynomial sets are:
K = 3 ==> G1 = ‘111’, G2 = ‘101’ and
K = 7 ==> G1 = ‘1011011’, G2 = ‘1111001’.
A popular puncturing pattern to convert from rate 1/2 to rate 3/4 is a G1 output puncture pattern of ‘110’ and a G2 output puncture pattern of ‘101’: every 3 message bits produce 6 coded bits, of which 4 survive puncturing, giving rate 3/4.
Graphical display functions are included to allow the user to better understand the operation of the Viterbi decoder.
<NAME> and <NAME>: October 2018.
*class* sk_dsp_comm.fec_conv.FECConv(*G=('111', '101')*, *Depth=10*)[[source]](_modules/sk_dsp_comm/fec_conv.html#FECConv)[¶](#sk_dsp_comm.fec_conv.FECConv)
Class responsible for creating rate 1/2 convolutional code objects, and
then encoding and decoding the user code set in polynomials of G. Key methods provided include [`conv_encoder()`](#sk_dsp_comm.fec_conv.FECConv.conv_encoder), [`viterbi_decoder()`](#sk_dsp_comm.fec_conv.FECConv.viterbi_decoder), [`puncture()`](#sk_dsp_comm.fec_conv.FECConv.puncture),
[`depuncture()`](#sk_dsp_comm.fec_conv.FECConv.depuncture), [`trellis_plot()`](#sk_dsp_comm.fec_conv.FECConv.trellis_plot), and [`traceback_plot()`](#sk_dsp_comm.fec_conv.FECConv.traceback_plot).
Parameters
**G**a tuple of two (rate 1/2) or three (rate 1/3) binary strings corresponding to the encoder polynomials
**Depth**the decision depth employed by the Viterbi decoder method
Examples
```
>>> from sk_dsp_comm import fec_conv
>>> # Rate 1/2
>>> cc1 = fec_conv.FECConv(('101', '111'), Depth=10) # decision depth is 10
```
```
>>> # Rate 1/3
>>> from sk_dsp_comm import fec_conv
>>> cc2 = fec_conv.FECConv(('101','011','111'), Depth=15) # decision depth is 15
```
Methods
| [`bm_calc`](#sk_dsp_comm.fec_conv.FECConv.bm_calc)(ref_code_bits, rec_code_bits, ...) | distance = bm_calc(ref_code_bits, rec_code_bits, metric_type) Branch metrics calculation |
| [`conv_encoder`](#sk_dsp_comm.fec_conv.FECConv.conv_encoder)(input, state) | output, state = conv_encoder(input, state): convolutionally encode the input bit stream starting from the given state |
| [`depuncture`](#sk_dsp_comm.fec_conv.FECConv.depuncture)(soft_bits[, puncture_pattern, ...]) | Apply de-puncturing to the soft bits coming from the channel. |
| [`puncture`](#sk_dsp_comm.fec_conv.FECConv.puncture)(code_bits[, puncture_pattern]) | Apply puncturing to the serial bits produced by convolutionally encoding. |
| [`traceback_plot`](#sk_dsp_comm.fec_conv.FECConv.traceback_plot)([fsize]) | Plots a path of the possible last 4 states. |
| [`trellis_plot`](#sk_dsp_comm.fec_conv.FECConv.trellis_plot)([fsize]) | Plots a trellis diagram of the possible state transitions. |
| [`viterbi_decoder`](#sk_dsp_comm.fec_conv.FECConv.viterbi_decoder)(x[, metric_type, quant_level]) | A method which performs Viterbi decoding of noisy bit stream, taking as input soft bit values centered on +/-1 and returning hard decision 0/1 bits. |
bm_calc(*ref_code_bits*, *rec_code_bits*, *metric_type*, *quant_level*)[[source]](_modules/sk_dsp_comm/fec_conv.html#FECConv.bm_calc)[¶](#sk_dsp_comm.fec_conv.FECConv.bm_calc)
distance = bm_calc(ref_code_bits, rec_code_bits, metric_type)
Branch metrics calculation
<NAME> and <NAME> October 2018
conv_encoder(*input*, *state*)[[source]](_modules/sk_dsp_comm/fec_conv.html#FECConv.conv_encoder)[¶](#sk_dsp_comm.fec_conv.FECConv.conv_encoder)
output, state = conv_encoder(input,state)
The rate (1/2 or 1/3) is taken from self.rate. The polynomials G1 and G2 are entered as binary strings, e.g., G1 = ‘111’ and G2 = ‘101’ for K = 3, or G1 = ‘1011011’ and G2 = ‘1111001’ for K = 7; G3 is also included for rate 1/3. The input state is a binary string of length K-1, e.g., state = ‘00’ for K = 3 or state = ‘000000’ for K = 7.
<NAME> and <NAME> 2018
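A minimal encoding sketch for the K = 3 code; the length check assumes the documented rate 1/2 behavior of two coded bits per message bit:
```
>>> import numpy as np
>>> from sk_dsp_comm.fec_conv import FECConv
>>> cc = FECConv(('111','101')) # K = 3, rate 1/2
>>> x = np.random.randint(0,2,20) # 20 random message bits
>>> y,state = cc.conv_encoder(x,'00')
>>> len(y) # expect 40 coded bits at rate 1/2
```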
depuncture(*soft_bits*, *puncture_pattern=('110', '101')*, *erase_value=3.5*)[[source]](_modules/sk_dsp_comm/fec_conv.html#FECConv.depuncture)[¶](#sk_dsp_comm.fec_conv.FECConv.depuncture)
Apply de-puncturing to the soft bits coming from the channel. Erasure bits are inserted to return the soft bit values back to a form that can be Viterbi decoded.
Parameters
* **soft_bits** –
* **puncture_pattern** –
* **erase_value** –
Returns
Examples
This example uses the following puncture matrix:
\[\begin{split}\begin{align*}
\mathbf{A} = \begin{bmatrix}
1 & 1 & 0 \\
1 & 0 & 1
\end{bmatrix}
\end{align*}\end{split}\]
The upper row operates on the outputs for the \(G_{1}\) polynomial and the lower row operates on the outputs of the \(G_{2}\) polynomial.
```
>>> import numpy as np
>>> from sk_dsp_comm.fec_conv import FECConv
>>> cc = FECConv(('101','111'))
>>> x = np.array([0, 0, 1, 1, 1, 0, 0, 0, 0, 0])
>>> state = '00'
>>> y, state = cc.conv_encoder(x, state)
>>> yp = cc.puncture(y, ('110','101'))
>>> cc.depuncture(yp, ('110', '101'), 1)
array([ 0., 0., 0., 1., 1., 1., 1., 0., 0., 1., 1., 0., 1., 1., 0., 1., 1., 0.])
```
puncture(*code_bits*, *puncture_pattern=('110', '101')*)[[source]](_modules/sk_dsp_comm/fec_conv.html#FECConv.puncture)[¶](#sk_dsp_comm.fec_conv.FECConv.puncture)
Apply puncturing to the serial bits produced by convolutionally encoding.
Parameters
* **code_bits** –
* **puncture_pattern** –
Returns
Examples
This example uses the following puncture matrix:
\[\begin{split}\begin{align*}
\mathbf{A} = \begin{bmatrix}
1 & 1 & 0 \\
1 & 0 & 1
\end{bmatrix}
\end{align*}\end{split}\]
The upper row operates on the outputs for the \(G_{1}\) polynomial and the lower row operates on the outputs of the \(G_{2}\) polynomial.
```
>>> import numpy as np
>>> from sk_dsp_comm.fec_conv import FECConv
>>> cc = FECConv(('101','111'))
>>> x = np.array([0, 0, 1, 1, 1, 0, 0, 0, 0, 0])
>>> state = '00'
>>> y, state = cc.conv_encoder(x, state)
>>> cc.puncture(y, ('110','101'))
array([ 0., 0., 0., 1., 1., 0., 0., 0., 1., 1., 0., 0.])
```
traceback_plot(*fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/fec_conv.html#FECConv.traceback_plot)[¶](#sk_dsp_comm.fec_conv.FECConv.traceback_plot)
Plots a path of the possible last 4 states.
Parameters
**fsize**Plot size for matplotlib.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm.fec_conv import FECConv
>>> from sk_dsp_comm import digitalcom as dc
>>> import numpy as np
>>> cc = FECConv()
>>> x = np.random.randint(0,2,100)
>>> state = '00'
>>> y,state = cc.conv_encoder(x,state)
>>> # Add channel noise to bits translated to +1/-1
>>> yn = dc.cpx_awgn(2*y-1,5,1) # SNR = 5 dB
>>> # Translate noisy +1/-1 bits to soft values on [0,7]
>>> yn = (yn.real+1)/2*7
>>> z = cc.viterbi_decoder(yn)
>>> cc.traceback_plot()
>>> plt.show()
```
([Source code](.//fec_conv-1.py))
trellis_plot(*fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/fec_conv.html#FECConv.trellis_plot)[¶](#sk_dsp_comm.fec_conv.FECConv.trellis_plot)
Plots a trellis diagram of the possible state transitions.
Parameters
**fsize**Plot size for matplotlib.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm.fec_conv import FECConv
>>> cc = FECConv()
>>> cc.trellis_plot()
>>> plt.show()
```
([Source code](.//fec_conv-2.py))
viterbi_decoder(*x*, *metric_type='soft'*, *quant_level=3*)[[source]](_modules/sk_dsp_comm/fec_conv.html#FECConv.viterbi_decoder)[¶](#sk_dsp_comm.fec_conv.FECConv.viterbi_decoder)
A method which performs Viterbi decoding of a noisy bit stream, taking as input soft bit values centered on +/-1 and returning hard decision 0/1 bits.
Parameters
**x**received noisy bit values centered on +/-1 at one sample per bit
**metric_type**‘hard’ - hard decision metric, expects binary 0/1 input values; ‘unquant’ - unquantized soft decision decoding, expects +/-1 input values; ‘soft’ - soft decision decoding
**quant_level**the quantization level for soft decoding; expected input values are between 0 and 2^quant_level - 1, where 0 represents the most confident 0 and 2^quant_level - 1 the most confident 1; only used for the ‘soft’ metric type
Returns
y: Decoded 0/1 bit stream
Examples
```
>>> import numpy as np
>>> from numpy.random import randint
>>> import sk_dsp_comm.fec_conv as fec
>>> import sk_dsp_comm.digitalcom as dc
>>> import matplotlib.pyplot as plt
>>> # Soft decision rate 1/2 simulation
>>> N_bits_per_frame = 10000
>>> EbN0 = 4
>>> total_bit_errors = 0
>>> total_bit_count = 0
>>> cc1 = fec.FECConv(('11101','10011'),25)
>>> # Encode with shift register starting state of '0000'
>>> state = '0000'
>>> while total_bit_errors < 100:
>>> # Create 100000 random 0/1 bits
>>> x = randint(0,2,N_bits_per_frame)
>>> y,state = cc1.conv_encoder(x,state)
>>> # Add channel noise to bits, include antipodal level shift to [-1,1]
>>> yn_soft = dc.cpx_awgn(2*y-1,EbN0-3,1) # Channel SNR is 3 dB less for rate 1/2
>>> yn_hard = ((np.sign(yn_soft.real)+1)/2).astype(int)
>>> z = cc1.viterbi_decoder(yn_hard,'hard')
>>> # Count bit errors
>>> bit_count, bit_errors = dc.bit_errors(x,z)
>>> total_bit_errors += bit_errors
>>> total_bit_count += bit_count
>>> print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' % (total_bit_count, total_bit_errors, total_bit_errors/total_bit_count))
>>> print('*****************************************************')
>>> print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' % (total_bit_count, total_bit_errors, total_bit_errors/total_bit_count))
Rate 1/2 Object
kmax = 0, taumax = 0
Bits Received = 9976, Bit errors = 77, BEP = 7.72e-03
kmax = 0, taumax = 0
Bits Received = 19952, Bit errors = 175, BEP = 8.77e-03
*****************************************************
Bits Received = 19952, Bit errors = 175, BEP = 8.77e-03
```
```
>>> # Consider the trellis traceback after the sim completes
>>> cc1.traceback_plot()
>>> plt.show()
```
([Source code](.//fec_conv-3.py))
```
>>> # Compare a collection of simulation results with soft decision
>>> # bounds
>>> SNRdB = np.arange(0,12,.1)
>>> Pb_uc = fec.conv_Pb_bound(1/3,7,[4, 12, 20, 72, 225],SNRdB,2)
>>> Pb_s_third_3 = fec.conv_Pb_bound(1/3,8,[3, 0, 15],SNRdB,1)
>>> Pb_s_third_4 = fec.conv_Pb_bound(1/3,10,[6, 0, 6, 0],SNRdB,1)
>>> Pb_s_third_5 = fec.conv_Pb_bound(1/3,12,[12, 0, 12, 0, 56],SNRdB,1)
>>> Pb_s_third_6 = fec.conv_Pb_bound(1/3,13,[1, 8, 26, 20, 19, 62],SNRdB,1)
>>> Pb_s_third_7 = fec.conv_Pb_bound(1/3,14,[1, 0, 20, 0, 53, 0, 184],SNRdB,1)
>>> Pb_s_third_8 = fec.conv_Pb_bound(1/3,16,[1, 0, 24, 0, 113, 0, 287, 0],SNRdB,1)
>>> Pb_s_half = fec.conv_Pb_bound(1/2,7,[4, 12, 20, 72, 225],SNRdB,1)
>>> plt.figure(figsize=(5,5))
>>> plt.semilogy(SNRdB,Pb_uc)
>>> plt.semilogy(SNRdB,Pb_s_third_3,'--')
>>> plt.semilogy(SNRdB,Pb_s_third_4,'--')
>>> plt.semilogy(SNRdB,Pb_s_third_5,'g')
>>> plt.semilogy(SNRdB,Pb_s_third_6,'--')
>>> plt.semilogy(SNRdB,Pb_s_third_7,'--')
>>> plt.semilogy(SNRdB,Pb_s_third_8,'--')
>>> plt.semilogy([0,1,2,3,4,5],[9.08e-02,2.73e-02,6.52e-03, 8.94e-04,8.54e-05,5e-6],'gs')
>>> plt.axis([0,12,1e-7,1e0])
>>> plt.title(r'Soft Decision Rate 1/2 Coding Measurements')
>>> plt.xlabel(r'$E_b/N_0$ (dB)')
>>> plt.ylabel(r'Symbol Error Probability')
>>> plt.legend(('Uncoded BPSK','R=1/3, K=3, Soft', 'R=1/3, K=4, Soft','R=1/3, K=5, Soft', 'R=1/3, K=6, Soft','R=1/3, K=7, Soft', 'R=1/3, K=8, Soft','R=1/3, K=5, Sim'),loc='upper right')
>>> plt.grid();
>>> plt.show()
```
```
>>> # Hard decision rate 1/3 simulation
>>> N_bits_per_frame = 10000
>>> EbN0 = 3
>>> total_bit_errors = 0
>>> total_bit_count = 0
>>> cc2 = fec.FECConv(('11111','11011','10101'),25)
>>> # Encode with shift register starting state of '0000'
>>> state = '0000'
>>> while total_bit_errors < 100:
>>> # Create 100000 random 0/1 bits
>>> x = randint(0,2,N_bits_per_frame)
>>> y,state = cc2.conv_encoder(x,state)
>>> # Add channel noise to bits, include antipodal level shift to [-1,1]
>>> yn_soft = dc.cpx_awgn(2*y-1,EbN0-10*np.log10(3),1) # Channel SNR is 10*log10(3) dB less
>>> yn_hard = ((np.sign(yn_soft.real)+1)/2).astype(int)
>>> z = cc2.viterbi_decoder(yn_hard.real,'hard')
>>> # Count bit errors
>>> bit_count, bit_errors = dc.bit_errors(x,z)
>>> total_bit_errors += bit_errors
>>> total_bit_count += bit_count
>>> print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' % (total_bit_count, total_bit_errors, total_bit_errors/total_bit_count))
>>> print('*****************************************************')
>>> print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' % (total_bit_count, total_bit_errors, total_bit_errors/total_bit_count))
Rate 1/3 Object
kmax = 0, taumax = 0
Bits Received = 9976, Bit errors = 251, BEP = 2.52e-02
*****************************************************
Bits Received = 9976, Bit errors = 251, BEP = 2.52e-02
```
```
>>> # Compare a collection of simulation results with hard decision
>>> # bounds
>>> SNRdB = np.arange(0,12,.1)
>>> Pb_uc = fec.conv_Pb_bound(1/3,7,[4, 12, 20, 72, 225],SNRdB,2)
>>> Pb_s_third_3_hard = fec.conv_Pb_bound(1/3,8,[3, 0, 15, 0, 58, 0, 201, 0],SNRdB,0)
>>> Pb_s_third_5_hard = fec.conv_Pb_bound(1/3,12,[12, 0, 12, 0, 56, 0, 320, 0],SNRdB,0)
>>> Pb_s_third_7_hard = fec.conv_Pb_bound(1/3,14,[1, 0, 20, 0, 53, 0, 184],SNRdB,0)
>>> Pb_s_third_5_hard_sim = np.array([8.94e-04,1.11e-04,8.73e-06])
>>> plt.figure(figsize=(5,5))
>>> plt.semilogy(SNRdB,Pb_uc)
>>> plt.semilogy(SNRdB,Pb_s_third_3_hard,'r--')
>>> plt.semilogy(SNRdB,Pb_s_third_5_hard,'g--')
>>> plt.semilogy(SNRdB,Pb_s_third_7_hard,'k--')
>>> plt.semilogy(np.array([5,6,7]),Pb_s_third_5_hard_sim,'sg')
>>> plt.axis([0,12,1e-7,1e0])
>>> plt.title(r'Hard Decision Rate 1/3 Coding Measurements')
>>> plt.xlabel(r'$E_b/N_0$ (dB)')
>>> plt.ylabel(r'Symbol Error Probability')
>>> plt.legend(('Uncoded BPSK','R=1/3, K=3, Hard', 'R=1/3, K=5, Hard', 'R=1/3, K=7, Hard'),loc='upper right')
>>> plt.grid();
>>> plt.show()
```
```
>>> # Show the traceback for the rate 1/3 hard decision case
>>> cc2.traceback_plot()
```
*class* sk_dsp_comm.fec_conv.TrellisBranches(*Ns*)[[source]](_modules/sk_dsp_comm/fec_conv.html#TrellisBranches)[¶](#sk_dsp_comm.fec_conv.TrellisBranches)
A structure to hold the trellis states, bits, and input values for both ‘1’ and ‘0’ transitions.
Ns is the number of states = \(2^{(K-1)}\).
*class* sk_dsp_comm.fec_conv.TrellisNodes(*Ns*)[[source]](_modules/sk_dsp_comm/fec_conv.html#TrellisNodes)[¶](#sk_dsp_comm.fec_conv.TrellisNodes)
A structure to hold the trellis from nodes and to nodes.
Ns is the number of states = \(2^{(K-1)}\).
*class* sk_dsp_comm.fec_conv.TrellisPaths(*Ns*, *D*)[[source]](_modules/sk_dsp_comm/fec_conv.html#TrellisPaths)[¶](#sk_dsp_comm.fec_conv.TrellisPaths)
A structure to hold the trellis paths in terms of traceback_states,
cumulative_metrics, and traceback_bits. A full decision-depth history of all this information is not essential, but it does allow the graphical depiction created by the method traceback_plot().
Ns is the number of states = \(2^{(K-1)}\) and D is the decision depth.
As a rule, D should be about 5 times K, e.g., D = 35 for K = 7.
sk_dsp_comm.fec_conv.binary(*num*, *length=8*)[[source]](_modules/sk_dsp_comm/fec_conv.html#binary)[¶](#sk_dsp_comm.fec_conv.binary)
Format an integer to binary without the leading ‘0b’
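A hedged usage sketch, assuming the function zero-pads to the requested length and returns a string:
```
>>> from sk_dsp_comm.fec_conv import binary
>>> binary(5,4) # expect the string '0101'
```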
sk_dsp_comm.fec_conv.conv_Pb_bound(*R*, *dfree*, *Ck*, *SNRdB*, *hard_soft*, *M=2*)[[source]](_modules/sk_dsp_comm/fec_conv.html#conv_Pb_bound)[¶](#sk_dsp_comm.fec_conv.conv_Pb_bound)
Coded bit error probability
Convolutional coding bit error probability upper bound according to Ziemer & Peterson 7-16, p. 507.
<NAME> November 2014
Parameters
**R: Code rate**
**dfree: Free distance of the code**
**Ck: Weight coefficient**
**SNRdB: Signal to noise ratio in dB**
**hard_soft: 0 hard, 1 soft, 2 uncoded**
**M: M-ary**
Notes
The code rate R is given by \(R = \frac{k}{n}\).
<NAME> and <NAME> 2018
Examples
```
>>> import numpy as np
>>> from sk_dsp_comm import fec_conv as fec
>>> import matplotlib.pyplot as plt
>>> SNRdB = np.arange(2,12,.1)
>>> Pb = fec.conv_Pb_bound(1./2,10,[36, 0, 211, 0, 1404, 0, 11633],SNRdB,2)
>>> Pb_1_2 = fec.conv_Pb_bound(1./2,10,[36, 0, 211, 0, 1404, 0, 11633],SNRdB,1)
>>> Pb_3_4 = fec.conv_Pb_bound(3./4,4,[164, 0, 5200, 0, 151211, 0, 3988108],SNRdB,1)
>>> plt.semilogy(SNRdB,Pb)
>>> plt.semilogy(SNRdB,Pb_1_2)
>>> plt.semilogy(SNRdB,Pb_3_4)
>>> plt.axis([2,12,1e-7,1e0])
>>> plt.xlabel(r'$E_b/N_0$ (dB)')
>>> plt.ylabel(r'Symbol Error Probability')
>>> plt.legend(('Uncoded BPSK','R=1/2, K=7, Soft','R=3/4 (punc), K=7, Soft'),loc='best')
>>> plt.grid();
>>> plt.show()
```
([Source code](.//fec_conv-4.py))
sk_dsp_comm.fec_conv.hard_Pk(*k*, *R*, *SNR*)[[source]](_modules/sk_dsp_comm/fec_conv.html#hard_Pk)[¶](#sk_dsp_comm.fec_conv.hard_Pk)
Calculates Pk as found in Ziemer & Peterson eq. 7-12, p.505
<NAME> and <NAME> 2018
sk_dsp_comm.fec_conv.soft_Pk(*k*, *R*, *SNR*)[[source]](_modules/sk_dsp_comm/fec_conv.html#soft_Pk)[¶](#sk_dsp_comm.fec_conv.soft_Pk)
Calculates Pk as found in Ziemer & Peterson eq. 7-13, p.505
<NAME> November 2014
### fir_design_helper[¶](#module-sk_dsp_comm.fir_design_helper)
Basic Linear Phase Digital Filter Design Helper
Copyright (c) March 2017, <NAME> All rights reserved.
sk_dsp_comm.fir_design_helper.bandpass_order(*f_stop1*, *f_pass1*, *f_pass2*, *f_stop2*, *dpass_dB*, *dstop_dB*, *fsamp=1*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#bandpass_order)[¶](#sk_dsp_comm.fir_design_helper.bandpass_order)
Optimal FIR (equal ripple) Bandpass Order Determination
Text reference: Ifeachor, Digital Signal Processing a Practical Approach,
second edition, Prentice Hall, 2002.
Journal paper reference: <NAME> & <NAME>, Practical Design Rules for Optimum FIR Bandpass Digital Filters, IEEE Transactions on Acoustics, Speech, and Signal Processing, pp. 204-206, April 1979.
sk_dsp_comm.fir_design_helper.bandstop_order(*f_stop1*, *f_pass1*, *f_pass2*, *f_stop2*, *dpass_dB*, *dstop_dB*, *fsamp=1*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#bandstop_order)[¶](#sk_dsp_comm.fir_design_helper.bandstop_order)
Optimal FIR (equal ripple) Bandstop Order Determination
Text reference: Ifeachor, Digital Signal Processing a Practical Approach,
second edition, Prentice Hall, 2002.
Journal paper reference: <NAME> & <NAME>, Practical Design Rules for Optimum FIR Bandpass Digital Filters, IEEE Transactions on Acoustics, Speech, and Signal Processing, pp. 204-206, April 1979.
sk_dsp_comm.fir_design_helper.fir_remez_bpf(*f_stop1*, *f_pass1*, *f_pass2*, *f_stop2*, *d_pass*, *d_stop*, *fs=1.0*, *n_bump=5*, *status=True*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#fir_remez_bpf)[¶](#sk_dsp_comm.fir_design_helper.fir_remez_bpf)
Design an FIR bandpass filter using remez with order determination. The filter order is determined based on
f_stop1 Hz, f_pass1 Hz, f_pass2 Hz, f_stop2 Hz, and the
desired passband ripple d_pass dB and stopband attenuation d_stop dB all relative to a sampling rate of fs Hz.
<NAME> October 2016, updated October 2018
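A minimal design sketch at a 48 kHz sampling rate; the band edges and ripple values are illustrative choices, and the function is assumed to return the tap ndarray b:
```
>>> from sk_dsp_comm import fir_design_helper as fir_d
>>> # passband 7-9 kHz, stopbands below 6 kHz and above 10 kHz
>>> b = fir_d.fir_remez_bpf(6000,7000,9000,10000,0.2,50,fs=48000)
```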
sk_dsp_comm.fir_design_helper.fir_remez_bsf(*f_pass1*, *f_stop1*, *f_stop2*, *f_pass2*, *d_pass*, *d_stop*, *fs=1.0*, *n_bump=5*, *status=True*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#fir_remez_bsf)[¶](#sk_dsp_comm.fir_design_helper.fir_remez_bsf)
Design an FIR bandstop filter using remez with order determination. The filter order is determined based on
f_pass1 Hz, f_stop1 Hz, f_stop2 Hz, f_pass2 Hz, and the
desired passband ripple d_pass dB and stopband attenuation d_stop dB all relative to a sampling rate of fs Hz.
<NAME> October 2016, updated October 2018
sk_dsp_comm.fir_design_helper.fir_remez_hpf(*f_stop*, *f_pass*, *d_pass*, *d_stop*, *fs=1.0*, *n_bump=5*, *status=True*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#fir_remez_hpf)[¶](#sk_dsp_comm.fir_design_helper.fir_remez_hpf)
Design an FIR highpass filter using remez with order determination. The filter order is determined based on
f_stop Hz, f_pass Hz, and the desired passband ripple
d_pass dB and stopband attenuation d_stop dB all
relative to a sampling rate of fs Hz.
<NAME> October 2016, updated October 2018
sk_dsp_comm.fir_design_helper.fir_remez_lpf(*f_pass*, *f_stop*, *d_pass*, *d_stop*, *fs=1.0*, *n_bump=5*, *status=True*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#fir_remez_lpf)[¶](#sk_dsp_comm.fir_design_helper.fir_remez_lpf)
Design an FIR lowpass filter using remez with order determination. The filter order is determined based on
f_pass Hz, f_stop Hz, and the desired passband ripple
d_pass dB and stopband attenuation d_stop dB all
relative to a sampling rate of fs Hz.
<NAME> October 2016, updated October 2018
sk_dsp_comm.fir_design_helper.firwin_bpf(*n_taps*, *f1*, *f2*, *fs=1.0*, *pass_zero=False*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#firwin_bpf)[¶](#sk_dsp_comm.fir_design_helper.firwin_bpf)
Design a windowed FIR bandpass filter in terms of passband critical frequencies f1 < f2 in Hz relative to sampling rate fs in Hz. The number of taps must be provided.
<NAME> October 2016
sk_dsp_comm.fir_design_helper.firwin_kaiser_bpf(*f_stop1*, *f_pass1*, *f_pass2*, *f_stop2*, *d_stop*, *fs=1.0*, *n_bump=0*, *status=True*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#firwin_kaiser_bpf)[¶](#sk_dsp_comm.fir_design_helper.firwin_kaiser_bpf)
Design an FIR bandpass filter using the sinc() kernel and a Kaiser window. The filter order is determined based on
f_stop1 Hz, f_pass1 Hz, f_pass2 Hz, f_stop2 Hz, and the
desired stopband attenuation d_stop in dB for both stopbands,
all relative to a sampling rate of fs Hz.
Note: the passband ripple cannot be set independent of the stopband attenuation.
<NAME> October 2016
sk_dsp_comm.fir_design_helper.firwin_kaiser_bsf(*f_stop1*, *f_pass1*, *f_pass2*, *f_stop2*, *d_stop*, *fs=1.0*, *n_bump=0*, *status=True*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#firwin_kaiser_bsf)[¶](#sk_dsp_comm.fir_design_helper.firwin_kaiser_bsf)
Design an FIR bandstop filter using the sinc() kernel and a Kaiser window. The filter order is determined based on
f_stop1 Hz, f_pass1 Hz, f_pass2 Hz, f_stop2 Hz, and the
desired stopband attenuation d_stop in dB for both stopbands,
all relative to a sampling rate of fs Hz.
Note: The passband ripple cannot be set independent of the stopband attenuation.
Note: The filter order is forced to be even (odd number of taps)
so there is a center tap that can be used to form 1 - H_BPF.
<NAME> October 2016
sk_dsp_comm.fir_design_helper.firwin_kaiser_hpf(*f_stop*, *f_pass*, *d_stop*, *fs=1.0*, *n_bump=0*, *status=True*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#firwin_kaiser_hpf)[¶](#sk_dsp_comm.fir_design_helper.firwin_kaiser_hpf)
Design an FIR highpass filter using the sinc() kernel and a Kaiser window. The filter order is determined based on
f_pass Hz, f_stop Hz, and the desired stopband attenuation d_stop in dB, all relative to a sampling rate of fs Hz.
Note: the passband ripple cannot be set independent of the stopband attenuation.
<NAME> October 2016
sk_dsp_comm.fir_design_helper.firwin_kaiser_lpf(*f_pass*, *f_stop*, *d_stop*, *fs=1.0*, *n_bump=0*, *status=True*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#firwin_kaiser_lpf)[¶](#sk_dsp_comm.fir_design_helper.firwin_kaiser_lpf)
Design an FIR lowpass filter using the sinc() kernel and a Kaiser window. The filter order is determined based on
f_pass Hz, f_stop Hz, and the desired stopband attenuation d_stop in dB, all relative to a sampling rate of fs Hz.
Note: the passband ripple cannot be set independent of the stopband attenuation.
<NAME> October 2016
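A matching Kaiser-window lowpass design sketch; the band edges are again illustrative, and the returned value is assumed to be the tap ndarray b:
```
>>> from sk_dsp_comm import fir_design_helper as fir_d
>>> # passband edge 3.3 kHz, stopband edge 4.3 kHz, 60 dB stopband attenuation
>>> b_k = fir_d.firwin_kaiser_lpf(3300,4300,60,fs=48000)
```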
sk_dsp_comm.fir_design_helper.firwin_lpf(*n_taps*, *fc*, *fs=1.0*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#firwin_lpf)[¶](#sk_dsp_comm.fir_design_helper.firwin_lpf)
Design a windowed FIR lowpass filter in terms of the cutoff frequency fc in Hz relative to sampling rate fs in Hz. The number of taps must be provided.
<NAME> October 2016
sk_dsp_comm.fir_design_helper.freqz_resp_list(*b*, *a=array([1])*, *mode='dB'*, *fs=1.0*, *n_pts=1024*, *fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#freqz_resp_list)[¶](#sk_dsp_comm.fir_design_helper.freqz_resp_list)
A method for displaying digital filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib.
freqz_resp(b, a=[1], mode='dB', Npts=1024, fsize=(6,4))
b = ndarray of numerator coefficients
a = ndarray of denominator coefficients
mode = display mode: ‘dB’ magnitude, ‘phase’ in radians, or ‘groupdelay_s’ in samples and ‘groupdelay_t’ in seconds, all versus frequency in Hz
Npts = number of points to plot; default is 1024
fsize = figure size; default is (6,4) inches
<NAME>, January 2015
sk_dsp_comm.fir_design_helper.lowpass_order(*f_pass*, *f_stop*, *dpass_dB*, *dstop_dB*, *fsamp=1*)[[source]](_modules/sk_dsp_comm/fir_design_helper.html#lowpass_order)[¶](#sk_dsp_comm.fir_design_helper.lowpass_order)
Optimal FIR (equal ripple) Lowpass Order Determination
Text reference: Ifeachor, Digital Signal Processing a Practical Approach,
second edition, Prentice Hall, 2002.
Journal paper reference: <NAME> et al., Practical Design Rules for Optimum Finite Impulse Response Digital Filters, Bell Syst. Tech. J., vol. 52, pp. 769-799, July-Aug. 1973.
### iir_design_helper[¶](#module-sk_dsp_comm.iir_design_helper)
Basic IIR Bilinear Transform-Based Digital Filter Design Helper
Copyright (c) March 2017, <NAME> All rights reserved.
sk_dsp_comm.iir_design_helper.IIR_bpf(*f_stop1*, *f_pass1*, *f_pass2*, *f_stop2*, *Ripple_pass*, *Atten_stop*, *fs=1.0*, *ftype='butter'*, *status=True*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#IIR_bpf)[¶](#sk_dsp_comm.iir_design_helper.IIR_bpf)
Design an IIR bandpass filter using scipy.signal.iirdesign.
The filter order is determined based on
f_stop1 Hz, f_pass1 Hz, f_pass2 Hz, f_stop2 Hz, and the desired passband ripple Ripple_pass dB and stopband attenuation Atten_stop dB, all relative to a sampling rate of fs Hz.
Parameters
**f_stop1**lower stopband critical frequency in Hz
**f_pass1**lower passband critical frequency in Hz
**f_pass2**upper passband critical frequency in Hz
**f_stop2**upper stopband critical frequency in Hz
**Ripple_pass**passband ripple in dB
**Atten_stop**stopband attenuation in dB
**fs**sampling rate in Hz
**ftype**Analog prototype from ‘butter’, ‘cheby1’, ‘cheby2’, ‘ellip’, and ‘bessel’
Returns
**b**ndarray of the numerator coefficients
**a**ndarray of the denominator coefficients
**sos**2D ndarray of second-order section coefficients
Examples
```
>>> fs = 48000
>>> f_stop1, f_pass1 = 5000, 8000
>>> f_pass2, f_stop2 = 14000, 17000
>>> b_but,a_but,sos_but = IIR_bpf(f_stop1,f_pass1,f_pass2,f_stop2,0.5,60,fs,'butter')
>>> b_cheb1,a_cheb1,sos_cheb1 = IIR_bpf(f_stop1,f_pass1,f_pass2,f_stop2,0.5,60,fs,'cheby1')
>>> b_cheb2,a_cheb2,sos_cheb2 = IIR_bpf(f_stop1,f_pass1,f_pass2,f_stop2,0.5,60,fs,'cheby2')
>>> b_elli,a_elli,sos_elli = IIR_bpf(f_stop1,f_pass1,f_pass2,f_stop2,0.5,60,fs,'ellip')
```
<NAME> October 2016
sk_dsp_comm.iir_design_helper.IIR_bsf(*f_pass1*, *f_stop1*, *f_stop2*, *f_pass2*, *Ripple_pass*, *Atten_stop*, *fs=1.0*, *ftype='butter'*, *status=True*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#IIR_bsf)[¶](#sk_dsp_comm.iir_design_helper.IIR_bsf)
Design an IIR bandstop filter using scipy.signal.iirdesign.
The filter order is determined based on
f_pass1 Hz, f_stop1 Hz, f_stop2 Hz, f_pass2 Hz, and the desired passband ripple Ripple_pass dB and stopband attenuation Atten_stop dB, all relative to a sampling rate of fs Hz.
<NAME> October 2016
sk_dsp_comm.iir_design_helper.IIR_hpf(*f_stop*, *f_pass*, *Ripple_pass*, *Atten_stop*, *fs=1.0*, *ftype='butter'*, *status=True*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#IIR_hpf)[¶](#sk_dsp_comm.iir_design_helper.IIR_hpf)
Design an IIR highpass filter using scipy.signal.iirdesign.
The filter order is determined based on
f_pass Hz, f_stop Hz, and the desired passband ripple Ripple_pass dB and stopband attenuation Atten_stop dB, all relative to a sampling rate of fs Hz.
Parameters
**f_stop**stopband critical frequency in Hz
**f_pass**passband critical frequency in Hz
**Ripple_pass**passband ripple in dB
**Atten_stop**stopband attenuation in dB
**fs**sampling rate in Hz
**ftype**Analog prototype from ‘butter’ ‘cheby1’, ‘cheby2’,‘ellip’, and ‘bessel’
Returns
**b**ndarray of the numerator coefficients
**a**ndarray of the denominator coefficients
**sos**2D ndarray of second-order section coefficients
Examples
```
>>> fs = 48000
>>> f_pass = 8000
>>> f_stop = 5000
>>> b_but,a_but,sos_but = IIR_hpf(f_stop,f_pass,0.5,60,fs,'butter')
>>> b_cheb1,a_cheb1,sos_cheb1 = IIR_hpf(f_stop,f_pass,0.5,60,fs,'cheby1')
>>> b_cheb2,a_cheb2,sos_cheb2 = IIR_hpf(f_stop,f_pass,0.5,60,fs,'cheby2')
>>> b_elli,a_elli,sos_elli = IIR_hpf(f_stop,f_pass,0.5,60,fs,'ellip')
```
<NAME> October 2016
sk_dsp_comm.iir_design_helper.IIR_lpf(*f_pass*, *f_stop*, *Ripple_pass*, *Atten_stop*, *fs=1.0*, *ftype='butter'*, *status=True*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#IIR_lpf)[¶](#sk_dsp_comm.iir_design_helper.IIR_lpf)
Design an IIR lowpass filter using scipy.signal.iirdesign.
The filter order is determined based on
f_pass Hz, f_stop Hz, and the desired passband ripple Ripple_pass dB and stopband attenuation Atten_stop dB, all relative to a sampling rate of fs Hz.
Parameters
**f_pass**Passband critical frequency in Hz
**f_stop**Stopband critical frequency in Hz
**Ripple_pass**Filter gain in dB at f_pass
**Atten_stop**Filter attenuation in dB at f_stop
**fs**Sampling rate in Hz
**ftype**Analog prototype from ‘butter’ ‘cheby1’, ‘cheby2’,‘ellip’, and ‘bessel’
Returns
**b**ndarray of the numerator coefficients
**a**ndarray of the denominator coefficients
**sos**2D ndarray of second-order section coefficients
Notes
Additionally a text string telling the user the filter order is written to the console, e.g., IIR cheby1 order = 8.
Examples
```
>>> fs = 48000
>>> f_pass = 5000
>>> f_stop = 8000
>>> b_but,a_but,sos_but = IIR_lpf(f_pass,f_stop,0.5,60,fs,'butter')
>>> b_cheb1,a_cheb1,sos_cheb1 = IIR_lpf(f_pass,f_stop,0.5,60,fs,'cheby1')
>>> b_cheb2,a_cheb2,sos_cheb2 = IIR_lpf(f_pass,f_stop,0.5,60,fs,'cheby2')
>>> b_elli,a_elli,sos_elli = IIR_lpf(f_pass,f_stop,0.5,60,fs,'ellip')
```
<NAME> October 2016
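To compare the designs, the responses can be overlaid; a brief sketch reusing sos_but and sos_cheb1 from the example above, and assuming freqz_resp_cas_list accepts a list of sos arrays, as its name suggests:
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import iir_design_helper as iir_d
>>> iir_d.freqz_resp_cas_list([sos_but,sos_cheb1],'dB',fs=48000)
>>> plt.ylim(-80,5)
>>> plt.show()
```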
sk_dsp_comm.iir_design_helper.freqz_cas(*sos*, *w*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#freqz_cas)[¶](#sk_dsp_comm.iir_design_helper.freqz_cas)
Cascade frequency response
<NAME> October 2016
sk_dsp_comm.iir_design_helper.freqz_resp_cas_list(*sos*, *mode='dB'*, *fs=1.0*, *n_pts=1024*, *fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#freqz_resp_cas_list)[¶](#sk_dsp_comm.iir_design_helper.freqz_resp_cas_list)
A method for displaying cascade digital filter form frequency response
magnitude, phase, and group delay. A plot is produced using matplotlib
freqz_resp(b, a=[1], mode='dB', Npts=1024, fsize=(6,4))
b = ndarray of numerator coefficients
a = ndarray of denominator coefficients
mode = display mode: ‘dB’ magnitude, ‘phase’ in radians, or ‘groupdelay_s’ in samples and ‘groupdelay_t’ in seconds, all versus frequency in Hz
Npts = number of points to plot; default is 1024
fsize = figure size; default is (6,4) inches
<NAME>, January 2015
sk_dsp_comm.iir_design_helper.freqz_resp_list(*b*, *a=array([1])*, *mode='dB'*, *fs=1.0*, *Npts=1024*, *fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#freqz_resp_list)[¶](#sk_dsp_comm.iir_design_helper.freqz_resp_list)
A method for displaying digital filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib
freqz_resp(b, a=[1], mode='dB', Npts=1024, fsize=(6,4))
b = ndarray of numerator coefficients
a = ndarray of denominator coefficients
mode = display mode: ‘dB’ magnitude, ‘phase’ in radians, or ‘groupdelay_s’ in samples and ‘groupdelay_t’ in seconds, all versus frequency in Hz
Npts = number of points to plot; default is 1024
fsize = figure size; default is (6,4) inches
<NAME>, January 2015
sk_dsp_comm.iir_design_helper.sos_cascade(*sos1*, *sos2*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#sos_cascade)[¶](#sk_dsp_comm.iir_design_helper.sos_cascade)
<NAME> October 2016
sk_dsp_comm.iir_design_helper.sos_zplane(*sos*, *auto_scale=True*, *size=2*, *tol=0.001*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#sos_zplane)[¶](#sk_dsp_comm.iir_design_helper.sos_zplane)
Create a z-plane pole-zero plot.
Create a z-plane pole-zero plot from the second-order section (sos) coefficient 2D ndarray, assuming descending powers of z within each section.
Parameters
**sos**ndarray of the sos coefficients
**auto_scale**bool (default True)
**size**plot radius maximum when scale = False
Returns
**(M,N)**tuple of zero and pole counts + plot window
Notes
This function tries to identify repeated poles and zeros and will
place the multiplicity number above and to the right of the pole or zero.
The difficulty is setting the tolerance for this detection. Currently it is set at 1e-3 via the function signal.unique_roots.
Examples
```
>>> # Here the plot is generated using auto_scale
>>> sos_zplane(sos)
>>> # Here the plot is generated using manual scaling
>>> sos_zplane(sos,False,1.5)
```
sk_dsp_comm.iir_design_helper.unique_cpx_roots(*rlist*, *tol=0.001*)[[source]](_modules/sk_dsp_comm/iir_design_helper.html#unique_cpx_roots)[¶](#sk_dsp_comm.iir_design_helper.unique_cpx_roots)
The average of the root values is used when multiplicity
is greater than one.
<NAME> October 2016
### multirate_helper[¶](#module-sk_dsp_comm.multirate_helper)
Multirate help for building interpolation and decimation systems
Copyright (c) March 2017, <NAME> All rights reserved.
sk_dsp_comm.multirate_helper.freqz_resp(*b*, *a=[1]*, *mode='dB'*, *fs=1.0*, *Npts=1024*, *fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#freqz_resp)[¶](#sk_dsp_comm.multirate_helper.freqz_resp)
A method for displaying digital filter frequency response magnitude,
phase, and group delay. A plot is produced using matplotlib
freqz_resp(b, a=[1], mode='dB', Npts=1024, fsize=(6,4))
b = ndarray of numerator coefficients
a = ndarray of denominator coefficients
mode = display mode: ‘dB’ magnitude, ‘phase’ in radians, or ‘groupdelay_s’ in samples and ‘groupdelay_t’ in seconds, all versus frequency in Hz
Npts = number of points to plot; default is 1024
fsize = figure size; default is (6,4) inches
<NAME>, January 2015
*class* sk_dsp_comm.multirate_helper.multirate_FIR(*b*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_FIR)[¶](#sk_dsp_comm.multirate_helper.multirate_FIR)
A simple class for encapsulating FIR filtering, or FIR upsample/
filter, or FIR filter/downsample operations used in modeling a comm system. Objects of this class will hold the required filter
coefficients once an object is instantiated. Frequency response
and the pole zero plot can also be plotted using supplied class methods.
<NAME> March 2017
Methods
| [`dn`](#sk_dsp_comm.multirate_helper.multirate_FIR.dn)(x[, M_change]) | Downsample and filter the signal |
| [`filter`](#sk_dsp_comm.multirate_helper.multirate_FIR.filter)(x) | Filter the signal |
| [`up`](#sk_dsp_comm.multirate_helper.multirate_FIR.up)(x[, L_change]) | Upsample and filter the signal |
| [`zplane`](#sk_dsp_comm.multirate_helper.multirate_FIR.zplane)([auto_scale, size, detect_mult, tol]) | Plot the poles and zeros of the FIR filter in the z-plane |
| **freq_resp** | |
dn(*x*, *M_change=12*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_FIR.dn)[¶](#sk_dsp_comm.multirate_helper.multirate_FIR.dn)
Downsample and filter the signal
filter(*x*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_FIR.filter)[¶](#sk_dsp_comm.multirate_helper.multirate_FIR.filter)
Filter the signal
freq_resp(*mode='dB'*, *fs=8000*, *ylim=[- 100, 2]*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_FIR.freq_resp)[¶](#sk_dsp_comm.multirate_helper.multirate_FIR.freq_resp)
up(*x*, *L_change=12*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_FIR.up)[¶](#sk_dsp_comm.multirate_helper.multirate_FIR.up)
Upsample and filter the signal
zplane(*auto_scale=True*, *size=2*, *detect_mult=True*, *tol=0.001*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_FIR.zplane)[¶](#sk_dsp_comm.multirate_helper.multirate_FIR.zplane)
Plot the poles and zeros of the FIR filter in the z-plane
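A brief interpolation sketch combining this class with the FIR design helpers documented above; the band edges and rate change factor are illustrative assumptions:
```
>>> import numpy as np
>>> from sk_dsp_comm import fir_design_helper as fir_d
>>> from sk_dsp_comm.multirate_helper import multirate_FIR
>>> # anti-imaging lowpass for interpolation by 4 (frequencies relative to fs = 1)
>>> b = fir_d.fir_remez_lpf(0.10,0.15,0.2,60,fs=1.0)
>>> mrh = multirate_FIR(b)
>>> x = np.cos(2*np.pi*0.05*np.arange(1000))
>>> y = mrh.up(x,L_change=4) # upsample by 4, then filter
```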
*class* sk_dsp_comm.multirate_helper.multirate_IIR(*sos*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_IIR)[¶](#sk_dsp_comm.multirate_helper.multirate_IIR)
A simple class for encapsulating IIR filtering, or IIR upsample/
filter, or IIR filter/downsample operations used in modeling a comm system. Objects of this class will hold the required filter
coefficients once an object is instantiated. Frequency response
and the pole zero plot can also be plotted using supplied class methods.
For added robustness to floating point quantization all filtering
is done using the scipy.signal cascade of second-order sections filter method y = sosfilt(sos, x).
<NAME> March 2017
Methods
| [`dn`](#sk_dsp_comm.multirate_helper.multirate_IIR.dn)(x[, M_change]) | Downsample and filter the signal |
| [`filter`](#sk_dsp_comm.multirate_helper.multirate_IIR.filter)(x) | Filter the signal using second-order sections |
| [`freq_resp`](#sk_dsp_comm.multirate_helper.multirate_IIR.freq_resp)([mode, fs, ylim]) | Frequency response plot |
| [`up`](#sk_dsp_comm.multirate_helper.multirate_IIR.up)(x[, L_change]) | Upsample and filter the signal |
| [`zplane`](#sk_dsp_comm.multirate_helper.multirate_IIR.zplane)([auto_scale, size, detect_mult, tol]) | Plot the poles and zeros of the IIR filter in the z-plane |
dn(*x*, *M_change=12*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_IIR.dn)[¶](#sk_dsp_comm.multirate_helper.multirate_IIR.dn)
Downsample and filter the signal
filter(*x*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_IIR.filter)[¶](#sk_dsp_comm.multirate_helper.multirate_IIR.filter)
Filter the signal using second-order sections
freq_resp(*mode='dB'*, *fs=8000*, *ylim=[- 100, 2]*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_IIR.freq_resp)[¶](#sk_dsp_comm.multirate_helper.multirate_IIR.freq_resp)
Frequency response plot
up(*x*, *L_change=12*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_IIR.up)[¶](#sk_dsp_comm.multirate_helper.multirate_IIR.up)
Upsample and filter the signal
zplane(*auto_scale=True*, *size=2*, *detect_mult=True*, *tol=0.001*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#multirate_IIR.zplane)[¶](#sk_dsp_comm.multirate_helper.multirate_IIR.zplane)
Plot the poles and zeros of the IIR filter in the z-plane
*class* sk_dsp_comm.multirate_helper.rate_change(*M_change=12*, *fcutoff=0.9*, *N_filt_order=8*, *ftype='butter'*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#rate_change)[¶](#sk_dsp_comm.multirate_helper.rate_change)
A simple class for encapsulating the upsample/filter and filter/downsample operations used in modeling a comm system. Objects of this class will hold the required filter coefficients once an object is instantiated.
<NAME> February 2015
Methods
| [`dn`](#sk_dsp_comm.multirate_helper.rate_change.dn)(x) | Downsample and filter the signal |
| [`up`](#sk_dsp_comm.multirate_helper.rate_change.up)(x) | Upsample and filter the signal |
dn(*x*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#rate_change.dn)[¶](#sk_dsp_comm.multirate_helper.rate_change.dn)
Downsample and filter the signal
up(*x*)[[source]](_modules/sk_dsp_comm/multirate_helper.html#rate_change.up)[¶](#sk_dsp_comm.multirate_helper.rate_change.up)
Upsample and filter the signal
### sigsys[¶](#module-sk_dsp_comm.sigsys)
Signals and Systems Function Module
Copyright (c) March 2017, <NAME> All rights reserved.
#### Notes[¶](#notes)
The primary purpose of this function library is to support the book Signals and Systems for Dummies. Beyond that it should be useful to anyone who wants to use Pylab for general signals and systems modeling and simulation. There is a good collection of digital communication simulation primitives included in the library. More enhancements are planned over time.
The formatted docstrings for the library follow. Click index in the upper right to get an alphabetical listing of the library functions. In all of the example code given it is assumed that ssd has been imported into your workspace. See the examples below for import options.
#### Examples[¶](#examples)
```
>>> import sk_dsp_comm.sigsys as ssd
>>> # Commands then need to be prefixed with ssd., i.e.,
>>> ssd.tri(t,tau)
>>> # A full import of the module, to avoid the need to prefix with ssd, is:
>>> from sk_dsp_comm.sigsys import *
```
#### Function Catalog[¶](#function-catalog)
sk_dsp_comm.sigsys.am_rx(*x192*)[[source]](_modules/sk_dsp_comm/sigsys.html#am_rx)[¶](#sk_dsp_comm.sigsys.am_rx)
AM envelope detector receiver for the Chapter 17 Case Study
The receiver bandpass filter is not included in this function.
Parameters
**x192**ndarray of the AM signal at sampling rate 192 ksps
Returns
**m_rx8**ndarray of the demodulated message at 8 ksps
**t8**ndarray of the time axis at 8 ksps
**m_rx192**ndarray of the demodulated output at 192 ksps
**x_edet192**ndarray of the envelope detector output at 192 ksps
Notes
The bandpass filter needed at the receiver front-end can be designed using b_bpf,a_bpf = `am_rx_bpf()`.
Examples
```
>>> import numpy as np
>>> n = np.arange(0,1000)
>>> # 1 kHz message signal
>>> m = np.cos(2*np.pi*1000/8000.*n)
>>> # x192 is an AM signal sampled at 192 ksps, e.g., from am_tx()
>>> m_rx8,t8,m_rx192,x_edet192 = am_rx(x192)
```
sk_dsp_comm.sigsys.am_rx_bpf(*n_order=7*, *ripple_dB=1*, *b=10000.0*, *fs=192000.0*)[[source]](_modules/sk_dsp_comm/sigsys.html#am_rx_bpf)[¶](#sk_dsp_comm.sigsys.am_rx_bpf)
Bandpass filter design for the AM receiver Case Study of Chapter 17.
Design a 7th-order Chebyshev type 1 bandpass filter to remove/reduce adjacent channel interference at the envelope detector input.
Parameters
**n_order**the filter order (default = 7)
**ripple_dB**the passband ripple in dB (default = 1)
**b**the RF bandwidth (default = 10e3)
**fs**the sampling frequency
Returns
**b_bpf**ndarray of the numerator filter coefficients
**a_bpf**ndarray of the denominator filter coefficients
Examples
```
>>> from scipy import signal
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import sk_dsp_comm.sigsys as ss
>>> # Use the default values
>>> b_bpf,a_bpf = ss.am_rx_bpf()
```
Pole-zero plot of the filter.
```
>>> ss.zplane(b_bpf,a_bpf)
>>> plt.show()
```
([Source code](.//sigsys-1.py))
Plot of the frequency response.
```
>>> f = np.arange(0,192/2.,.1)
>>> w, Hbpf = signal.freqz(b_bpf,a_bpf,2*np.pi*f/192)
>>> plt.plot(f,20*np.log10(abs(Hbpf)))
>>> plt.axis([0,192/2.,-80,10])
>>> plt.ylabel("Power Spectral Density (dB)")
>>> plt.xlabel("Frequency (kHz)")
>>> plt.show()
```
sk_dsp_comm.sigsys.am_tx(*m*, *a_mod*, *fc=75000.0*)[[source]](_modules/sk_dsp_comm/sigsys.html#am_tx)[¶](#sk_dsp_comm.sigsys.am_tx)
AM transmitter for Case Study of Chapter 17.
Assume the input is sampled at 8 ksps and upsampling by 24 is performed to arrive at fs_out = 192 ksps.
Parameters
**m**ndarray of the input message signal
**a_mod**AM modulation index, between 0 and 1
**fc**the carrier frequency in Hz
Returns
**x192**ndarray of the upsampled by 24 and modulated carrier
**t192**ndarray of the upsampled by 24 time axis
**m24**ndarray of the upsampled by 24 message signal
Notes
The sampling rate of the input signal is assumed to be 8 kHz.
Examples
```
>>> import numpy as np
>>> n = np.arange(0,1000)
>>> # 1 kHz message signal
>>> m = np.cos(2*np.pi*1000/8000.*n)
>>> x192, t192, m24 = am_tx(m,0.8,fc=75e3)
```
sk_dsp_comm.sigsys.bin_num(*n*, *n_bits*)[[source]](_modules/sk_dsp_comm/sigsys.html#bin_num)[¶](#sk_dsp_comm.sigsys.bin_num)
Produce a signed representation of the number n using n_bits.
Parameters
* **n** – Number n
* **n_bits** – Number of bits
Returns
sk_dsp_comm.sigsys.biquad2(*w_num*, *r_num*, *w_den*, *r_den*)[[source]](_modules/sk_dsp_comm/sigsys.html#biquad2)[¶](#sk_dsp_comm.sigsys.biquad2)
A biquadratic filter in terms of conjugate pole and zero pairs.
Parameters
**w_num**zero frequency (angle) in rad/sample
**r_num**conjugate zeros radius
**w_den**pole frequency (angle) in rad/sample
**r_den**conjugate poles radius; less than 1 for stability
Returns
**b**ndarray of numerator coefficients
**a**ndarray of denominator coefficients
Examples
```
>>> from numpy import pi
>>> b,a = biquad2(pi/4., 1, pi/4., 0.95)
```
sk_dsp_comm.sigsys.bit_errors(*z*, *data*, *start*, *ns*)[[source]](_modules/sk_dsp_comm/sigsys.html#bit_errors)[¶](#sk_dsp_comm.sigsys.bit_errors)
A simple bit error counting function.
In its present form this function counts bit errors between hard decision BPSK bits in +/-1 form and compares them with the 0/1 binary data that was transmitted. Timing between the Tx and Rx data is the responsibility of the user. An enhanced version of this function, featuring automatic synching, will be created in the future.
Parameters
**z**ndarray of hard decision BPSK data prior to symbol spaced sampling
**data**ndarray of reference bits in 1/0 format
**start**timing reference for the received
**ns**the number of samples per symbol
Returns
**Pe_hat**the estimated probability of a bit error
Notes
The Tx and Rx data streams are exclusive-or'd, then the bit errors are summed and finally divided by the number of bits observed to form an estimate of the bit error probability. This function needs to be enhanced to be more useful.
Examples
```
>>> from scipy import signal
>>> x,b, data = nrz_bits(1000,10)
>>> # set Eb/N0 to 8 dB
>>> y = cpx_awgn(x,8,10)
>>> # matched filter the signal
>>> z = signal.lfilter(b,1,y)
>>> # make bit decisions at 10 and Ns multiples thereafter
>>> Pe_hat = bit_errors(z,data,10,10)
```
sk_dsp_comm.sigsys.bpsk_tx(*N_bits*, *Ns*, *ach_fc=2.0*, *ach_lvl_dB=- 100*, *pulse='rect'*, *alpha=0.25*, *M=6*)[[source]](_modules/sk_dsp_comm/sigsys.html#bpsk_tx)[¶](#sk_dsp_comm.sigsys.bpsk_tx)
Generates biphase shift keyed (BPSK) transmitter with adjacent channel interference.
Generates three BPSK signals with rectangular or square root raised cosine (SRC)
pulse shaping of duration N_bits and Ns samples per bit. The desired signal is centered on f = 0, while the adjacent channel signals to the left and right are generated at a dB level relative to the desired signal. Used in the
digital communications Case Study supplement.
Parameters
**N_bits**the number of bits to simulate
**Ns**the number of samples per bit
**ach_fc**the frequency offset of the adjacent channel signals (default 2.0)
**ach_lvl_dB**the level of the adjacent channel signals in dB (default -100)
**pulse**the pulse shape: ‘rect’ or ‘src’
**alpha**square root raised cosine pulse shape factor (default = 0.25)
**M**square root raised cosine pulse truncation factor (default = 6)
Returns
**x**ndarray of the composite signal x0 + ach_lvl*(x1p + x1m)
**b**the transmit pulse shape
**data0**the data bits used to form the desired signal; used for error checking
Examples
```
>>> x,b,data0 = bpsk_tx(1000,10,pulse='src')
```
sk_dsp_comm.sigsys.cascade_filters(*b1*, *a1*, *b2*, *a2*)[[source]](_modules/sk_dsp_comm/sigsys.html#cascade_filters)[¶](#sk_dsp_comm.sigsys.cascade_filters)
Cascade two IIR digital filters into a single (b,a) coefficient set.
To cascade two digital filters (system functions) given their numerator and denominator coefficients you simply convolve the coefficient arrays.
Parameters
**b1**ndarray of numerator coefficients for filter 1
**a1**ndarray of denominator coefficients for filter 1
**b2**ndarray of numerator coefficients for filter 2
**a2**ndarray of denominator coefficients for filter 2
Returns
**b**ndarray of numerator coefficients for the cascade
**a**ndarray of denominator coefficients for the cascade
Examples
```
>>> from scipy import signal
>>> b1,a1 = signal.butter(3, 0.1)
>>> b2,a2 = signal.butter(3, 0.15)
>>> b,a = cascade_filters(b1,a1,b2,a2)
```
sk_dsp_comm.sigsys.cic(*m*, *k*)[[source]](_modules/sk_dsp_comm/sigsys.html#cic)[¶](#sk_dsp_comm.sigsys.cic)
A functional form implementation of a cascade of integrator comb (CIC) filters.
Parameters
**m**Effective number of taps per section (typically the decimation factor).
**k**The number of CIC sections cascaded (larger K gives the filter a wider image rejection bandwidth).
Returns
**b**FIR filter coefficients for a simple direct form implementation using the filter() function.
Notes
Commonly used in multirate signal processing digital down-converters and digital up-converters. A true CIC filter requires no multiplies, only add and subtract operations. The functional form created here is a simple FIR requiring real coefficient multiplies via filter().
<NAME> July 2013
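A short sketch of evaluating the image-rejection response of a CIC used for decimation by 16; the section count of 2 is an illustrative choice:
```
>>> from scipy import signal
>>> import sk_dsp_comm.sigsys as ss
>>> b = ss.cic(16,2) # m = 16 taps per section, k = 2 cascaded sections
>>> w,H = signal.freqz(b,1,1024) # inspect the nulls near the decimated images
```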
sk_dsp_comm.sigsys.conv_integral(*x1*, *tx1*, *x2*, *tx2*, *extent=('f', 'f')*)[[source]](_modules/sk_dsp_comm/sigsys.html#conv_integral)[¶](#sk_dsp_comm.sigsys.conv_integral)
Continuous-time convolution of x1 and x2 with proper tracking of the output time axis.
Approximate the convolution integral for the convolution of two continuous-time signals using the SciPy function `scipy.signal.convolve()`. The time (sequence) axes are managed from input to output. y(t) = x1(t)*x2(t).
Parameters
**x1**ndarray of signal x1 corresponding to tx1
**tx1**ndarray time axis for x1
**x2**ndarray of signal x2 corresponding to tx2
**tx2**ndarray time axis for x2
**extent**(‘e1’,’e2’) where ‘e1’, ‘e2’ may be ‘f’ finite, ‘r’ right-sided, or ‘l’ left-sided
Returns
**y**ndarray of output values y
**ty**ndarray of the corresponding time axis for y
Notes
The output time axis starts at the sum of the starting values in x1 and x2
and ends at the sum of the two ending values in x1 and x2. The time steps used in x1(t) and x2(t) must match. The default extents of (‘f’,’f’) are used for signals that are active (have support) on or within t1 and t2 respectively. A right-sided signal such as exp(-a*t)*u(t) is semi-infinite, so it has extent ‘r’ and the convolution output will be truncated to display only the valid results.
Examples
```
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> import sk_dsp_comm.sigsys as ss
>>> tx = np.arange(-5,10,.01)
>>> x = ss.rect(tx-2,4) # pulse starts at t = 0
>>> y,ty = ss.conv_integral(x,tx,x,tx)
>>> plt.plot(ty,y) # expect a triangle on [0,8]
>>> plt.show()
```
([Source code](.//sigsys-2.py))
Now, consider a pulse convolved with an exponential.
```
>>> h = 4*np.exp(-4*tx)*ss.step(tx)
>>> y,ty = ss.conv_integral(x,tx,h,tx,extent=('f','r')) # note extents set
>>> plt.plot(ty,y) # expect a pulse charge and discharge waveform
```
sk_dsp_comm.sigsys.conv_sum(*x1*, *nx1*, *x2*, *nx2*, *extent=('f', 'f')*)[[source]](_modules/sk_dsp_comm/sigsys.html#conv_sum)[¶](#sk_dsp_comm.sigsys.conv_sum)
Discrete convolution of x1 and x2 with proper tracking of the output time axis.
Convolve two discrete-time signals using the SciPy function `scipy.signal.convolve()`.
The time (sequence) axes are managed from input to output. y[n] = x1[n]*x2[n].
Parameters
**x1**ndarray of signal x1 corresponding to nx1
**nx1**ndarray time axis for x1
**x2**ndarray of signal x2 corresponding to nx2
**nx2**ndarray time axis for x2
**extent**(‘e1’,’e2’) where ‘e1’, ‘e2’ may be ‘f’ finite, ‘r’ right-sided, or ‘l’ left-sided
Returns
**y**ndarray of output values y
**ny**ndarray of the corresponding sequence index n
Notes
The output time axis starts at the sum of the starting values in x1 and x2
and ends at the sum of the two ending values in x1 and x2. The default
extents of (‘f’,’f’) are used for signals that are active (have support)
on or within n1 and n2 respectively. A right-sided signal such as
a^n*u[n] is semi-infinite, so it has extent ‘r’ and the convolution output will be truncated to display only the valid results.
Examples
```
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> import sk_dsp_comm.sigsys as ss
>>> nx = np.arange(-5,10)
>>> x = ss.drect(nx,4)
>>> y,ny = ss.conv_sum(x,nx,x,nx)
>>> plt.stem(ny,y)
>>> plt.show()
```
([Source code](.//sigsys-3.py))
Consider a pulse convolved with an exponential. (‘r’ type extent)
```
>>> h = 0.5**nx*ss.dstep(nx)
>>> y,ny = ss.conv_sum(x,nx,h,nx,('f','r')) # note extents set
>>> plt.stem(ny,y) # expect a pulse charge and discharge sequence
```
sk_dsp_comm.sigsys.cpx_awgn(*x*, *es_n0*, *ns*)[[source]](_modules/sk_dsp_comm/sigsys.html#cpx_awgn)[¶](#sk_dsp_comm.sigsys.cpx_awgn)
Apply white Gaussian noise to a digital communications signal.
This function represents a complex baseband white Gaussian noise digital communications channel. The input signal array may be real or complex.
Parameters
**x**ndarray noise free complex baseband input signal.
**es_n0**set the channel Es/N0 (Eb/N0 for binary) level in dB
**ns**number of samples per symbol (bit)
Returns
**y**ndarray x with additive noise added.
Notes
Set the channel energy per symbol-to-noise power spectral
density ratio (Es/N0) in dB.
Examples
```
>>> from sk_dsp_comm.sigsys import nrz_bits, cpx_awgn
>>> x,b, data = nrz_bits(1000,10)
>>> # set Eb/N0 = 10 dB
>>> y = cpx_awgn(x,10,10)
```
sk_dsp_comm.sigsys.cruise_control(*wn*, *zeta*, *T*, *vcruise*, *vmax*, *tf_mode='H'*)[[source]](_modules/sk_dsp_comm/sigsys.html#cruise_control)[¶](#sk_dsp_comm.sigsys.cruise_control)
Cruise control with PI controller and hill disturbance.
This function returns various system function configurations for the cruise control Case Study example found in
the supplementary article. The plant model is obtained by linearizing the equations of motion, and the controller contains proportional and integral gain terms set via the closed-loop parameters natural frequency wn (rad/s) and damping zeta.
Parameters
**wn**closed-loop natural frequency in rad/s, nominally 0.1
**zeta**closed-loop damping factor, nominally 1.0
**T**vehicle time constant, nominally 10 s
**vcruise**cruise velocity set point, nominally 75 mph
**vmax**maximum vehicle velocity, nominally 120 mph
**tf_mode**‘H’, ‘HE’, ‘HVW’, or ‘HED’ controls the system function returned by the function
**‘H’**closed-loop system function V(s)/R(s)
**‘HE’**closed-loop system function E(s)/R(s)
**‘HVW’**closed-loop system function V(s)/W(s)
**‘HED’**closed-loop system function E(s)/D(s), where D is the hill disturbance input
Returns
**b**numerator coefficient ndarray
**a**denominator coefficient ndarray
Examples
```
>>> # return the closed-loop system function output/input velocity
>>> b,a = cruise_control(wn,zeta,T,vcruise,vmax,tf_mode='H')
>>> # return the closed-loop system function loop error/hill disturbance
>>> b,a = cruise_control(wn,zeta,T,vcruise,vmax,tf_mode='HED')
```
sk_dsp_comm.sigsys.deci24(*x*)[[source]](_modules/sk_dsp_comm/sigsys.html#deci24)[¶](#sk_dsp_comm.sigsys.deci24)
Decimate by L = 24 using Butterworth filters.
The decimation is done in three stages. Downsample by
2 and lowpass filter, downsample by 3 and lowpass filter, then downsample by 4 and lowpass filter. In all cases the lowpass filter is a 10th-order Butterworth lowpass.
Parameters
**x**ndarray of the input signal
Returns
**y**ndarray of the output signal
Notes
The cutoff frequency of the lowpass filters is 1/2, 1/3, and 1/4 to
track the downsampling by 2, 3, and 4 respectively.
Examples
```
>>> y = deci24(x)
```
sk_dsp_comm.sigsys.delta_eps(*t*, *eps*)[[source]](_modules/sk_dsp_comm/sigsys.html#delta_eps)[¶](#sk_dsp_comm.sigsys.delta_eps)
Rectangular pulse approximation to impulse function.
Parameters
**t**ndarray of time axis
**eps**pulse width
Returns
**d**ndarray containing the impulse approximation
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import delta_eps
>>> t = np.arange(-2,2,.001)
>>> d = delta_eps(t,.1)
>>> plt.plot(t,d)
>>> plt.show()
```
([Source code](.//sigsys-4.py))
sk_dsp_comm.sigsys.dimpulse(*n*)[[source]](_modules/sk_dsp_comm/sigsys.html#dimpulse)[¶](#sk_dsp_comm.sigsys.dimpulse)
Discrete impulse function delta[n].
Parameters
**n**ndarray of the time axis
Returns
**x**ndarray of the signal delta[n]
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import dimpulse
>>> n = arange(-5,5)
>>> x = dimpulse(n)
>>> plt.stem(n,x)
>>> plt.show()
```
([Source code](.//sigsys-5.py))
Shift the delta left by 2.
```
>>> x = dimpulse(n+2)
>>> plt.stem(n,x)
```
sk_dsp_comm.sigsys.downsample(*x*, *M*, *p=0*)[[source]](_modules/sk_dsp_comm/sigsys.html#downsample)[¶](#sk_dsp_comm.sigsys.downsample)
Downsample by factor M
Keep every Mth sample of the input. The phase of the input samples kept can be selected.
Parameters
**x**ndarray of input signal values
**M**downsample factor
**p**phase of decimated value, 0 (default), 1, …, M-1
Returns
**y**ndarray of the output signal values
Examples
```
>>> y = downsample(x,3)
>>> y = downsample(x,3,1)
```
sk_dsp_comm.sigsys.drect(*n*, *N*)[[source]](_modules/sk_dsp_comm/sigsys.html#drect)[¶](#sk_dsp_comm.sigsys.drect)
Discrete rectangle function of duration N samples.
The signal is active on the interval 0 <= n <= N-1. Also known as the rectangular window function, which is available in
scipy.signal.
Parameters
**n**ndarray of the time axis
**N**the pulse duration
Returns
**x**ndarray of the signal
Notes
The discrete rectangle turns on at n = 0, off at n = N-1 and
has duration of exactly N samples.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import drect
>>> n = arange(-5,5)
>>> x = drect(n, N=3)
>>> plt.stem(n,x)
>>> plt.show()
```
([Source code](.//sigsys-6.py))
Shift the rectangle left by 2.
```
>>> x = drect(n+2, N=3)
>>> plt.stem(n,x)
```
sk_dsp_comm.sigsys.dstep(*n*)[[source]](_modules/sk_dsp_comm/sigsys.html#dstep)[¶](#sk_dsp_comm.sigsys.dstep)
Discrete step function u[n].
Parameters
**n**ndarray of the time axis
Returns
**x**ndarray of the signal u[n]
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import dstep
>>> n = arange(-5,5)
>>> x = dstep(n)
>>> plt.stem(n,x)
>>> plt.show()
```
([Source code](.//sigsys-7.py))
Shift the step left by 2.
```
>>> x = dstep(n+2)
>>> plt.stem(n,x)
```
sk_dsp_comm.sigsys.env_det(*x*)[[source]](_modules/sk_dsp_comm/sigsys.html#env_det)[¶](#sk_dsp_comm.sigsys.env_det)
Ideal envelope detector.
This function retains the positive half cycles of the input signal.
Parameters
**x**ndarray of the input signal
Returns
**y**ndarray of the output signal
Examples
```
>>> from numpy import arange, cos, pi
>>> from sk_dsp_comm.sigsys import am_tx, env_det
>>> n = arange(0,100)
>>> # 1 kHz message signal
>>> m = cos(2*pi*1000/8000.*n)
>>> x192, t192, m24 = am_tx(m,0.8,fc=75e3)
>>> y = env_det(x192)
```
sk_dsp_comm.sigsys.ex6_2(*n*)[[source]](_modules/sk_dsp_comm/sigsys.html#ex6_2)[¶](#sk_dsp_comm.sigsys.ex6_2)
Generate a triangle pulse as described in Example 6-2 of Chapter 6.
You need to supply an index array n that covers at least [-2, 5].
The function returns the hard-coded signal of the example.
Parameters
**n**time index ndarray covering at least -2 to +5.
Returns
**x**ndarray of signal samples in x
Examples
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import sigsys as ss
>>> n = np.arange(-5,8)
>>> x = ss.ex6_2(n)
>>> plt.stem(n,x) # creates a stem plot of x vs n
```
([Source code](.//sigsys-8.py))
sk_dsp_comm.sigsys.eye_plot(*x*, *l*, *s=0*)[[source]](_modules/sk_dsp_comm/sigsys.html#eye_plot)[¶](#sk_dsp_comm.sigsys.eye_plot)
Eye pattern plot of a baseband digital communications waveform.
The signal must be real, but can be multivalued in terms of the underlying modulation scheme. Used for BPSK eye plots in the Case Study article.
Parameters
**x**ndarray of the real input data vector/array
**l**display length in samples (usually two symbols)
**s**start index
Returns
**Nothing**A plot window opens containing the eye plot
Notes
Increase s to eliminate filter transients.
Examples
1000 bits at 10 samples per bit with ‘rc’ shaping.
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import sigsys as ss
>>> x,b, data = ss.nrz_bits(1000,10,'rc')
>>> ss.eye_plot(x,20,60)
```
([Source code](.//sigsys-9.py))
sk_dsp_comm.sigsys.fir_iir_notch(*fi*, *fs*, *r=0.95*)[[source]](_modules/sk_dsp_comm/sigsys.html#fir_iir_notch)[¶](#sk_dsp_comm.sigsys.fir_iir_notch)
Design a second-order FIR or IIR notch filter.
A second-order FIR notch filter is created by placing conjugate zeros on the unit circle at the angle corresponding to the notch center frequency. The IIR notch variation places a pair of conjugate poles at the same angle, but with radius r < 1 (typically 0.9 to 0.95).
Parameters
**fi**notch frequency in Hz relative to fs
**fs**the sampling frequency in Hz, e.g. 8000
**r**pole radius for IIR version, default = 0.95
Returns
**b**numerator coefficient ndarray
**a**denominator coefficient ndarray
Notes
If the pole radius is 0 then an FIR version is created, that is
there are no poles except at z = 0.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import sigsys as ss
```
```
>>> b_FIR, a_FIR = ss.fir_iir_notch(1000,8000,0)
>>> ss.zplane(b_FIR, a_FIR)
>>> plt.show()
```
([Source code](.//sigsys-10.py))
```
>>> b_IIR, a_IIR = ss.fir_iir_notch(1000,8000)
>>> ss.zplane(b_IIR, a_IIR)
```
sk_dsp_comm.sigsys.from_wav(*filename*)[[source]](_modules/sk_dsp_comm/sigsys.html#from_wav)[¶](#sk_dsp_comm.sigsys.from_wav)
Read a wave file.
A wrapper function for scipy.io.wavfile.read that also includes int16 to float [-1,1] scaling.
Parameters
**filename**file name string
Returns
**fs**sampling frequency in Hz
**x**ndarray of normalized to 1 signal samples
Examples
```
>>> fs,x = from_wav('test_file.wav')
```
sk_dsp_comm.sigsys.fs_approx(*Xk*, *fk*, *t*)[[source]](_modules/sk_dsp_comm/sigsys.html#fs_approx)[¶](#sk_dsp_comm.sigsys.fs_approx)
Synthesize periodic signal x(t) using Fourier series coefficients at harmonic frequencies
Assume the signal is real so coefficients Xk are supplied for nonnegative indices. The negative index coefficients are assumed to be complex conjugates.
Parameters
**Xk**ndarray of complex Fourier series coefficients
**fk**ndarray of harmonic frequencies in Hz
**t**ndarray time axis corresponding to output signal array x_approx
Returns
**x_approx**ndarray of periodic waveform approximation over time span t
Examples
```
>>> from numpy import arange
>>> from matplotlib.pyplot import plot
>>> from sk_dsp_comm.sigsys import fs_approx
>>> t = arange(0,2,.002)
>>> # a 20% duty cycle pulse train
>>> n = arange(0,20,1) # 0 to 19th harmonic
>>> fk = 1*n # fundamental 1 Hz, so the period is 1 s
>>> # Xk: coefficients for these harmonics, e.g., computed with fs_coeff()
>>> x_approx = fs_approx(Xk,fk,t)
>>> plot(t,x_approx)
```
sk_dsp_comm.sigsys.fs_coeff(*xp*, *N*, *f0*, *one_side=True*)[[source]](_modules/sk_dsp_comm/sigsys.html#fs_coeff)[¶](#sk_dsp_comm.sigsys.fs_coeff)
Numerically approximate the Fourier series coefficients given periodic x(t).
The input is assumed to represent one period of the waveform x(t) that has been uniformly sampled. The number of samples supplied to represent one period of the waveform sets the sampling rate.
Parameters
**xp**ndarray of one period of the waveform x(t)
**N**maximum Fourier series coefficient, [0,…,N]
**f0**fundamental frequency used to form fk.
Returns
**Xk**ndarray of the coefficients over indices [0,1,…,N]
**fk**ndarray of the harmonic frequencies [0, f0,2f0,…,Nf0]
Notes
len(xp) >= 2*N+1 as len(xp) is the fft length.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> import sk_dsp_comm.sigsys as ss
>>> t = arange(0,1,1/1024.)
>>> # a 20% duty cycle pulse starting at t = 0
>>> x_rect = ss.rect(t-.1,0.2)
>>> Xk, fk = ss.fs_coeff(x_rect,25,10)
>>> # plot the spectral lines
>>> ss.line_spectra(fk,Xk,'mag')
>>> plt.show()
```
([Source code](.//sigsys-11.py))
sk_dsp_comm.sigsys.ft_approx(*x*, *t*, *Nfft*)[[source]](_modules/sk_dsp_comm/sigsys.html#ft_approx)[¶](#sk_dsp_comm.sigsys.ft_approx)
Approximate the Fourier transform of a finite duration signal using scipy.signal.freqz()
Parameters
**x**input signal array
**t**time array used to create x(t)
**Nfft**the number of frequency domain points used to approximate X(f) on the interval [-fs/2, fs/2], where fs = 1/Dt, Dt being the time spacing in array t
Returns
**f**frequency axis array in Hz
**X**the Fourier transform approximation (complex)
Examples
```
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> import sk_dsp_comm.sigsys as ss
>>> fs = 100 # sampling rate in Hz
>>> tau = 1
>>> t = np.arange(-5,5,1/fs)
>>> x0 = ss.rect(t-.5,tau)
>>> plt.figure(figsize=(6,5))
>>> plt.plot(t,x0)
>>> plt.grid()
>>> plt.ylim([-0.1,1.1])
>>> plt.xlim([-2,2])
>>> plt.title(r'Exact Waveform')
>>> plt.xlabel(r'Time (s)')
>>> plt.ylabel(r'$x_0(t)$')
>>> plt.show()
```
([Source code](.//sigsys-12.py))
```
>>> # FT Exact Plot
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> import sk_dsp_comm.sigsys as ss
>>> fs = 100 # sampling rate in Hz
>>> tau = 1
>>> t = np.arange(-5,5,1/fs)
>>> x0 = ss.rect(t-.5,tau)
>>> fe = np.arange(-10,10,.01)
>>> X0e = tau*np.sinc(fe*tau)
>>> plt.plot(fe,abs(X0e))
>>> #plot(f,angle(X0))
>>> plt.grid()
>>> plt.xlim([-10,10])
>>> plt.title(r'Exact (Theory) Spectrum Magnitude')
>>> plt.xlabel(r'Frequency (Hz)')
>>> plt.ylabel(r'$|X_0e(f)|$')
>>> plt.show()
```
```
>>> # FT Approximation Plot
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> import sk_dsp_comm.sigsys as ss
>>> fs = 100 # sampling rate in Hz
>>> tau = 1
>>> t = np.arange(-5,5,1/fs)
>>> x0 = ss.rect(t-.5,tau)
>>> f,X0 = ss.ft_approx(x0,t,4096)
>>> plt.plot(f,abs(X0))
>>> #plt.plot(f,angle(X0))
>>> plt.grid()
>>> plt.xlim([-10,10])
>>> plt.title(r'Approximation Spectrum Magnitude')
>>> plt.xlabel(r'Frequency (Hz)')
>>> plt.ylabel(r'$|X_0(f)|$');
>>> plt.tight_layout()
>>> plt.show()
```
sk_dsp_comm.sigsys.interp24(*x*)[[source]](_modules/sk_dsp_comm/sigsys.html#interp24)[¶](#sk_dsp_comm.sigsys.interp24)
Interpolate by L = 24 using Butterworth filters.
The interpolation is done using three stages. Upsample by
L = 2 and lowpass filter, upsample by 3 and lowpass filter, then upsample by L = 4 and lowpass filter. In all cases the lowpass filter is a 10th-order Butterworth lowpass.
Parameters
**x**ndarray of the input signal
Returns
**y**ndarray of the output signal
Notes
The cutoff frequency of the lowpass filters is 1/2, 1/3, and 1/4 to
track the upsampling by 2, 3, and 4 respectively.
Examples
```
>>> y = interp24(x)
```
sk_dsp_comm.sigsys.line_spectra(*fk*, *Xk*, *mode*, *sides=2*, *linetype='b'*, *lwidth=2*, *floor_dB=-100*, *fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/sigsys.html#line_spectra)[¶](#sk_dsp_comm.sigsys.line_spectra)
Plot the Fourier series line spectra given the coefficients.
This function plots two-sided and one-sided line spectra of a periodic signal given the complex exponential Fourier series coefficients and the corresponding harmonic frequencies.
Parameters
**fk**vector of real sinusoid frequencies
**Xk**magnitude and phase at each positive frequency in fk
**mode**‘mag’ => magnitude plot, ‘magdB’ => magnitude in dB plot, ‘magdBn’ => magnitude in dB normalized, ‘phase’ => a phase plot in radians
**sides**2; 2-sided or 1-sided
**linetype**line type per Matplotlib definitions, e.g., ‘b’;
**lwidth**2; linewidth in points
**fsize**optional figure size in inches, default = (6,4) inches
Returns
**Nothing**A plot window opens containing the line spectrum plot
Notes
Since real signals are assumed, the frequencies of fk are 0 and/or positive numbers. The supplied Fourier coefficients correspond to these frequencies.
Examples
```
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> from sk_dsp_comm.sigsys import line_spectra
>>> n = np.arange(0,25)
>>> # a pulse train with 10 Hz fundamental and 20% duty cycle
>>> fk = n*10
>>> Xk = np.sinc(n*10*.02)*np.exp(-1j*2*np.pi*n*10*.01) # 1j = sqrt(-1)
```
```
>>> line_spectra(fk,Xk,'mag')
>>> plt.show()
```
([Source code](.//sigsys-13.py))
```
>>> line_spectra(fk,Xk,'phase')
```
sk_dsp_comm.sigsys.lms_ic(*r*, *M*, *mu*, *delta=1*)[[source]](_modules/sk_dsp_comm/sigsys.html#lms_ic)[¶](#sk_dsp_comm.sigsys.lms_ic)
Least mean square (LMS) interference canceller adaptive filter.
A complete LMS adaptive filter simulation function for the case of interference cancellation. Used in the digital filtering case study.
Parameters
**r**ndarray of the noisy input signal
**M**FIR Filter length (order M-1)
**mu**LMS step-size
**delta**decorrelation delay between the input and the FIR filter input, used to generate the reference signal (default 1)
Returns
**n**ndarray Index vector
**r**ndarray noisy (with interference) input signal
**r_hat**ndarray filtered output (NB_hat[n])
**e**ndarray error sequence (WB_hat[n])
**ao**ndarray final value of weight vector
**F**ndarray frequency response axis vector
**Ao**ndarray frequency response of filter
Examples
```
>>> # import a speech signal
>>> fs,s = from_wav('OSR_us_000_0030_8k.wav')
>>> # add interference at 1kHz and 1.5 kHz and
>>> # truncate to 5 seconds
>>> r = soi_snoi_gen(s,10,5*8000,[1000, 1500])
>>> # simulate with a 64 tap FIR and mu = 0.005
>>> n,r,r_hat,e,ao,F,Ao = lms_ic(r,64,0.005)
```
sk_dsp_comm.sigsys.lp_samp(*fb*, *fs*, *fmax*, *N*, *shape='tri'*, *fsize=(6, 4)*)[[source]](_modules/sk_dsp_comm/sigsys.html#lp_samp)[¶](#sk_dsp_comm.sigsys.lp_samp)
Lowpass sampling theorem plotting function.
Display the spectrum of a sampled signal after setting the bandwidth,
sampling frequency, maximum display frequency, and spectral shape.
Parameters
**fb**spectrum lowpass bandwidth in Hz
**fs**sampling frequency in Hz
**fmax**plot over [-fmax,fmax]
**shape**‘tri’ or ‘line’
**N**number of translates, N positive and N negative
**fsize**the size of the figure window, default (6,4)
Returns
**Nothing**A plot window opens containing the spectrum plot
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm.sigsys import lp_samp
```
No aliasing since the bandwidth 10 Hz < 25/2, i.e., fs > 2*fb.
```
>>> lp_samp(10,25,50,10)
>>> plt.show()
```
([Source code](.//sigsys-14.py))
Now aliasing since the bandwidth 15 Hz > 25/2, i.e., fs < 2*fb.
```
>>> lp_samp(15,25,50,10)
```
sk_dsp_comm.sigsys.lp_tri(*f*, *fb*)[[source]](_modules/sk_dsp_comm/sigsys.html#lp_tri)[¶](#sk_dsp_comm.sigsys.lp_tri)
Triangle spectral shape function used by [`lp_samp()`](#sk_dsp_comm.sigsys.lp_samp).
Parameters
**f**ndarray containing frequency samples
**fb**the bandwidth as a float constant
Returns
**x**ndarray of spectrum samples for a single triangle shape
Notes
This is a support function for the lowpass spectrum plotting function
[`lp_samp()`](#sk_dsp_comm.sigsys.lp_samp).
Examples
```
>>> x = lp_tri(f, fb)
```
sk_dsp_comm.sigsys.m_seq(*m*)[[source]](_modules/sk_dsp_comm/sigsys.html#m_seq)[¶](#sk_dsp_comm.sigsys.m_seq)
Generate an m-sequence ndarray using an all-ones initialization.
Available m-sequence (PN) generators include m = 2, 3, …, 12, and 16.
Parameters
**m**the number of shift registers. 2,3, .., 12, & 16
Returns
**c**ndarray of one period of the m-sequence
Notes
The sequence period is 2**m - 1 (2^m - 1).
Examples
```
>>> c = m_seq(5)
```
sk_dsp_comm.sigsys.my_psd(*x*, *NFFT=1024*, *Fs=1*)[[source]](_modules/sk_dsp_comm/sigsys.html#my_psd)[¶](#sk_dsp_comm.sigsys.my_psd)
A local version of Matplotlib’s PSD function that returns the plot arrays.
A mlab.psd wrapper function that returns two ndarrays;
makes no attempt to auto plot anything.
Parameters
**x**ndarray input signal
**NFFT**a power of two, e.g., 2**10 = 1024
**Fs**the sampling rate in Hz
Returns
**Px**ndarray of the power spectrum estimate
**f**ndarray of frequency values
Notes
This function makes it easier to overlay spectrum plots because you have better control over the axis scaling than when using psd()
in the autoscale mode.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import log10
>>> from sk_dsp_comm import sigsys as ss
>>> x,b, data = ss.nrz_bits(10000,10)
>>> Px,f = ss.my_psd(x,2**10,10)
>>> plt.plot(f, 10*log10(Px))
>>> plt.ylabel("Power Spectral Density (dB)")
>>> plt.xlabel("Frequency (Hz)")
>>> plt.show()
```
([Source code](.//sigsys-15.py))
sk_dsp_comm.sigsys.nrz_bits(*n_bits*, *ns*, *pulse='rect'*, *alpha=0.25*, *m=6*)[[source]](_modules/sk_dsp_comm/sigsys.html#nrz_bits)[¶](#sk_dsp_comm.sigsys.nrz_bits)
Generate non-return-to-zero (NRZ) data bits with pulse shaping.
A baseband digital data signal using +/-1 amplitude signal values and including pulse shaping.
Parameters
**n_bits**number of NRZ +/-1 data bits to produce
**ns**the number of samples per bit,
**pulse**‘rect’, ‘rc’, ‘src’ (default ‘rect’)
**alpha**excess bandwidth factor(default 0.25)
**m**single sided pulse duration (default = 6)
Returns
**x**ndarray of the NRZ signal values
**b**ndarray of the pulse shape
**data**ndarray of the underlying data bits
Notes
Pulse shapes include ‘rect’ (rectangular), ‘rc’ (raised cosine),
‘src’ (root raised cosine). The actual pulse length is 2*M+1 samples.
This function is used by bpsk_tx in the Case Study article.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm.sigsys import nrz_bits
>>> from numpy import arange
>>> x,b,data = nrz_bits(100, 10)
>>> t = arange(len(x))
>>> plt.plot(t, x)
>>> plt.ylim([-1.01, 1.01])
>>> plt.show()
```
([Source code](.//sigsys-16.py))
sk_dsp_comm.sigsys.nrz_bits2(*data*, *Ns*, *pulse='rect'*, *alpha=0.25*, *M=6*)[[source]](_modules/sk_dsp_comm/sigsys.html#nrz_bits2)[¶](#sk_dsp_comm.sigsys.nrz_bits2)
Generate non-return-to-zero (NRZ) data bits with pulse shaping with user data
A baseband digital data signal using +/-1 amplitude signal values and including pulse shaping. The data sequence is user supplied.
Parameters
**data**ndarray of the data bits as 0/1 values
**Ns**the number of samples per bit,
**pulse**‘rect’, ‘rc’, ‘src’ (default ‘rect’)
**alpha**excess bandwidth factor(default 0.25)
**M**single sided pulse duration (default = 6)
Returns
**x**ndarray of the NRZ signal values
**b**ndarray of the pulse shape
Notes
Pulse shapes include ‘rect’ (rectangular), ‘rc’ (raised cosine),
‘src’ (root raised cosine). The actual pulse length is 2*M+1 samples.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm.sigsys import nrz_bits2
>>> from sk_dsp_comm.sigsys import m_seq
>>> from numpy import arange
>>> x,b = nrz_bits2(m_seq(5),10)
>>> t = arange(len(x))
>>> plt.ylim([-1.01, 1.01])
>>> plt.plot(t,x)
```
([Source code](.//sigsys-17.py))
sk_dsp_comm.sigsys.oa_filter(*x*, *h*, *N*, *mode=0*)[[source]](_modules/sk_dsp_comm/sigsys.html#oa_filter)[¶](#sk_dsp_comm.sigsys.oa_filter)
Overlap and add transform domain FIR filtering.
This function implements the classical overlap and add method of transform domain filtering using a length P FIR filter.
Parameters
**x**input signal to be filtered as an ndarray
**h**FIR filter coefficients as an ndarray of length P
**N**FFT size > P, typically a power of two
**mode**0 or 1, when 1 returns a diagnostic matrix
Returns
**y**the filtered output as an ndarray
**y_mat**an ndarray whose rows are the individual overlap outputs.
Notes
y_mat is used for diagnostics and to gain understanding of the algorithm.
Examples
```
>>> import numpy as np
>>> from sk_dsp_comm.sigsys import oa_filter
>>> n = np.arange(0,100)
>>> x = np.cos(2*np.pi*0.05*n)
>>> h = np.ones(10)
>>> y = oa_filter(x,h,32)  # FFT length 32 > filter length 10
>>> # set mode = 1
>>> y, y_mat = oa_filter(x,h,32,1)
```
sk_dsp_comm.sigsys.os_filter(*x*, *h*, *N*, *mode=0*)[[source]](_modules/sk_dsp_comm/sigsys.html#os_filter)[¶](#sk_dsp_comm.sigsys.os_filter)
Overlap and save transform domain FIR filtering.
This function implements the classical overlap and save method of transform domain filtering using a length P FIR filter.
Parameters
**x**input signal to be filtered as an ndarray
**h**FIR filter coefficients as an ndarray of length P
**N**FFT size > P, typically a power of two
**mode**0 or 1, when 1 returns a diagnostic matrix
Returns
**y**the filtered output as an ndarray
**y_mat**an ndarray whose rows are the individual overlap outputs.
Notes
y_mat is used for diagnostics and to gain understanding of the algorithm.
Examples
```
>>> from numpy import arange, cos, pi, ones
>>> from sk_dsp_comm.sigsys import os_filter
>>> n = arange(0,100)
>>> x = cos(2*pi*0.05*n)
>>> h = ones(10)
>>> y = os_filter(x,h,32)  # FFT length 32 > filter length 10
>>> # set mode = 1
>>> y, y_mat = os_filter(x,h,32,1)
```
sk_dsp_comm.sigsys.peaking(*GdB*, *fc*, *Q=3.5*, *fs=44100.0*)[[source]](_modules/sk_dsp_comm/sigsys.html#peaking)[¶](#sk_dsp_comm.sigsys.peaking)
A second-order peaking filter having GdB gain at fc and approximately 0 dB otherwise.
The filter coefficients returned correspond to a biquadratic system function containing five parameters.
Parameters
**GdB**Peaking filter gain in dB
**fc**Center frequency in Hz
**Q**Filter Q which is inversely proportional to bandwidth
**fs**Sampling frequency in Hz
Returns
**b**ndarray containing the numerator filter coefficients
**a**ndarray containing the denominator filter coefficients
Examples
```
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> from sk_dsp_comm.sigsys import peaking
>>> from scipy import signal
>>> b,a = peaking(2.0,500)
>>> f = np.logspace(1,5,400)
>>> w,H = signal.freqz(b,a,2*np.pi*f/44100)
>>> plt.semilogx(f,20*np.log10(abs(H)))
>>> plt.ylabel("Power Spectral Density (dB)")
>>> plt.xlabel("Frequency (Hz)")
>>> plt.show()
```
([Source code](.//sigsys-18.py))
```
>>> b,a = peaking(-5.0,500,4)
>>> w,H = signal.freqz(b,a,2*np.pi*f/44100)
>>> plt.semilogx(f,20*np.log10(abs(H)))
>>> plt.ylabel("Power Spectral Density (dB)")
>>> plt.xlabel("Frequency (Hz)")
```
sk_dsp_comm.sigsys.pn_gen(*n_bits*, *m=5*)[[source]](_modules/sk_dsp_comm/sigsys.html#pn_gen)[¶](#sk_dsp_comm.sigsys.pn_gen)
Maximal length sequence signal generator.
Generates a sequence of 0/1 bits of n_bits duration. The bits themselves are obtained from an m-sequence of period 2**m - 1. Available m-sequence
(PN) generators include m = 2, 3, …, 12, and 16.
Parameters
**n_bits**the number of bits to generate
**m**the number of shift registers. 2,3, .., 12, & 16
Returns
**PN**ndarray of the generator output over N_bits
Notes
The sequence is periodic having period 2**m - 1 (2^m - 1).
Examples
```
>>> # A 15 bit period signal over 50 bits
>>> PN = pn_gen(50,4)
```
sk_dsp_comm.sigsys.position_cd(*Ka*, *out_type='fb_exact'*)[[source]](_modules/sk_dsp_comm/sigsys.html#position_cd)[¶](#sk_dsp_comm.sigsys.position_cd)
CD sled position control case study of Chapter 18.
The function returns the closed-loop and open-loop system functions for a CD/DVD sled position control system. The loop amplifier gain Ka is the only variable that may be changed. Which system function is returned is selected via out_type.
Parameters
**Ka**loop amplifier gain, start with 50.
**out_type**‘open_loop’ for open loop system function
**out_type**‘fb_approx’ for closed-loop approximation
**out_type**‘fb_exact’ for closed-loop exact
Returns
**b**numerator coefficient ndarray
**a**denominator coefficient ndarray
Notes
With the exception of the loop amplifier gain, all other parameters are hard-coded from the Case Study example.
Examples
```
>>> b,a = position_cd(Ka,'fb_approx')
>>> b,a = position_cd(Ka,'fb_exact')
```
sk_dsp_comm.sigsys.prin_alias(*f_in*, *fs*)[[source]](_modules/sk_dsp_comm/sigsys.html#prin_alias)[¶](#sk_dsp_comm.sigsys.prin_alias)
Calculate the principal alias frequencies.
Given an array of input frequencies the function returns an array of principle alias frequencies.
Parameters
**f_in**ndarray of input frequencies
**fs**sampling frequency
Returns
**f_out**ndarray of principal alias frequencies
Examples
```
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import prin_alias
>>> # Linear frequency sweep from 0 to 50 Hz
>>> f_in = arange(0,50,0.1)
>>> # Calculate the principal alias with fs = 10 Hz
>>> f_out = prin_alias(f_in,10)
```
sk_dsp_comm.sigsys.rc_imp(*Ns*, *alpha*, *M=6*)[[source]](_modules/sk_dsp_comm/sigsys.html#rc_imp)[¶](#sk_dsp_comm.sigsys.rc_imp)
A truncated raised cosine pulse used in digital communications.
The pulse shaping factor \(0 < \alpha < 1\) is required as well as the truncation factor M which sets the pulse duration to be 2*M*Tsymbol.
Parameters
**Ns**number of samples per symbol
**alpha**excess bandwidth factor on (0, 1), e.g., 0.35
**M**equals RC one-sided symbol truncation factor
Returns
**b**ndarray containing the pulse shape
Notes
The pulse shape b is typically used as the FIR filter coefficients when forming a pulse shaped digital communications waveform.
Examples
Ten samples per symbol and alpha = 0.35.
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import rc_imp
>>> b = rc_imp(10,0.35)
>>> n = arange(-10*6,10*6+1)
>>> plt.stem(n,b)
>>> plt.show()
```
([Source code](.//sigsys-19.py))
sk_dsp_comm.sigsys.rect(*t*, *tau*)[[source]](_modules/sk_dsp_comm/sigsys.html#rect)[¶](#sk_dsp_comm.sigsys.rect)
Approximation to the rectangle pulse Pi(t/tau).
In this numerical version of Pi(t/tau) the pulse is active over -tau/2 <= t <= tau/2.
Parameters
**t**ndarray of the time axis
**tau**the pulse width
Returns
**x**ndarray of the signal Pi(t/tau)
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import rect
>>> t = arange(-1,5,.01)
>>> x = rect(t,1.0)
>>> plt.plot(t,x)
>>> plt.ylim([0, 1.01])
>>> plt.show()
```
([Source code](.//sigsys-20.py))
To turn on the pulse at t = 1 shift t.
```
>>> x = rect(t - 1.0,1.0)
>>> plt.plot(t,x)
>>> plt.ylim([0, 1.01])
```
sk_dsp_comm.sigsys.rect_conv(*n*, *n_len*)[[source]](_modules/sk_dsp_comm/sigsys.html#rect_conv)[¶](#sk_dsp_comm.sigsys.rect_conv)
The theoretical result of convolving two rectangle sequences.
The result is a triangle. The solution is based on pure analysis, coded simply rather than efficiently.
Parameters
**n**ndarray of time axis
**n_len**rectangle pulse duration
Returns
**y**ndarray of the output signal
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import rect_conv
>>> n = arange(-5,20)
>>> y = rect_conv(n,6)
>>> plt.plot(n, y)
>>> plt.show()
```
([Source code](.//sigsys-21.py))
sk_dsp_comm.sigsys.scatter(*x*, *ns*, *start*)[[source]](_modules/sk_dsp_comm/sigsys.html#scatter)[¶](#sk_dsp_comm.sigsys.scatter)
Sample a baseband digital communications waveform at the symbol spacing.
Parameters
**x**ndarray of the input digital comm signal
**ns**number of samples per symbol (bit)
**start**the array index to start the sampling
Returns
**xI**ndarray of the real part of x following sampling
**xQ**ndarray of the imaginary part of x following sampling
Notes
Normally the signal is complex, so the scatter plot contains
clusters at points in the complex plane. For a binary signal such as BPSK, the point centers are nominally +/-1 on the real axis. Start is used to eliminate transients from the FIR pulse shaping filters from appearing in the scatter plot.
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import sigsys as ss
>>> x,b, data = ss.nrz_bits(1000,10,'rc')
>>> # Add some noise so points are now scattered about +/-1
>>> y = ss.cpx_awgn(x,20,10)
>>> yI,yQ = ss.scatter(y,10,60)
>>> plt.plot(yI,yQ,'.')
>>> plt.axis('equal')
>>> plt.ylabel("Quadrature")
>>> plt.xlabel("In-Phase")
>>> plt.grid()
>>> plt.show()
```
([Source code](.//sigsys-22.py))
sk_dsp_comm.sigsys.simple_quant(*x*, *b_tot*, *x_max*, *limit*)[[source]](_modules/sk_dsp_comm/sigsys.html#simple_quant)[¶](#sk_dsp_comm.sigsys.simple_quant)
A simple rounding quantizer for bipolar signals having Btot = B + 1 bits.
This function models a quantizer that employs b_tot bits and offers one of three selectable limiting types: saturation, overflow, and none.
The quantizer is bipolar and implements rounding.
Parameters
**x**input signal ndarray to be quantized
**b_tot**total number of bits in the quantizer, e.g. 16
**x_max**quantizer full-scale dynamic range is [-Xmax, Xmax]
**limit**limiting of the form ‘sat’, ‘over’, ‘none’
Returns
**xq**quantized output ndarray
Notes
The quantization error can be formed as e = xq - x
Examples
```
>>> import matplotlib.pyplot as plt
>>> from matplotlib.mlab import psd
>>> import numpy as np
>>> from sk_dsp_comm import sigsys as ss
>>> n = np.arange(0,10000)
>>> x = np.cos(2*np.pi*0.211*n)
>>> y = ss.sinusoid_awgn(x,90)
>>> Px, f = psd(y,2**10,Fs=1)
>>> plt.plot(f, 10*np.log10(Px))
>>> plt.ylim([-80, 25])
>>> plt.ylabel("Power Spectral Density (dB)")
>>> plt.xlabel(r'Normalized Frequency $\omega/2\pi$')
>>> plt.show()
```
([Source code](.//sigsys-23.py))
```
>>> yq = ss.simple_quant(y,12,1,'sat')
>>> Px, f = psd(yq,2**10,Fs=1)
>>> plt.plot(f, 10*np.log10(Px))
>>> plt.ylim([-80, 25])
>>> plt.ylabel("Power Spectral Density (dB)")
>>> plt.xlabel(r'Normalized Frequency $\omega/2\pi$')
>>> plt.show()
```
sk_dsp_comm.sigsys.simple_sa(*x*, *NS*, *NFFT*, *fs*, *NAVG=1*, *window='boxcar'*)[[source]](_modules/sk_dsp_comm/sigsys.html#simple_sa)[¶](#sk_dsp_comm.sigsys.simple_sa)
Spectral estimation using windowing and averaging.
This function implements averaged periodogram spectral estimation similar to Matplotlib’s psd() function, but more specialized for the windowing case study of Chapter 16.
Parameters
**x**ndarray containing the input signal
**NS**The subrecord length less zero padding, e.g. NS < NFFT
**NFFT**FFT length, e.g., 1024 = 2**10
**fs**sampling rate in Hz
**NAVG**the number of averages, e.g., 1 for deterministic signals
**window**hardcoded window ‘boxcar’ (default) or ‘hanning’
Returns
**f**ndarray frequency axis in Hz on [0, fs/2]
**Sx**ndarray the power spectrum estimate
Notes
The function also prints the maximum number of averages K possible for the input data record.
Examples
```
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> from sk_dsp_comm import sigsys as ss
>>> n = np.arange(0,2048)
>>> x = np.cos(2*np.pi*1000/10000*n) + 0.01*np.cos(2*np.pi*3000/10000*n)
>>> f, Sx = ss.simple_sa(x,128,512,10000)
>>> plt.plot(f, 10*np.log10(Sx))
>>> plt.ylim([-80, 0])
>>> plt.xlabel("Frequency (Hz)")
>>> plt.ylabel("Power Spectral Density (dB)")
>>> plt.show()
```
([Source code](.//sigsys-24.py))
With a hanning window.
```
>>> f, Sx = ss.simple_sa(x,256,1024,10000,window='hanning')
>>> plt.plot(f, 10*np.log10(Sx))
>>> plt.xlabel("Frequency (Hz)")
>>> plt.ylabel("Power Spectral Density (dB)")
>>> plt.ylim([-80, 0])
```
sk_dsp_comm.sigsys.sinusoid_awgn(*x*, *SNRdB*)[[source]](_modules/sk_dsp_comm/sigsys.html#sinusoid_awgn)[¶](#sk_dsp_comm.sigsys.sinusoid_awgn)
Add white Gaussian noise to a single real sinusoid.
Input a single sinusoid to this function and it returns a noisy sinusoid at a specific SNR value in dB. Sinusoid power is calculated using np.var.
Parameters
**x**Input signal as ndarray consisting of a single sinusoid
**SNRdB**SNR in dB for output sinusoid
Returns
**y**Noisy sinusoid return vector
Examples
```
>>> from numpy import arange, cos, pi
>>> from sk_dsp_comm.sigsys import sinusoid_awgn
>>> # set the SNR to 10 dB
>>> n = arange(0,10000)
>>> x = cos(2*pi*0.04*n)
>>> y = sinusoid_awgn(x,10.0)
```
sk_dsp_comm.sigsys.soi_snoi_gen(*s*, *SIR_dB*, *N*, *fi*, *fs=8000*)[[source]](_modules/sk_dsp_comm/sigsys.html#soi_snoi_gen)[¶](#sk_dsp_comm.sigsys.soi_snoi_gen)
Add an interfering sinusoidal tone to the input signal at a given SIR_dB.
The input is the signal of interest (SOI), and a number of sinusoid signals not of interest (SNOI) are added to the SOI at a prescribed signal-to-
interference SIR level in dB.
Parameters
**s**ndarray of signal of SOI
**SIR_dB**interference level in dB
**N**Trim input signal s to length N + 1 samples
**fi**ndarray of interference frequencies in Hz
**fs**sampling rate in Hz, default is 8000 Hz
Returns
**r**ndarray of combined signal plus interference of length N+1 samples
Examples
```
>>> # load a speech ndarray and trim to 5*8000 + 1 samples
>>> fs,s = from_wav('OSR_us_000_0030_8k.wav')
>>> r = soi_snoi_gen(s,10,5*8000,[1000, 1500])
```
sk_dsp_comm.sigsys.splane(*b*, *a*, *auto_scale=True*, *size=[-1, 1, -1, 1]*)[[source]](_modules/sk_dsp_comm/sigsys.html#splane)[¶](#sk_dsp_comm.sigsys.splane)
Create an s-plane pole-zero plot.
As input the function uses the numerator and denominator
s-domain system function coefficient ndarrays b and a respectively.
Assumed to be stored in descending powers of s.
Parameters
**b**numerator coefficient ndarray.
**a**denominator coefficient ndarray.
**auto_scale**bool (default True)
**size**[xmin,xmax,ymin,ymax] plot scaling when auto_scale = False
Returns
**(M,N)**tuple of zero and pole counts + plot window
Notes
This function tries to identify repeated poles and zeros and will
place the multiplicity number above and to the right of the pole or zero.
The difficulty is setting the tolerance for this detection. Currently it is set at 1e-3 via the function signal.unique_roots.
Examples
```
>>> # Here the plot is generated using auto_scale
>>> splane(b,a)
>>> # Here the plot is generated using manual scaling
>>> splane(b,a,False,[-10,1,-10,10])
```
sk_dsp_comm.sigsys.sqrt_rc_imp(*Ns*, *alpha*, *M=6*)[[source]](_modules/sk_dsp_comm/sigsys.html#sqrt_rc_imp)[¶](#sk_dsp_comm.sigsys.sqrt_rc_imp)
A truncated square root raised cosine pulse used in digital communications.
The pulse shaping factor 0 < alpha < 1 is required as well as the
truncation factor M which sets the pulse duration to be 2*M*Tsymbol.
Parameters
**Ns**number of samples per symbol
**alpha**excess bandwidth factor on (0, 1), e.g., 0.35
**M**equals RC one-sided symbol truncation factor
Returns
**b**ndarray containing the pulse shape
Notes
The pulse shape b is typically used as the FIR filter coefficients when forming a pulse shaped digital communications waveform. When the
square root raised cosine (SRC) pulse is used to generate the Tx signal and
is used again at the receiver as a matched filter (receiver FIR filter), the
overall received pulse is raised cosine shaped, giving zero
intersymbol interference and optimum suppression of additive white
noise if present at the receiver input.
Examples
```
>>> # ten samples per symbol and alpha = 0.35
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import sqrt_rc_imp
>>> b = sqrt_rc_imp(10,0.35)
>>> n = arange(-10*6,10*6+1)
>>> plt.stem(n,b)
>>> plt.show()
```
([Source code](.//sigsys-25.py))
sk_dsp_comm.sigsys.step(*t*)[[source]](_modules/sk_dsp_comm/sigsys.html#step)[¶](#sk_dsp_comm.sigsys.step)
Approximation to step function signal u(t).
In this numerical version of u(t) the step turns on at t = 0.
Parameters
**t**ndarray of the time axis
Returns
**x**ndarray of the step function signal u(t)
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import step
>>> t = arange(-1,5,.01)
>>> x = step(t)
>>> plt.plot(t,x)
>>> plt.ylim([-0.01, 1.01])
>>> plt.show()
```
([Source code](.//sigsys-26.py))
To turn on at t = 1, shift t.
```
>>> x = step(t - 1.0)
>>> plt.ylim([-0.01, 1.01])
>>> plt.plot(t,x)
```
sk_dsp_comm.sigsys.ten_band_eq_filt(*x*, *GdB*, *Q=3.5*)[[source]](_modules/sk_dsp_comm/sigsys.html#ten_band_eq_filt)[¶](#sk_dsp_comm.sigsys.ten_band_eq_filt)
Filter the input signal x with a ten-band equalizer having octave gain values in ndarray GdB.
The signal x is filtered using octave-spaced peaking filters starting at 31.25 Hz and stopping at 16 kHz. The Q of each filter is 3.5, but can be changed. The sampling rate is assumed to be 44.1 kHz.
Parameters
**x**ndarray of the input signal samples
**GdB**ndarray containing ten octave band gain values [G0dB,…,G9dB]
**Q**Quality factor vector for each of the NB peaking filters
Returns
**y**ndarray of output signal samples
Examples
```
>>> # Test with white noise
>>> from numpy.random import randn
>>> from matplotlib.mlab import psd
>>> from sk_dsp_comm.sigsys import ten_band_eq_filt
>>> w = randn(100000)
>>> GdB = [0,10.0,0,0,-1,0,5,0,-4,0]  # example gain vector
>>> y = ten_band_eq_filt(w,GdB)
>>> psd(y,2**10,44.1)
```
sk_dsp_comm.sigsys.ten_band_eq_resp(*GdB*, *Q=3.5*)[[source]](_modules/sk_dsp_comm/sigsys.html#ten_band_eq_resp)[¶](#sk_dsp_comm.sigsys.ten_band_eq_resp)
Create a frequency response magnitude plot in dB of a ten band equalizer using a log-frequency (semilogx()) plot.
Parameters
**GdB**Gain vector for 10 peaking filters [G0,…,G9]
**Q**Quality factor for each peaking filter (default 3.5)
Returns
**Nothing**two plots are created
Examples
```
>>> import matplotlib.pyplot as plt
>>> from sk_dsp_comm import sigsys as ss
>>> ss.ten_band_eq_resp([0,10.0,0,0,-1,0,5,0,-4,0])
>>> plt.show()
```
([Source code](.//sigsys-27.py))
sk_dsp_comm.sigsys.to_wav(*filename*, *rate*, *x*)[[source]](_modules/sk_dsp_comm/sigsys.html#to_wav)[¶](#sk_dsp_comm.sigsys.to_wav)
Write a wave file.
A wrapper function for scipy.io.wavfile.write that also includes int16 scaling and conversion.
Assume input x is [-1,1] values.
Parameters
**filename**file name string
**rate**sampling frequency in Hz
**x**ndarray of signal samples scaled to [-1,1]
Returns
**Nothing**writes only the [*](#id1).wav file
Examples
```
>>> to_wav('test_file.wav', 8000, x)
```
sk_dsp_comm.sigsys.tri(*t*, *tau*)[[source]](_modules/sk_dsp_comm/sigsys.html#tri)[¶](#sk_dsp_comm.sigsys.tri)
Approximation to the triangle pulse Lambda(t/tau).
In this numerical version of Lambda(t/tau) the pulse is active over -tau <= t <= tau.
Parameters
**t**ndarray of the time axis
**tau**one half the triangle base width
Returns
**x**ndarray of the signal Lambda(t/tau)
Examples
```
>>> import matplotlib.pyplot as plt
>>> from numpy import arange
>>> from sk_dsp_comm.sigsys import tri
>>> t = arange(-1,5,.01)
>>> x = tri(t,1.0)
>>> plt.plot(t,x)
>>> plt.show()
```
([Source code](.//sigsys-28.py))
To turn on at t = 1, shift t.
```
>>> x = tri(t - 1.0,1.0)
>>> plt.plot(t,x)
```
sk_dsp_comm.sigsys.unique_cpx_roots(*rlist*, *tol=0.001*)[[source]](_modules/sk_dsp_comm/sigsys.html#unique_cpx_roots)[¶](#sk_dsp_comm.sigsys.unique_cpx_roots)
The average of the root values is used when multiplicity
is greater than one.
<NAME> October 2016
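A hedged sketch follows; it assumes the return mirrors `scipy.signal.unique_roots`, i.e., a pair of (averaged unique roots, multiplicities). Verify against the source before relying on this.
```
>>> import numpy as np
>>> from sk_dsp_comm.sigsys import unique_cpx_roots
>>> rlist = np.array([1+1j, 1+1j + 1e-5, 0.5+0j])
>>> # assumed return: unique roots (averaged) and their multiplicities
>>> roots, mult = unique_cpx_roots(rlist, tol=0.001)
```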
sk_dsp_comm.sigsys.upsample(*x*, *L*)[[source]](_modules/sk_dsp_comm/sigsys.html#upsample)[¶](#sk_dsp_comm.sigsys.upsample)
Upsample by factor L
Insert L - 1 zero samples in between each input sample.
Parameters
**x**ndarray of input signal values
**L**upsample factor
Returns
**y**ndarray of the output signal values
Examples
```
>>> y = upsample(x,3)
```
sk_dsp_comm.sigsys.zplane(*b*, *a*, *auto_scale=True*, *size=2*, *detect_mult=True*, *tol=0.001*)[[source]](_modules/sk_dsp_comm/sigsys.html#zplane)[¶](#sk_dsp_comm.sigsys.zplane)
Create a z-plane pole-zero plot.
Create a z-plane pole-zero plot using the numerator and denominator z-domain system function coefficient ndarrays b and a respectively. Assume descending powers of z.
Parameters
**b**ndarray of the numerator coefficients
**a**ndarray of the denominator coefficients
**auto_scale**bool (default True)
**size**plot radius maximum when auto_scale = False
Returns
**(M,N)**tuple of zero and pole counts + plot window
Notes
This function tries to identify repeated poles and zeros and will
place the multiplicity number above and to the right of the pole or zero.
The difficulty is setting the tolerance for this detection. Currently it is set at 1e-3 via the function signal.unique_roots.
Examples
```
>>> # Here the plot is generated using auto_scale
>>> zplane(b,a)
>>> # Here the plot is generated using manual scaling
>>> zplane(b,a,False,1.5)
```
### synchronization[¶](#module-sk_dsp_comm.synchronization)
A Digital Communications Synchronization
and PLLs Function Module
A collection of useful functions when studying PLLs and synchronization and digital comm
Copyright (c) March 2017, <NAME> All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies,
either expressed or implied, of the FreeBSD Project.
sk_dsp_comm.synchronization.DD_carrier_sync(*z*, *M*, *BnTs*, *zeta=0.707*, *mod_type='MPSK'*, *type=0*, *open_loop=False*)[[source]](_modules/sk_dsp_comm/synchronization.html#DD_carrier_sync)[¶](#sk_dsp_comm.synchronization.DD_carrier_sync)
z_prime,a_hat,e_phi = DD_carrier_sync(z,M,BnTs,zeta=0.707,type=0)
Decision directed carrier phase tracking
Parameters
* **z** – complex baseband PSK signal at one sample per symbol
* **M** – the PSK modulation order, i.e., 2, 4, or 8
* **BnTs** – time bandwidth product of the loop bandwidth and the symbol period, thus the loop bandwidth as a fraction of the symbol rate
* **zeta** – loop damping factor
* **type** – phase error detector type: 0 <> ML, 1 <> heuristic
Returns
* **z_prime** – phase rotation output (like soft symbol values)
* **a_hat** – the hard decision symbol values landing at the constellation values
* **e_phi** – the phase error e(k) into the loop filter
Notes
Ns = nominal number of samples per symbol (Ts/T) in the carrier phase tracking loop, almost always 1.
Kp = the phase detector gain in the carrier phase tracking loop; this value depends upon the algorithm type. For the ML scheme described at the end of notes Chapter 9, A = 1, K = 1/sqrt(2), so Kp = sqrt(2).
<NAME> July 2014. Updated for improved MPSK performance April 2020. Added experimental MQAM capability April 2020.
Motivated by code found in M. Rice, Digital Communications A Discrete-Time
Approach, Prentice Hall, New Jersey, 2009. (ISBN 978-0-13-030497-1).
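A brief usage sketch; the complex baseband QPSK signal z at one sample per symbol is assumed to already exist, and BnTs = 0.02 is an arbitrary loop bandwidth choice.
```
>>> from sk_dsp_comm.synchronization import DD_carrier_sync
>>> # track residual carrier phase on a QPSK (M = 4) signal
>>> z_prime, a_hat, e_phi = DD_carrier_sync(z, 4, 0.02)
```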
sk_dsp_comm.synchronization.NDA_symb_sync(*z*, *Ns*, *L*, *BnTs*, *zeta=0.707*, *I_ord=3*)[[source]](_modules/sk_dsp_comm/synchronization.html#NDA_symb_sync)[¶](#sk_dsp_comm.synchronization.NDA_symb_sync)
Parameters
* **z** – complex baseband input signal at nominally Ns samples per symbol
* **Ns** – nominal number of samples per symbol (Ts/T) in the symbol tracking loop, often 4
* **BnTs** – time bandwidth product of the loop bandwidth and the symbol period, thus the loop bandwidth as a fraction of the symbol rate
* **zeta** – loop damping factor
* **I_ord** – interpolator order, 1, 2, or 3
Returns
* **e_tau** – the timing error e(k) input to the loop filter
Notes
Kp = the phase detector gain in the symbol tracking loop; for the NDA algorithm used here always 1.
<NAME> July 2014
Motivated by code found in M. Rice, Digital Communications A Discrete-Time
Approach, Prentice Hall, New Jersey, 2009. (ISBN 978-0-13-030497-1).
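A hedged usage sketch; z at Ns = 4 samples per symbol is assumed to exist, L = 8 and BnTs = 0.01 are arbitrary, and since the full return tuple is not documented above it is left packed here.
```
>>> from sk_dsp_comm.synchronization import NDA_symb_sync
>>> # non-data-aided symbol timing recovery with a cubic interpolator
>>> out = NDA_symb_sync(z, 4, 8, 0.01, zeta=0.707, I_ord=3)
```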
sk_dsp_comm.synchronization.PLL1(*theta*, *fs*, *loop_type*, *Kv*, *fn*, *zeta*, *non_lin*)[[source]](_modules/sk_dsp_comm/synchronization.html#PLL1)[¶](#sk_dsp_comm.synchronization.PLL1)
Baseband Analog PLL Simulation Model
Parameters
* **theta** – input phase deviation in radians
* **fs** – sampling rate in sample per second or Hz
* **loop_type** – 1, first-order loop filter F(s)=K_LF; 2, integrator with lead compensation F(s) = (1 + s tau2)/(s tau1),
i.e., a type II, or 3, lowpass with lead compensation F(s) = (1 + s tau2)/(1 + s tau1)
* **Kv** – VCO gain in Hz/v; note presently assume Kp = 1v/rad and K_LF = 1; the user can easily change this
* **fn** – Loop natural frequency (loops 2 & 3) or cutoff frequency (loop 1)
* **zeta** – Damping factor for loops 2 & 3
* **non_lin** – 0, linear phase detector; 1, sinusoidal phase detector
Returns theta_hat = Output phase estimate of the input theta in radians,
ev = VCO control voltage,
phi = phase error = theta - theta_hat
Notes
Alternate input in place of natural frequency, fn, in Hz is the noise equivalent bandwidth Bn in Hz.
<NAME>, April 2007 for ECE 5625/4625 Modified February 2008 and July 2014 for ECE 5675/4675 Python version August 2014
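A minimal sketch exercising the type II loop (loop_type = 2) with a frequency-step input; all numbers are illustrative only.
```
>>> import numpy as np
>>> from sk_dsp_comm.synchronization import PLL1
>>> fs = 100e3
>>> t = np.arange(0, 0.05, 1/fs)
>>> theta = 2*np.pi*100*t  # a 100 Hz frequency step seen as a phase ramp
>>> theta_hat, ev, phi = PLL1(theta, fs, 2, 1.0, 1000, 0.707, 0)
```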
sk_dsp_comm.synchronization.PLL_cbb(*x*, *fs*, *loop_type*, *Kv*, *fn*, *zeta*)[[source]](_modules/sk_dsp_comm/synchronization.html#PLL_cbb)[¶](#sk_dsp_comm.synchronization.PLL_cbb)
Baseband Analog PLL Simulation Model
Parameters
* **x** – input phase deviation in radians
* **fs** – sampling rate in sample per second or Hz
* **loop_type** – 1, first-order loop filter F(s)=K_LF; 2, integrator with lead compensation F(s) = (1 + s tau2)/(s tau1),
i.e., a type II, or 3, lowpass with lead compensation F(s) = (1 + s tau2)/(1 + s tau1)
* **Kv** – VCO gain in Hz/v; note presently assume Kp = 1v/rad and K_LF = 1; the user can easily change this
* **fn** – Loop natural frequency (loops 2 & 3) or cutoff frequency (loop 1)
* **zeta** – Damping factor for loops 2 & 3
Returns theta_hat = Output phase estimate of the input theta in radians,
ev = VCO control voltage,
phi = phase error = theta - theta_hat
<NAME>, April 2007 for ECE 5625/4625 Modified February 2008 and July 2014 for ECE 5675/4675 Python version August 2014
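A minimal sketch, analogous to the PLL1 example but driven by a complex baseband input; the tone offset and loop settings are illustrative.
```
>>> import numpy as np
>>> from sk_dsp_comm.synchronization import PLL_cbb
>>> fs = 100e3
>>> n = np.arange(0, 5000)
>>> x = np.exp(1j*2*np.pi*100/fs*n)  # complex tone with 100 Hz offset
>>> theta_hat, ev, phi = PLL_cbb(x, fs, 2, 1.0, 1000, 0.707)
```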
sk_dsp_comm.synchronization.phase_step(*z*, *ns*, *p_step*, *n_step*)[[source]](_modules/sk_dsp_comm/synchronization.html#phase_step)[¶](#sk_dsp_comm.synchronization.phase_step)
Create a one sample per symbol signal containing a phase rotation step n_step symbols into the waveform.
Parameters
* **z** – complex baseband signal after matched filter
* **ns** – number of sample per symbol
* **p_step** – size in radians of the phase step
* **n_step** – symbol sample location where the step turns on
Returns the one sample per symbol signal containing the phase step
<NAME> July 2014
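A short sketch; the matched-filter output z at ns = 10 samples per symbol is assumed to exist, and the pi/4 step at symbol 200 is arbitrary.
```
>>> import numpy as np
>>> from sk_dsp_comm.synchronization import phase_step
>>> # insert a 45 degree phase step starting at symbol 200
>>> z_step = phase_step(z, 10, np.pi/4, 200)
```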
sk_dsp_comm.synchronization.time_step(*z*, *ns*, *t_step*, *n_step*)[[source]](_modules/sk_dsp_comm/synchronization.html#time_step)[¶](#sk_dsp_comm.synchronization.time_step)
Create a one sample per symbol signal containing a time shift step n_step symbols into the waveform.
Parameters
* **z** – complex baseband signal after matched filter
* **ns** – number of sample per symbol
* **t_step** – in samples relative to Ns
* **n_step** – symbol sample location where the step turns on
Returns the one sample per symbol signal containing the time step
<NAME> July 2014
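A short sketch mirroring the phase_step example; z at ns = 10 samples per symbol is assumed to exist, and the 2-sample shift at symbol 400 is arbitrary.
```
>>> from sk_dsp_comm.synchronization import time_step
>>> # insert a 2-sample timing shift starting at symbol 400
>>> z_shift = time_step(z, 10, 2, 400)
```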
|
rusoto_appmesh | rust | Rust | Crate rusoto_appmesh
===
App Mesh is a service mesh based on the Envoy proxy that makes it easy to monitor and control microservices. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high availability for your applications.
App Mesh gives you consistent visibility and network traffic controls for every microservice in an application. You can use App Mesh with Amazon Web Services Fargate, Amazon ECS, Amazon EKS, Kubernetes on Amazon Web Services, and Amazon EC2.
App Mesh supports microservice applications that use service discovery naming for their components. For more information about service discovery on Amazon ECS, see Service Discovery in the *Amazon Elastic Container Service Developer Guide*. Kubernetes `kube-dns` and `coredns` are supported. For more information, see DNS for Services and Pods in the Kubernetes documentation.
If you’re using the service, you’re probably looking for AppMeshClient and AppMesh.
Structs
---
AccessLog: An object that represents the access logging information for a virtual node.
AppMeshClient: A client for the AWS App Mesh API.
AwsCloudMapInstanceAttribute: An object that represents the Cloud Map attribute information for your virtual node. AWS Cloud Map is not available in the eu-south-1 Region.
AwsCloudMapServiceDiscovery: An object that represents the Cloud Map service discovery information for your virtual node. Cloud Map is not available in the eu-south-1 Region.
Backend: An object that represents the backends that a virtual node is expected to send outbound traffic to.
BackendDefaults: An object that represents the default properties for a backend.
ClientPolicy: An object that represents a client policy.
ClientPolicyTls: A reference to an object that represents a Transport Layer Security (TLS) client policy.
ClientTlsCertificate: An object that represents the client's certificate.
CreateGatewayRouteInput, CreateGatewayRouteOutput, CreateMeshInput, CreateMeshOutput, CreateRouteInput, CreateRouteOutput, CreateVirtualGatewayInput, CreateVirtualGatewayOutput, CreateVirtualNodeInput, CreateVirtualNodeOutput, CreateVirtualRouterInput, CreateVirtualRouterOutput, CreateVirtualServiceInput, CreateVirtualServiceOutput, DeleteGatewayRouteInput, DeleteGatewayRouteOutput, DeleteMeshInput, DeleteMeshOutput, DeleteRouteInput, DeleteRouteOutput, DeleteVirtualGatewayInput, DeleteVirtualGatewayOutput
DeleteVirtualNodeInput: Deletes a virtual node input.
DeleteVirtualNodeOutput, DeleteVirtualRouterInput, DeleteVirtualRouterOutput, DeleteVirtualServiceInput, DeleteVirtualServiceOutput, DescribeGatewayRouteInput, DescribeGatewayRouteOutput, DescribeMeshInput, DescribeMeshOutput, DescribeRouteInput, DescribeRouteOutput, DescribeVirtualGatewayInput, DescribeVirtualGatewayOutput, DescribeVirtualNodeInput, DescribeVirtualNodeOutput, DescribeVirtualRouterInput, DescribeVirtualRouterOutput, DescribeVirtualServiceInput, DescribeVirtualServiceOutput
DnsServiceDiscovery: An object that represents the DNS service discovery information for your virtual node.
Duration: An object that represents a duration of time.
EgressFilter: An object that represents the egress filter rules for a service mesh.
FileAccessLog: An object that represents an access log file.
GatewayRouteData: An object that represents a gateway route returned by a describe operation.
GatewayRouteHostnameMatch: An object representing the gateway route host name to match.
GatewayRouteHostnameRewrite: An object representing the gateway route host name to rewrite.
GatewayRouteRef: An object that represents a gateway route returned by a list operation.
GatewayRouteSpec: An object that represents a gateway route specification. Specify one gateway route type.
GatewayRouteStatus: An object that represents the current status of a gateway route.
GatewayRouteTarget: An object that represents a gateway route target.
GatewayRouteVirtualService: An object that represents the virtual service that traffic is routed to.
GrpcGatewayRoute: An object that represents a gRPC gateway route.
GrpcGatewayRouteAction: An object that represents the action to take if a match is determined.
GrpcGatewayRouteMatch: An object that represents the criteria for determining a request match.
GrpcGatewayRouteMetadata: An object representing the metadata of the gateway route.
GrpcGatewayRouteRewrite: An object that represents the gateway route to rewrite.
GrpcMetadataMatchMethod: An object representing the method header to be matched.
GrpcRetryPolicy: An object that represents a retry policy. Specify at least one value for at least one of the types of `RetryEvents`, a value for `maxRetries`, and a value for `perRetryTimeout`. Both `server-error` and `gateway-error` under `httpRetryEvents` include the Envoy `reset` policy. For more information on the `reset` policy, see the Envoy documentation.
GrpcRoute: An object that represents a gRPC route type.
GrpcRouteAction: An object that represents the action to take if a match is determined.
GrpcRouteMatch: An object that represents the criteria for determining a request match.
GrpcRouteMetadata: An object that represents the match metadata for the route.
GrpcRouteMetadataMatchMethod: An object that represents the match method. Specify one of the match values.
GrpcTimeout: An object that represents types of timeouts.
HeaderMatchMethod: An object that represents the method and value to match with the header value sent in a request. Specify one match method.
HealthCheckPolicy: An object that represents the health check policy for a virtual node's listener.
HttpGatewayRoute: An object that represents an HTTP gateway route.
HttpGatewayRouteAction: An object that represents the action to take if a match is determined.
HttpGatewayRouteHeader: An object that represents the HTTP header in the gateway route.
HttpGatewayRouteMatch: An object that represents the criteria for determining a request match.
HttpGatewayRoutePathRewrite: An object that represents the path to rewrite.
HttpGatewayRoutePrefixRewrite: An object representing the beginning characters of the route to rewrite.
HttpGatewayRouteRewrite: An object representing the gateway route to rewrite.
HttpPathMatch: An object representing the path to match in the request.
HttpQueryParameter: An object that represents the query parameter in the request.
HttpRetryPolicy: An object that represents a retry policy. Specify at least one value for at least one of the types of `RetryEvents`, a value for `maxRetries`, and a value for `perRetryTimeout`. Both `server-error` and `gateway-error` under `httpRetryEvents` include the Envoy `reset` policy. For more information on the `reset` policy, see the Envoy documentation.
HttpRoute: An object that represents an HTTP or HTTP/2 route type.
HttpRouteAction: An object that represents the action to take if a match is determined.
HttpRouteHeader: An object that represents the HTTP header in the request.
HttpRouteMatch: An object that represents the requirements for a route to match HTTP requests for a virtual router.
HttpTimeout: An object that represents types of timeouts.
ListGatewayRoutesInput, ListGatewayRoutesOutput, ListMeshesInput, ListMeshesOutput, ListRoutesInput, ListRoutesOutput, ListTagsForResourceInput, ListTagsForResourceOutput, ListVirtualGatewaysInput, ListVirtualGatewaysOutput, ListVirtualNodesInput, ListVirtualNodesOutput, ListVirtualRoutersInput, ListVirtualRoutersOutput, ListVirtualServicesInput, ListVirtualServicesOutput
Listener: An object that represents a listener for a virtual node.
ListenerTimeout: An object that represents timeouts for different protocols.
ListenerTls: An object that represents the Transport Layer Security (TLS) properties for a listener.
ListenerTlsAcmCertificate: An object that represents an AWS Certificate Manager (ACM) certificate.
ListenerTlsCertificate: An object that represents a listener's Transport Layer Security (TLS) certificate.
ListenerTlsFileCertificate: An object that represents a local file certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
ListenerTlsSdsCertificate: An object that represents the listener's Secret Discovery Service certificate. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
ListenerTlsValidationContext: An object that represents a listener's Transport Layer Security (TLS) validation context.
ListenerTlsValidationContextTrust: An object that represents a listener's Transport Layer Security (TLS) validation context trust.
Logging: An object that represents the logging information for a virtual node.
MatchRange: An object that represents the range of values to match on. The first character of the range is included in the range, though the last character is not. For example, if the range specified were 1-100, only values 1-99 would be matched.
MeshData: An object that represents a service mesh returned by a describe operation.
MeshRef: An object that represents a service mesh returned by a list operation.
MeshSpec: An object that represents the specification of a service mesh.
MeshStatus: An object that represents the status of a service mesh.
OutlierDetection: An object that represents the outlier detection for a virtual node's listener.
PortMapping: An object that represents a port mapping.
QueryParameterMatch: An object representing the query parameter to match.
ResourceMetadata: An object that represents metadata for a resource.
RouteData: An object that represents a route returned by a describe operation.
RouteRef: An object that represents a route returned by a list operation.
RouteSpec: An object that represents a route specification. Specify one route type.
RouteStatus: An object that represents the current status of a route.
ServiceDiscovery: An object that represents the service discovery information for a virtual node.
SubjectAlternativeNameMatchers: An object that represents the methods by which a subject alternative name on a peer Transport Layer Security (TLS) certificate can be matched.
SubjectAlternativeNames: An object that represents the subject alternative names secured by the certificate.
TagRef: Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
TagResourceInput, TagResourceOutput
TcpRoute: An object that represents a TCP route type.
TcpRouteAction: An object that represents the action to take if a match is determined.
TcpTimeout: An object that represents types of timeouts.
TlsValidationContext: An object that represents how the proxy will validate its peer during Transport Layer Security (TLS) negotiation.
TlsValidationContextAcmTrust: An object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager certificate.
TlsValidationContextFileTrust: An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
TlsValidationContextSdsTrust: An object that represents a Transport Layer Security (TLS) Secret Discovery Service validation context trust. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
TlsValidationContextTrust: An object that represents a Transport Layer Security (TLS) validation context trust.
UntagResourceInput, UntagResourceOutput, UpdateGatewayRouteInput, UpdateGatewayRouteOutput, UpdateMeshInput, UpdateMeshOutput, UpdateRouteInput, UpdateRouteOutput, UpdateVirtualGatewayInput, UpdateVirtualGatewayOutput, UpdateVirtualNodeInput, UpdateVirtualNodeOutput, UpdateVirtualRouterInput, UpdateVirtualRouterOutput, UpdateVirtualServiceInput, UpdateVirtualServiceOutput
VirtualGatewayAccessLog: The access log configuration for a virtual gateway.
VirtualGatewayBackendDefaults: An object that represents the default properties for a backend.
VirtualGatewayClientPolicy: An object that represents a client policy.
VirtualGatewayClientPolicyTls: An object that represents a Transport Layer Security (TLS) client policy.
VirtualGatewayClientTlsCertificate: An object that represents the virtual gateway's client's Transport Layer Security (TLS) certificate.
VirtualGatewayConnectionPool: An object that represents the type of virtual gateway connection pool. Only one protocol is used at a time and should be the same protocol as the one chosen under port mapping. If not present, the default value for `maxPendingRequests` is `2147483647`.
VirtualGatewayData: An object that represents a virtual gateway returned by a describe operation.
VirtualGatewayFileAccessLog: An object that represents an access log file.
VirtualGatewayGrpcConnectionPool: An object that represents a type of connection pool.
VirtualGatewayHealthCheckPolicy: An object that represents the health check policy for a virtual gateway's listener.
VirtualGatewayHttp2ConnectionPool: An object that represents a type of connection pool.
VirtualGatewayHttpConnectionPool: An object that represents a type of connection pool.
VirtualGatewayListener: An object that represents a listener for a virtual gateway.
VirtualGatewayListenerTls: An object that represents the Transport Layer Security (TLS) properties for a listener.
VirtualGatewayListenerTlsAcmCertificate: An object that represents an AWS Certificate Manager (ACM) certificate.
VirtualGatewayListenerTlsCertificate: An object that represents a listener's Transport Layer Security (TLS) certificate.
VirtualGatewayListenerTlsFileCertificate: An object that represents a local file certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
VirtualGatewayListenerTlsSdsCertificate: An object that represents the virtual gateway's listener's Secret Discovery Service certificate. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
VirtualGatewayListenerTlsValidationContext: An object that represents a virtual gateway's listener's Transport Layer Security (TLS) validation context.
VirtualGatewayListenerTlsValidationContextTrust: An object that represents a virtual gateway's listener's Transport Layer Security (TLS) validation context trust.
VirtualGatewayLogging: An object that represents logging information.
VirtualGatewayPortMapping: An object that represents a port mapping.
VirtualGatewayRef: An object that represents a virtual gateway returned by a list operation.
VirtualGatewaySpec: An object that represents the specification of a service mesh resource.
VirtualGatewayStatus: An object that represents the status of the mesh resource.
VirtualGatewayTlsValidationContext: An object that represents a Transport Layer Security (TLS) validation context.
VirtualGatewayTlsValidationContextAcmTrust: An object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager certificate.
VirtualGatewayTlsValidationContextFileTrust: An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
VirtualGatewayTlsValidationContextSdsTrust: An object that represents a virtual gateway's listener's Transport Layer Security (TLS) Secret Discovery Service validation context trust. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
VirtualGatewayTlsValidationContextTrust: An object that represents a Transport Layer Security (TLS) validation context trust.
VirtualNodeConnectionPool: An object that represents the type of virtual node connection pool. Only one protocol is used at a time and should be the same protocol as the one chosen under port mapping. If not present, the default value for `maxPendingRequests` is `2147483647`.
VirtualNodeData: An object that represents a virtual node returned by a describe operation.
VirtualNodeGrpcConnectionPool: An object that represents a type of connection pool.
VirtualNodeHttp2ConnectionPool: An object that represents a type of connection pool.
VirtualNodeHttpConnectionPool: An object that represents a type of connection pool.
VirtualNodeRef: An object that represents a virtual node returned by a list operation.
VirtualNodeServiceProvider: An object that represents a virtual node service provider.
VirtualNodeSpec: An object that represents the specification of a virtual node.
VirtualNodeStatus: An object that represents the current status of the virtual node.
VirtualNodeTcpConnectionPool: An object that represents a type of connection pool.
VirtualRouterData: An object that represents a virtual router returned by a describe operation.
VirtualRouterListener: An object that represents a virtual router listener.
VirtualRouterRef: An object that represents a virtual router returned by a list operation.
VirtualRouterServiceProvider: An object that represents a virtual node service provider.
VirtualRouterSpec: An object that represents the specification of a virtual router.
VirtualRouterStatus: An object that represents the status of a virtual router.
VirtualServiceBackend: An object that represents a virtual service backend for a virtual node.
VirtualServiceData: An object that represents a virtual service returned by a describe operation.
VirtualServiceProvider: An object that represents the provider for a virtual service.
VirtualServiceRef: An object that represents a virtual service returned by a list operation.
VirtualServiceSpec: An object that represents the specification of a virtual service.
VirtualServiceStatus: An object that represents the status of a virtual service.
WeightedTarget: An object that represents a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10. The total weight for all targets combined must be less than or equal to 100.
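As a concrete illustration of the weighting rule, a small sketch constructing two targets that split traffic 3:1 (the virtual node names are placeholders):

```
use rusoto_appmesh::WeightedTarget;

fn main() {
    // serviceBv1 receives three times as much traffic as serviceBv2.
    let targets = vec![
        WeightedTarget { virtual_node: "serviceBv1".to_string(), weight: 75 },
        WeightedTarget { virtual_node: "serviceBv2".to_string(), weight: 25 },
    ];
    // The total weight across all targets must not exceed 100.
    assert!(targets.iter().map(|t| t.weight).sum::<i64>() <= 100);
}
```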
Enums
---
CreateGatewayRouteError: Errors returned by CreateGatewayRoute
CreateMeshError: Errors returned by CreateMesh
CreateRouteError: Errors returned by CreateRoute
CreateVirtualGatewayError: Errors returned by CreateVirtualGateway
CreateVirtualNodeError: Errors returned by CreateVirtualNode
CreateVirtualRouterError: Errors returned by CreateVirtualRouter
CreateVirtualServiceError: Errors returned by CreateVirtualService
DeleteGatewayRouteError: Errors returned by DeleteGatewayRoute
DeleteMeshError: Errors returned by DeleteMesh
DeleteRouteError: Errors returned by DeleteRoute
DeleteVirtualGatewayError: Errors returned by DeleteVirtualGateway
DeleteVirtualNodeError: Errors returned by DeleteVirtualNode
DeleteVirtualRouterError: Errors returned by DeleteVirtualRouter
DeleteVirtualServiceError: Errors returned by DeleteVirtualService
DescribeGatewayRouteError: Errors returned by DescribeGatewayRoute
DescribeMeshError: Errors returned by DescribeMesh
DescribeRouteError: Errors returned by DescribeRoute
DescribeVirtualGatewayError: Errors returned by DescribeVirtualGateway
DescribeVirtualNodeError: Errors returned by DescribeVirtualNode
DescribeVirtualRouterError: Errors returned by DescribeVirtualRouter
DescribeVirtualServiceError: Errors returned by DescribeVirtualService
ListGatewayRoutesError: Errors returned by ListGatewayRoutes
ListMeshesError: Errors returned by ListMeshes
ListRoutesError: Errors returned by ListRoutes
ListTagsForResourceError: Errors returned by ListTagsForResource
ListVirtualGatewaysError: Errors returned by ListVirtualGateways
ListVirtualNodesError: Errors returned by ListVirtualNodes
ListVirtualRoutersError: Errors returned by ListVirtualRouters
ListVirtualServicesError: Errors returned by ListVirtualServices
TagResourceError: Errors returned by TagResource
UntagResourceError: Errors returned by UntagResource
UpdateGatewayRouteError: Errors returned by UpdateGatewayRoute
UpdateMeshError: Errors returned by UpdateMesh
UpdateRouteError: Errors returned by UpdateRoute
UpdateVirtualGatewayError: Errors returned by UpdateVirtualGateway
UpdateVirtualNodeError: Errors returned by UpdateVirtualNode
UpdateVirtualRouterError: Errors returned by UpdateVirtualRouter
UpdateVirtualServiceError: Errors returned by UpdateVirtualService
Traits
---
AppMesh: Trait representing the capabilities of the AWS App Mesh API. AWS App Mesh clients implement this trait.
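Because every operation lives on the `AppMesh` trait, calling code can stay generic over the concrete client, which also makes it easy to swap in a stub for tests. A small sketch (the helper `mesh_names` is ours, not part of the crate):

```
use rusoto_appmesh::{AppMesh, AppMeshClient, ListMeshesError, ListMeshesInput};
use rusoto_core::{Region, RusotoError};

// Generic over any AppMesh implementation: the real client or a test double.
async fn mesh_names<C: AppMesh>(client: &C) -> Result<Vec<String>, RusotoError<ListMeshesError>> {
    let output = client.list_meshes(ListMeshesInput::default()).await?;
    Ok(output.meshes.into_iter().map(|m| m.mesh_name).collect())
}

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    match mesh_names(&client).await {
        Ok(names) => println!("meshes: {:?}", names),
        Err(e) => eprintln!("error: {}", e),
    }
}
```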
Struct rusoto_appmesh::AppMeshClient
===
```
pub struct AppMeshClient { /* private fields */ }
```
A client for the AWS App Mesh API.
Implementations
---
### impl AppMeshClient
#### pub fn new(region: Region) -> AppMeshClient
Creates a client backed by the default tokio event loop.
The client will use the default credentials provider and tls client.
#### pub fn new_with<P, D>( request_dispatcher: D, credentials_provider: P, region: Region) -> AppMeshClient where P: ProvideAwsCredentials + Send + Sync + 'static, D: DispatchSignedRequest + Send + Sync + 'static,
#### pub fn new_with_client(client: Client, region: Region) -> AppMeshClient
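For non-default wiring, `new_with` accepts an explicit request dispatcher and credentials provider. A sketch assuming `rusoto_core::HttpClient` and `rusoto_credential::StaticProvider` as the implementations (the key values are placeholders):

```
use rusoto_appmesh::AppMeshClient;
use rusoto_core::{HttpClient, Region};
use rusoto_credential::StaticProvider;

fn main() {
    let dispatcher = HttpClient::new().expect("failed to create request dispatcher");
    let credentials = StaticProvider::new_minimal(
        "AKIDEXAMPLE".to_string(),   // placeholder access key
        "SECRETEXAMPLE".to_string(), // placeholder secret key
    );
    let client = AppMeshClient::new_with(dispatcher, credentials, Region::UsWest2);
    let _ = client; // use with any AppMesh operation
}
```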
Trait Implementations
---
### impl AppMesh for AppMeshClient
#### fn create_gateway_route<'life0, 'async_trait>( &'life0 self, input: CreateGatewayRouteInput) -> Pin<Box<dyn Future<Output = Result<CreateGatewayRouteOutput, RusotoError<CreateGatewayRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a gateway route.
A gateway route is attached to a virtual gateway and routes traffic to an existing virtual service. If a route matches a request, it can distribute traffic to a target virtual service.
For more information about gateway routes, see Gateway routes.
#### fn create_mesh<'life0, 'async_trait>( &'life0 self, input: CreateMeshInput) -> Pin<Box<dyn Future<Output = Result<CreateMeshOutput, RusotoError<CreateMeshError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a service mesh.
A service mesh is a logical boundary for network traffic between services that are represented by resources within the mesh. After you create your service mesh, you can create virtual services, virtual nodes, virtual routers, and routes to distribute traffic between the applications in your mesh.
For more information about service meshes, see Service meshes.
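A minimal sketch of creating a mesh (the mesh name is a placeholder; all other input fields are left at their defaults):

```
use rusoto_appmesh::{AppMesh, AppMeshClient, CreateMeshInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = CreateMeshInput {
        mesh_name: "myMesh".to_string(),
        ..Default::default()
    };
    match client.create_mesh(input).await {
        Ok(output) => println!("created mesh: {:?}", output.mesh),
        Err(e) => eprintln!("create_mesh failed: {}", e),
    }
}
```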
#### fn create_route<'life0, 'async_trait>( &'life0 self, input: CreateRouteInput) -> Pin<Box<dyn Future<Output = Result<CreateRouteOutput, RusotoError<CreateRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a route that is associated with a virtual router.
You can route several different protocols and define a retry policy for a route. Traffic can be routed to one or more virtual nodes.
For more information about routes, see Routes.
#### fn create_virtual_gateway<'life0, 'async_trait>( &'life0 self, input: CreateVirtualGatewayInput) -> Pin<Box<dyn Future<Output = Result<CreateVirtualGatewayOutput, RusotoError<CreateVirtualGatewayError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a virtual gateway.
A virtual gateway allows resources outside your mesh to communicate to resources that are inside your mesh. The virtual gateway represents an Envoy proxy running in an Amazon ECS task, in a Kubernetes service, or on an Amazon EC2 instance. Unlike a virtual node, which represents an Envoy running with an application, a virtual gateway represents Envoy deployed by itself.
For more information about virtual gateways, see Virtual gateways.
#### fn create_virtual_node<'life0, 'async_trait>( &'life0 self, input: CreateVirtualNodeInput) -> Pin<Box<dyn Future<Output = Result<CreateVirtualNodeOutput, RusotoError<CreateVirtualNodeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a virtual node within a service mesh.
A virtual node acts as a logical pointer to a particular task group, such as an Amazon ECS service or a Kubernetes deployment. When you create a virtual node, you can specify the service discovery information for your task group, and whether the proxy running in a task group will communicate with other proxies using Transport Layer Security (TLS).
You define a `listener` for any inbound traffic that your virtual node expects. Any virtual service that your virtual node expects to communicate to is specified as a `backend`.
The response metadata for your new virtual node contains the `arn` that is associated with the virtual node. Set this value to the full ARN (for example, `arn:aws:appmesh:us-west-2:123456789012:myMesh/default/virtualNode/myApp`) as the `APPMESH_RESOURCE_ARN` environment variable for your task group's Envoy proxy container in your task definition or pod spec. This is then mapped to the `node.id` and `node.cluster` Envoy parameters.
By default, App Mesh uses the name of the resource you specified in `APPMESH_RESOURCE_ARN` when Envoy is referring to itself in metrics and traces. You can override this behavior by setting the `APPMESH_RESOURCE_CLUSTER` environment variable with your own name.
For more information about virtual nodes, see Virtual nodes. You must be using `1.15.0` or later of the Envoy image when setting these variables. For more information about App Mesh Envoy variables, see Envoy image in the AWS App Mesh User Guide.
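A hedged sketch of a DNS-discovered virtual node with a single HTTP listener; the mesh, node, and hostname values are placeholders, and the field names assume the generated rusoto structs (`VirtualNodeSpec`, `Listener`, `PortMapping`, `ServiceDiscovery`, `DnsServiceDiscovery`):

```
use rusoto_appmesh::{
    AppMesh, AppMeshClient, CreateVirtualNodeInput, DnsServiceDiscovery, Listener,
    PortMapping, ServiceDiscovery, VirtualNodeSpec,
};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    let spec = VirtualNodeSpec {
        listeners: Some(vec![Listener {
            port_mapping: PortMapping { port: 8080, protocol: "http".to_string() },
            ..Default::default()
        }]),
        service_discovery: Some(ServiceDiscovery {
            dns: Some(DnsServiceDiscovery {
                hostname: "serviceb.svc.cluster.local".to_string(), // placeholder
                ..Default::default()
            }),
            ..Default::default()
        }),
        ..Default::default()
    };
    let input = CreateVirtualNodeInput {
        mesh_name: "myMesh".to_string(),
        virtual_node_name: "serviceBv1".to_string(),
        spec,
        ..Default::default()
    };
    match client.create_virtual_node(input).await {
        Ok(out) => println!("virtual node ARN: {}", out.virtual_node.metadata.arn),
        Err(e) => eprintln!("create_virtual_node failed: {}", e),
    }
}
```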
#### fn create_virtual_router<'life0, 'async_trait>( &'life0 self, input: CreateVirtualRouterInput) -> Pin<Box<dyn Future<Output = Result<CreateVirtualRouterOutput, RusotoError<CreateVirtualRouterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a virtual router within a service mesh.
Specify a `listener` for any inbound traffic that your virtual router receives. Create a virtual router for each protocol and port that you need to route. Virtual routers handle traffic for one or more virtual services within your mesh. After you create your virtual router, create and associate routes for your virtual router that direct incoming requests to different virtual nodes.
For more information about virtual routers, see Virtual routers.
#### fn create_virtual_service<'life0, 'async_trait>( &'life0 self, input: CreateVirtualServiceInput) -> Pin<Box<dyn Future<Output = Result<CreateVirtualServiceOutput, RusotoError<CreateVirtualServiceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a virtual service within a service mesh.
A virtual service is an abstraction of a real service that is provided by a virtual node directly or indirectly by means of a virtual router. Dependent services call your virtual service by its `virtualServiceName`, and those requests are routed to the virtual node or virtual router that is specified as the provider for the virtual service.
For more information about virtual services, see Virtual services.
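A sketch of a virtual service provided by the virtual node from the previous example (names are placeholders):

```
use rusoto_appmesh::{
    AppMesh, AppMeshClient, CreateVirtualServiceInput, VirtualNodeServiceProvider,
    VirtualServiceProvider, VirtualServiceSpec,
};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    let spec = VirtualServiceSpec {
        provider: Some(VirtualServiceProvider {
            virtual_node: Some(VirtualNodeServiceProvider {
                virtual_node_name: "serviceBv1".to_string(),
            }),
            ..Default::default()
        }),
    };
    let input = CreateVirtualServiceInput {
        mesh_name: "myMesh".to_string(),
        virtual_service_name: "serviceb.svc.cluster.local".to_string(),
        spec,
        ..Default::default()
    };
    match client.create_virtual_service(input).await {
        Ok(out) => println!("virtual service: {}", out.virtual_service.virtual_service_name),
        Err(e) => eprintln!("create_virtual_service failed: {}", e),
    }
}
```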
#### fn delete_gateway_route<'life0, 'async_trait>( &'life0 self, input: DeleteGatewayRouteInput) -> Pin<Box<dyn Future<Output = Result<DeleteGatewayRouteOutput, RusotoError<DeleteGatewayRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing gateway route.
#### fn delete_mesh<'life0, 'async_trait>( &'life0 self, input: DeleteMeshInput) -> Pin<Box<dyn Future<Output = Result<DeleteMeshOutput, RusotoError<DeleteMeshError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing service mesh.
You must delete all resources (virtual services, routes, virtual routers, and virtual nodes) in the service mesh before you can delete the mesh itself.
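A sketch of the final teardown step once the mesh is empty (the mesh name is a placeholder; in a real mesh you would first delete its routes, routers, services, and nodes as described above):

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DeleteMeshInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    // Only valid once all routes, virtual routers, virtual services,
    // and virtual nodes in the mesh have been deleted.
    let input = DeleteMeshInput { mesh_name: "myMesh".to_string(), ..Default::default() };
    if let Err(e) = client.delete_mesh(input).await {
        eprintln!("delete_mesh failed: {}", e);
    }
}
```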
#### fn delete_route<'life0, 'async_trait>( &'life0 self, input: DeleteRouteInput) -> Pin<Box<dyn Future<Output = Result<DeleteRouteOutput, RusotoError<DeleteRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing route.
#### fn delete_virtual_gateway<'life0, 'async_trait>( &'life0 self, input: DeleteVirtualGatewayInput) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualGatewayOutput, RusotoError<DeleteVirtualGatewayError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing virtual gateway. You cannot delete a virtual gateway if any gateway routes are associated to it.
#### fn delete_virtual_node<'life0, 'async_trait>( &'life0 self, input: DeleteVirtualNodeInput) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualNodeOutput, RusotoError<DeleteVirtualNodeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing virtual node.
You must delete any virtual services that list a virtual node as a service provider before you can delete the virtual node itself.
#### fn delete_virtual_router<'life0, 'async_trait>( &'life0 self, input: DeleteVirtualRouterInput) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualRouterOutput, RusotoError<DeleteVirtualRouterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing virtual router.
You must delete any routes associated with the virtual router before you can delete the router itself.
#### fn delete_virtual_service<'life0, 'async_trait>( &'life0 self, input: DeleteVirtualServiceInput) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualServiceOutput, RusotoError<DeleteVirtualServiceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing virtual service.
#### fn describe_gateway_route<'life0, 'async_trait>( &'life0 self, input: DescribeGatewayRouteInput) -> Pin<Box<dyn Future<Output = Result<DescribeGatewayRouteOutput, RusotoError<DescribeGatewayRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing gateway route.
#### fn describe_mesh<'life0, 'async_trait>( &'life0 self, input: DescribeMeshInput) -> Pin<Box<dyn Future<Output = Result<DescribeMeshOutput, RusotoError<DescribeMeshError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing service mesh.
#### fn describe_route<'life0, 'async_trait>( &'life0 self, input: DescribeRouteInput) -> Pin<Box<dyn Future<Output = Result<DescribeRouteOutput, RusotoError<DescribeRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing route.
#### fn describe_virtual_gateway<'life0, 'async_trait>( &'life0 self, input: DescribeVirtualGatewayInput) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualGatewayOutput, RusotoError<DescribeVirtualGatewayError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing virtual gateway.
#### fn describe_virtual_node<'life0, 'async_trait>( &'life0 self, input: DescribeVirtualNodeInput) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualNodeOutput, RusotoError<DescribeVirtualNodeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing virtual node.
#### fn describe_virtual_router<'life0, 'async_trait>( &'life0 self, input: DescribeVirtualRouterInput) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualRouterOutput, RusotoError<DescribeVirtualRouterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing virtual router.
#### fn describe_virtual_service<'life0, 'async_trait>( &'life0 self, input: DescribeVirtualServiceInput) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualServiceOutput, RusotoError<DescribeVirtualServiceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing virtual service.
#### fn list_gateway_routes<'life0, 'async_trait>( &'life0 self, input: ListGatewayRoutesInput) -> Pin<Box<dyn Future<Output = Result<ListGatewayRoutesOutput, RusotoError<ListGatewayRoutesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing gateway routes that are associated to a virtual gateway.
#### fn list_meshes<'life0, 'async_trait>( &'life0 self, input: ListMeshesInput) -> Pin<Box<dyn Future<Output = Result<ListMeshesOutput, RusotoError<ListMeshesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing service meshes.
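List operations page their results through `next_token`; a sketch that drains every page of `list_meshes`:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, ListMeshesInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    let mut next_token: Option<String> = None;
    loop {
        // Pass the token from the previous page (None on the first call).
        let page = client
            .list_meshes(ListMeshesInput { limit: None, next_token: next_token.take() })
            .await
            .expect("list_meshes failed");
        for mesh in page.meshes {
            println!("{}", mesh.mesh_name);
        }
        next_token = page.next_token;
        if next_token.is_none() {
            break; // no more pages
        }
    }
}
```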
#### fn list_routes<'life0, 'async_trait>( &'life0 self, input: ListRoutesInput) -> Pin<Box<dyn Future<Output = Result<ListRoutesOutput, RusotoError<ListRoutesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing routes in a service mesh.
#### fn list_tags_for_resource<'life0, 'async_trait>( &'life0 self, input: ListTagsForResourceInput) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceOutput, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
List the tags for an App Mesh resource.
#### fn list_virtual_gateways<'life0, 'async_trait>( &'life0 self, input: ListVirtualGatewaysInput) -> Pin<Box<dyn Future<Output = Result<ListVirtualGatewaysOutput, RusotoError<ListVirtualGatewaysError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing virtual gateways in a service mesh.
#### fn list_virtual_nodes<'life0, 'async_trait>( &'life0 self, input: ListVirtualNodesInput) -> Pin<Box<dyn Future<Output = Result<ListVirtualNodesOutput, RusotoError<ListVirtualNodesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing virtual nodes.
#### fn list_virtual_routers<'life0, 'async_trait>( &'life0 self, input: ListVirtualRoutersInput) -> Pin<Box<dyn Future<Output = Result<ListVirtualRoutersOutput, RusotoError<ListVirtualRoutersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing virtual routers in a service mesh.
#### fn list_virtual_services<'life0, 'async_trait>( &'life0 self, input: ListVirtualServicesInput) -> Pin<Box<dyn Future<Output = Result<ListVirtualServicesOutput, RusotoError<ListVirtualServicesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing virtual services in a service mesh.
#### fn tag_resource<'life0, 'async_trait>( &'life0 self, input: TagResourceInput) -> Pin<Box<dyn Future<Output = Result<TagResourceOutput, RusotoError<TagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Associates the specified tags to a resource with the specified `resourceArn`. If existing tags on a resource aren't specified in the request parameters, they aren't changed. When a resource is deleted, the tags associated with that resource are also deleted.
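A sketch of tagging a mesh by ARN (the ARN, key, and value are placeholders):

```
use rusoto_appmesh::{AppMesh, AppMeshClient, TagRef, TagResourceInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = TagResourceInput {
        resource_arn: "arn:aws:appmesh:us-east-1:123456789012:mesh/myMesh".to_string(),
        tags: vec![TagRef { key: "team".to_string(), value: Some("platform".to_string()) }],
    };
    if let Err(e) = client.tag_resource(input).await {
        eprintln!("tag_resource failed: {}", e);
    }
}
```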
#### fn untag_resource<'life0, 'async_trait>( &'life0 self, input: UntagResourceInput) -> Pin<Box<dyn Future<Output = Result<UntagResourceOutput, RusotoError<UntagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes specified tags from a resource.
#### fn update_gateway_route<'life0, 'async_trait>( &'life0 self, input: UpdateGatewayRouteInput) -> Pin<Box<dyn Future<Output = Result<UpdateGatewayRouteOutput, RusotoError<UpdateGatewayRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing gateway route that is associated to a specified virtual gateway in a service mesh.
#### fn update_mesh<'life0, 'async_trait>( &'life0 self, input: UpdateMeshInput) -> Pin<Box<dyn Future<Output = Result<UpdateMeshOutput, RusotoError<UpdateMeshError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing service mesh.
#### fn update_route<'life0, 'async_trait>( &'life0 self, input: UpdateRouteInput) -> Pin<Box<dyn Future<Output = Result<UpdateRouteOutput, RusotoError<UpdateRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing route for a specified service mesh and virtual router.
#### fn update_virtual_gateway<'life0, 'async_trait>( &'life0 self, input: UpdateVirtualGatewayInput) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualGatewayOutput, RusotoError<UpdateVirtualGatewayError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing virtual gateway in a specified service mesh.
#### fn update_virtual_node<'life0, 'async_trait>( &'life0 self, input: UpdateVirtualNodeInput) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualNodeOutput, RusotoError<UpdateVirtualNodeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing virtual node in a specified service mesh.
#### fn update_virtual_router<'life0, 'async_trait>( &'life0 self, input: UpdateVirtualRouterInput) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualRouterOutput, RusotoError<UpdateVirtualRouterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing virtual router in a specified service mesh.
#### fn update_virtual_service<'life0, 'async_trait>( &'life0 self, input: UpdateVirtualServiceInput) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualServiceOutput, RusotoError<UpdateVirtualServiceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing virtual service in a specified service mesh.
### impl Clone for AppMeshClient
#### fn clone(&self) -> AppMeshClient
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
Auto Trait Implementations
---
### impl !RefUnwindSafe for AppMeshClient
### impl Send for AppMeshClient
### impl Sync for AppMeshClient
### impl Unpin for AppMeshClient
### impl !UnwindSafe for AppMeshClient
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Trait rusoto_appmesh::AppMesh
===
```
pub trait AppMesh {
fn create_gateway_route<'life0, 'async_trait>(
&'life0 self,
input: CreateGatewayRouteInput
    ) -> Pin<Box<dyn Future<Output = Result<CreateGatewayRouteOutput, RusotoError<CreateGatewayRouteError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_mesh<'life0, 'async_trait>(
&'life0 self,
input: CreateMeshInput
    ) -> Pin<Box<dyn Future<Output = Result<CreateMeshOutput, RusotoError<CreateMeshError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_route<'life0, 'async_trait>(
&'life0 self,
input: CreateRouteInput
    ) -> Pin<Box<dyn Future<Output = Result<CreateRouteOutput, RusotoError<CreateRouteError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_virtual_gateway<'life0, 'async_trait>(
&'life0 self,
input: CreateVirtualGatewayInput
    ) -> Pin<Box<dyn Future<Output = Result<CreateVirtualGatewayOutput, RusotoError<CreateVirtualGatewayError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_virtual_node<'life0, 'async_trait>(
&'life0 self,
input: CreateVirtualNodeInput
    ) -> Pin<Box<dyn Future<Output = Result<CreateVirtualNodeOutput, RusotoError<CreateVirtualNodeError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_virtual_router<'life0, 'async_trait>(
&'life0 self,
input: CreateVirtualRouterInput
    ) -> Pin<Box<dyn Future<Output = Result<CreateVirtualRouterOutput, RusotoError<CreateVirtualRouterError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_virtual_service<'life0, 'async_trait>(
&'life0 self,
input: CreateVirtualServiceInput
    ) -> Pin<Box<dyn Future<Output = Result<CreateVirtualServiceOutput, RusotoError<CreateVirtualServiceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_gateway_route<'life0, 'async_trait>(
&'life0 self,
input: DeleteGatewayRouteInput
    ) -> Pin<Box<dyn Future<Output = Result<DeleteGatewayRouteOutput, RusotoError<DeleteGatewayRouteError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_mesh<'life0, 'async_trait>(
&'life0 self,
input: DeleteMeshInput
    ) -> Pin<Box<dyn Future<Output = Result<DeleteMeshOutput, RusotoError<DeleteMeshError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_route<'life0, 'async_trait>(
&'life0 self,
input: DeleteRouteInput
    ) -> Pin<Box<dyn Future<Output = Result<DeleteRouteOutput, RusotoError<DeleteRouteError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_virtual_gateway<'life0, 'async_trait>(
&'life0 self,
input: DeleteVirtualGatewayInput
    ) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualGatewayOutput, RusotoError<DeleteVirtualGatewayError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_virtual_node<'life0, 'async_trait>(
&'life0 self,
input: DeleteVirtualNodeInput
    ) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualNodeOutput, RusotoError<DeleteVirtualNodeError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_virtual_router<'life0, 'async_trait>(
&'life0 self,
input: DeleteVirtualRouterInput
    ) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualRouterOutput, RusotoError<DeleteVirtualRouterError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_virtual_service<'life0, 'async_trait>(
&'life0 self,
input: DeleteVirtualServiceInput
    ) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualServiceOutput, RusotoError<DeleteVirtualServiceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_gateway_route<'life0, 'async_trait>(
&'life0 self,
input: DescribeGatewayRouteInput
    ) -> Pin<Box<dyn Future<Output = Result<DescribeGatewayRouteOutput, RusotoError<DescribeGatewayRouteError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_mesh<'life0, 'async_trait>(
&'life0 self,
input: DescribeMeshInput
    ) -> Pin<Box<dyn Future<Output = Result<DescribeMeshOutput, RusotoError<DescribeMeshError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_route<'life0, 'async_trait>(
&'life0 self,
input: DescribeRouteInput
    ) -> Pin<Box<dyn Future<Output = Result<DescribeRouteOutput, RusotoError<DescribeRouteError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_virtual_gateway<'life0, 'async_trait>(
&'life0 self,
input: DescribeVirtualGatewayInput
    ) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualGatewayOutput, RusotoError<DescribeVirtualGatewayError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_virtual_node<'life0, 'async_trait>(
&'life0 self,
input: DescribeVirtualNodeInput
    ) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualNodeOutput, RusotoError<DescribeVirtualNodeError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_virtual_router<'life0, 'async_trait>(
&'life0 self,
input: DescribeVirtualRouterInput
    ) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualRouterOutput, RusotoError<DescribeVirtualRouterError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_virtual_service<'life0, 'async_trait>(
&'life0 self,
input: DescribeVirtualServiceInput
    ) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualServiceOutput, RusotoError<DescribeVirtualServiceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_gateway_routes<'life0, 'async_trait>(
&'life0 self,
input: ListGatewayRoutesInput
    ) -> Pin<Box<dyn Future<Output = Result<ListGatewayRoutesOutput, RusotoError<ListGatewayRoutesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_meshes<'life0, 'async_trait>(
&'life0 self,
input: ListMeshesInput
    ) -> Pin<Box<dyn Future<Output = Result<ListMeshesOutput, RusotoError<ListMeshesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_routes<'life0, 'async_trait>(
&'life0 self,
input: ListRoutesInput
    ) -> Pin<Box<dyn Future<Output = Result<ListRoutesOutput, RusotoError<ListRoutesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_tags_for_resource<'life0, 'async_trait>(
&'life0 self,
input: ListTagsForResourceInput
    ) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceOutput, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_virtual_gateways<'life0, 'async_trait>(
&'life0 self,
input: ListVirtualGatewaysInput
    ) -> Pin<Box<dyn Future<Output = Result<ListVirtualGatewaysOutput, RusotoError<ListVirtualGatewaysError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_virtual_nodes<'life0, 'async_trait>(
&'life0 self,
input: ListVirtualNodesInput
    ) -> Pin<Box<dyn Future<Output = Result<ListVirtualNodesOutput, RusotoError<ListVirtualNodesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_virtual_routers<'life0, 'async_trait>(
&'life0 self,
input: ListVirtualRoutersInput
    ) -> Pin<Box<dyn Future<Output = Result<ListVirtualRoutersOutput, RusotoError<ListVirtualRoutersError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_virtual_services<'life0, 'async_trait>(
&'life0 self,
input: ListVirtualServicesInput
    ) -> Pin<Box<dyn Future<Output = Result<ListVirtualServicesOutput, RusotoError<ListVirtualServicesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn tag_resource<'life0, 'async_trait>(
&'life0 self,
input: TagResourceInput
    ) -> Pin<Box<dyn Future<Output = Result<TagResourceOutput, RusotoError<TagResourceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn untag_resource<'life0, 'async_trait>(
&'life0 self,
input: UntagResourceInput
    ) -> Pin<Box<dyn Future<Output = Result<UntagResourceOutput, RusotoError<UntagResourceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn update_gateway_route<'life0, 'async_trait>(
&'life0 self,
input: UpdateGatewayRouteInput
    ) -> Pin<Box<dyn Future<Output = Result<UpdateGatewayRouteOutput, RusotoError<UpdateGatewayRouteError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn update_mesh<'life0, 'async_trait>(
&'life0 self,
input: UpdateMeshInput
    ) -> Pin<Box<dyn Future<Output = Result<UpdateMeshOutput, RusotoError<UpdateMeshError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn update_route<'life0, 'async_trait>(
&'life0 self,
input: UpdateRouteInput
    ) -> Pin<Box<dyn Future<Output = Result<UpdateRouteOutput, RusotoError<UpdateRouteError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn update_virtual_gateway<'life0, 'async_trait>(
&'life0 self,
input: UpdateVirtualGatewayInput
    ) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualGatewayOutput, RusotoError<UpdateVirtualGatewayError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn update_virtual_node<'life0, 'async_trait>(
&'life0 self,
input: UpdateVirtualNodeInput
    ) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualNodeOutput, RusotoError<UpdateVirtualNodeError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn update_virtual_router<'life0, 'async_trait>(
&'life0 self,
input: UpdateVirtualRouterInput
    ) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualRouterOutput, RusotoError<UpdateVirtualRouterError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn update_virtual_service<'life0, 'async_trait>(
&'life0 self,
input: UpdateVirtualServiceInput
    ) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualServiceOutput, RusotoError<UpdateVirtualServiceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
}
```
Trait representing the capabilities of the AWS App Mesh API. AWS App Mesh clients implement this trait.
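As a quick orientation before the method listing, here is a minimal usage sketch that calls one of these methods through the `AppMeshClient` implementor listed under Implementors. The region choice, the Tokio entry point, and the `meshes` field on the output are assumptions for illustration, not guarantees made by this reference.

```
// Minimal sketch: credentials come from the default provider chain.
use rusoto_appmesh::{AppMesh, AppMeshClient, ListMeshesInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    // ListMeshesInput derives Default, so an empty input requests the first page.
    match client.list_meshes(ListMeshesInput::default()).await {
        Ok(output) => println!("found {} meshes", output.meshes.len()),
        Err(err) => eprintln!("list_meshes failed: {:?}", err),
    }
}
```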
Required Methods
---
#### fn create_gateway_route<'life0, 'async_trait>( &'life0 self, input: CreateGatewayRouteInput) -> Pin<Box<dyn Future<Output = Result<CreateGatewayRouteOutput, RusotoError<CreateGatewayRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a gateway route.
A gateway route is attached to a virtual gateway and routes traffic to an existing virtual service. If a route matches a request, it can distribute traffic to a target virtual service.
For more information about gateway routes, see Gateway routes.
#### fn create_mesh<'life0, 'async_trait>( &'life0 self, input: CreateMeshInput) -> Pin<Box<dyn Future<Output = Result<CreateMeshOutput, RusotoError<CreateMeshError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a service mesh.
A service mesh is a logical boundary for network traffic between services that are represented by resources within the mesh. After you create your service mesh, you can create virtual services, virtual nodes, virtual routers, and routes to distribute traffic between the applications in your mesh.
For more information about service meshes, see Service meshes.
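A hedged sketch of the call follows; it assumes that `mesh_name` is the only required field on `CreateMeshInput`, that the struct derives `Default`, and that the output exposes a `mesh` field, as the generated shapes suggest.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, CreateMeshInput};

async fn create_demo_mesh(client: &AppMeshClient) {
    let input = CreateMeshInput {
        mesh_name: "demo-mesh".to_string(),
        // client_token, spec, and tags stay at their defaults.
        ..Default::default()
    };
    match client.create_mesh(input).await {
        Ok(output) => println!("created mesh: {:?}", output.mesh),
        Err(err) => eprintln!("create_mesh failed: {:?}", err),
    }
}
```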
#### fn create_route<'life0, 'async_trait>( &'life0 self, input: CreateRouteInput) -> Pin<Box<dyn Future<Output = Result<CreateRouteOutput, RusotoError<CreateRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a route that is associated with a virtual router.
You can route several different protocols and define a retry policy for a route. Traffic can be routed to one or more virtual nodes.
For more information about routes, see Routes.
#### fn create_virtual_gateway<'life0, 'async_trait>( &'life0 self, input: CreateVirtualGatewayInput) -> Pin<Box<dyn Future<Output = Result<CreateVirtualGatewayOutput, RusotoError<CreateVirtualGatewayError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a virtual gateway.
A virtual gateway allows resources outside your mesh to communicate with resources that are inside your mesh. The virtual gateway represents an Envoy proxy running in an Amazon ECS task, in a Kubernetes service, or on an Amazon EC2 instance. Unlike a virtual node, which represents an Envoy running with an application, a virtual gateway represents Envoy deployed by itself.
For more information about virtual gateways, see Virtual gateways.
#### fn create_virtual_node<'life0, 'async_trait>( &'life0 self, input: CreateVirtualNodeInput) -> Pin<Box<dyn Future<Output = Result<CreateVirtualNodeOutput, RusotoError<CreateVirtualNodeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a virtual node within a service mesh.
A virtual node acts as a logical pointer to a particular task group, such as an Amazon ECS service or a Kubernetes deployment. When you create a virtual node, you can specify the service discovery information for your task group, and whether the proxy running in a task group will communicate with other proxies using Transport Layer Security (TLS).
You define a `listener` for any inbound traffic that your virtual node expects. Any virtual service that your virtual node expects to communicate with is specified as a `backend`.
The response metadata for your new virtual node contains the `arn` that is associated with the virtual node. Set this value (for example, `arn:aws:appmesh:us-west-2:123456789012:myMesh/default/virtualNode/myApp`) as the `APPMESH_RESOURCE_ARN` environment variable for your task group's Envoy proxy container in your task definition or pod spec. This is then mapped to the `node.id` and `node.cluster` Envoy parameters.
By default, App Mesh uses the name of the resource you specified in `APPMESH_RESOURCE_ARN` when Envoy is referring to itself in metrics and traces. You can override this behavior by setting the `APPMESH_RESOURCE_CLUSTER` environment variable with your own name.
For more information about virtual nodes, see Virtual nodes. You must be using version `1.15.0` or later of the Envoy image when setting these variables. For more information about App Mesh Envoy variables, see Envoy image in the AWS App Mesh User Guide.
#### fn create_virtual_router<'life0, 'async_trait>( &'life0 self, input: CreateVirtualRouterInput) -> Pin<Box<dyn Future<Output = Result<CreateVirtualRouterOutput, RusotoError<CreateVirtualRouterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a virtual router within a service mesh.
Specify a `listener` for any inbound traffic that your virtual router receives. Create a virtual router for each protocol and port that you need to route. Virtual routers handle traffic for one or more virtual services within your mesh. After you create your virtual router, create and associate routes for your virtual router that direct incoming requests to different virtual nodes.
For more information about virtual routers, see Virtual routers.
#### fn create_virtual_service<'life0, 'async_trait>( &'life0 self, input: CreateVirtualServiceInput) -> Pin<Box<dyn Future<Output = Result<CreateVirtualServiceOutput, RusotoError<CreateVirtualServiceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a virtual service within a service mesh.
A virtual service is an abstraction of a real service that is provided by a virtual node directly or indirectly by means of a virtual router. Dependent services call your virtual service by its `virtualServiceName`, and those requests are routed to the virtual node or virtual router that is specified as the provider for the virtual service.
For more information about virtual services, see Virtual services.
#### fn delete_gateway_route<'life0, 'async_trait>( &'life0 self, input: DeleteGatewayRouteInput) -> Pin<Box<dyn Future<Output = Result<DeleteGatewayRouteOutput, RusotoError<DeleteGatewayRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing gateway route.
#### fn delete_mesh<'life0, 'async_trait>( &'life0 self, input: DeleteMeshInput) -> Pin<Box<dyn Future<Output = Result<DeleteMeshOutput, RusotoError<DeleteMeshError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing service mesh.
You must delete all resources (virtual services, routes, virtual routers, and virtual nodes) in the service mesh before you can delete the mesh itself.
#### fn delete_route<'life0, 'async_trait>( &'life0 self, input: DeleteRouteInput) -> Pin<Box<dyn Future<Output = Result<DeleteRouteOutput, RusotoError<DeleteRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing route.
#### fn delete_virtual_gateway<'life0, 'async_trait>( &'life0 self, input: DeleteVirtualGatewayInput) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualGatewayOutput, RusotoError<DeleteVirtualGatewayError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing virtual gateway. You cannot delete a virtual gateway if any gateway routes are associated with it.
#### fn delete_virtual_node<'life0, 'async_trait>( &'life0 self, input: DeleteVirtualNodeInput) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualNodeOutput, RusotoError<DeleteVirtualNodeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing virtual node.
You must delete any virtual services that list a virtual node as a service provider before you can delete the virtual node itself.
#### fn delete_virtual_router<'life0, 'async_trait>( &'life0 self, input: DeleteVirtualRouterInput) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualRouterOutput, RusotoError<DeleteVirtualRouterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing virtual router.
You must delete any routes associated with the virtual router before you can delete the router itself.
#### fn delete_virtual_service<'life0, 'async_trait>( &'life0 self, input: DeleteVirtualServiceInput) -> Pin<Box<dyn Future<Output = Result<DeleteVirtualServiceOutput, RusotoError<DeleteVirtualServiceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an existing virtual service.
#### fn describe_gateway_route<'life0, 'async_trait>( &'life0 self, input: DescribeGatewayRouteInput) -> Pin<Box<dyn Future<Output = Result<DescribeGatewayRouteOutput, RusotoError<DescribeGatewayRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing gateway route.
#### fn describe_mesh<'life0, 'async_trait>( &'life0 self, input: DescribeMeshInput) -> Pin<Box<dyn Future<Output = Result<DescribeMeshOutput, RusotoError<DescribeMeshError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing service mesh.
#### fn describe_route<'life0, 'async_trait>( &'life0 self, input: DescribeRouteInput) -> Pin<Box<dyn Future<Output = Result<DescribeRouteOutput, RusotoError<DescribeRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing route.
#### fn describe_virtual_gateway<'life0, 'async_trait>( &'life0 self, input: DescribeVirtualGatewayInput) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualGatewayOutput, RusotoError<DescribeVirtualGatewayError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing virtual gateway.
#### fn describe_virtual_node<'life0, 'async_trait>( &'life0 self, input: DescribeVirtualNodeInput) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualNodeOutput, RusotoError<DescribeVirtualNodeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing virtual node.
#### fn describe_virtual_router<'life0, 'async_trait>( &'life0 self, input: DescribeVirtualRouterInput) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualRouterOutput, RusotoError<DescribeVirtualRouterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing virtual router.
#### fn describe_virtual_service<'life0, 'async_trait>( &'life0 self, input: DescribeVirtualServiceInput) -> Pin<Box<dyn Future<Output = Result<DescribeVirtualServiceOutput, RusotoError<DescribeVirtualServiceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes an existing virtual service.
#### fn list_gateway_routes<'life0, 'async_trait>( &'life0 self, input: ListGatewayRoutesInput) -> Pin<Box<dyn Future<Output = Result<ListGatewayRoutesOutput, RusotoError<ListGatewayRoutesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing gateway routes that are associated with a virtual gateway.
#### fn list_meshes<'life0, 'async_trait>( &'life0 self, input: ListMeshesInput) -> Pin<Box<dyn Future<Output = Result<ListMeshesOutput, RusotoError<ListMeshesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing service meshes.
#### fn list_routes<'life0, 'async_trait>( &'life0 self, input: ListRoutesInput) -> Pin<Box<dyn Future<Output = Result<ListRoutesOutput, RusotoError<ListRoutesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing routes in a service mesh.
#### fn list_tags_for_resource<'life0, 'async_trait>( &'life0 self, input: ListTagsForResourceInput) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceOutput, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the tags for an App Mesh resource.
#### fn list_virtual_gateways<'life0, 'async_trait>( &'life0 self, input: ListVirtualGatewaysInput) -> Pin<Box<dyn Future<Output = Result<ListVirtualGatewaysOutput, RusotoError<ListVirtualGatewaysError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing virtual gateways in a service mesh.
#### fn list_virtual_nodes<'life0, 'async_trait>( &'life0 self, input: ListVirtualNodesInput) -> Pin<Box<dyn Future<Output = Result<ListVirtualNodesOutput, RusotoError<ListVirtualNodesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing virtual nodes.
#### fn list_virtual_routers<'life0, 'async_trait>( &'life0 self, input: ListVirtualRoutersInput) -> Pin<Box<dyn Future<Output = Result<ListVirtualRoutersOutput, RusotoError<ListVirtualRoutersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing virtual routers in a service mesh.
#### fn list_virtual_services<'life0, 'async_trait>( &'life0 self, input: ListVirtualServicesInput) -> Pin<Box<dyn Future<Output = Result<ListVirtualServicesOutput, RusotoError<ListVirtualServicesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of existing virtual services in a service mesh.
#### fn tag_resource<'life0, 'async_trait>( &'life0 self, input: TagResourceInput) -> Pin<Box<dyn Future<Output = Result<TagResourceOutput, RusotoError<TagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Associates the specified tags with a resource that has the specified `resourceArn`. If existing tags on a resource aren't specified in the request parameters, they aren't changed. When a resource is deleted, the tags associated with that resource are also deleted. A sketch follows; the `TagRef` shape (required `key`, optional `value`) and the `resource_arn`/`tags` fields on `TagResourceInput` are assumptions, since those types are not shown in this section.
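```
use rusoto_appmesh::{AppMesh, AppMeshClient, TagRef, TagResourceInput};

async fn tag_mesh_resource(client: &AppMeshClient, arn: &str) {
    let input = TagResourceInput {
        resource_arn: arn.to_string(),
        // One key/value tag; field layout of TagRef is assumed here.
        tags: vec![TagRef {
            key: "team".to_string(),
            value: Some("platform".to_string()),
        }],
    };
    if let Err(err) = client.tag_resource(input).await {
        eprintln!("tag_resource failed: {:?}", err);
    }
}
```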
#### fn untag_resource<'life0, 'async_trait>( &'life0 self, input: UntagResourceInput) -> Pin<Box<dyn Future<Output = Result<UntagResourceOutput, RusotoError<UntagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes specified tags from a resource.
#### fn update_gateway_route<'life0, 'async_trait>( &'life0 self, input: UpdateGatewayRouteInput) -> Pin<Box<dyn Future<Output = Result<UpdateGatewayRouteOutput, RusotoError<UpdateGatewayRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing gateway route that is associated with a specified virtual gateway in a service mesh.
#### fn update_mesh<'life0, 'async_trait>( &'life0 self, input: UpdateMeshInput) -> Pin<Box<dyn Future<Output = Result<UpdateMeshOutput, RusotoError<UpdateMeshError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing service mesh.
#### fn update_route<'life0, 'async_trait>( &'life0 self, input: UpdateRouteInput) -> Pin<Box<dyn Future<Output = Result<UpdateRouteOutput, RusotoError<UpdateRouteError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing route for a specified service mesh and virtual router.
#### fn update_virtual_gateway<'life0, 'async_trait>( &'life0 self, input: UpdateVirtualGatewayInput) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualGatewayOutput, RusotoError<UpdateVirtualGatewayError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing virtual gateway in a specified service mesh.
#### fn update_virtual_node<'life0, 'async_trait>( &'life0 self, input: UpdateVirtualNodeInput) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualNodeOutput, RusotoError<UpdateVirtualNodeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing virtual node in a specified service mesh.
#### fn update_virtual_router<'life0, 'async_trait>( &'life0 self, input: UpdateVirtualRouterInput) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualRouterOutput, RusotoError<UpdateVirtualRouterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing virtual router in a specified service mesh.
#### fn update_virtual_service<'life0, 'async_trait>( &'life0 self, input: UpdateVirtualServiceInput) -> Pin<Box<dyn Future<Output = Result<UpdateVirtualServiceOutput, RusotoError<UpdateVirtualServiceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates an existing virtual service in a specified service mesh.
Implementors
---
### impl AppMesh for AppMeshClient
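Because `AppMeshClient` is the only implementor shipped with the crate, the trait is mainly useful for writing code that can be mocked in tests. A minimal sketch, assuming `DescribeMeshInput` has a required `mesh_name` and derives `Default`:

```
use rusoto_appmesh::{AppMesh, DescribeMeshInput};

// Accepting any `AppMesh` implementor lets tests substitute a mock client.
async fn mesh_exists<C: AppMesh>(client: &C, name: &str) -> bool {
    let input = DescribeMeshInput {
        mesh_name: name.to_string(),
        ..Default::default()
    };
    client.describe_mesh(input).await.is_ok()
}
```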
Struct rusoto_appmesh::AccessLog
===
```
pub struct AccessLog {
pub file: Option<FileAccessLog>,
}
```
An object that represents the access logging information for a virtual node.
Fields
---
`file: Option<FileAccessLog>` The file object to send virtual node access logs to.
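A sketch of enabling file-based access logging; the `path: String` field on `FileAccessLog` is an assumption here, so check that struct's own documentation for the actual shape.

```
use rusoto_appmesh::{AccessLog, FileAccessLog};

fn stdout_access_log() -> AccessLog {
    AccessLog {
        file: Some(FileAccessLog {
            // /dev/stdout forwards Envoy access logs to the container log stream.
            path: "/dev/stdout".to_string(),
            // Guards against any additional optional fields on FileAccessLog.
            ..Default::default()
        }),
    }
}
```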
Trait Implementations
---
### impl Clone for AccessLog
#### fn clone(&self) -> AccessLog
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for AccessLog
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for AccessLog
#### fn default() -> AccessLog
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for AccessLog
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<AccessLog> for AccessLog
#### fn eq(&self, other: &AccessLog) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &AccessLog) -> bool
This method tests for `!=`.
### impl Serialize for AccessLog
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for AccessLog
Auto Trait Implementations
---
### impl RefUnwindSafe for AccessLog
### impl Send for AccessLog
### impl Sync for AccessLog
### impl Unpin for AccessLog
### impl UnwindSafe for AccessLog
Blanket Implementations
---
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::AwsCloudMapInstanceAttribute
===
```
pub struct AwsCloudMapInstanceAttribute {
pub key: String,
pub value: String,
}
```
An object that represents the Cloud Map attribute information for your virtual node.
AWS Cloud Map is not available in the eu-south-1 Region.
Fields
---
`key: String` The name of a Cloud Map service instance attribute key. Any Cloud Map service instance that contains the specified key and value is returned.
`value: String` The value of a Cloud Map service instance attribute key. Any Cloud Map service instance that contains the specified key and value is returned.
Trait Implementations
---
### impl Clone for AwsCloudMapInstanceAttribute
#### fn clone(&self) -> AwsCloudMapInstanceAttribute
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for AwsCloudMapInstanceAttribute
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for AwsCloudMapInstanceAttribute
#### fn default() -> AwsCloudMapInstanceAttribute
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for AwsCloudMapInstanceAttribute
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<AwsCloudMapInstanceAttribute> for AwsCloudMapInstanceAttribute
#### fn eq(&self, other: &AwsCloudMapInstanceAttribute) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &AwsCloudMapInstanceAttribute) -> bool
This method tests for `!=`.
### impl Serialize for AwsCloudMapInstanceAttribute
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for AwsCloudMapInstanceAttribute
Auto Trait Implementations
---
### impl RefUnwindSafe for AwsCloudMapInstanceAttribute
### impl Send for AwsCloudMapInstanceAttribute
### impl Sync for AwsCloudMapInstanceAttribute
### impl Unpin for AwsCloudMapInstanceAttribute
### impl UnwindSafe for AwsCloudMapInstanceAttribute
Blanket Implementations
---
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::AwsCloudMapServiceDiscovery
===
```
pub struct AwsCloudMapServiceDiscovery {
pub attributes: Option<Vec<AwsCloudMapInstanceAttribute>>,
pub namespace_name: String,
pub service_name: String,
}
```
An object that represents the Cloud Map service discovery information for your virtual node.
Cloud Map is not available in the eu-south-1 Region.
Fields
---
`attributes: Option<Vec<AwsCloudMapInstanceAttribute>>` A string map that contains attributes with values that you can use to filter instances by any custom attribute that you specified when you registered the instance. Only instances that match all of the specified key/value pairs will be returned.
`namespace_name: String` The name of the Cloud Map namespace to use.
`service_name: String` The name of the Cloud Map service to use.
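Putting the two Cloud Map types together, a sketch of a discovery block that filters instances on one custom attribute (the namespace, service, and attribute names are placeholders):

```
use rusoto_appmesh::{AwsCloudMapInstanceAttribute, AwsCloudMapServiceDiscovery};

fn cloud_map_discovery() -> AwsCloudMapServiceDiscovery {
    AwsCloudMapServiceDiscovery {
        // Only instances registered with stage=prod are matched.
        attributes: Some(vec![AwsCloudMapInstanceAttribute {
            key: "stage".to_string(),
            value: "prod".to_string(),
        }]),
        namespace_name: "example.local".to_string(),
        service_name: "my-service".to_string(),
    }
}
```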
Trait Implementations
---
### impl Clone for AwsCloudMapServiceDiscovery
#### fn clone(&self) -> AwsCloudMapServiceDiscovery
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for AwsCloudMapServiceDiscovery
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for AwsCloudMapServiceDiscovery
#### fn default() -> AwsCloudMapServiceDiscovery
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for AwsCloudMapServiceDiscovery
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<AwsCloudMapServiceDiscovery> for AwsCloudMapServiceDiscovery
#### fn eq(&self, other: &AwsCloudMapServiceDiscovery) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &AwsCloudMapServiceDiscovery) -> bool
This method tests for `!=`.
### impl Serialize for AwsCloudMapServiceDiscovery
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for AwsCloudMapServiceDiscovery
Auto Trait Implementations
---
### impl RefUnwindSafe for AwsCloudMapServiceDiscovery
### impl Send for AwsCloudMapServiceDiscovery
### impl Sync for AwsCloudMapServiceDiscovery
### impl Unpin for AwsCloudMapServiceDiscovery
### impl UnwindSafe for AwsCloudMapServiceDiscovery
Blanket Implementations
---
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::Backend
===
```
pub struct Backend {
pub virtual_service: Option<VirtualServiceBackend>,
}
```
An object that represents the backends that a virtual node is expected to send outbound traffic to.
Fields
---
`virtual_service: Option<VirtualServiceBackend>` Specifies a virtual service to use as a backend.
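A sketch of declaring one backend; the `virtual_service_name` field and the `Default` implementation on `VirtualServiceBackend` are assumptions, since that struct is documented elsewhere in the crate.

```
use rusoto_appmesh::{Backend, VirtualServiceBackend};

fn backend_for(service_name: &str) -> Backend {
    Backend {
        virtual_service: Some(VirtualServiceBackend {
            virtual_service_name: service_name.to_string(),
            // Leave the optional client policy unset.
            ..Default::default()
        }),
    }
}
```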
Trait Implementations
---
### impl Clone for Backend
#### fn clone(&self) -> Backend
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Backend
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Backend
#### fn default() -> Backend
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for Backend
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<Backend> for Backend
#### fn eq(&self, other: &Backend) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Backend) -> bool
This method tests for `!=`.
### impl Serialize for Backend
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for Backend
Auto Trait Implementations
---
### impl RefUnwindSafe for Backend
### impl Send for Backend
### impl Sync for Backend
### impl Unpin for Backend
### impl UnwindSafe for Backend
Blanket Implementations
---
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::BackendDefaults
===
```
pub struct BackendDefaults {
pub client_policy: Option<ClientPolicy>,
}
```
An object that represents the default properties for a backend.
Fields
---
`client_policy: Option<ClientPolicy>` A reference to an object that represents a client policy.
Trait Implementations
---
### impl Clone for BackendDefaults
#### fn clone(&self) -> BackendDefaults
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for BackendDefaults
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for BackendDefaults
#### fn default() -> BackendDefaults
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for BackendDefaults
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<BackendDefaults> for BackendDefaults
#### fn eq(&self, other: &BackendDefaults) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &BackendDefaults) -> bool
This method tests for `!=`.
### impl Serialize for BackendDefaults
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for BackendDefaults
Auto Trait Implementations
---
### impl RefUnwindSafe for BackendDefaults
### impl Send for BackendDefaults
### impl Sync for BackendDefaults
### impl Unpin for BackendDefaults
### impl UnwindSafe for BackendDefaults
Blanket Implementations
---
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ClientPolicy
===
```
pub struct ClientPolicy {
pub tls: Option<ClientPolicyTls>,
}
```
An object that represents a client policy.
Fields
---
`tls: Option<ClientPolicyTls>` A reference to an object that represents a Transport Layer Security (TLS) client policy.
Trait Implementations
---
### impl Clone for ClientPolicy
#### fn clone(&self) -> ClientPolicy
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ClientPolicy
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ClientPolicy
#### fn default() -> ClientPolicy
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ClientPolicy
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ClientPolicy> for ClientPolicy
#### fn eq(&self, other: &ClientPolicy) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ClientPolicy) -> bool
This method tests for `!=`.
### impl Serialize for ClientPolicy
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ClientPolicy
Auto Trait Implementations
---
### impl RefUnwindSafe for ClientPolicy
### impl Send for ClientPolicy
### impl Sync for ClientPolicy
### impl Unpin for ClientPolicy
### impl UnwindSafe for ClientPolicy
Blanket Implementations
---
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ClientPolicyTls
===
```
pub struct ClientPolicyTls {
pub certificate: Option<ClientTlsCertificate>,
pub enforce: Option<bool>,
pub ports: Option<Vec<i64>>,
pub validation: TlsValidationContext,
}
```
An object that represents a Transport Layer Security (TLS) client policy.
Fields
---
`certificate: Option<ClientTlsCertificate>` A reference to an object that represents a client's TLS certificate.
`enforce: Option<bool>` Whether the policy is enforced. The default is `true` if a value isn't specified.
`ports: Option<Vec<i64>>` One or more ports that the policy is enforced for.
`validation: TlsValidationContext` A reference to an object that represents a TLS validation context.
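A sketch that leans on the `Default` implementation listed below to fill the unset fields; a real policy still needs a meaningful `validation` value.

```
use rusoto_appmesh::ClientPolicyTls;

fn enforced_tls_policy() -> ClientPolicyTls {
    ClientPolicyTls {
        enforce: Some(true),
        // Enforce TLS on port 443 only.
        ports: Some(vec![443]),
        // certificate and validation fall back to their Default values here;
        // supply a real TlsValidationContext in production configuration.
        ..Default::default()
    }
}
```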
Trait Implementations
---
### impl Clone for ClientPolicyTls
#### fn clone(&self) -> ClientPolicyTls
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ClientPolicyTls
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ClientPolicyTls
#### fn default() -> ClientPolicyTls
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ClientPolicyTls
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ClientPolicyTls> for ClientPolicyTls
#### fn eq(&self, other: &ClientPolicyTls) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ClientPolicyTls) -> bool
This method tests for `!=`.
### impl Serialize for ClientPolicyTls
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ClientPolicyTls
Auto Trait Implementations
---
### impl RefUnwindSafe for ClientPolicyTls
### impl Send for ClientPolicyTls
### impl Sync for ClientPolicyTls
### impl Unpin for ClientPolicyTls
### impl UnwindSafe for ClientPolicyTls
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ClientTlsCertificate
===
```
pub struct ClientTlsCertificate {
pub file: Option<ListenerTlsFileCertificate>,
pub sds: Option<ListenerTlsSdsCertificate>,
}
```
An object that represents the client's certificate.
Fields
---
`file: Option<ListenerTlsFileCertificate>`
An object that represents a local file certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
`sds: Option<ListenerTlsSdsCertificate>`
A reference to an object that represents a client's TLS Secret Discovery Service certificate.
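Because `file` and `sds` are alternative certificate sources, a value typically sets exactly one of them. A hedged sketch with a file-based certificate follows; the paths are hypothetical, and the `certificate_chain`/`private_key` fields on `ListenerTlsFileCertificate` are assumed from the corresponding App Mesh API shape:
```
use rusoto_appmesh::{ClientTlsCertificate, ListenerTlsFileCertificate};

// File-based client certificate; `sds` stays unset because the two
// certificate sources are alternatives.
let cert = ClientTlsCertificate {
    file: Some(ListenerTlsFileCertificate {
        certificate_chain: "/etc/tls/cert_chain.pem".to_string(), // hypothetical path
        private_key: "/etc/tls/private_key.pem".to_string(),      // hypothetical path
    }),
    sds: None,
};
```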
Trait Implementations
---
### impl Clone for ClientTlsCertificate
#### fn clone(&self) -> ClientTlsCertificate
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ClientTlsCertificate
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ClientTlsCertificate
#### fn default() -> ClientTlsCertificate
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ClientTlsCertificate
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ClientTlsCertificate> for ClientTlsCertificate
#### fn eq(&self, other: &ClientTlsCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ClientTlsCertificate) -> bool
This method tests for `!=`.
### impl Serialize for ClientTlsCertificate
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ClientTlsCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for ClientTlsCertificate
### impl Send for ClientTlsCertificate
### impl Sync for ClientTlsCertificate
### impl Unpin for ClientTlsCertificate
### impl UnwindSafe for ClientTlsCertificate
Struct rusoto_appmesh::CreateGatewayRouteInput
===
```
pub struct CreateGatewayRouteInput {
pub client_token: Option<String>,
pub gateway_route_name: String,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: GatewayRouteSpec,
pub tags: Option<Vec<TagRef>>,
pub virtual_gateway_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`gateway_route_name: String`
The name to use for the gateway route.
`mesh_name: String`
The name of the service mesh to create the gateway route in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then the account that you specify must share the mesh with your account before you can create the resource in the service mesh. For more information about mesh sharing, see Working with shared meshes.
`spec: GatewayRouteSpec`
The gateway route specification to apply.
`tags: Option<Vec<TagRef>>`
Optional metadata that you can apply to the gateway route to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
`virtual_gateway_name: String`
The name of the virtual gateway to associate the gateway route with. If the virtual gateway is in a shared mesh, then you must be the owner of the virtual gateway resource.
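A sketch of assembling the request, setting only the required names; `GatewayRouteSpec::default()` stands in for a real specification (its fields are documented separately, and it is assumed to derive `Default` like the other generated types), and all names are hypothetical:
```
use rusoto_appmesh::{CreateGatewayRouteInput, GatewayRouteSpec};

let input = CreateGatewayRouteInput {
    gateway_route_name: "my-gateway-route".to_string(),
    mesh_name: "my-mesh".to_string(),
    virtual_gateway_name: "my-virtual-gateway".to_string(),
    // Placeholder: a usable spec must define an HTTP, HTTP/2, or gRPC route.
    spec: GatewayRouteSpec::default(),
    ..Default::default() // client_token, mesh_owner, and tags stay unset
};
```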
Trait Implementations
---
### impl Clone for CreateGatewayRouteInput
#### fn clone(&self) -> CreateGatewayRouteInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateGatewayRouteInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateGatewayRouteInput
#### fn default() -> CreateGatewayRouteInput
Returns the “default value” for a type.
### impl PartialEq<CreateGatewayRouteInput> for CreateGatewayRouteInput
#### fn eq(&self, other: &CreateGatewayRouteInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateGatewayRouteInput) -> bool
This method tests for `!=`.
### impl Serialize for CreateGatewayRouteInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for CreateGatewayRouteInput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateGatewayRouteInput
### impl Send for CreateGatewayRouteInput
### impl Sync for CreateGatewayRouteInput
### impl Unpin for CreateGatewayRouteInput
### impl UnwindSafe for CreateGatewayRouteInput
Struct rusoto_appmesh::CreateGatewayRouteOutput
===
```
pub struct CreateGatewayRouteOutput {
pub gateway_route: GatewayRouteData,
}
```
Fields
---
`gateway_route: GatewayRouteData`
The full description of your gateway route following the create call.
Trait Implementations
---
### impl Clone for CreateGatewayRouteOutput
#### fn clone(&self) -> CreateGatewayRouteOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateGatewayRouteOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateGatewayRouteOutput
#### fn default() -> CreateGatewayRouteOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for CreateGatewayRouteOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<CreateGatewayRouteOutput> for CreateGatewayRouteOutput
#### fn eq(&self, other: &CreateGatewayRouteOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateGatewayRouteOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateGatewayRouteOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateGatewayRouteOutput
### impl Send for CreateGatewayRouteOutput
### impl Sync for CreateGatewayRouteOutput
### impl Unpin for CreateGatewayRouteOutput
### impl UnwindSafe for CreateGatewayRouteOutput
Struct rusoto_appmesh::CreateMeshInput
===
```
pub struct CreateMeshInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub spec: Option<MeshSpec>,
pub tags: Option<Vec<TagRef>>,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name to use for the service mesh.
`spec: Option<MeshSpec>`
The service mesh specification to apply.
`tags: Option<Vec<TagRef>>`
Optional metadata that you can apply to the service mesh to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
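Since `mesh_name` is the only required field here, a minimal request can lean on the derived `Default` for everything else; the name and token below are hypothetical:
```
use rusoto_appmesh::CreateMeshInput;

let input = CreateMeshInput {
    mesh_name: "my-mesh".to_string(),
    // Optional idempotency token: up to 36 letters, numbers, hyphens, and underscores.
    client_token: Some("create-my-mesh-001".to_string()),
    ..Default::default() // spec and tags stay unset
};
```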
Trait Implementations
---
### impl Clone for CreateMeshInput
#### fn clone(&self) -> CreateMeshInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateMeshInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateMeshInput
#### fn default() -> CreateMeshInput
Returns the “default value” for a type.
### impl PartialEq<CreateMeshInput> for CreateMeshInput
#### fn eq(&self, other: &CreateMeshInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateMeshInput) -> bool
This method tests for `!=`.
### impl Serialize for CreateMeshInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for CreateMeshInput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateMeshInput
### impl Send for CreateMeshInput
### impl Sync for CreateMeshInput
### impl Unpin for CreateMeshInput
### impl UnwindSafe for CreateMeshInput
Struct rusoto_appmesh::CreateMeshOutput
===
```
pub struct CreateMeshOutput {
pub mesh: MeshData,
}
```
Fields
---
`mesh: MeshData`
The full description of your service mesh following the create call.
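For context, this output is what the client's `create_mesh` call resolves to. A sketch under the usual Rusoto conventions (an `AppMesh` service trait implemented by the generated `AppMeshClient`, driven by a Tokio runtime); the region and mesh name are placeholders:
```
use rusoto_appmesh::{AppMesh, AppMeshClient, CreateMeshInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = CreateMeshInput {
        mesh_name: "my-mesh".to_string(), // hypothetical
        ..Default::default()
    };
    match client.create_mesh(input).await {
        // `output.mesh` carries the full MeshData description.
        Ok(output) => println!("created: {:?}", output.mesh),
        Err(err) => eprintln!("create_mesh failed: {}", err),
    }
}
```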
Trait Implementations
---
### impl Clone for CreateMeshOutput
#### fn clone(&self) -> CreateMeshOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateMeshOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateMeshOutput
#### fn default() -> CreateMeshOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for CreateMeshOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<CreateMeshOutput> for CreateMeshOutput
#### fn eq(&self, other: &CreateMeshOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateMeshOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateMeshOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateMeshOutput
### impl Send for CreateMeshOutput
### impl Sync for CreateMeshOutput
### impl Unpin for CreateMeshOutput
### impl UnwindSafe for CreateMeshOutput
Struct rusoto_appmesh::CreateRouteInput
===
```
pub struct CreateRouteInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub route_name: String,
pub spec: RouteSpec,
pub tags: Option<Vec<TagRef>>,
pub virtual_router_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh to create the route in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then the account that you specify must share the mesh with your account before you can create the resource in the service mesh. For more information about mesh sharing, see Working with shared meshes.
`route_name: String`
The name to use for the route.
`spec: RouteSpec`
The route specification to apply.
`tags: Option<Vec<TagRef>>`
Optional metadata that you can apply to the route to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
`virtual_router_name: String`
The name of the virtual router in which to create the route. If the virtual router is in a shared mesh, then you must be the owner of the virtual router resource.
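A sketch wiring the required names together; `RouteSpec::default()` is only a stand-in (assumed, like the other generated shapes, to derive `Default`), since a meaningful spec defines one of the HTTP, HTTP/2, gRPC, or TCP route types:
```
use rusoto_appmesh::{CreateRouteInput, RouteSpec};

let input = CreateRouteInput {
    mesh_name: "my-mesh".to_string(),             // hypothetical
    virtual_router_name: "my-router".to_string(), // router that owns the route
    route_name: "my-route".to_string(),
    spec: RouteSpec::default(), // placeholder for a real route definition
    ..Default::default()
};
```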
Trait Implementations
---
### impl Clone for CreateRouteInput
#### fn clone(&self) -> CreateRouteInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateRouteInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateRouteInput
#### fn default() -> CreateRouteInput
Returns the “default value” for a type.
### impl PartialEq<CreateRouteInput> for CreateRouteInput
#### fn eq(&self, other: &CreateRouteInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateRouteInput) -> bool
This method tests for `!=`.
### impl Serialize for CreateRouteInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for CreateRouteInput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateRouteInput
### impl Send for CreateRouteInput
### impl Sync for CreateRouteInput
### impl Unpin for CreateRouteInput
### impl UnwindSafe for CreateRouteInput
Struct rusoto_appmesh::CreateRouteOutput
===
```
pub struct CreateRouteOutput {
pub route: RouteData,
}
```
Fields
---
`route: RouteData`
The full description of your route following the create call.
Trait Implementations
---
### impl Clone for CreateRouteOutput
#### fn clone(&self) -> CreateRouteOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateRouteOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateRouteOutput
#### fn default() -> CreateRouteOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for CreateRouteOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<CreateRouteOutput> for CreateRouteOutput
#### fn eq(&self, other: &CreateRouteOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateRouteOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateRouteOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateRouteOutput
### impl Send for CreateRouteOutput
### impl Sync for CreateRouteOutput
### impl Unpin for CreateRouteOutput
### impl UnwindSafe for CreateRouteOutput
Struct rusoto_appmesh::CreateVirtualGatewayInput
===
```
pub struct CreateVirtualGatewayInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: VirtualGatewaySpec,
pub tags: Option<Vec<TagRef>>,
pub virtual_gateway_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh to create the virtual gateway in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then the account that you specify must share the mesh with your account before you can create the resource in the service mesh. For more information about mesh sharing, see Working with shared meshes.
`spec: VirtualGatewaySpec`
The virtual gateway specification to apply.
`tags: Option<Vec<TagRef>>`
Optional metadata that you can apply to the virtual gateway to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
`virtual_gateway_name: String`
The name to use for the virtual gateway.
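The same construction pattern applies here; `VirtualGatewaySpec::default()` is a placeholder (assumed to derive `Default`), as a real spec declares at least one listener:
```
use rusoto_appmesh::{CreateVirtualGatewayInput, VirtualGatewaySpec};

let input = CreateVirtualGatewayInput {
    mesh_name: "my-mesh".to_string(),                       // hypothetical
    virtual_gateway_name: "my-virtual-gateway".to_string(), // hypothetical
    spec: VirtualGatewaySpec::default(), // placeholder: declare listeners here
    ..Default::default()
};
```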
Trait Implementations
---
### impl Clone for CreateVirtualGatewayInput
#### fn clone(&self) -> CreateVirtualGatewayInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateVirtualGatewayInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateVirtualGatewayInput
#### fn default() -> CreateVirtualGatewayInput
Returns the “default value” for a type.
### impl PartialEq<CreateVirtualGatewayInput> for CreateVirtualGatewayInput
#### fn eq(&self, other: &CreateVirtualGatewayInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateVirtualGatewayInput) -> bool
This method tests for `!=`.
### impl Serialize for CreateVirtualGatewayInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for CreateVirtualGatewayInput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualGatewayInput
### impl Send for CreateVirtualGatewayInput
### impl Sync for CreateVirtualGatewayInput
### impl Unpin for CreateVirtualGatewayInput
### impl UnwindSafe for CreateVirtualGatewayInput
Struct rusoto_appmesh::CreateVirtualGatewayOutput
===
```
pub struct CreateVirtualGatewayOutput {
pub virtual_gateway: VirtualGatewayData,
}
```
Fields
---
`virtual_gateway: VirtualGatewayData`
The full description of your virtual gateway following the create call.
Trait Implementations
---
### impl Clone for CreateVirtualGatewayOutput
#### fn clone(&self) -> CreateVirtualGatewayOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateVirtualGatewayOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateVirtualGatewayOutput
#### fn default() -> CreateVirtualGatewayOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for CreateVirtualGatewayOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<CreateVirtualGatewayOutput> for CreateVirtualGatewayOutput
#### fn eq(&self, other: &CreateVirtualGatewayOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateVirtualGatewayOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateVirtualGatewayOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualGatewayOutput
### impl Send for CreateVirtualGatewayOutput
### impl Sync for CreateVirtualGatewayOutput
### impl Unpin for CreateVirtualGatewayOutput
### impl UnwindSafe for CreateVirtualGatewayOutput
Struct rusoto_appmesh::CreateVirtualNodeInput
===
```
pub struct CreateVirtualNodeInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: VirtualNodeSpec,
pub tags: Option<Vec<TagRef>>,
pub virtual_node_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh to create the virtual node in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then the account that you specify must share the mesh with your account before you can create the resource in the service mesh. For more information about mesh sharing, see Working with shared meshes.
`spec: VirtualNodeSpec`
The virtual node specification to apply.
`tags: Option<Vec<TagRef>>`
Optional metadata that you can apply to the virtual node to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
`virtual_node_name: String`
The name to use for the virtual node.
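A construction sketch; `VirtualNodeSpec::default()` is a placeholder (assumed to derive `Default`) for the listeners, backends, and service discovery a real node needs:
```
use rusoto_appmesh::{CreateVirtualNodeInput, VirtualNodeSpec};

let input = CreateVirtualNodeInput {
    mesh_name: "my-mesh".to_string(),         // hypothetical
    virtual_node_name: "my-node".to_string(), // hypothetical
    spec: VirtualNodeSpec::default(), // placeholder: listeners, backends, discovery
    ..Default::default()
};
```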
Trait Implementations
---
### impl Clone for CreateVirtualNodeInput
#### fn clone(&self) -> CreateVirtualNodeInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateVirtualNodeInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateVirtualNodeInput
#### fn default() -> CreateVirtualNodeInput
Returns the “default value” for a type.
### impl PartialEq<CreateVirtualNodeInput> for CreateVirtualNodeInput
#### fn eq(&self, other: &CreateVirtualNodeInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateVirtualNodeInput) -> bool
This method tests for `!=`.
### impl Serialize for CreateVirtualNodeInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for CreateVirtualNodeInput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualNodeInput
### impl Send for CreateVirtualNodeInput
### impl Sync for CreateVirtualNodeInput
### impl Unpin for CreateVirtualNodeInput
### impl UnwindSafe for CreateVirtualNodeInput
Struct rusoto_appmesh::CreateVirtualNodeOutput
===
```
pub struct CreateVirtualNodeOutput {
pub virtual_node: VirtualNodeData,
}
```
Fields
---
`virtual_node: VirtualNodeData`
The full description of your virtual node following the create call.
Trait Implementations
---
### impl Clone for CreateVirtualNodeOutput
#### fn clone(&self) -> CreateVirtualNodeOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CreateVirtualNodeOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CreateVirtualNodeOutput
#### fn default() -> CreateVirtualNodeOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for CreateVirtualNodeOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<CreateVirtualNodeOutput> for CreateVirtualNodeOutput
#### fn eq(&self, other: &CreateVirtualNodeOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateVirtualNodeOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateVirtualNodeOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualNodeOutput
### impl Send for CreateVirtualNodeOutput
### impl Sync for CreateVirtualNodeOutput
### impl Unpin for CreateVirtualNodeOutput
### impl UnwindSafe for CreateVirtualNodeOutput
Struct rusoto_appmesh::CreateVirtualRouterInput
===
```
pub struct CreateVirtualRouterInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: VirtualRouterSpec,
pub tags: Option<Vec<TagRef>>,
pub virtual_router_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh to create the virtual router in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then the account that you specify must share the mesh with your account before you can create the resource in the service mesh. For more information about mesh sharing, see Working with shared meshes.
`spec: VirtualRouterSpec`
The virtual router specification to apply.
`tags: Option<Vec<TagRef>>`
Optional metadata that you can apply to the virtual router to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
`virtual_router_name: String`
The name to use for the virtual router.
Trait Implementations
---
source### impl Clone for CreateVirtualRouterInput
source#### fn clone(&self) -> CreateVirtualRouterInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateVirtualRouterInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateVirtualRouterInput
source#### fn default() -> CreateVirtualRouterInput
Returns the “default value” for a type. Read more
source### impl PartialEq<CreateVirtualRouterInput> for CreateVirtualRouterInput
source#### fn eq(&self, other: &CreateVirtualRouterInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateVirtualRouterInput) -> bool
This method tests for `!=`.
source### impl Serialize for CreateVirtualRouterInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for CreateVirtualRouterInput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualRouterInput
### impl Send for CreateVirtualRouterInput
### impl Sync for CreateVirtualRouterInput
### impl Unpin for CreateVirtualRouterInput
### impl UnwindSafe for CreateVirtualRouterInput
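For orientation, a minimal sketch of driving this input through the generated client, assuming the asynchronous rusoto API (the `AppMesh` trait implemented by `AppMeshClient`) and a Tokio runtime; the mesh and router names are placeholders:

```
// Hypothetical usage sketch; "my-mesh" and "my-router" are placeholder names.
use rusoto_appmesh::{AppMesh, AppMeshClient, CreateVirtualRouterInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = CreateVirtualRouterInput {
        mesh_name: "my-mesh".to_string(),
        virtual_router_name: "my-router".to_string(),
        // A real spec would declare listeners; Default stands in here.
        spec: Default::default(),
        // client_token, mesh_owner, and tags default to None.
        ..Default::default()
    };
    let output = client.create_virtual_router(input).await?;
    println!("created: {:?}", output.virtual_router);
    Ok(())
}
```

Because `client_token` drives idempotency, callers typically fill it with a freshly generated unique string and reuse that same value on retries.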
Struct rusoto_appmesh::CreateVirtualRouterOutput
===
```
pub struct CreateVirtualRouterOutput {
pub virtual_router: VirtualRouterData,
}
```
Fields
---
`virtual_router: VirtualRouterData`
The full description of your virtual router following the create call.
Trait Implementations
---
### impl Clone for CreateVirtualRouterOutput
#### fn clone(&self) -> CreateVirtualRouterOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for CreateVirtualRouterOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for CreateVirtualRouterOutput
#### fn default() -> CreateVirtualRouterOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for CreateVirtualRouterOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<CreateVirtualRouterOutput> for CreateVirtualRouterOutput
#### fn eq(&self, other: &CreateVirtualRouterOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &CreateVirtualRouterOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateVirtualRouterOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualRouterOutput
### impl Send for CreateVirtualRouterOutput
### impl Sync for CreateVirtualRouterOutput
### impl Unpin for CreateVirtualRouterOutput
### impl UnwindSafe for CreateVirtualRouterOutput
Struct rusoto_appmesh::CreateVirtualServiceInput
===
```
pub struct CreateVirtualServiceInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: VirtualServiceSpec,
pub tags: Option<Vec<TagRef>>,
pub virtual_service_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh to create the virtual service in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then the account that you specify must share the mesh with your account before you can create the resource in the service mesh. For more information about mesh sharing, see Working with shared meshes.
`spec: VirtualServiceSpec`
The virtual service specification to apply.
`tags: Option<Vec<TagRef>>`
Optional metadata that you can apply to the virtual service to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
`virtual_service_name: String`
The name to use for the virtual service.
Trait Implementations
---
### impl Clone for CreateVirtualServiceInput
#### fn clone(&self) -> CreateVirtualServiceInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for CreateVirtualServiceInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for CreateVirtualServiceInput
#### fn default() -> CreateVirtualServiceInput
Returns the “default value” for a type. Read more
### impl PartialEq<CreateVirtualServiceInput> for CreateVirtualServiceInput
#### fn eq(&self, other: &CreateVirtualServiceInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &CreateVirtualServiceInput) -> bool
This method tests for `!=`.
### impl Serialize for CreateVirtualServiceInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for CreateVirtualServiceInput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualServiceInput
### impl Send for CreateVirtualServiceInput
### impl Sync for CreateVirtualServiceInput
### impl Unpin for CreateVirtualServiceInput
### impl UnwindSafe for CreateVirtualServiceInput
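The call pattern mirrors the virtual router example above; a hedged sketch with placeholder names, the spec left at its default, and an already-constructed client assumed:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, CreateVirtualServiceInput};

// Sketch only: assumes an async context and an existing AppMeshClient.
async fn create_service(client: &AppMeshClient) -> Result<(), Box<dyn std::error::Error>> {
    let output = client
        .create_virtual_service(CreateVirtualServiceInput {
            mesh_name: "my-mesh".to_string(),
            virtual_service_name: "svc.local".to_string(),
            spec: Default::default(), // a real spec names a node or router provider
            ..Default::default()
        })
        .await?;
    println!("created: {:?}", output.virtual_service);
    Ok(())
}
```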
Struct rusoto_appmesh::CreateVirtualServiceOutput
===
```
pub struct CreateVirtualServiceOutput {
pub virtual_service: VirtualServiceData,
}
```
Fields
---
`virtual_service: VirtualServiceData`
The full description of your virtual service following the create call.
Trait Implementations
---
### impl Clone for CreateVirtualServiceOutput
#### fn clone(&self) -> CreateVirtualServiceOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for CreateVirtualServiceOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for CreateVirtualServiceOutput
#### fn default() -> CreateVirtualServiceOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for CreateVirtualServiceOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<CreateVirtualServiceOutput> for CreateVirtualServiceOutput
#### fn eq(&self, other: &CreateVirtualServiceOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &CreateVirtualServiceOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateVirtualServiceOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualServiceOutput
### impl Send for CreateVirtualServiceOutput
### impl Sync for CreateVirtualServiceOutput
### impl Unpin for CreateVirtualServiceOutput
### impl UnwindSafe for CreateVirtualServiceOutput
Struct rusoto_appmesh::DeleteGatewayRouteInput
===
```
pub struct DeleteGatewayRouteInput {
pub gateway_route_name: String,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_gateway_name: String,
}
```
Fields
---
`gateway_route_name: String`
The name of the gateway route to delete.
`mesh_name: String`
The name of the service mesh to delete the gateway route from.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_gateway_name: String`
The name of the virtual gateway to delete the route from.
Trait Implementations
---
### impl Clone for DeleteGatewayRouteInput
#### fn clone(&self) -> DeleteGatewayRouteInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteGatewayRouteInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteGatewayRouteInput
#### fn default() -> DeleteGatewayRouteInput
Returns the “default value” for a type. Read more
### impl PartialEq<DeleteGatewayRouteInput> for DeleteGatewayRouteInput
#### fn eq(&self, other: &DeleteGatewayRouteInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteGatewayRouteInput) -> bool
This method tests for `!=`.
### impl Serialize for DeleteGatewayRouteInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for DeleteGatewayRouteInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteGatewayRouteInput
### impl Send for DeleteGatewayRouteInput
### impl Sync for DeleteGatewayRouteInput
### impl Unpin for DeleteGatewayRouteInput
### impl UnwindSafe for DeleteGatewayRouteInput
Struct rusoto_appmesh::DeleteGatewayRouteOutput
===
```
pub struct DeleteGatewayRouteOutput {
pub gateway_route: GatewayRouteData,
}
```
Fields
---
`gateway_route: GatewayRouteData`
The gateway route that was deleted.
Trait Implementations
---
### impl Clone for DeleteGatewayRouteOutput
#### fn clone(&self) -> DeleteGatewayRouteOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteGatewayRouteOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteGatewayRouteOutput
#### fn default() -> DeleteGatewayRouteOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for DeleteGatewayRouteOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<DeleteGatewayRouteOutput> for DeleteGatewayRouteOutput
#### fn eq(&self, other: &DeleteGatewayRouteOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteGatewayRouteOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteGatewayRouteOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteGatewayRouteOutput
### impl Send for DeleteGatewayRouteOutput
### impl Sync for DeleteGatewayRouteOutput
### impl Unpin for DeleteGatewayRouteOutput
### impl UnwindSafe for DeleteGatewayRouteOutput
Struct rusoto_appmesh::DeleteMeshInput
===
```
pub struct DeleteMeshInput {
pub mesh_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh to delete.
Trait Implementations
---
### impl Clone for DeleteMeshInput
#### fn clone(&self) -> DeleteMeshInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteMeshInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteMeshInput
#### fn default() -> DeleteMeshInput
Returns the “default value” for a type. Read more
### impl PartialEq<DeleteMeshInput> for DeleteMeshInput
#### fn eq(&self, other: &DeleteMeshInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteMeshInput) -> bool
This method tests for `!=`.
### impl Serialize for DeleteMeshInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for DeleteMeshInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteMeshInput
### impl Send for DeleteMeshInput
### impl Sync for DeleteMeshInput
### impl Unpin for DeleteMeshInput
### impl UnwindSafe for DeleteMeshInput
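A single required field makes this the simplest call in the family; a minimal sketch under the same assumptions as the earlier examples:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DeleteMeshInput};

// Sketch: a mesh can only be deleted once the resources inside it are gone.
async fn delete_mesh(client: &AppMeshClient) -> Result<(), Box<dyn std::error::Error>> {
    let output = client
        .delete_mesh(DeleteMeshInput {
            mesh_name: "my-mesh".to_string(), // placeholder name
        })
        .await?;
    println!("deleted: {:?}", output.mesh);
    Ok(())
}
```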
Struct rusoto_appmesh::DeleteMeshOutput
===
```
pub struct DeleteMeshOutput {
pub mesh: MeshData,
}
```
Fields
---
`mesh: MeshData`
The service mesh that was deleted.
Trait Implementations
---
### impl Clone for DeleteMeshOutput
#### fn clone(&self) -> DeleteMeshOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteMeshOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteMeshOutput
#### fn default() -> DeleteMeshOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for DeleteMeshOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<DeleteMeshOutput> for DeleteMeshOutput
#### fn eq(&self, other: &DeleteMeshOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteMeshOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteMeshOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteMeshOutput
### impl Send for DeleteMeshOutput
### impl Sync for DeleteMeshOutput
### impl Unpin for DeleteMeshOutput
### impl UnwindSafe for DeleteMeshOutput
Struct rusoto_appmesh::DeleteRouteInput
===
```
pub struct DeleteRouteInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub route_name: String,
pub virtual_router_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh to delete the route in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`route_name: String`
The name of the route to delete.
`virtual_router_name: String`
The name of the virtual router to delete the route in.
Trait Implementations
---
### impl Clone for DeleteRouteInput
#### fn clone(&self) -> DeleteRouteInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteRouteInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteRouteInput
#### fn default() -> DeleteRouteInput
Returns the “default value” for a type. Read more
### impl PartialEq<DeleteRouteInput> for DeleteRouteInput
#### fn eq(&self, other: &DeleteRouteInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteRouteInput) -> bool
This method tests for `!=`.
### impl Serialize for DeleteRouteInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for DeleteRouteInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteRouteInput
### impl Send for DeleteRouteInput
### impl Sync for DeleteRouteInput
### impl Unpin for DeleteRouteInput
### impl UnwindSafe for DeleteRouteInput
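Routes are addressed by mesh, virtual router, and route name together; a sketch that also shows passing `mesh_owner` when operating on a mesh shared from another account (the account ID is a placeholder):

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DeleteRouteInput};

// Sketch: all three names scope the route; mesh_owner is only needed for shared meshes.
async fn delete_route(client: &AppMeshClient) -> Result<(), Box<dyn std::error::Error>> {
    let output = client
        .delete_route(DeleteRouteInput {
            mesh_name: "my-mesh".to_string(),
            virtual_router_name: "my-router".to_string(),
            route_name: "my-route".to_string(),
            mesh_owner: Some("123456789012".to_string()), // placeholder account ID
        })
        .await?;
    println!("deleted: {:?}", output.route);
    Ok(())
}
```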
Struct rusoto_appmesh::DeleteRouteOutput
===
```
pub struct DeleteRouteOutput {
pub route: RouteData,
}
```
Fields
---
`route: RouteData`
The route that was deleted.
Trait Implementations
---
### impl Clone for DeleteRouteOutput
#### fn clone(&self) -> DeleteRouteOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteRouteOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteRouteOutput
#### fn default() -> DeleteRouteOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for DeleteRouteOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<DeleteRouteOutput> for DeleteRouteOutput
#### fn eq(&self, other: &DeleteRouteOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteRouteOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteRouteOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteRouteOutput
### impl Send for DeleteRouteOutput
### impl Sync for DeleteRouteOutput
### impl Unpin for DeleteRouteOutput
### impl UnwindSafe for DeleteRouteOutput
Struct rusoto_appmesh::DeleteVirtualGatewayInput
===
```
pub struct DeleteVirtualGatewayInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_gateway_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh to delete the virtual gateway from.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_gateway_name: String`
The name of the virtual gateway to delete.
Trait Implementations
---
### impl Clone for DeleteVirtualGatewayInput
#### fn clone(&self) -> DeleteVirtualGatewayInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteVirtualGatewayInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteVirtualGatewayInput
#### fn default() -> DeleteVirtualGatewayInput
Returns the “default value” for a type. Read more
### impl PartialEq<DeleteVirtualGatewayInput> for DeleteVirtualGatewayInput
#### fn eq(&self, other: &DeleteVirtualGatewayInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteVirtualGatewayInput) -> bool
This method tests for `!=`.
### impl Serialize for DeleteVirtualGatewayInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for DeleteVirtualGatewayInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualGatewayInput
### impl Send for DeleteVirtualGatewayInput
### impl Sync for DeleteVirtualGatewayInput
### impl Unpin for DeleteVirtualGatewayInput
### impl UnwindSafe for DeleteVirtualGatewayInput
Struct rusoto_appmesh::DeleteVirtualGatewayOutput
===
```
pub struct DeleteVirtualGatewayOutput {
pub virtual_gateway: VirtualGatewayData,
}
```
Fields
---
`virtual_gateway: VirtualGatewayData`
The virtual gateway that was deleted.
Trait Implementations
---
### impl Clone for DeleteVirtualGatewayOutput
#### fn clone(&self) -> DeleteVirtualGatewayOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteVirtualGatewayOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteVirtualGatewayOutput
#### fn default() -> DeleteVirtualGatewayOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for DeleteVirtualGatewayOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<DeleteVirtualGatewayOutput> for DeleteVirtualGatewayOutput
#### fn eq(&self, other: &DeleteVirtualGatewayOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteVirtualGatewayOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteVirtualGatewayOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualGatewayOutput
### impl Send for DeleteVirtualGatewayOutput
### impl Sync for DeleteVirtualGatewayOutput
### impl Unpin for DeleteVirtualGatewayOutput
### impl UnwindSafe for DeleteVirtualGatewayOutput
Struct rusoto_appmesh::DeleteVirtualNodeInput
===
```
pub struct DeleteVirtualNodeInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_node_name: String,
}
```
Deletes a virtual node input.
Fields
---
`mesh_name: String`
The name of the service mesh to delete the virtual node in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_node_name: String`
The name of the virtual node to delete.
Trait Implementations
---
### impl Clone for DeleteVirtualNodeInput
#### fn clone(&self) -> DeleteVirtualNodeInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for DeleteVirtualNodeInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for DeleteVirtualNodeInput
#### fn default() -> DeleteVirtualNodeInput
Returns the “default value” for a type. Read more
### impl PartialEq<DeleteVirtualNodeInput> for DeleteVirtualNodeInput
#### fn eq(&self, other: &DeleteVirtualNodeInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteVirtualNodeInput) -> bool
This method tests for `!=`.
### impl Serialize for DeleteVirtualNodeInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for DeleteVirtualNodeInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualNodeInput
### impl Send for DeleteVirtualNodeInput
### impl Sync for DeleteVirtualNodeInput
### impl Unpin for DeleteVirtualNodeInput
### impl UnwindSafe for DeleteVirtualNodeInput
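Deletes fail while the resource is still referenced elsewhere; below is a sketch of matching on the service error, assuming the generated `DeleteVirtualNodeError` enum carries a `ResourceInUse` variant, as the App Mesh API documents for delete operations:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DeleteVirtualNodeError, DeleteVirtualNodeInput};
use rusoto_core::RusotoError;

// Sketch: a node referenced by a route or virtual service cannot be deleted yet.
async fn delete_node(client: &AppMeshClient) {
    let input = DeleteVirtualNodeInput {
        mesh_name: "my-mesh".to_string(),   // placeholder names
        virtual_node_name: "my-node".to_string(),
        mesh_owner: None,
    };
    match client.delete_virtual_node(input).await {
        Ok(output) => println!("deleted: {:?}", output.virtual_node),
        Err(RusotoError::Service(DeleteVirtualNodeError::ResourceInUse(msg))) => {
            eprintln!("virtual node still in use: {}", msg);
        }
        Err(err) => eprintln!("delete failed: {}", err),
    }
}
```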
Struct rusoto_appmesh::DeleteVirtualNodeOutput
===
```
pub struct DeleteVirtualNodeOutput {
pub virtual_node: VirtualNodeData,
}
```
Fields
---
`virtual_node: VirtualNodeData`
The virtual node that was deleted.
Trait Implementations
---
### impl Clone for DeleteVirtualNodeOutput
#### fn clone(&self) -> DeleteVirtualNodeOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DeleteVirtualNodeOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DeleteVirtualNodeOutput
#### fn default() -> DeleteVirtualNodeOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DeleteVirtualNodeOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DeleteVirtualNodeOutput> for DeleteVirtualNodeOutput
#### fn eq(&self, other: &DeleteVirtualNodeOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteVirtualNodeOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteVirtualNodeOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualNodeOutput
### impl Send for DeleteVirtualNodeOutput
### impl Sync for DeleteVirtualNodeOutput
### impl Unpin for DeleteVirtualNodeOutput
### impl UnwindSafe for DeleteVirtualNodeOutput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::DeleteVirtualRouterInput
===
```
pub struct DeleteVirtualRouterInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_router_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh to delete the virtual router in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_router_name: String`
The name of the virtual router to delete.
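A sketch of a reusable helper, assuming the async `delete_virtual_router` method on this crate's `AppMesh` trait and a caller-supplied `AppMeshClient`.

```
use rusoto_appmesh::{
    AppMesh, AppMeshClient, DeleteVirtualRouterError, DeleteVirtualRouterInput,
    DeleteVirtualRouterOutput,
};
use rusoto_core::RusotoError;

// Delete a virtual router by name in a mesh owned by this account.
async fn delete_router(
    client: &AppMeshClient,
    mesh: &str,
    router: &str,
) -> Result<DeleteVirtualRouterOutput, RusotoError<DeleteVirtualRouterError>> {
    client
        .delete_virtual_router(DeleteVirtualRouterInput {
            mesh_name: mesh.to_owned(),
            mesh_owner: None,
            virtual_router_name: router.to_owned(),
        })
        .await
}
```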
Trait Implementations
---
### impl Clone for DeleteVirtualRouterInput
#### fn clone(&self) -> DeleteVirtualRouterInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DeleteVirtualRouterInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DeleteVirtualRouterInput
#### fn default() -> DeleteVirtualRouterInput
Returns the “default value” for a type.
### impl PartialEq<DeleteVirtualRouterInput> for DeleteVirtualRouterInput
#### fn eq(&self, other: &DeleteVirtualRouterInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteVirtualRouterInput) -> bool
This method tests for `!=`.
### impl Serialize for DeleteVirtualRouterInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DeleteVirtualRouterInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualRouterInput
### impl Send for DeleteVirtualRouterInput
### impl Sync for DeleteVirtualRouterInput
### impl Unpin for DeleteVirtualRouterInput
### impl UnwindSafe for DeleteVirtualRouterInput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above.
Struct rusoto_appmesh::DeleteVirtualRouterOutput
===
```
pub struct DeleteVirtualRouterOutput {
pub virtual_router: VirtualRouterData,
}
```
Fields
---
`virtual_router: VirtualRouterData`
The virtual router that was deleted.
Trait Implementations
---
### impl Clone for DeleteVirtualRouterOutput
#### fn clone(&self) -> DeleteVirtualRouterOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DeleteVirtualRouterOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DeleteVirtualRouterOutput
#### fn default() -> DeleteVirtualRouterOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DeleteVirtualRouterOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DeleteVirtualRouterOutput> for DeleteVirtualRouterOutput
#### fn eq(&self, other: &DeleteVirtualRouterOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteVirtualRouterOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteVirtualRouterOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualRouterOutput
### impl Send for DeleteVirtualRouterOutput
### impl Sync for DeleteVirtualRouterOutput
### impl Unpin for DeleteVirtualRouterOutput
### impl UnwindSafe for DeleteVirtualRouterOutput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::DeleteVirtualServiceInput
===
```
pub struct DeleteVirtualServiceInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_service_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh to delete the virtual service in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_service_name: String`
The name of the virtual service to delete.
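Since `mesh_owner` selects a shared mesh, here is a sketch of deleting a virtual service in a mesh shared from another account; the account ID, names, and region are placeholders, and the async `AppMesh` trait plus a Tokio runtime are assumed.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DeleteVirtualServiceInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = AppMeshClient::new(Region::UsWest2);
    let input = DeleteVirtualServiceInput {
        mesh_name: "shared-mesh".to_owned(),                   // placeholder
        mesh_owner: Some("123456789012".to_owned()),           // placeholder owner account ID
        virtual_service_name: "svc.example.local".to_owned(),  // placeholder
    };
    let output = client.delete_virtual_service(input).await?;
    println!("deleted virtual service: {:?}", output.virtual_service);
    Ok(())
}
```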
Trait Implementations
---
### impl Clone for DeleteVirtualServiceInput
#### fn clone(&self) -> DeleteVirtualServiceInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DeleteVirtualServiceInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DeleteVirtualServiceInput
#### fn default() -> DeleteVirtualServiceInput
Returns the “default value” for a type.
### impl PartialEq<DeleteVirtualServiceInput> for DeleteVirtualServiceInput
#### fn eq(&self, other: &DeleteVirtualServiceInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteVirtualServiceInput) -> bool
This method tests for `!=`.
### impl Serialize for DeleteVirtualServiceInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DeleteVirtualServiceInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualServiceInput
### impl Send for DeleteVirtualServiceInput
### impl Sync for DeleteVirtualServiceInput
### impl Unpin for DeleteVirtualServiceInput
### impl UnwindSafe for DeleteVirtualServiceInput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above.
Struct rusoto_appmesh::DeleteVirtualServiceOutput
===
```
pub struct DeleteVirtualServiceOutput {
pub virtual_service: VirtualServiceData,
}
```
Fields
---
`virtual_service: VirtualServiceData`
The virtual service that was deleted.
Trait Implementations
---
### impl Clone for DeleteVirtualServiceOutput
#### fn clone(&self) -> DeleteVirtualServiceOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DeleteVirtualServiceOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DeleteVirtualServiceOutput
#### fn default() -> DeleteVirtualServiceOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DeleteVirtualServiceOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DeleteVirtualServiceOutput> for DeleteVirtualServiceOutput
#### fn eq(&self, other: &DeleteVirtualServiceOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteVirtualServiceOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteVirtualServiceOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualServiceOutput
### impl Send for DeleteVirtualServiceOutput
### impl Sync for DeleteVirtualServiceOutput
### impl Unpin for DeleteVirtualServiceOutput
### impl UnwindSafe for DeleteVirtualServiceOutput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::DescribeGatewayRouteInput
===
```
pub struct DescribeGatewayRouteInput {
pub gateway_route_name: String,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_gateway_name: String,
}
```
Fields
---
`gateway_route_name: String`
The name of the gateway route to describe.
`mesh_name: String`
The name of the service mesh that the gateway route resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_gateway_name: String`
The name of the virtual gateway that the gateway route is associated with.
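A minimal sketch of describing a gateway route, assuming the async `describe_gateway_route` method on the `AppMesh` trait, a Tokio runtime, and placeholder names.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DescribeGatewayRouteInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = DescribeGatewayRouteInput {
        gateway_route_name: "my-gateway-route".to_owned(), // placeholder
        mesh_name: "my-mesh".to_owned(),                   // placeholder
        mesh_owner: None,
        virtual_gateway_name: "my-gateway".to_owned(),     // placeholder
    };
    let output = client.describe_gateway_route(input).await?;
    // `gateway_route` holds the full GatewayRouteData description.
    println!("{:#?}", output.gateway_route);
    Ok(())
}
```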
Trait Implementations
---
### impl Clone for DescribeGatewayRouteInput
#### fn clone(&self) -> DescribeGatewayRouteInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeGatewayRouteInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeGatewayRouteInput
#### fn default() -> DescribeGatewayRouteInput
Returns the “default value” for a type.
### impl PartialEq<DescribeGatewayRouteInput> for DescribeGatewayRouteInput
#### fn eq(&self, other: &DescribeGatewayRouteInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeGatewayRouteInput) -> bool
This method tests for `!=`.
### impl Serialize for DescribeGatewayRouteInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DescribeGatewayRouteInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeGatewayRouteInput
### impl Send for DescribeGatewayRouteInput
### impl Sync for DescribeGatewayRouteInput
### impl Unpin for DescribeGatewayRouteInput
### impl UnwindSafe for DescribeGatewayRouteInput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above.
Struct rusoto_appmesh::DescribeGatewayRouteOutput
===
```
pub struct DescribeGatewayRouteOutput {
pub gateway_route: GatewayRouteData,
}
```
Fields
---
`gateway_route: GatewayRouteData`
The full description of your gateway route.
Trait Implementations
---
### impl Clone for DescribeGatewayRouteOutput
#### fn clone(&self) -> DescribeGatewayRouteOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeGatewayRouteOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeGatewayRouteOutput
#### fn default() -> DescribeGatewayRouteOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DescribeGatewayRouteOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DescribeGatewayRouteOutput> for DescribeGatewayRouteOutput
#### fn eq(&self, other: &DescribeGatewayRouteOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeGatewayRouteOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeGatewayRouteOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeGatewayRouteOutput
### impl Send for DescribeGatewayRouteOutput
### impl Sync for DescribeGatewayRouteOutput
### impl Unpin for DescribeGatewayRouteOutput
### impl UnwindSafe for DescribeGatewayRouteOutput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::DescribeMeshInput
===
```
pub struct DescribeMeshInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh to describe.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
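Because the struct implements `Default`, only the mesh name needs to be spelled out; a sketch assuming the async `describe_mesh` method on the `AppMesh` trait and a Tokio runtime.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DescribeMeshInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = DescribeMeshInput {
        mesh_name: "my-mesh".to_owned(), // placeholder
        ..Default::default()             // mesh_owner defaults to None
    };
    let output = client.describe_mesh(input).await?;
    println!("mesh description: {:#?}", output.mesh);
    Ok(())
}
```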
Trait Implementations
---
### impl Clone for DescribeMeshInput
#### fn clone(&self) -> DescribeMeshInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeMeshInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeMeshInput
#### fn default() -> DescribeMeshInput
Returns the “default value” for a type.
### impl PartialEq<DescribeMeshInput> for DescribeMeshInput
#### fn eq(&self, other: &DescribeMeshInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeMeshInput) -> bool
This method tests for `!=`.
### impl Serialize for DescribeMeshInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DescribeMeshInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeMeshInput
### impl Send for DescribeMeshInput
### impl Sync for DescribeMeshInput
### impl Unpin for DescribeMeshInput
### impl UnwindSafe for DescribeMeshInput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above.
Struct rusoto_appmesh::DescribeMeshOutput
===
```
pub struct DescribeMeshOutput {
pub mesh: MeshData,
}
```
Fields
---
`mesh: MeshData`
The full description of your service mesh.
Trait Implementations
---
### impl Clone for DescribeMeshOutput
#### fn clone(&self) -> DescribeMeshOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeMeshOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeMeshOutput
#### fn default() -> DescribeMeshOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DescribeMeshOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DescribeMeshOutput> for DescribeMeshOutput
#### fn eq(&self, other: &DescribeMeshOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeMeshOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeMeshOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeMeshOutput
### impl Send for DescribeMeshOutput
### impl Sync for DescribeMeshOutput
### impl Unpin for DescribeMeshOutput
### impl UnwindSafe for DescribeMeshOutput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::DescribeRouteInput
===
```
pub struct DescribeRouteInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub route_name: String,
pub virtual_router_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh that the route resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`route_name: String`
The name of the route to describe.
`virtual_router_name: String`
The name of the virtual router that the route is associated with.
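A sketch with explicit error handling, assuming the generated `DescribeRouteError` enum carries a `NotFound` variant mirroring the service's NotFoundException (an assumption about this crate version), plus the async `AppMesh` trait and a Tokio runtime.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DescribeRouteError, DescribeRouteInput};
use rusoto_core::{Region, RusotoError};

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = DescribeRouteInput {
        mesh_name: "my-mesh".to_owned(),              // placeholder
        mesh_owner: None,
        route_name: "my-route".to_owned(),            // placeholder
        virtual_router_name: "my-router".to_owned(),  // placeholder
    };
    match client.describe_route(input).await {
        Ok(output) => println!("route: {:#?}", output.route),
        // Assumed variant mirroring the service's NotFoundException.
        Err(RusotoError::Service(DescribeRouteError::NotFound(msg))) => {
            eprintln!("route not found: {}", msg)
        }
        Err(e) => eprintln!("describe_route failed: {}", e),
    }
}
```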
Trait Implementations
---
### impl Clone for DescribeRouteInput
#### fn clone(&self) -> DescribeRouteInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeRouteInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeRouteInput
#### fn default() -> DescribeRouteInput
Returns the “default value” for a type.
### impl PartialEq<DescribeRouteInput> for DescribeRouteInput
#### fn eq(&self, other: &DescribeRouteInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeRouteInput) -> bool
This method tests for `!=`.
### impl Serialize for DescribeRouteInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DescribeRouteInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeRouteInput
### impl Send for DescribeRouteInput
### impl Sync for DescribeRouteInput
### impl Unpin for DescribeRouteInput
### impl UnwindSafe for DescribeRouteInput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above.
Struct rusoto_appmesh::DescribeRouteOutput
===
```
pub struct DescribeRouteOutput {
pub route: RouteData,
}
```
Fields
---
`route: RouteData`
The full description of your route.
Trait Implementations
---
### impl Clone for DescribeRouteOutput
#### fn clone(&self) -> DescribeRouteOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeRouteOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeRouteOutput
#### fn default() -> DescribeRouteOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DescribeRouteOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DescribeRouteOutput> for DescribeRouteOutput
#### fn eq(&self, other: &DescribeRouteOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeRouteOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeRouteOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeRouteOutput
### impl Send for DescribeRouteOutput
### impl Sync for DescribeRouteOutput
### impl Unpin for DescribeRouteOutput
### impl UnwindSafe for DescribeRouteOutput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::DescribeVirtualGatewayInput
===
```
pub struct DescribeVirtualGatewayInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_gateway_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh that the virtual gateway resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_gateway_name: String`
The name of the virtual gateway to describe.
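A minimal sketch, assuming the async `describe_virtual_gateway` method on the `AppMesh` trait, a Tokio runtime, and placeholder names.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DescribeVirtualGatewayInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = AppMeshClient::new(Region::EuWest1);
    let input = DescribeVirtualGatewayInput {
        mesh_name: "my-mesh".to_owned(),               // placeholder
        mesh_owner: None,
        virtual_gateway_name: "my-gateway".to_owned(), // placeholder
    };
    let output = client.describe_virtual_gateway(input).await?;
    println!("virtual gateway: {:#?}", output.virtual_gateway);
    Ok(())
}
```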
Trait Implementations
---
### impl Clone for DescribeVirtualGatewayInput
#### fn clone(&self) -> DescribeVirtualGatewayInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeVirtualGatewayInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeVirtualGatewayInput
#### fn default() -> DescribeVirtualGatewayInput
Returns the “default value” for a type.
### impl PartialEq<DescribeVirtualGatewayInput> for DescribeVirtualGatewayInput
#### fn eq(&self, other: &DescribeVirtualGatewayInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualGatewayInput) -> bool
This method tests for `!=`.
### impl Serialize for DescribeVirtualGatewayInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DescribeVirtualGatewayInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualGatewayInput
### impl Send for DescribeVirtualGatewayInput
### impl Sync for DescribeVirtualGatewayInput
### impl Unpin for DescribeVirtualGatewayInput
### impl UnwindSafe for DescribeVirtualGatewayInput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above.
Struct rusoto_appmesh::DescribeVirtualGatewayOutput
===
```
pub struct DescribeVirtualGatewayOutput {
pub virtual_gateway: VirtualGatewayData,
}
```
Fields
---
`virtual_gateway: VirtualGatewayData`
The full description of your virtual gateway.
Trait Implementations
---
### impl Clone for DescribeVirtualGatewayOutput
#### fn clone(&self) -> DescribeVirtualGatewayOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeVirtualGatewayOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeVirtualGatewayOutput
#### fn default() -> DescribeVirtualGatewayOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DescribeVirtualGatewayOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DescribeVirtualGatewayOutput> for DescribeVirtualGatewayOutput
#### fn eq(&self, other: &DescribeVirtualGatewayOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualGatewayOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeVirtualGatewayOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualGatewayOutput
### impl Send for DescribeVirtualGatewayOutput
### impl Sync for DescribeVirtualGatewayOutput
### impl Unpin for DescribeVirtualGatewayOutput
### impl UnwindSafe for DescribeVirtualGatewayOutput
Blanket Implementations
---
Identical to the blanket implementations listed under `DeleteVirtualNodeInput` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::DescribeVirtualNodeInput
===
```
pub struct DescribeVirtualNodeInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_node_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh that the virtual node resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_node_name: String`
The name of the virtual node to describe.
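A minimal sketch of driving this input through the service, assuming rusoto's usual conventions: the `AppMesh` trait implemented by `AppMeshClient`, a `rusoto_core::Region`, a recent async rusoto release running on a tokio runtime, and a `status` field on the returned `VirtualNodeData`. The region and resource names are hypothetical placeholders.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DescribeVirtualNodeInput};
use rusoto_core::Region;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    // Hypothetical resource names; mesh_owner stays None for a mesh we own.
    let input = DescribeVirtualNodeInput {
        mesh_name: "example-mesh".to_string(),
        virtual_node_name: "example-node".to_string(),
        ..Default::default()
    };
    match client.describe_virtual_node(input).await {
        Ok(output) => println!("{:?}", output.virtual_node.status),
        Err(err) => eprintln!("describe_virtual_node failed: {}", err),
    }
}
```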
Trait Implementations
---
### impl Clone for DescribeVirtualNodeInput
#### fn clone(&self) -> DescribeVirtualNodeInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeVirtualNodeInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeVirtualNodeInput
#### fn default() -> DescribeVirtualNodeInput
Returns the “default value” for a type.
### impl PartialEq<DescribeVirtualNodeInput> for DescribeVirtualNodeInput
#### fn eq(&self, other: &DescribeVirtualNodeInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualNodeInput) -> bool
This method tests for `!=`.
### impl Serialize for DescribeVirtualNodeInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DescribeVirtualNodeInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualNodeInput
### impl Send for DescribeVirtualNodeInput
### impl Sync for DescribeVirtualNodeInput
### impl Unpin for DescribeVirtualNodeInput
### impl UnwindSafe for DescribeVirtualNodeInput
Struct rusoto_appmesh::DescribeVirtualNodeOutput
===
```
pub struct DescribeVirtualNodeOutput {
pub virtual_node: VirtualNodeData,
}
```
Fields
---
`virtual_node: VirtualNodeData`
The full description of your virtual node.
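Everything useful hangs off the single `virtual_node` field. A short sketch, assuming `VirtualNodeData` carries the `virtual_node_name` and `status` fields its description implies:

```
use rusoto_appmesh::DescribeVirtualNodeOutput;

// Assumed VirtualNodeData fields: virtual_node_name (String), status (Debug).
fn report(output: DescribeVirtualNodeOutput) {
    let node = output.virtual_node;
    println!("{}: {:?}", node.virtual_node_name, node.status);
}
```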
Trait Implementations
---
### impl Clone for DescribeVirtualNodeOutput
#### fn clone(&self) -> DescribeVirtualNodeOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeVirtualNodeOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeVirtualNodeOutput
#### fn default() -> DescribeVirtualNodeOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DescribeVirtualNodeOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DescribeVirtualNodeOutput> for DescribeVirtualNodeOutput
#### fn eq(&self, other: &DescribeVirtualNodeOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualNodeOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeVirtualNodeOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualNodeOutput
### impl Send for DescribeVirtualNodeOutput
### impl Sync for DescribeVirtualNodeOutput
### impl Unpin for DescribeVirtualNodeOutput
### impl UnwindSafe for DescribeVirtualNodeOutput
Struct rusoto_appmesh::DescribeVirtualRouterInput
===
```
pub struct DescribeVirtualRouterInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_router_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh that the virtual router resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_router_name: String`
The name of the virtual router to describe.
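The `mesh_owner` field only matters for shared meshes. A construction sketch with hypothetical names and a placeholder owner account ID:

```
use rusoto_appmesh::DescribeVirtualRouterInput;

fn main() {
    let input = DescribeVirtualRouterInput {
        mesh_name: "shared-mesh".to_string(),
        // The sharing account's ID, not ours (placeholder value).
        mesh_owner: Some("123456789012".to_string()),
        virtual_router_name: "example-router".to_string(),
    };
    println!("{:?}", input);
}
```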
Trait Implementations
---
### impl Clone for DescribeVirtualRouterInput
#### fn clone(&self) -> DescribeVirtualRouterInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeVirtualRouterInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeVirtualRouterInput
#### fn default() -> DescribeVirtualRouterInput
Returns the “default value” for a type.
### impl PartialEq<DescribeVirtualRouterInput> for DescribeVirtualRouterInput
#### fn eq(&self, other: &DescribeVirtualRouterInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualRouterInput) -> bool
This method tests for `!=`.
### impl Serialize for DescribeVirtualRouterInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DescribeVirtualRouterInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualRouterInput
### impl Send for DescribeVirtualRouterInput
### impl Sync for DescribeVirtualRouterInput
### impl Unpin for DescribeVirtualRouterInput
### impl UnwindSafe for DescribeVirtualRouterInput
Struct rusoto_appmesh::DescribeVirtualRouterOutput
===
```
pub struct DescribeVirtualRouterOutput {
pub virtual_router: VirtualRouterData,
}
```
Fields
---
`virtual_router: VirtualRouterData`
The full description of your virtual router.
Trait Implementations
---
### impl Clone for DescribeVirtualRouterOutput
#### fn clone(&self) -> DescribeVirtualRouterOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeVirtualRouterOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeVirtualRouterOutput
#### fn default() -> DescribeVirtualRouterOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DescribeVirtualRouterOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DescribeVirtualRouterOutput> for DescribeVirtualRouterOutput
#### fn eq(&self, other: &DescribeVirtualRouterOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualRouterOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeVirtualRouterOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualRouterOutput
### impl Send for DescribeVirtualRouterOutput
### impl Sync for DescribeVirtualRouterOutput
### impl Unpin for DescribeVirtualRouterOutput
### impl UnwindSafe for DescribeVirtualRouterOutput
Struct rusoto_appmesh::DescribeVirtualServiceInput
===
```
pub struct DescribeVirtualServiceInput {
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub virtual_service_name: String,
}
```
Fields
---
`mesh_name: String`
The name of the service mesh that the virtual service resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`virtual_service_name: String`
The name of the virtual service to describe.
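One way to use the describe call as an existence probe; a sketch assuming the `AppMesh` trait's `describe_virtual_service` method (rusoto's usual naming) and the `Service` variant of `rusoto_core::RusotoError`. It deliberately treats any service-level error, not just "not found", as absence.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DescribeVirtualServiceInput};
use rusoto_core::RusotoError;

async fn service_exists(client: &AppMeshClient, mesh: &str, name: &str) -> bool {
    let input = DescribeVirtualServiceInput {
        mesh_name: mesh.to_string(),
        virtual_service_name: name.to_string(),
        ..Default::default()
    };
    match client.describe_virtual_service(input).await {
        Ok(_) => true,
        // Coarse: NotFound and other service errors all map to false here.
        Err(RusotoError::Service(_)) => false,
        Err(_) => false,
    }
}
```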
Trait Implementations
---
### impl Clone for DescribeVirtualServiceInput
#### fn clone(&self) -> DescribeVirtualServiceInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeVirtualServiceInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeVirtualServiceInput
#### fn default() -> DescribeVirtualServiceInput
Returns the “default value” for a type.
### impl PartialEq<DescribeVirtualServiceInput> for DescribeVirtualServiceInput
#### fn eq(&self, other: &DescribeVirtualServiceInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualServiceInput) -> bool
This method tests for `!=`.
### impl Serialize for DescribeVirtualServiceInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DescribeVirtualServiceInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualServiceInput
### impl Send for DescribeVirtualServiceInput
### impl Sync for DescribeVirtualServiceInput
### impl Unpin for DescribeVirtualServiceInput
### impl UnwindSafe for DescribeVirtualServiceInput
Struct rusoto_appmesh::DescribeVirtualServiceOutput
===
```
pub struct DescribeVirtualServiceOutput {
pub virtual_service: VirtualServiceData,
}
```
Fields
---
`virtual_service: VirtualServiceData`
The full description of your virtual service.
Trait Implementations
---
### impl Clone for DescribeVirtualServiceOutput
#### fn clone(&self) -> DescribeVirtualServiceOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeVirtualServiceOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeVirtualServiceOutput
#### fn default() -> DescribeVirtualServiceOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DescribeVirtualServiceOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DescribeVirtualServiceOutput> for DescribeVirtualServiceOutput
#### fn eq(&self, other: &DescribeVirtualServiceOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualServiceOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeVirtualServiceOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualServiceOutput
### impl Send for DescribeVirtualServiceOutput
### impl Sync for DescribeVirtualServiceOutput
### impl Unpin for DescribeVirtualServiceOutput
### impl UnwindSafe for DescribeVirtualServiceOutput
Struct rusoto_appmesh::DnsServiceDiscovery
===
```
pub struct DnsServiceDiscovery {
pub hostname: String,
pub response_type: Option<String>,
}
```
An object that represents the DNS service discovery information for your virtual node.
Fields
---
`hostname: String`
Specifies the DNS service discovery hostname for the virtual node.
`response_type: Option<String>`
Specifies the DNS response type for the virtual node.
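A construction sketch with a hypothetical hostname; the `"ENDPOINTS"` string is an assumed value of the service's DNS response type (with `"LOADBALANCER"` as the assumed alternative).

```
use rusoto_appmesh::DnsServiceDiscovery;

fn main() {
    let discovery = DnsServiceDiscovery {
        hostname: "example-node.example-mesh.local".to_string(),
        // Assumed response-type value; controls how the DNS answer is treated.
        response_type: Some("ENDPOINTS".to_string()),
    };
    println!("{:?}", discovery);
}
```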
Trait Implementations
---
### impl Clone for DnsServiceDiscovery
#### fn clone(&self) -> DnsServiceDiscovery
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DnsServiceDiscovery
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DnsServiceDiscovery
#### fn default() -> DnsServiceDiscovery
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for DnsServiceDiscovery
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<DnsServiceDiscovery> for DnsServiceDiscovery
#### fn eq(&self, other: &DnsServiceDiscovery) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DnsServiceDiscovery) -> bool
This method tests for `!=`.
### impl Serialize for DnsServiceDiscovery
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for DnsServiceDiscovery
Auto Trait Implementations
---
### impl RefUnwindSafe for DnsServiceDiscovery
### impl Send for DnsServiceDiscovery
### impl Sync for DnsServiceDiscovery
### impl Unpin for DnsServiceDiscovery
### impl UnwindSafe for DnsServiceDiscovery
Struct rusoto_appmesh::Duration
===
```
pub struct Duration {
pub unit: Option<String>,
pub value: Option<i64>,
}
```
An object that represents a duration of time.
Fields
---
`unit: Option<String>`
A unit of time.
`value: Option<i64>`
A number of time units.
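Both fields are optional at the type level, though a useful duration sets both; `"s"` and `"ms"` are the unit strings the service is assumed to accept.

```
use rusoto_appmesh::Duration;

fn main() {
    // A 15-second duration: `value` is counted in the given unit.
    let timeout = Duration {
        unit: Some("s".to_string()), // assumed unit string; "ms" for milliseconds
        value: Some(15),
    };
    println!("{:?}", timeout);
}
```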
Trait Implementations
---
### impl Clone for Duration
#### fn clone(&self) -> Duration
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Duration
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Duration
#### fn default() -> Duration
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for Duration
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<Duration> for Duration
#### fn eq(&self, other: &Duration) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Duration) -> bool
This method tests for `!=`.
### impl Serialize for Duration
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for Duration
Auto Trait Implementations
---
### impl RefUnwindSafe for Duration
### impl Send for Duration
### impl Sync for Duration
### impl Unpin for Duration
### impl UnwindSafe for Duration
Struct rusoto_appmesh::EgressFilter
===
```
pub struct EgressFilter {
pub type_: String,
}
```
An object that represents the egress filter rules for a service mesh.
Fields
---
`type_: String`
The egress filter type. By default, the type is `DROP_ALL`, which allows egress only from virtual nodes to other defined resources in the service mesh (and any traffic to `*.amazonaws.com` for Amazon Web Services API calls). You can set the egress filter type to `ALLOW_ALL` to allow egress to any endpoint inside or outside of the service mesh.
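`type` is a Rust keyword, so the generated field is spelled `type_`; serde is expected to map it back to `type` on the wire. A sketch opening egress to everything:

```
use rusoto_appmesh::EgressFilter;

fn main() {
    let filter = EgressFilter {
        // "DROP_ALL" (the service default) or "ALLOW_ALL".
        type_: "ALLOW_ALL".to_string(),
    };
    println!("{:?}", filter);
}
```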
Trait Implementations
---
### impl Clone for EgressFilter
#### fn clone(&self) -> EgressFilter
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for EgressFilter
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for EgressFilter
#### fn default() -> EgressFilter
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for EgressFilter
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<EgressFilter> for EgressFilter
#### fn eq(&self, other: &EgressFilter) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EgressFilter) -> bool
This method tests for `!=`.
### impl Serialize for EgressFilter
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for EgressFilter
Auto Trait Implementations
---
### impl RefUnwindSafe for EgressFilter
### impl Send for EgressFilter
### impl Sync for EgressFilter
### impl Unpin for EgressFilter
### impl UnwindSafe for EgressFilter
Struct rusoto_appmesh::FileAccessLog
===
```
pub struct FileAccessLog {
pub path: String,
}
```
An object that represents an access log file.
Fields
---
`path: String`
The file path to write access logs to. You can use `/dev/stdout` to send access logs to standard out and configure your Envoy container to use a log driver, such as `awslogs`, to export the access logs to a log storage service such as Amazon CloudWatch Logs. You can also specify a path in the Envoy container's file system to write the files to disk.
The Envoy process must have write permissions to the path that you specify here. Otherwise, Envoy fails to bootstrap properly.
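A sketch of the stdout pattern the field description recommends:

```
use rusoto_appmesh::FileAccessLog;

fn main() {
    // Envoy writes access logs to stdout; a log driver can then export them.
    let access_log = FileAccessLog {
        path: "/dev/stdout".to_string(),
    };
    println!("{:?}", access_log);
}
```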
Trait Implementations
---
### impl Clone for FileAccessLog
#### fn clone(&self) -> FileAccessLog
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for FileAccessLog
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for FileAccessLog
#### fn default() -> FileAccessLog
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for FileAccessLog
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<FileAccessLog> for FileAccessLog
#### fn eq(&self, other: &FileAccessLog) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &FileAccessLog) -> bool
This method tests for `!=`.
### impl Serialize for FileAccessLog
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for FileAccessLog
Auto Trait Implementations
---
### impl RefUnwindSafe for FileAccessLog
### impl Send for FileAccessLog
### impl Sync for FileAccessLog
### impl Unpin for FileAccessLog
### impl UnwindSafe for FileAccessLog
Struct rusoto_appmesh::GatewayRouteData
===
```
pub struct GatewayRouteData {
pub gateway_route_name: String,
pub mesh_name: String,
pub metadata: ResourceMetadata,
pub spec: GatewayRouteSpec,
pub status: GatewayRouteStatus,
pub virtual_gateway_name: String,
}
```
An object that represents a gateway route returned by a describe operation.
Fields
---
`gateway_route_name: String`
The name of the gateway route.
`mesh_name: String`
The name of the service mesh that the resource resides in.
`metadata: ResourceMetadata`
The metadata for the gateway route.
`spec: GatewayRouteSpec`
The specifications of the gateway route.
`status: GatewayRouteStatus`
The status of the gateway route.
`virtual_gateway_name: String`
The virtual gateway that the gateway route is associated with.
Trait Implementations
---
### impl Clone for GatewayRouteData
#### fn clone(&self) -> GatewayRouteData
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GatewayRouteData
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GatewayRouteData
#### fn default() -> GatewayRouteData
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GatewayRouteData
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GatewayRouteData> for GatewayRouteData
#### fn eq(&self, other: &GatewayRouteData) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GatewayRouteData) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GatewayRouteData
Auto Trait Implementations
---
### impl RefUnwindSafe for GatewayRouteData
### impl Send for GatewayRouteData
### impl Sync for GatewayRouteData
### impl Unpin for GatewayRouteData
### impl UnwindSafe for GatewayRouteData
Struct rusoto_appmesh::GatewayRouteHostnameMatch
===
```
pub struct GatewayRouteHostnameMatch {
pub exact: Option<String>,
pub suffix: Option<String>,
}
```
An object representing the gateway route host name to match.
Fields
---
`exact: Option<String>`
The exact host name to match on.
`suffix: Option<String>`
The specified ending characters of the host name to match on.
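`exact` and `suffix` are alternatives: one pins a single host name, the other matches by ending characters. A sketch with placeholder domains:

```
use rusoto_appmesh::GatewayRouteHostnameMatch;

fn main() {
    // Matches api.example.com, www.example.com, and so on.
    let by_suffix = GatewayRouteHostnameMatch {
        exact: None,
        suffix: Some(".example.com".to_string()),
    };
    // Matches exactly one host name.
    let by_exact = GatewayRouteHostnameMatch {
        exact: Some("api.example.com".to_string()),
        suffix: None,
    };
    println!("{:?} {:?}", by_suffix, by_exact);
}
```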
Trait Implementations
---
### impl Clone for GatewayRouteHostnameMatch
#### fn clone(&self) -> GatewayRouteHostnameMatch
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GatewayRouteHostnameMatch
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GatewayRouteHostnameMatch
#### fn default() -> GatewayRouteHostnameMatch
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GatewayRouteHostnameMatch
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GatewayRouteHostnameMatch> for GatewayRouteHostnameMatch
#### fn eq(&self, other: &GatewayRouteHostnameMatch) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GatewayRouteHostnameMatch) -> bool
This method tests for `!=`.
### impl Serialize for GatewayRouteHostnameMatch
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GatewayRouteHostnameMatch
Auto Trait Implementations
---
### impl RefUnwindSafe for GatewayRouteHostnameMatch
### impl Send for GatewayRouteHostnameMatch
### impl Sync for GatewayRouteHostnameMatch
### impl Unpin for GatewayRouteHostnameMatch
### impl UnwindSafe for GatewayRouteHostnameMatch
Struct rusoto_appmesh::GatewayRouteHostnameRewrite
===
```
pub struct GatewayRouteHostnameRewrite {
pub default_target_hostname: Option<String>,
}
```
An object representing the gateway route host name to rewrite.
Fields
---
`default_target_hostname: Option<String>`The default target host name to write to.
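A minimal construction sketch (the `"ENABLED"` marker string is an assumption based on App Mesh's documented values for this setting, not something shown in this crate's docs):

```
use rusoto_appmesh::GatewayRouteHostnameRewrite;

fn main() {
    // Turn on rewriting of the host header to the default target host name.
    // "ENABLED" is assumed from the App Mesh API's documented marker values.
    let rewrite = GatewayRouteHostnameRewrite {
        default_target_hostname: Some("ENABLED".to_string()),
    };
    assert_eq!(rewrite.default_target_hostname.as_deref(), Some("ENABLED"));
}
```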
Trait Implementations
---
### impl Clone for GatewayRouteHostnameRewrite
#### fn clone(&self) -> GatewayRouteHostnameRewrite
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GatewayRouteHostnameRewrite
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GatewayRouteHostnameRewrite
#### fn default() -> GatewayRouteHostnameRewrite
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GatewayRouteHostnameRewrite
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GatewayRouteHostnameRewrite> for GatewayRouteHostnameRewrite
#### fn eq(&self, other: &GatewayRouteHostnameRewrite) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GatewayRouteHostnameRewrite) -> bool
This method tests for `!=`.
### impl Serialize for GatewayRouteHostnameRewrite
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GatewayRouteHostnameRewrite
Auto Trait Implementations
---
### impl RefUnwindSafe for GatewayRouteHostnameRewrite
### impl Send for GatewayRouteHostnameRewrite
### impl Sync for GatewayRouteHostnameRewrite
### impl Unpin for GatewayRouteHostnameRewrite
### impl UnwindSafe for GatewayRouteHostnameRewrite
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GatewayRouteRef
===
```
pub struct GatewayRouteRef {
pub arn: String,
pub created_at: f64,
pub gateway_route_name: String,
pub last_updated_at: f64,
pub mesh_name: String,
pub mesh_owner: String,
pub resource_owner: String,
pub version: i64,
pub virtual_gateway_name: String,
}
```
An object that represents a gateway route returned by a list operation.
Fields
---
`arn: String`The full Amazon Resource Name (ARN) for the gateway route.
`created_at: f64`The Unix epoch timestamp in seconds for when the resource was created.
`gateway_route_name: String`The name of the gateway route.
`last_updated_at: f64`The Unix epoch timestamp in seconds for when the resource was last updated.
`mesh_name: String`The name of the service mesh that the resource resides in.
`mesh_owner: String`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`resource_owner: String`The AWS IAM account ID of the resource owner. If the account ID is not your own, then it's the ID of the mesh owner or of another account that the mesh is shared with. For more information about mesh sharing, see Working with shared meshes.
`version: i64`The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
`virtual_gateway_name: String`The virtual gateway that the gateway route is associated with.
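Because this type is produced by list operations rather than built by callers, a sketch of consuming it may be more useful than a constructor example (`refs` stands in for the `gateway_routes` field of a hypothetical list response):

```
use rusoto_appmesh::GatewayRouteRef;

// Summarize gateway routes returned by a list call.
fn summarize(refs: &[GatewayRouteRef]) {
    for r in refs {
        // `version` starts at 1 and increments on every update, so it is a
        // cheap way to detect resources that changed since a snapshot.
        println!(
            "{}/{} (v{}) last updated {}",
            r.mesh_name, r.gateway_route_name, r.version, r.last_updated_at
        );
    }
}
```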
Trait Implementations
---
### impl Clone for GatewayRouteRef
#### fn clone(&self) -> GatewayRouteRef
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GatewayRouteRef
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GatewayRouteRef
#### fn default() -> GatewayRouteRef
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GatewayRouteRef
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GatewayRouteRef> for GatewayRouteRef
#### fn eq(&self, other: &GatewayRouteRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GatewayRouteRef) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GatewayRouteRef
Auto Trait Implementations
---
### impl RefUnwindSafe for GatewayRouteRef
### impl Send for GatewayRouteRef
### impl Sync for GatewayRouteRef
### impl Unpin for GatewayRouteRef
### impl UnwindSafe for GatewayRouteRef
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GatewayRouteSpec
===
```
pub struct GatewayRouteSpec {
pub grpc_route: Option<GrpcGatewayRoute>,
pub http_2_route: Option<HttpGatewayRoute>,
pub http_route: Option<HttpGatewayRoute>,
pub priority: Option<i64>,
}
```
An object that represents a gateway route specification. Specify one gateway route type.
Fields
---
`grpc_route: Option<GrpcGatewayRoute>`An object that represents the specification of a gRPC gateway route.
`http_2_route: Option<HttpGatewayRoute>`An object that represents the specification of an HTTP/2 gateway route.
`http_route: Option<HttpGatewayRoute>`An object that represents the specification of an HTTP gateway route.
`priority: Option<i64>`The ordering of the gateway routes spec.
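A sketch of building a spec around a single route type (the `http_route_config` argument is a placeholder for an `HttpGatewayRoute` assembled elsewhere):

```
use rusoto_appmesh::{GatewayRouteSpec, HttpGatewayRoute};

// Exactly one of grpc_route / http_2_route / http_route should be set;
// the remaining Option fields stay None via Default.
fn spec_for(http_route_config: HttpGatewayRoute) -> GatewayRouteSpec {
    GatewayRouteSpec {
        http_route: Some(http_route_config),
        priority: Some(10), // ordering of this spec relative to others
        ..Default::default()
    }
}
```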
Trait Implementations
---
### impl Clone for GatewayRouteSpec
#### fn clone(&self) -> GatewayRouteSpec
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GatewayRouteSpec
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GatewayRouteSpec
#### fn default() -> GatewayRouteSpec
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GatewayRouteSpec
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GatewayRouteSpec> for GatewayRouteSpec
#### fn eq(&self, other: &GatewayRouteSpec) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GatewayRouteSpec) -> bool
This method tests for `!=`.
### impl Serialize for GatewayRouteSpec
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GatewayRouteSpec
Auto Trait Implementations
---
### impl RefUnwindSafe for GatewayRouteSpec
### impl Send for GatewayRouteSpec
### impl Sync for GatewayRouteSpec
### impl Unpin for GatewayRouteSpec
### impl UnwindSafe for GatewayRouteSpec
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GatewayRouteStatus
===
```
pub struct GatewayRouteStatus {
pub status: String,
}
```
An object that represents the current status of a gateway route.
Fields
---
`status: String`The current status for the gateway route.
Trait Implementations
---
### impl Clone for GatewayRouteStatus
#### fn clone(&self) -> GatewayRouteStatus
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GatewayRouteStatus
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GatewayRouteStatus
#### fn default() -> GatewayRouteStatus
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GatewayRouteStatus
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GatewayRouteStatus> for GatewayRouteStatus
#### fn eq(&self, other: &GatewayRouteStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GatewayRouteStatus) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GatewayRouteStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for GatewayRouteStatus
### impl Send for GatewayRouteStatus
### impl Sync for GatewayRouteStatus
### impl Unpin for GatewayRouteStatus
### impl UnwindSafe for GatewayRouteStatus
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GatewayRouteTarget
===
```
pub struct GatewayRouteTarget {
pub virtual_service: GatewayRouteVirtualService,
}
```
An object that represents a gateway route target.
Fields
---
`virtual_service: GatewayRouteVirtualService`An object that represents a virtual service gateway route target.
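A minimal sketch of wrapping a virtual service name into a target:

```
use rusoto_appmesh::{GatewayRouteTarget, GatewayRouteVirtualService};

// The target simply names the virtual service that matched traffic
// should be forwarded to.
fn target_for(service_name: &str) -> GatewayRouteTarget {
    GatewayRouteTarget {
        virtual_service: GatewayRouteVirtualService {
            virtual_service_name: service_name.to_string(),
        },
    }
}
```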
Trait Implementations
---
### impl Clone for GatewayRouteTarget
#### fn clone(&self) -> GatewayRouteTarget
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GatewayRouteTarget
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GatewayRouteTarget
#### fn default() -> GatewayRouteTarget
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GatewayRouteTarget
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GatewayRouteTarget> for GatewayRouteTarget
#### fn eq(&self, other: &GatewayRouteTarget) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GatewayRouteTarget) -> bool
This method tests for `!=`.
### impl Serialize for GatewayRouteTarget
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GatewayRouteTarget
Auto Trait Implementations
---
### impl RefUnwindSafe for GatewayRouteTarget
### impl Send for GatewayRouteTarget
### impl Sync for GatewayRouteTarget
### impl Unpin for GatewayRouteTarget
### impl UnwindSafe for GatewayRouteTarget
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GatewayRouteVirtualService
===
```
pub struct GatewayRouteVirtualService {
pub virtual_service_name: String,
}
```
An object that represents the virtual service that traffic is routed to.
Fields
---
`virtual_service_name: String`The name of the virtual service that traffic is routed to.
Trait Implementations
---
### impl Clone for GatewayRouteVirtualService
#### fn clone(&self) -> GatewayRouteVirtualService
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GatewayRouteVirtualService
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GatewayRouteVirtualService
#### fn default() -> GatewayRouteVirtualService
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GatewayRouteVirtualService
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GatewayRouteVirtualService> for GatewayRouteVirtualService
#### fn eq(&self, other: &GatewayRouteVirtualService) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GatewayRouteVirtualService) -> bool
This method tests for `!=`.
### impl Serialize for GatewayRouteVirtualService
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GatewayRouteVirtualService
Auto Trait Implementations
---
### impl RefUnwindSafe for GatewayRouteVirtualService
### impl Send for GatewayRouteVirtualService
### impl Sync for GatewayRouteVirtualService
### impl Unpin for GatewayRouteVirtualService
### impl UnwindSafe for GatewayRouteVirtualService
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GrpcGatewayRoute
===
```
pub struct GrpcGatewayRoute {
pub action: GrpcGatewayRouteAction,
pub route_match: Option<GrpcGatewayRouteMatch>,
}
```
An object that represents a gRPC gateway route.
Fields
---
`action: GrpcGatewayRouteAction`An object that represents the action to take if a match is determined.
`route_match: Option<GrpcGatewayRouteMatch>`An object that represents the criteria for determining a request match.
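Putting the pieces above together, a sketch of a complete gRPC gateway route that forwards one gRPC service to a virtual service (the names are placeholders):

```
use rusoto_appmesh::{
    GatewayRouteTarget, GatewayRouteVirtualService, GrpcGatewayRoute,
    GrpcGatewayRouteAction, GrpcGatewayRouteMatch,
};

fn grpc_route(grpc_service: &str, virtual_service: &str) -> GrpcGatewayRoute {
    GrpcGatewayRoute {
        action: GrpcGatewayRouteAction {
            rewrite: None, // leave the host header untouched
            target: GatewayRouteTarget {
                virtual_service: GatewayRouteVirtualService {
                    virtual_service_name: virtual_service.to_string(),
                },
            },
        },
        route_match: Some(GrpcGatewayRouteMatch {
            service_name: Some(grpc_service.to_string()),
            ..Default::default() // no hostname or metadata criteria
        }),
    }
}
```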
Trait Implementations
---
### impl Clone for GrpcGatewayRoute
#### fn clone(&self) -> GrpcGatewayRoute
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcGatewayRoute
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcGatewayRoute
#### fn default() -> GrpcGatewayRoute
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcGatewayRoute
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcGatewayRoute> for GrpcGatewayRoute
#### fn eq(&self, other: &GrpcGatewayRoute) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcGatewayRoute) -> bool
This method tests for `!=`.
### impl Serialize for GrpcGatewayRoute
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcGatewayRoute
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcGatewayRoute
### impl Send for GrpcGatewayRoute
### impl Sync for GrpcGatewayRoute
### impl Unpin for GrpcGatewayRoute
### impl UnwindSafe for GrpcGatewayRoute
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GrpcGatewayRouteAction
===
```
pub struct GrpcGatewayRouteAction {
pub rewrite: Option<GrpcGatewayRouteRewrite>,
pub target: GatewayRouteTarget,
}
```
An object that represents the action to take if a match is determined.
Fields
---
`rewrite: Option<GrpcGatewayRouteRewrite>`The gateway route action to rewrite.
`target: GatewayRouteTarget`An object that represents the target that traffic is routed to when a request matches the gateway route.
Trait Implementations
---
### impl Clone for GrpcGatewayRouteAction
#### fn clone(&self) -> GrpcGatewayRouteAction
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcGatewayRouteAction
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcGatewayRouteAction
#### fn default() -> GrpcGatewayRouteAction
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcGatewayRouteAction
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcGatewayRouteAction> for GrpcGatewayRouteAction
#### fn eq(&self, other: &GrpcGatewayRouteAction) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcGatewayRouteAction) -> bool
This method tests for `!=`.
### impl Serialize for GrpcGatewayRouteAction
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcGatewayRouteAction
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcGatewayRouteAction
### impl Send for GrpcGatewayRouteAction
### impl Sync for GrpcGatewayRouteAction
### impl Unpin for GrpcGatewayRouteAction
### impl UnwindSafe for GrpcGatewayRouteAction
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GrpcGatewayRouteMatch
===
```
pub struct GrpcGatewayRouteMatch {
pub hostname: Option<GatewayRouteHostnameMatch>,
pub metadata: Option<Vec<GrpcGatewayRouteMetadata>>,
pub service_name: Option<String>,
}
```
An object that represents the criteria for determining a request match.
Fields
---
`hostname: Option<GatewayRouteHostnameMatch>`The gateway route host name to be matched on.
`metadata: Option<Vec<GrpcGatewayRouteMetadata>>`The gateway route metadata to be matched on.
`service_name: Option<String>`The fully qualified domain name for the service to match from the request.
Trait Implementations
---
### impl Clone for GrpcGatewayRouteMatch
#### fn clone(&self) -> GrpcGatewayRouteMatch
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcGatewayRouteMatch
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcGatewayRouteMatch
#### fn default() -> GrpcGatewayRouteMatch
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcGatewayRouteMatch
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcGatewayRouteMatch> for GrpcGatewayRouteMatch
#### fn eq(&self, other: &GrpcGatewayRouteMatch) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcGatewayRouteMatch) -> bool
This method tests for `!=`.
### impl Serialize for GrpcGatewayRouteMatch
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcGatewayRouteMatch
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcGatewayRouteMatch
### impl Send for GrpcGatewayRouteMatch
### impl Sync for GrpcGatewayRouteMatch
### impl Unpin for GrpcGatewayRouteMatch
### impl UnwindSafe for GrpcGatewayRouteMatch
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GrpcGatewayRouteMetadata
===
```
pub struct GrpcGatewayRouteMetadata {
pub invert: Option<bool>,
pub route_match: Option<GrpcMetadataMatchMethod>,
pub name: String,
}
```
An object representing the metadata of the gateway route.
Fields
---
`invert: Option<bool>`Specify `True` to match anything except the match criteria. The default value is `False`.
`route_match: Option<GrpcMetadataMatchMethod>`The criteria for determining a metadata match.
`name: String`A name for the gateway route metadata.
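A sketch of an inverted metadata criterion, matching every request whose `x-env` key is not exactly `canary` (key and value are illustrative):

```
use rusoto_appmesh::{GrpcGatewayRouteMetadata, GrpcMetadataMatchMethod};

fn not_canary() -> GrpcGatewayRouteMetadata {
    GrpcGatewayRouteMetadata {
        name: "x-env".to_string(),
        invert: Some(true), // negate the match below; the default is false
        route_match: Some(GrpcMetadataMatchMethod {
            exact: Some("canary".to_string()),
            ..Default::default()
        }),
    }
}
```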
Trait Implementations
---
### impl Clone for GrpcGatewayRouteMetadata
#### fn clone(&self) -> GrpcGatewayRouteMetadata
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcGatewayRouteMetadata
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcGatewayRouteMetadata
#### fn default() -> GrpcGatewayRouteMetadata
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcGatewayRouteMetadata
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcGatewayRouteMetadata> for GrpcGatewayRouteMetadata
#### fn eq(&self, other: &GrpcGatewayRouteMetadata) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcGatewayRouteMetadata) -> bool
This method tests for `!=`.
### impl Serialize for GrpcGatewayRouteMetadata
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcGatewayRouteMetadata
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcGatewayRouteMetadata
### impl Send for GrpcGatewayRouteMetadata
### impl Sync for GrpcGatewayRouteMetadata
### impl Unpin for GrpcGatewayRouteMetadata
### impl UnwindSafe for GrpcGatewayRouteMetadata
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GrpcGatewayRouteRewrite
===
```
pub struct GrpcGatewayRouteRewrite {
pub hostname: Option<GatewayRouteHostnameRewrite>,
}
```
An object that represents the gateway route to rewrite.
Fields
---
`hostname: Option<GatewayRouteHostnameRewrite>`The host name of the gateway route to rewrite.
Trait Implementations
---
### impl Clone for GrpcGatewayRouteRewrite
#### fn clone(&self) -> GrpcGatewayRouteRewrite
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcGatewayRouteRewrite
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcGatewayRouteRewrite
#### fn default() -> GrpcGatewayRouteRewrite
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcGatewayRouteRewrite
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcGatewayRouteRewrite> for GrpcGatewayRouteRewrite
#### fn eq(&self, other: &GrpcGatewayRouteRewrite) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcGatewayRouteRewrite) -> bool
This method tests for `!=`.
### impl Serialize for GrpcGatewayRouteRewrite
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcGatewayRouteRewrite
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcGatewayRouteRewrite
### impl Send for GrpcGatewayRouteRewrite
### impl Sync for GrpcGatewayRouteRewrite
### impl Unpin for GrpcGatewayRouteRewrite
### impl UnwindSafe for GrpcGatewayRouteRewrite
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GrpcMetadataMatchMethod
===
```
pub struct GrpcMetadataMatchMethod {
pub exact: Option<String>,
pub prefix: Option<String>,
pub range: Option<MatchRange>,
pub regex: Option<String>,
pub suffix: Option<String>,
}
```
An object representing the method header to be matched.
Fields
---
`exact: Option<String>`The exact method header to be matched on.
`prefix: Option<String>`The specified beginning characters of the method header to be matched on.
`range: Option<MatchRange>`An object that represents the range of values to match on.
`regex: Option<String>`The regex used to match the method header.
`suffix: Option<String>`The specified ending characters of the method header to match on.
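Presumably only one of the five variants is populated per object. A sketch of a numeric range criterion (`MatchRange`'s `start`/`end` fields are assumed from this crate's `MatchRange` definition):

```
use rusoto_appmesh::{GrpcMetadataMatchMethod, MatchRange};

fn numeric_range() -> GrpcMetadataMatchMethod {
    GrpcMetadataMatchMethod {
        // start/end bounds of the numeric values to accept (assumed fields)
        range: Some(MatchRange { start: 100, end: 200 }),
        ..Default::default()
    }
}
```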
Trait Implementations
---
### impl Clone for GrpcMetadataMatchMethod
#### fn clone(&self) -> GrpcMetadataMatchMethod
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcMetadataMatchMethod
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcMetadataMatchMethod
#### fn default() -> GrpcMetadataMatchMethod
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcMetadataMatchMethod
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcMetadataMatchMethod> for GrpcMetadataMatchMethod
#### fn eq(&self, other: &GrpcMetadataMatchMethod) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcMetadataMatchMethod) -> bool
This method tests for `!=`.
### impl Serialize for GrpcMetadataMatchMethod
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcMetadataMatchMethod
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcMetadataMatchMethod
### impl Send for GrpcMetadataMatchMethod
### impl Sync for GrpcMetadataMatchMethod
### impl Unpin for GrpcMetadataMatchMethod
### impl UnwindSafe for GrpcMetadataMatchMethod
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::GrpcRetryPolicy
===
```
pub struct GrpcRetryPolicy {
pub grpc_retry_events: Option<Vec<String>>,
pub http_retry_events: Option<Vec<String>>,
pub max_retries: i64,
pub per_retry_timeout: Duration,
pub tcp_retry_events: Option<Vec<String>>,
}
```
An object that represents a retry policy. Specify at least one value for at least one of the types of `RetryEvents`, a value for `maxRetries`, and a value for `perRetryTimeout`. Both `server-error` and `gateway-error` under `httpRetryEvents` include the Envoy `reset` policy. For more information on the `reset` policy, see the Envoy documentation.
Fields
---
`grpc_retry_events: Option<Vec<String>>`Specify at least one of the valid values.
`http_retry_events: Option<Vec<String>>`Specify at least one of the following values.
* **server-error** – HTTP status codes 500, 501, 502, 503, 504, 505, 506, 507, 508, 510, and 511
* **gateway-error** – HTTP status codes 502, 503, and 504
* **client-error** – HTTP status code 409
* **stream-error** – Retry on refused stream
`max_retries: i64`The maximum number of retry attempts.
`per_retry_timeout: Duration`The timeout for each retry attempt.
`tcp_retry_events: Option<Vec<String>>`Specify a valid value. The event occurs before any processing of a request has started and is encountered when the upstream is temporarily or permanently unavailable.
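A minimal sketch of building this struct; the event names and counts are illustrative, and `Duration` here is the crate's own type (documented elsewhere in this crate), so `Default::default()` stands in for a concrete timeout:

```rust
use rusoto_appmesh::{Duration, GrpcRetryPolicy};

// Retry up to three times on HTTP server errors; per_retry_timeout is
// required, so a placeholder Duration::default() is used here.
let retry_policy = GrpcRetryPolicy {
    http_retry_events: Some(vec!["server-error".to_string()]),
    max_retries: 3,
    per_retry_timeout: Duration::default(),
    ..Default::default()
};
```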
Trait Implementations
---
### impl Clone for GrpcRetryPolicy
#### fn clone(&self) -> GrpcRetryPolicy
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcRetryPolicy
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcRetryPolicy
#### fn default() -> GrpcRetryPolicy
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcRetryPolicy
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcRetryPolicy> for GrpcRetryPolicy
#### fn eq(&self, other: &GrpcRetryPolicy) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcRetryPolicy) -> bool
This method tests for `!=`.
### impl Serialize for GrpcRetryPolicy
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcRetryPolicy
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcRetryPolicy
### impl Send for GrpcRetryPolicy
### impl Sync for GrpcRetryPolicy
### impl Unpin for GrpcRetryPolicy
### impl UnwindSafe for GrpcRetryPolicy
Struct rusoto_appmesh::GrpcRoute
===
```
pub struct GrpcRoute {
pub action: GrpcRouteAction,
pub route_match: Option<GrpcRouteMatch>,
pub retry_policy: Option<GrpcRetryPolicy>,
pub timeout: Option<GrpcTimeout>,
}
```
An object that represents a gRPC route type.
Fields
---
`action: GrpcRouteAction`An object that represents the action to take if a match is determined.
`route_match: Option<GrpcRouteMatch>`An object that represents the criteria for determining a request match.
`retry_policy: Option<GrpcRetryPolicy>`An object that represents a retry policy.
`timeout: Option<GrpcTimeout>`An object that represents types of timeouts.
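A sketch assembling a route from the pieces above; `WeightedTarget`'s field names (`virtual_node`, `weight`) are assumed from its use elsewhere in this crate, and all names are illustrative:

```rust
use rusoto_appmesh::{GrpcRoute, GrpcRouteAction, GrpcRouteMatch, WeightedTarget};

let route = GrpcRoute {
    action: GrpcRouteAction {
        weighted_targets: vec![WeightedTarget {
            virtual_node: "my-virtual-node".to_string(),
            weight: 100,
            ..Default::default()
        }],
    },
    route_match: Some(GrpcRouteMatch {
        service_name: Some("mypackage.MyService".to_string()),
        ..Default::default()
    }),
    ..Default::default() // no retry policy or timeout overrides
};
```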
Trait Implementations
---
### impl Clone for GrpcRoute
#### fn clone(&self) -> GrpcRoute
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcRoute
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcRoute
#### fn default() -> GrpcRoute
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcRoute
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcRoute> for GrpcRoute
#### fn eq(&self, other: &GrpcRoute) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcRoute) -> bool
This method tests for `!=`.
### impl Serialize for GrpcRoute
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcRoute
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcRoute
### impl Send for GrpcRoute
### impl Sync for GrpcRoute
### impl Unpin for GrpcRoute
### impl UnwindSafe for GrpcRoute
Struct rusoto_appmesh::GrpcRouteAction
===
```
pub struct GrpcRouteAction {
pub weighted_targets: Vec<WeightedTarget>,
}
```
An object that represents the action to take if a match is determined.
Fields
---
`weighted_targets: Vec<WeightedTarget>`An object that represents the targets that traffic is routed to when a request matches the route.
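Weighted targets let you split traffic across virtual nodes; a sketch with assumed `WeightedTarget` field names and an illustrative 90/10 canary split:

```rust
use rusoto_appmesh::{GrpcRouteAction, WeightedTarget};

let action = GrpcRouteAction {
    weighted_targets: vec![
        WeightedTarget { virtual_node: "stable".to_string(), weight: 90, ..Default::default() },
        WeightedTarget { virtual_node: "canary".to_string(), weight: 10, ..Default::default() },
    ],
};
```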
Trait Implementations
---
### impl Clone for GrpcRouteAction
#### fn clone(&self) -> GrpcRouteAction
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcRouteAction
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcRouteAction
#### fn default() -> GrpcRouteAction
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcRouteAction
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcRouteAction> for GrpcRouteAction
#### fn eq(&self, other: &GrpcRouteAction) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcRouteAction) -> bool
This method tests for `!=`.
### impl Serialize for GrpcRouteAction
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcRouteAction
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcRouteAction
### impl Send for GrpcRouteAction
### impl Sync for GrpcRouteAction
### impl Unpin for GrpcRouteAction
### impl UnwindSafe for GrpcRouteAction
Struct rusoto_appmesh::GrpcRouteMatch
===
```
pub struct GrpcRouteMatch {
pub metadata: Option<Vec<GrpcRouteMetadata>>,
pub method_name: Option<String>,
pub service_name: Option<String>,
}
```
An object that represents the criteria for determining a request match.
Fields
---
`metadata: Option<Vec<GrpcRouteMetadata>>`An object that represents the data to match from the request.
`method_name: Option<String>`The method name to match from the request. If you specify a name, you must also specify a `serviceName`.
`service_name: Option<String>`The fully qualified domain name for the service to match from the request.
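Because `methodName` requires `serviceName`, a match on a method always sets both; a minimal sketch with illustrative names:

```rust
use rusoto_appmesh::GrpcRouteMatch;

let route_match = GrpcRouteMatch {
    service_name: Some("mypackage.MyService".to_string()),
    method_name: Some("GetItem".to_string()), // requires service_name above
    ..Default::default()                      // no metadata matching
};
```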
Trait Implementations
---
### impl Clone for GrpcRouteMatch
#### fn clone(&self) -> GrpcRouteMatch
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcRouteMatch
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcRouteMatch
#### fn default() -> GrpcRouteMatch
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcRouteMatch
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcRouteMatch> for GrpcRouteMatch
#### fn eq(&self, other: &GrpcRouteMatch) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcRouteMatch) -> bool
This method tests for `!=`.
### impl Serialize for GrpcRouteMatch
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcRouteMatch
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcRouteMatch
### impl Send for GrpcRouteMatch
### impl Sync for GrpcRouteMatch
### impl Unpin for GrpcRouteMatch
### impl UnwindSafe for GrpcRouteMatch
Struct rusoto_appmesh::GrpcRouteMetadata
===
```
pub struct GrpcRouteMetadata {
pub invert: Option<bool>,
pub route_match: Option<GrpcRouteMetadataMatchMethod>,
pub name: String,
}
```
An object that represents the match metadata for the route.
Fields
---
`invert: Option<bool>`Specify `True` to match anything except the match criteria. The default value is `False`.
`route_match: Option<GrpcRouteMetadataMatchMethod>`An object that represents the data to match from the request.
`name: String`The name of the route.
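A sketch combining `invert` with a match method: match every request whose `env` metadata key is anything other than `test` (names illustrative):

```rust
use rusoto_appmesh::{GrpcRouteMetadata, GrpcRouteMetadataMatchMethod};

let metadata = GrpcRouteMetadata {
    name: "env".to_string(),
    invert: Some(true), // match everything EXCEPT the criteria below
    route_match: Some(GrpcRouteMetadataMatchMethod {
        exact: Some("test".to_string()),
        ..Default::default()
    }),
};
```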
Trait Implementations
---
### impl Clone for GrpcRouteMetadata
#### fn clone(&self) -> GrpcRouteMetadata
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcRouteMetadata
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcRouteMetadata
#### fn default() -> GrpcRouteMetadata
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcRouteMetadata
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcRouteMetadata> for GrpcRouteMetadata
#### fn eq(&self, other: &GrpcRouteMetadata) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcRouteMetadata) -> bool
This method tests for `!=`.
### impl Serialize for GrpcRouteMetadata
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcRouteMetadata
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcRouteMetadata
### impl Send for GrpcRouteMetadata
### impl Sync for GrpcRouteMetadata
### impl Unpin for GrpcRouteMetadata
### impl UnwindSafe for GrpcRouteMetadata
Struct rusoto_appmesh::GrpcRouteMetadataMatchMethod
===
```
pub struct GrpcRouteMetadataMatchMethod {
pub exact: Option<String>,
pub prefix: Option<String>,
pub range: Option<MatchRange>,
pub regex: Option<String>,
pub suffix: Option<String>,
}
```
An object that represents the match method. Specify one of the match values.
Fields
---
`exact: Option<String>`The value sent by the client must match the specified value exactly.
`prefix: Option<String>`The value sent by the client must begin with the specified characters.
`range: Option<MatchRange>`An object that represents the range of values to match on.
`regex: Option<String>`The value sent by the client must include the specified characters.
`suffix: Option<String>`The value sent by the client must end with the specified characters.
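A sketch of a range match; `MatchRange`'s `start`/`end` field names are an assumption here, taken from its documentation elsewhere in this crate:

```rust
use rusoto_appmesh::{GrpcRouteMetadataMatchMethod, MatchRange};

// Match metadata values numerically between 100 and 200
// (field names on MatchRange are assumed).
let match_method = GrpcRouteMetadataMatchMethod {
    range: Some(MatchRange { start: 100, end: 200 }),
    ..Default::default()
};
```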
Trait Implementations
---
### impl Clone for GrpcRouteMetadataMatchMethod
#### fn clone(&self) -> GrpcRouteMetadataMatchMethod
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcRouteMetadataMatchMethod
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcRouteMetadataMatchMethod
#### fn default() -> GrpcRouteMetadataMatchMethod
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcRouteMetadataMatchMethod
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcRouteMetadataMatchMethod> for GrpcRouteMetadataMatchMethod
#### fn eq(&self, other: &GrpcRouteMetadataMatchMethod) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcRouteMetadataMatchMethod) -> bool
This method tests for `!=`.
### impl Serialize for GrpcRouteMetadataMatchMethod
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcRouteMetadataMatchMethod
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcRouteMetadataMatchMethod
### impl Send for GrpcRouteMetadataMatchMethod
### impl Sync for GrpcRouteMetadataMatchMethod
### impl Unpin for GrpcRouteMetadataMatchMethod
### impl UnwindSafe for GrpcRouteMetadataMatchMethod
Struct rusoto_appmesh::GrpcTimeout
===
```
pub struct GrpcTimeout {
pub idle: Option<Duration>,
pub per_request: Option<Duration>,
}
```
An object that represents types of timeouts.
Fields
---
`idle: Option<Duration>`An object that represents an idle timeout. An idle timeout bounds the amount of time that a connection may be idle. The default value is none.
`per_request: Option<Duration>`An object that represents a per request timeout. The default value is 15 seconds. If you set a higher timeout, then make sure that the higher value is set for each App Mesh resource in a conversation. For example, if a virtual node backend uses a virtual router provider to route to another virtual node, then the timeout should be greater than 15 seconds for the source and destination virtual node and the route.
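A sketch raising the per-request timeout above the 15-second default; the `unit`/`value` fields and the `"s"` unit string on the crate's `Duration` type are assumptions here:

```rust
use rusoto_appmesh::{Duration, GrpcTimeout};

let timeout = GrpcTimeout {
    per_request: Some(Duration {
        unit: Some("s".to_string()), // assumed unit string
        value: Some(30),
    }),
    idle: None, // no idle timeout
};
```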
Trait Implementations
---
### impl Clone for GrpcTimeout
#### fn clone(&self) -> GrpcTimeout
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GrpcTimeout
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GrpcTimeout
#### fn default() -> GrpcTimeout
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for GrpcTimeout
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<GrpcTimeout> for GrpcTimeout
#### fn eq(&self, other: &GrpcTimeout) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GrpcTimeout) -> bool
This method tests for `!=`.
### impl Serialize for GrpcTimeout
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for GrpcTimeout
Auto Trait Implementations
---
### impl RefUnwindSafe for GrpcTimeout
### impl Send for GrpcTimeout
### impl Sync for GrpcTimeout
### impl Unpin for GrpcTimeout
### impl UnwindSafe for GrpcTimeout
Struct rusoto_appmesh::HeaderMatchMethod
===
```
pub struct HeaderMatchMethod {
pub exact: Option<String>,
pub prefix: Option<String>,
pub range: Option<MatchRange>,
pub regex: Option<String>,
pub suffix: Option<String>,
}
```
An object that represents the method and value to match with the header value sent in a request. Specify one match method.
Fields
---
`exact: Option<String>`The value sent by the client must match the specified value exactly.
`prefix: Option<String>`The value sent by the client must begin with the specified characters.
`range: Option<MatchRange>`An object that represents the range of values to match on.
`regex: Option<String>`The value sent by the client must include the specified characters.
`suffix: Option<String>`The value sent by the client must end with the specified characters.
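As with the other match-method structs, exactly one variant should be set; a minimal sketch with an illustrative prefix:

```rust
use rusoto_appmesh::HeaderMatchMethod;

// Match header values beginning with "application/".
let header_match = HeaderMatchMethod {
    prefix: Some("application/".to_string()),
    ..Default::default()
};
```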
Trait Implementations
---
### impl Clone for HeaderMatchMethod
#### fn clone(&self) -> HeaderMatchMethod
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HeaderMatchMethod
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HeaderMatchMethod
#### fn default() -> HeaderMatchMethod
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HeaderMatchMethod
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HeaderMatchMethod> for HeaderMatchMethod
#### fn eq(&self, other: &HeaderMatchMethod) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HeaderMatchMethod) -> bool
This method tests for `!=`.
### impl Serialize for HeaderMatchMethod
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HeaderMatchMethod
Auto Trait Implementations
---
### impl RefUnwindSafe for HeaderMatchMethod
### impl Send for HeaderMatchMethod
### impl Sync for HeaderMatchMethod
### impl Unpin for HeaderMatchMethod
### impl UnwindSafe for HeaderMatchMethod
Struct rusoto_appmesh::HealthCheckPolicy
===
```
pub struct HealthCheckPolicy {
pub healthy_threshold: i64,
pub interval_millis: i64,
pub path: Option<String>,
pub port: Option<i64>,
pub protocol: String,
pub timeout_millis: i64,
pub unhealthy_threshold: i64,
}
```
An object that represents the health check policy for a virtual node's listener.
Fields
---
`healthy_threshold: i64`The number of consecutive successful health checks that must occur before declaring the listener healthy.
`interval_millis: i64`The time period in milliseconds between each health check execution.
`path: Option<String>`The destination path for the health check request. This value is only used if the specified protocol is HTTP or HTTP/2. For any other protocol, this value is ignored.
`port: Option<i64>`The destination port for the health check request. This port must match the port defined in the PortMapping for the listener.
`protocol: String`The protocol for the health check request. If you specify `grpc`, then your service must conform to the GRPC Health Checking Protocol.
`timeout_millis: i64`The amount of time to wait when receiving a response from the health check, in milliseconds.
`unhealthy_threshold: i64`The number of consecutive failed health checks that must occur before declaring a virtual node unhealthy.
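A sketch of an HTTP health check; the thresholds, interval, timeout, and path are illustrative values:

```rust
use rusoto_appmesh::HealthCheckPolicy;

let health_check = HealthCheckPolicy {
    protocol: "http".to_string(),
    path: Some("/health".to_string()), // only used for HTTP/HTTP2 protocols
    healthy_threshold: 3,
    unhealthy_threshold: 3,
    interval_millis: 5_000,
    timeout_millis: 2_000,
    ..Default::default() // port: None
};
```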
Trait Implementations
---
### impl Clone for HealthCheckPolicy
#### fn clone(&self) -> HealthCheckPolicy
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HealthCheckPolicy
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HealthCheckPolicy
#### fn default() -> HealthCheckPolicy
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HealthCheckPolicy
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HealthCheckPolicy> for HealthCheckPolicy
#### fn eq(&self, other: &HealthCheckPolicy) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HealthCheckPolicy) -> bool
This method tests for `!=`.
### impl Serialize for HealthCheckPolicy
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HealthCheckPolicy
Auto Trait Implementations
---
### impl RefUnwindSafe for HealthCheckPolicy
### impl Send for HealthCheckPolicy
### impl Sync for HealthCheckPolicy
### impl Unpin for HealthCheckPolicy
### impl UnwindSafe for HealthCheckPolicy
Struct rusoto_appmesh::HttpGatewayRoute
===
```
pub struct HttpGatewayRoute {
pub action: HttpGatewayRouteAction,
pub route_match: Option<HttpGatewayRouteMatch>,
}
```
An object that represents an HTTP gateway route.
Fields
---
`action: HttpGatewayRouteAction`An object that represents the action to take if a match is determined.
`route_match: Option<HttpGatewayRouteMatch>`An object that represents the criteria for determining a request match.
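A sketch routing requests under an illustrative `/api/` prefix; `GatewayRouteTarget`'s own fields are documented elsewhere in this crate, so `Default::default()` stands in for a concrete target:

```rust
use rusoto_appmesh::{
    GatewayRouteTarget, HttpGatewayRoute, HttpGatewayRouteAction, HttpGatewayRouteMatch,
};

let gateway_route = HttpGatewayRoute {
    action: HttpGatewayRouteAction {
        target: GatewayRouteTarget::default(), // placeholder target
        rewrite: None,
    },
    route_match: Some(HttpGatewayRouteMatch {
        prefix: Some("/api/".to_string()),
        ..Default::default()
    }),
};
```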
Trait Implementations
---
### impl Clone for HttpGatewayRoute
#### fn clone(&self) -> HttpGatewayRoute
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpGatewayRoute
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpGatewayRoute
#### fn default() -> HttpGatewayRoute
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpGatewayRoute
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpGatewayRoute> for HttpGatewayRoute
#### fn eq(&self, other: &HttpGatewayRoute) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpGatewayRoute) -> bool
This method tests for `!=`.
### impl Serialize for HttpGatewayRoute
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpGatewayRoute
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpGatewayRoute
### impl Send for HttpGatewayRoute
### impl Sync for HttpGatewayRoute
### impl Unpin for HttpGatewayRoute
### impl UnwindSafe for HttpGatewayRoute
Struct rusoto_appmesh::HttpGatewayRouteAction
===
```
pub struct HttpGatewayRouteAction {
pub rewrite: Option<HttpGatewayRouteRewrite>,
pub target: GatewayRouteTarget,
}
```
An object that represents the action to take if a match is determined.
Fields
---
`rewrite: Option<HttpGatewayRouteRewrite>`The gateway route action to rewrite.
`target: GatewayRouteTarget`An object that represents the target that traffic is routed to when a request matches the gateway route.
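A sketch pairing a target with a rewrite; `HttpGatewayRouteRewrite`'s fields are documented elsewhere in this crate, so `Default::default()` leaves the rewrite unconfigured:

```rust
use rusoto_appmesh::{GatewayRouteTarget, HttpGatewayRouteAction, HttpGatewayRouteRewrite};

let action = HttpGatewayRouteAction {
    target: GatewayRouteTarget::default(),             // placeholder target
    rewrite: Some(HttpGatewayRouteRewrite::default()), // no rewrites configured
};
```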
Trait Implementations
---
### impl Clone for HttpGatewayRouteAction
#### fn clone(&self) -> HttpGatewayRouteAction
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpGatewayRouteAction
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpGatewayRouteAction
#### fn default() -> HttpGatewayRouteAction
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpGatewayRouteAction
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpGatewayRouteAction> for HttpGatewayRouteAction
#### fn eq(&self, other: &HttpGatewayRouteAction) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpGatewayRouteAction) -> bool
This method tests for `!=`.
### impl Serialize for HttpGatewayRouteAction
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpGatewayRouteAction
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpGatewayRouteAction
### impl Send for HttpGatewayRouteAction
### impl Sync for HttpGatewayRouteAction
### impl Unpin for HttpGatewayRouteAction
### impl UnwindSafe for HttpGatewayRouteAction
Struct rusoto_appmesh::HttpGatewayRouteHeader
===
```
pub struct HttpGatewayRouteHeader {
pub invert: Option<bool>,
pub route_match: Option<HeaderMatchMethod>,
pub name: String,
}
```
An object that represents the HTTP header in the gateway route.
Fields
---
`invert: Option<bool>`Specify `True` to match anything except the match criteria. The default value is `False`.
`route_match: Option<HeaderMatchMethod>`An object that represents the method and value to match with the header value sent in a request.
`name: String`A name for the HTTP header in the gateway route that will be matched on.
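A sketch matching an illustrative `x-env` header by prefix:

```rust
use rusoto_appmesh::{HeaderMatchMethod, HttpGatewayRouteHeader};

let header = HttpGatewayRouteHeader {
    name: "x-env".to_string(),
    invert: Some(false), // match the criteria below, not its complement
    route_match: Some(HeaderMatchMethod {
        prefix: Some("prod".to_string()),
        ..Default::default()
    }),
};
```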
Trait Implementations
---
### impl Clone for HttpGatewayRouteHeader
#### fn clone(&self) -> HttpGatewayRouteHeader
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpGatewayRouteHeader
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpGatewayRouteHeader
#### fn default() -> HttpGatewayRouteHeader
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpGatewayRouteHeader
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpGatewayRouteHeader> for HttpGatewayRouteHeader
#### fn eq(&self, other: &HttpGatewayRouteHeader) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpGatewayRouteHeader) -> bool
This method tests for `!=`.
### impl Serialize for HttpGatewayRouteHeader
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpGatewayRouteHeader
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpGatewayRouteHeader
### impl Send for HttpGatewayRouteHeader
### impl Sync for HttpGatewayRouteHeader
### impl Unpin for HttpGatewayRouteHeader
### impl UnwindSafe for HttpGatewayRouteHeader
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning. (Nightly-only experimental API: `toowned_clone_into`.)
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::HttpGatewayRouteMatch
===
```
pub struct HttpGatewayRouteMatch {
pub headers: Option<Vec<HttpGatewayRouteHeader>>,
pub hostname: Option<GatewayRouteHostnameMatch>,
pub method: Option<String>,
pub path: Option<HttpPathMatch>,
pub prefix: Option<String>,
pub query_parameters: Option<Vec<HttpQueryParameter>>,
}
```
An object that represents the criteria for determining a request match.
Fields
---
`headers: Option<Vec<HttpGatewayRouteHeader>>`The client request headers to match on.
`hostname: Option<GatewayRouteHostnameMatch>`The host name to match on.
`method: Option<String>`The method to match on.
`path: Option<HttpPathMatch>`The path to match on.
`prefix: Option<String>`Specifies the path to match requests with. This parameter must always start with `/`, which by itself matches all requests to the virtual service name. You can also match for path-based routing of requests. For example, if your virtual service name is `my-service.local` and you want the route to match requests to `my-service.local/metrics`, your prefix should be `/metrics`.
`query_parameters: Option<Vec<HttpQueryParameter>>`The query parameter to match on.
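A minimal sketch of a gateway route match built from the fields above (illustrative values); `Default` is implemented for this shape, so unused criteria can be left unset:
```
use rusoto_appmesh::HttpGatewayRouteMatch;

// Route any GET request under /metrics on the gateway.
let gw_match = HttpGatewayRouteMatch {
    prefix: Some("/metrics".to_string()),
    method: Some("GET".to_string()),
    ..Default::default() // headers, hostname, path, query_parameters unset
};
```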
Trait Implementations
---
### impl Clone for HttpGatewayRouteMatch
#### fn clone(&self) -> HttpGatewayRouteMatch
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpGatewayRouteMatch
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpGatewayRouteMatch
#### fn default() -> HttpGatewayRouteMatch
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpGatewayRouteMatch
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpGatewayRouteMatch> for HttpGatewayRouteMatch
#### fn eq(&self, other: &HttpGatewayRouteMatch) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpGatewayRouteMatch) -> bool
This method tests for `!=`.
### impl Serialize for HttpGatewayRouteMatch
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpGatewayRouteMatch
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpGatewayRouteMatch
### impl Send for HttpGatewayRouteMatch
### impl Sync for HttpGatewayRouteMatch
### impl Unpin for HttpGatewayRouteMatch
### impl UnwindSafe for HttpGatewayRouteMatch
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpGatewayRoutePathRewrite
===
```
pub struct HttpGatewayRoutePathRewrite {
pub exact: Option<String>,
}
```
An object that represents the path to rewrite.
Fields
---
`exact: Option<String>`The exact path to rewrite.
Trait Implementations
---
### impl Clone for HttpGatewayRoutePathRewrite
#### fn clone(&self) -> HttpGatewayRoutePathRewrite
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpGatewayRoutePathRewrite
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpGatewayRoutePathRewrite
#### fn default() -> HttpGatewayRoutePathRewrite
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpGatewayRoutePathRewrite
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpGatewayRoutePathRewrite> for HttpGatewayRoutePathRewrite
#### fn eq(&self, other: &HttpGatewayRoutePathRewrite) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpGatewayRoutePathRewrite) -> bool
This method tests for `!=`.
### impl Serialize for HttpGatewayRoutePathRewrite
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpGatewayRoutePathRewrite
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpGatewayRoutePathRewrite
### impl Send for HttpGatewayRoutePathRewrite
### impl Sync for HttpGatewayRoutePathRewrite
### impl Unpin for HttpGatewayRoutePathRewrite
### impl UnwindSafe for HttpGatewayRoutePathRewrite
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpGatewayRoutePrefixRewrite
===
```
pub struct HttpGatewayRoutePrefixRewrite {
pub default_prefix: Option<String>,
pub value: Option<String>,
}
```
An object representing the beginning characters of the route to rewrite.
Fields
---
`default_prefix: Option<String>`The default prefix used to replace the incoming route prefix when rewritten.
`value: Option<String>`The value used to replace the incoming route prefix when rewritten.
Trait Implementations
---
### impl Clone for HttpGatewayRoutePrefixRewrite
#### fn clone(&self) -> HttpGatewayRoutePrefixRewrite
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpGatewayRoutePrefixRewrite
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpGatewayRoutePrefixRewrite
#### fn default() -> HttpGatewayRoutePrefixRewrite
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpGatewayRoutePrefixRewrite
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpGatewayRoutePrefixRewrite> for HttpGatewayRoutePrefixRewrite
#### fn eq(&self, other: &HttpGatewayRoutePrefixRewrite) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpGatewayRoutePrefixRewrite) -> bool
This method tests for `!=`.
### impl Serialize for HttpGatewayRoutePrefixRewrite
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpGatewayRoutePrefixRewrite
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpGatewayRoutePrefixRewrite
### impl Send for HttpGatewayRoutePrefixRewrite
### impl Sync for HttpGatewayRoutePrefixRewrite
### impl Unpin for HttpGatewayRoutePrefixRewrite
### impl UnwindSafe for HttpGatewayRoutePrefixRewrite
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpGatewayRouteRewrite
===
```
pub struct HttpGatewayRouteRewrite {
pub hostname: Option<GatewayRouteHostnameRewrite>,
pub path: Option<HttpGatewayRoutePathRewrite>,
pub prefix: Option<HttpGatewayRoutePrefixRewrite>,
}
```
An object representing the gateway route to rewrite.
Fields
---
`hostname: Option<GatewayRouteHostnameRewrite>`The host name to rewrite.
`path: Option<HttpGatewayRoutePathRewrite>`The path to rewrite.
`prefix: Option<HttpGatewayRoutePrefixRewrite>`The specified beginning characters to rewrite.
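A minimal sketch combining this shape with `HttpGatewayRoutePrefixRewrite` from above, using only fields documented on this page (the `/v2/` value is illustrative):
```
use rusoto_appmesh::{HttpGatewayRoutePrefixRewrite, HttpGatewayRouteRewrite};

// Rewrite the matched prefix to /v2/ before forwarding.
let rewrite = HttpGatewayRouteRewrite {
    prefix: Some(HttpGatewayRoutePrefixRewrite {
        default_prefix: None,
        value: Some("/v2/".to_string()),
    }),
    hostname: None,
    path: None,
};
```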
Trait Implementations
---
### impl Clone for HttpGatewayRouteRewrite
#### fn clone(&self) -> HttpGatewayRouteRewrite
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpGatewayRouteRewrite
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpGatewayRouteRewrite
#### fn default() -> HttpGatewayRouteRewrite
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpGatewayRouteRewrite
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpGatewayRouteRewrite> for HttpGatewayRouteRewrite
#### fn eq(&self, other: &HttpGatewayRouteRewrite) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpGatewayRouteRewrite) -> bool
This method tests for `!=`.
### impl Serialize for HttpGatewayRouteRewrite
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpGatewayRouteRewrite
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpGatewayRouteRewrite
### impl Send for HttpGatewayRouteRewrite
### impl Sync for HttpGatewayRouteRewrite
### impl Unpin for HttpGatewayRouteRewrite
### impl UnwindSafe for HttpGatewayRouteRewrite
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpPathMatch
===
```
pub struct HttpPathMatch {
pub exact: Option<String>,
pub regex: Option<String>,
}
```
An object representing the path to match in the request.
Fields
---
`exact: Option<String>`The exact path to match on.
`regex: Option<String>`The regex used to match the path.
Trait Implementations
---
### impl Clone for HttpPathMatch
#### fn clone(&self) -> HttpPathMatch
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpPathMatch
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpPathMatch
#### fn default() -> HttpPathMatch
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpPathMatch
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpPathMatch> for HttpPathMatch
#### fn eq(&self, other: &HttpPathMatch) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpPathMatch) -> bool
This method tests for `!=`.
### impl Serialize for HttpPathMatch
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpPathMatch
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpPathMatch
### impl Send for HttpPathMatch
### impl Sync for HttpPathMatch
### impl Unpin for HttpPathMatch
### impl UnwindSafe for HttpPathMatch
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpQueryParameter
===
```
pub struct HttpQueryParameter {
pub route_match: Option<QueryParameterMatch>,
pub name: String,
}
```
An object that represents the query parameter in the request.
Fields
---
`route_match: Option<QueryParameterMatch>`The query parameter to match on.
`name: String`A name for the query parameter that will be matched on.
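A minimal sketch using only the fields documented here; the parameter name is hypothetical, and the concrete criterion (a `QueryParameterMatch`, documented elsewhere) is left unset:
```
use rusoto_appmesh::HttpQueryParameter;

// Match on a ?version=... query parameter.
let param = HttpQueryParameter {
    name: "version".to_string(), // hypothetical parameter name
    route_match: None,           // a `QueryParameterMatch` value would go here
};
```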
Trait Implementations
---
### impl Clone for HttpQueryParameter
#### fn clone(&self) -> HttpQueryParameter
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpQueryParameter
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpQueryParameter
#### fn default() -> HttpQueryParameter
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpQueryParameter
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpQueryParameter> for HttpQueryParameter
#### fn eq(&self, other: &HttpQueryParameter) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpQueryParameter) -> bool
This method tests for `!=`.
### impl Serialize for HttpQueryParameter
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpQueryParameter
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpQueryParameter
### impl Send for HttpQueryParameter
### impl Sync for HttpQueryParameter
### impl Unpin for HttpQueryParameter
### impl UnwindSafe for HttpQueryParameter
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpRetryPolicy
===
```
pub struct HttpRetryPolicy {
pub http_retry_events: Option<Vec<String>>,
pub max_retries: i64,
pub per_retry_timeout: Duration,
pub tcp_retry_events: Option<Vec<String>>,
}
```
An object that represents a retry policy. Specify at least one value for at least one of the types of `RetryEvents`, a value for `maxRetries`, and a value for `perRetryTimeout`. Both `server-error` and `gateway-error` under `httpRetryEvents` include the Envoy `reset` policy. For more information on the `reset` policy, see the Envoy documentation.
Fields
---
`http_retry_events: Option<Vec<String>>`Specify at least one of the following values.
* **server-error** – HTTP status codes 500, 501, 502, 503, 504, 505, 506, 507, 508, 510, and 511
* **gateway-error** – HTTP status codes 502, 503, and 504
* **client-error** – HTTP status code 409
* **stream-error** – Retry on refused stream
`max_retries: i64`The maximum number of retry attempts.
`per_retry_timeout: Duration`The timeout for each retry attempt.
`tcp_retry_events: Option<Vec<String>>`Specify a valid value. The event occurs before any processing of a request has started and is encountered when the upstream is temporarily or permanently unavailable.
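A sketch of a policy with three retries and a 15-second per-retry timeout. Note that `Duration` here is App Mesh's own shape, not `std::time::Duration`; the `unit`/`value` field names below are assumptions, not taken from this page:
```
use rusoto_appmesh::{Duration, HttpRetryPolicy};

let retry_policy = HttpRetryPolicy {
    http_retry_events: Some(vec![
        "server-error".to_string(),
        "gateway-error".to_string(),
    ]),
    max_retries: 3,
    per_retry_timeout: Duration {
        // Assumed field names on the App Mesh `Duration` shape;
        // they are not documented on this page.
        unit: Some("s".to_string()),
        value: Some(15),
    },
    tcp_retry_events: None,
};
```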
Trait Implementations
---
### impl Clone for HttpRetryPolicy
#### fn clone(&self) -> HttpRetryPolicy
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpRetryPolicy
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpRetryPolicy
#### fn default() -> HttpRetryPolicy
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpRetryPolicy
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpRetryPolicy> for HttpRetryPolicy
#### fn eq(&self, other: &HttpRetryPolicy) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpRetryPolicy) -> bool
This method tests for `!=`.
### impl Serialize for HttpRetryPolicy
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpRetryPolicy
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpRetryPolicy
### impl Send for HttpRetryPolicy
### impl Sync for HttpRetryPolicy
### impl Unpin for HttpRetryPolicy
### impl UnwindSafe for HttpRetryPolicy
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpRoute
===
```
pub struct HttpRoute {
pub action: HttpRouteAction,
pub route_match: Option<HttpRouteMatch>,
pub retry_policy: Option<HttpRetryPolicy>,
pub timeout: Option<HttpTimeout>,
}
```
An object that represents an HTTP or HTTP/2 route type.
Fields
---
`action: HttpRouteAction`An object that represents the action to take if a match is determined.
`route_match: Option<HttpRouteMatch>`An object that represents the criteria for determining a request match.
`retry_policy: Option<HttpRetryPolicy>`An object that represents a retry policy.
`timeout: Option<HttpTimeout>`An object that represents types of timeouts.
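A sketch of a complete route that sends everything under `/` to a single target. `WeightedTarget` is documented elsewhere, so the `virtual_node`/`weight` field names below are assumptions:
```
use rusoto_appmesh::{HttpRoute, HttpRouteAction, HttpRouteMatch, WeightedTarget};

let route = HttpRoute {
    action: HttpRouteAction {
        weighted_targets: vec![WeightedTarget {
            // Assumed field names on `WeightedTarget`.
            virtual_node: "my-node".to_string(),
            weight: 100,
            ..Default::default()
        }],
    },
    route_match: Some(HttpRouteMatch {
        prefix: Some("/".to_string()),
        ..Default::default()
    }),
    retry_policy: None,
    timeout: None,
};
```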
Trait Implementations
---
### impl Clone for HttpRoute
#### fn clone(&self) -> HttpRoute
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpRoute
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpRoute
#### fn default() -> HttpRoute
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpRoute
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpRoute> for HttpRoute
#### fn eq(&self, other: &HttpRoute) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpRoute) -> bool
This method tests for `!=`.
### impl Serialize for HttpRoute
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpRoute
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpRoute
### impl Send for HttpRoute
### impl Sync for HttpRoute
### impl Unpin for HttpRoute
### impl UnwindSafe for HttpRoute
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpRouteAction
===
```
pub struct HttpRouteAction {
pub weighted_targets: Vec<WeightedTarget>,
}
```
An object that represents the action to take if a match is determined.
Fields
---
`weighted_targets: Vec<WeightedTarget>`An object that represents the targets that traffic is routed to when a request matches the route.
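Weighted targets are what make canary-style traffic splits possible: each target gets a share of matched requests proportional to its weight. A sketch of a 90/10 split (again, field names on `WeightedTarget` are assumptions, since that shape is documented elsewhere):
```
use rusoto_appmesh::{HttpRouteAction, WeightedTarget};

// Send 90% of matched traffic to svc-v1 and 10% to svc-v2.
let action = HttpRouteAction {
    weighted_targets: vec![
        WeightedTarget { virtual_node: "svc-v1".to_string(), weight: 90, ..Default::default() },
        WeightedTarget { virtual_node: "svc-v2".to_string(), weight: 10, ..Default::default() },
    ],
};
```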
Trait Implementations
---
### impl Clone for HttpRouteAction
#### fn clone(&self) -> HttpRouteAction
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpRouteAction
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpRouteAction
#### fn default() -> HttpRouteAction
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpRouteAction
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpRouteAction> for HttpRouteAction
#### fn eq(&self, other: &HttpRouteAction) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpRouteAction) -> bool
This method tests for `!=`.
### impl Serialize for HttpRouteAction
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpRouteAction
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpRouteAction
### impl Send for HttpRouteAction
### impl Sync for HttpRouteAction
### impl Unpin for HttpRouteAction
### impl UnwindSafe for HttpRouteAction
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpRouteHeader
===
```
pub struct HttpRouteHeader {
pub invert: Option<bool>,
pub route_match: Option<HeaderMatchMethod>,
pub name: String,
}
```
An object that represents the HTTP header in the request.
Fields
---
`invert: Option<bool>`Specify `True` to match anything except the match criteria. The default value is `False`.
`route_match: Option<HeaderMatchMethod>`The `HeaderMatchMethod` object.
`name: String`A name for the HTTP header in the client request that will be matched on.
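A sketch of a header match with an exact-value criterion. The `exact` field on `HeaderMatchMethod` is an assumption here; see that shape's own documentation for its actual members:
```
use rusoto_appmesh::{HeaderMatchMethod, HttpRouteHeader};

// Match requests whose x-api-version header equals "v2".
let header = HttpRouteHeader {
    name: "x-api-version".to_string(), // hypothetical header name
    invert: Some(false),
    route_match: Some(HeaderMatchMethod {
        exact: Some("v2".to_string()), // assumed field on `HeaderMatchMethod`
        ..Default::default()
    }),
};
```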
Trait Implementations
---
### impl Clone for HttpRouteHeader
#### fn clone(&self) -> HttpRouteHeader
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpRouteHeader
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpRouteHeader
#### fn default() -> HttpRouteHeader
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpRouteHeader
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpRouteHeader> for HttpRouteHeader
#### fn eq(&self, other: &HttpRouteHeader) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpRouteHeader) -> bool
This method tests for `!=`.
### impl Serialize for HttpRouteHeader
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpRouteHeader
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpRouteHeader
### impl Send for HttpRouteHeader
### impl Sync for HttpRouteHeader
### impl Unpin for HttpRouteHeader
### impl UnwindSafe for HttpRouteHeader
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpRouteMatch
===
```
pub struct HttpRouteMatch {
pub headers: Option<Vec<HttpRouteHeader>>,
pub method: Option<String>,
pub path: Option<HttpPathMatch>,
pub prefix: Option<String>,
pub query_parameters: Option<Vec<HttpQueryParameter>>,
pub scheme: Option<String>,
}
```
An object that represents the requirements for a route to match HTTP requests for a virtual router.
Fields
---
`headers: Option<Vec<HttpRouteHeader>>`The client request headers to match on.
`method: Option<String>`The client request method to match on. Specify only one.
`path: Option<HttpPathMatch>`The client request path to match on.
`prefix: Option<String>`Specifies the path to match requests with. This parameter must always start with `/`, which by itself matches all requests to the virtual service name. You can also match for path-based routing of requests. For example, if your virtual service name is `my-service.local` and you want the route to match requests to `my-service.local/metrics`, your prefix should be `/metrics`.
`query_parameters: Option<Vec<HttpQueryParameter>>`The client request query parameters to match on.
`scheme: Option<String>`The client request scheme to match on. Specify only one. Applicable only for HTTP2 routes.
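A minimal sketch combining this shape with `HttpPathMatch` from above, using only fields documented on this page (the method and path values are illustrative):
```
use rusoto_appmesh::{HttpPathMatch, HttpRouteMatch};

// Match POST requests to exactly /orders.
let route_match = HttpRouteMatch {
    method: Some("POST".to_string()),
    path: Some(HttpPathMatch {
        exact: Some("/orders".to_string()),
        regex: None,
    }),
    ..Default::default() // headers, prefix, query_parameters, scheme unset
};
```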
Trait Implementations
---
### impl Clone for HttpRouteMatch
#### fn clone(&self) -> HttpRouteMatch
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HttpRouteMatch
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HttpRouteMatch
#### fn default() -> HttpRouteMatch
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for HttpRouteMatch
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<HttpRouteMatch> for HttpRouteMatch
#### fn eq(&self, other: &HttpRouteMatch) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &HttpRouteMatch) -> bool
This method tests for `!=`.
### impl Serialize for HttpRouteMatch
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for HttpRouteMatch
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpRouteMatch
### impl Send for HttpRouteMatch
### impl Sync for HttpRouteMatch
### impl Unpin for HttpRouteMatch
### impl UnwindSafe for HttpRouteMatch
Blanket Implementations
---
Identical to the blanket implementations listed under `HttpGatewayRouteHeader` above.
Struct rusoto_appmesh::HttpTimeout
===
```
pub struct HttpTimeout {
pub idle: Option<Duration>,
pub per_request: Option<Duration>,
}
```
An object that represents types of timeouts.
Fields
---
`idle: Option<Duration>`An object that represents an idle timeout. An idle timeout bounds the amount of time that a connection may be idle. The default value is none.
`per_request: Option<Duration>`An object that represents a per request timeout. The default value is 15 seconds. If you set a higher timeout, then make sure that the higher value is set for each App Mesh resource in a conversation. For example, if a virtual node backend uses a virtual router provider to route to another virtual node, then the timeout should be greater than 15 seconds for the source and destination virtual node and the route.
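A minimal construction sketch, assuming the crate's `Duration` shape of optional `unit` (`"ms"` or `"s"`) and `value` fields, which is not shown in this excerpt; the numbers are illustrative, not recommendations:

```
use rusoto_appmesh::{Duration, HttpTimeout};

// Sketch only: bound idle connections at 300 s and individual requests
// at 30 s. The Duration { unit, value } field shape is assumed from the
// usual rusoto pattern.
fn long_lived_http_timeout() -> HttpTimeout {
    HttpTimeout {
        idle: Some(Duration {
            unit: Some("s".to_string()),
            value: Some(300),
        }),
        per_request: Some(Duration {
            unit: Some("s".to_string()),
            value: Some(30),
        }),
    }
}
```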
Trait Implementations
---
source### impl Clone for HttpTimeout
source#### fn clone(&self) -> HttpTimeout
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for HttpTimeout
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for HttpTimeout
source#### fn default() -> HttpTimeout
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for HttpTimeout
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<HttpTimeout> for HttpTimeout
source#### fn eq(&self, other: &HttpTimeout) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &HttpTimeout) -> bool
This method tests for `!=`.
source### impl Serialize for HttpTimeout
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for HttpTimeout
Auto Trait Implementations
---
### impl RefUnwindSafe for HttpTimeout
### impl Send for HttpTimeout
### impl Sync for HttpTimeout
### impl Unpin for HttpTimeout
### impl UnwindSafe for HttpTimeout
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ListGatewayRoutesInput
===
```
pub struct ListGatewayRoutesInput {
pub limit: Option<i64>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub next_token: Option<String>,
pub virtual_gateway_name: String,
}
```
Fields
---
`limit: Option<i64>`The maximum number of results returned by `ListGatewayRoutes` in paginated output. When you use this parameter, `ListGatewayRoutes` returns only `limit` results in a single page along with a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListGatewayRoutes` request with the returned `nextToken` value. This value can be between 1 and 100. If you don't use this parameter, `ListGatewayRoutes` returns up to 100 results and a `nextToken` value if applicable.
`mesh_name: String`The name of the service mesh to list gateway routes in.
`mesh_owner: Option<String>`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`next_token: Option<String>`The `nextToken` value returned from a previous paginated `ListGatewayRoutes` request where `limit` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value.
`virtual_gateway_name: String`The name of the virtual gateway to list gateway routes in.
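A minimal call sketch, assuming the usual rusoto async client pattern (`AppMeshClient::new`, the `AppMesh` trait method, a tokio runtime); the mesh and gateway names are placeholders:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, ListGatewayRoutesInput};
use rusoto_core::Region;

// Fetch a single page of up to 25 gateway routes; `limit` must stay
// within 1..=100 per the field documentation above.
async fn first_gateway_routes_page() -> Result<(), Box<dyn std::error::Error>> {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = ListGatewayRoutesInput {
        mesh_name: "my-mesh".to_string(),               // placeholder
        virtual_gateway_name: "my-gateway".to_string(), // placeholder
        limit: Some(25),
        ..Default::default()
    };
    let output = client.list_gateway_routes(input).await?;
    for route in output.gateway_routes {
        println!("{:?}", route);
    }
    Ok(())
}
```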
Trait Implementations
---
source### impl Clone for ListGatewayRoutesInput
source#### fn clone(&self) -> ListGatewayRoutesInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListGatewayRoutesInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListGatewayRoutesInput
source#### fn default() -> ListGatewayRoutesInput
Returns the “default value” for a type. Read more
source### impl PartialEq<ListGatewayRoutesInput> for ListGatewayRoutesInput
source#### fn eq(&self, other: &ListGatewayRoutesInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListGatewayRoutesInput) -> bool
This method tests for `!=`.
source### impl Serialize for ListGatewayRoutesInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListGatewayRoutesInput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListGatewayRoutesInput
### impl Send for ListGatewayRoutesInput
### impl Sync for ListGatewayRoutesInput
### impl Unpin for ListGatewayRoutesInput
### impl UnwindSafe for ListGatewayRoutesInput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_appmesh::ListGatewayRoutesOutput
===
```
pub struct ListGatewayRoutesOutput {
pub gateway_routes: Vec<GatewayRouteRef>,
pub next_token: Option<String>,
}
```
Fields
---
`gateway_routes: Vec<GatewayRouteRef>`The list of existing gateway routes for the specified service mesh and virtual gateway.
`next_token: Option<String>`The `nextToken` value to include in a future `ListGatewayRoutes` request. When the results of a `ListGatewayRoutes` request exceed `limit`, you can use this value to retrieve the next page of results. This value is `null` when there are no more results to return.
Trait Implementations
---
source### impl Clone for ListGatewayRoutesOutput
source#### fn clone(&self) -> ListGatewayRoutesOutput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListGatewayRoutesOutput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListGatewayRoutesOutput
source#### fn default() -> ListGatewayRoutesOutput
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListGatewayRoutesOutput
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListGatewayRoutesOutput> for ListGatewayRoutesOutput
source#### fn eq(&self, other: &ListGatewayRoutesOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListGatewayRoutesOutput) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListGatewayRoutesOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListGatewayRoutesOutput
### impl Send for ListGatewayRoutesOutput
### impl Sync for ListGatewayRoutesOutput
### impl Unpin for ListGatewayRoutesOutput
### impl UnwindSafe for ListGatewayRoutesOutput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ListMeshesInput
===
```
pub struct ListMeshesInput {
pub limit: Option<i64>,
pub next_token: Option<String>,
}
```
Fields
---
`limit: Option<i64>`The maximum number of results returned by `ListMeshes` in paginated output. When you use this parameter, `ListMeshes` returns only `limit` results in a single page along with a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListMeshes` request with the returned `nextToken` value. This value can be between 1 and 100. If you don't use this parameter, `ListMeshes` returns up to 100 results and a `nextToken` value if applicable.
`next_token: Option<String>`The `nextToken` value returned from a previous paginated `ListMeshes` request where `limit` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value.
This token should be treated as an opaque identifier that is used only to retrieve the next items in a list and not for other programmatic purposes.
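Since the token is opaque, the only correct use is to feed it back verbatim until it comes back as `None`. A pagination sketch under the usual rusoto async assumptions (tokio runtime, `AppMesh` trait in scope):

```
use rusoto_appmesh::{AppMesh, AppMeshClient, ListMeshesInput, MeshRef};

// Walk every page of ListMeshes, treating next_token as an opaque
// cursor that is only ever echoed back unchanged.
async fn all_meshes(client: &AppMeshClient) -> Result<Vec<MeshRef>, Box<dyn std::error::Error>> {
    let mut meshes = Vec::new();
    let mut next_token: Option<String> = None;
    loop {
        let output = client
            .list_meshes(ListMeshesInput {
                limit: Some(100),
                next_token: next_token.take(),
            })
            .await?;
        meshes.extend(output.meshes);
        next_token = output.next_token;
        if next_token.is_none() {
            break; // null token: no more results
        }
    }
    Ok(meshes)
}
```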
Trait Implementations
---
source### impl Clone for ListMeshesInput
source#### fn clone(&self) -> ListMeshesInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListMeshesInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListMeshesInput
source#### fn default() -> ListMeshesInput
Returns the “default value” for a type. Read more
source### impl PartialEq<ListMeshesInput> for ListMeshesInput
source#### fn eq(&self, other: &ListMeshesInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListMeshesInput) -> bool
This method tests for `!=`.
source### impl Serialize for ListMeshesInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListMeshesInput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListMeshesInput
### impl Send for ListMeshesInput
### impl Sync for ListMeshesInput
### impl Unpin for ListMeshesInput
### impl UnwindSafe for ListMeshesInput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_appmesh::ListMeshesOutput
===
```
pub struct ListMeshesOutput {
pub meshes: Vec<MeshRef>,
pub next_token: Option<String>,
}
```
Fields
---
`meshes: Vec<MeshRef>`The list of existing service meshes.
`next_token: Option<String>`The `nextToken` value to include in a future `ListMeshes` request. When the results of a `ListMeshes` request exceed `limit`, you can use this value to retrieve the next page of results. This value is `null` when there are no more results to return.
Trait Implementations
---
source### impl Clone for ListMeshesOutput
source#### fn clone(&self) -> ListMeshesOutput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListMeshesOutput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListMeshesOutput
source#### fn default() -> ListMeshesOutput
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListMeshesOutput
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListMeshesOutput> for ListMeshesOutput
source#### fn eq(&self, other: &ListMeshesOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListMeshesOutput) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListMeshesOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListMeshesOutput
### impl Send for ListMeshesOutput
### impl Sync for ListMeshesOutput
### impl Unpin for ListMeshesOutput
### impl UnwindSafe for ListMeshesOutput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ListRoutesInput
===
```
pub struct ListRoutesInput {
pub limit: Option<i64>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub next_token: Option<String>,
pub virtual_router_name: String,
}
```
Fields
---
`limit: Option<i64>`The maximum number of results returned by `ListRoutes` in paginated output. When you use this parameter, `ListRoutes` returns only `limit` results in a single page along with a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListRoutes` request with the returned `nextToken` value. This value can be between 1 and 100. If you don't use this parameter, `ListRoutes` returns up to 100 results and a `nextToken` value if applicable.
`mesh_name: String`The name of the service mesh to list routes in.
`mesh_owner: Option<String>`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`next_token: Option<String>`The `nextToken` value returned from a previous paginated `ListRoutes` request where `limit` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value.
`virtual_router_name: String`The name of the virtual router to list routes in.
Trait Implementations
---
source### impl Clone for ListRoutesInput
source#### fn clone(&self) -> ListRoutesInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListRoutesInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListRoutesInput
source#### fn default() -> ListRoutesInput
Returns the “default value” for a type. Read more
source### impl PartialEq<ListRoutesInput> for ListRoutesInput
source#### fn eq(&self, other: &ListRoutesInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListRoutesInput) -> bool
This method tests for `!=`.
source### impl Serialize for ListRoutesInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListRoutesInput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListRoutesInput
### impl Send for ListRoutesInput
### impl Sync for ListRoutesInput
### impl Unpin for ListRoutesInput
### impl UnwindSafe for ListRoutesInput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_appmesh::ListRoutesOutput
===
```
pub struct ListRoutesOutput {
pub next_token: Option<String>,
pub routes: Vec<RouteRef>,
}
```
Fields
---
`next_token: Option<String>`The `nextToken` value to include in a future `ListRoutes` request. When the results of a `ListRoutes` request exceed `limit`, you can use this value to retrieve the next page of results. This value is `null` when there are no more results to return.
`routes: Vec<RouteRef>`The list of existing routes for the specified service mesh and virtual router.
Trait Implementations
---
source### impl Clone for ListRoutesOutput
source#### fn clone(&self) -> ListRoutesOutput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListRoutesOutput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListRoutesOutput
source#### fn default() -> ListRoutesOutput
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListRoutesOutput
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListRoutesOutput> for ListRoutesOutput
source#### fn eq(&self, other: &ListRoutesOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListRoutesOutput) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListRoutesOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListRoutesOutput
### impl Send for ListRoutesOutput
### impl Sync for ListRoutesOutput
### impl Unpin for ListRoutesOutput
### impl UnwindSafe for ListRoutesOutput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ListTagsForResourceInput
===
```
pub struct ListTagsForResourceInput {
pub limit: Option<i64>,
pub next_token: Option<String>,
pub resource_arn: String,
}
```
Fields
---
`limit: Option<i64>`The maximum number of tag results returned by `ListTagsForResource` in paginated output. When this parameter is used, `ListTagsForResource` returns only `limit` results in a single page along with a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListTagsForResource` request with the returned `nextToken` value. This value can be between 1 and 100. If you don't use this parameter, `ListTagsForResource` returns up to 100 results and a `nextToken` value if applicable.
`next_token: Option<String>`The `nextToken` value returned from a previous paginated `ListTagsForResource` request where `limit` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value.
`resource_arn: String`The Amazon Resource Name (ARN) that identifies the resource to list the tags for.
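A single-page sketch (same rusoto async assumptions as above); the ARN argument is whatever mesh, route, or node ARN you want to inspect:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, ListTagsForResourceInput};

// Print the first page of tags (up to the service default of 100)
// attached to the resource identified by `arn`.
async fn print_tags(client: &AppMeshClient, arn: &str) -> Result<(), Box<dyn std::error::Error>> {
    let output = client
        .list_tags_for_resource(ListTagsForResourceInput {
            resource_arn: arn.to_string(),
            ..Default::default()
        })
        .await?;
    for tag in output.tags {
        println!("{:?}", tag);
    }
    Ok(())
}
```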
Trait Implementations
---
source### impl Clone for ListTagsForResourceInput
source#### fn clone(&self) -> ListTagsForResourceInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListTagsForResourceInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListTagsForResourceInput
source#### fn default() -> ListTagsForResourceInput
Returns the “default value” for a type. Read more
source### impl PartialEq<ListTagsForResourceInput> for ListTagsForResourceInput
source#### fn eq(&self, other: &ListTagsForResourceInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListTagsForResourceInput) -> bool
This method tests for `!=`.
source### impl Serialize for ListTagsForResourceInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListTagsForResourceInput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceInput
### impl Send for ListTagsForResourceInput
### impl Sync for ListTagsForResourceInput
### impl Unpin for ListTagsForResourceInput
### impl UnwindSafe for ListTagsForResourceInput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_appmesh::ListTagsForResourceOutput
===
```
pub struct ListTagsForResourceOutput {
pub next_token: Option<String>,
pub tags: Vec<TagRef>,
}
```
Fields
---
`next_token: Option<String>`The `nextToken` value to include in a future `ListTagsForResource` request. When the results of a `ListTagsForResource` request exceed `limit`, you can use this value to retrieve the next page of results. This value is `null` when there are no more results to return.
`tags: Vec<TagRef>`The tags for the resource.
Trait Implementations
---
source### impl Clone for ListTagsForResourceOutput
source#### fn clone(&self) -> ListTagsForResourceOutput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListTagsForResourceOutput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListTagsForResourceOutput
source#### fn default() -> ListTagsForResourceOutput
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListTagsForResourceOutput
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListTagsForResourceOutput> for ListTagsForResourceOutput
source#### fn eq(&self, other: &ListTagsForResourceOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListTagsForResourceOutput) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListTagsForResourceOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceOutput
### impl Send for ListTagsForResourceOutput
### impl Sync for ListTagsForResourceOutput
### impl Unpin for ListTagsForResourceOutput
### impl UnwindSafe for ListTagsForResourceOutput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ListVirtualGatewaysInput
===
```
pub struct ListVirtualGatewaysInput {
pub limit: Option<i64>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub next_token: Option<String>,
}
```
Fields
---
`limit: Option<i64>`The maximum number of results returned by `ListVirtualGateways` in paginated output. When you use this parameter, `ListVirtualGateways` returns only `limit` results in a single page along with a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListVirtualGateways` request with the returned `nextToken` value. This value can be between 1 and 100. If you don't use this parameter, `ListVirtualGateways` returns up to 100 results and a `nextToken` value if applicable.
`mesh_name: String`The name of the service mesh to list virtual gateways in.
`mesh_owner: Option<String>`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`next_token: Option<String>`The `nextToken` value returned from a previous paginated `ListVirtualGateways` request where `limit` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value.
Trait Implementations
---
source### impl Clone for ListVirtualGatewaysInput
source#### fn clone(&self) -> ListVirtualGatewaysInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListVirtualGatewaysInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListVirtualGatewaysInput
source#### fn default() -> ListVirtualGatewaysInput
Returns the “default value” for a type. Read more
source### impl PartialEq<ListVirtualGatewaysInput> for ListVirtualGatewaysInput
source#### fn eq(&self, other: &ListVirtualGatewaysInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListVirtualGatewaysInput) -> bool
This method tests for `!=`.
source### impl Serialize for ListVirtualGatewaysInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListVirtualGatewaysInput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualGatewaysInput
### impl Send for ListVirtualGatewaysInput
### impl Sync for ListVirtualGatewaysInput
### impl Unpin for ListVirtualGatewaysInput
### impl UnwindSafe for ListVirtualGatewaysInput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_appmesh::ListVirtualGatewaysOutput
===
```
pub struct ListVirtualGatewaysOutput {
pub next_token: Option<String>,
pub virtual_gateways: Vec<VirtualGatewayRef>,
}
```
Fields
---
`next_token: Option<String>`The `nextToken` value to include in a future `ListVirtualGateways` request. When the results of a `ListVirtualGateways` request exceed `limit`, you can use this value to retrieve the next page of results. This value is `null` when there are no more results to return.
`virtual_gateways: Vec<VirtualGatewayRef>`The list of existing virtual gateways for the specified service mesh.
Trait Implementations
---
source### impl Clone for ListVirtualGatewaysOutput
source#### fn clone(&self) -> ListVirtualGatewaysOutput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListVirtualGatewaysOutput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListVirtualGatewaysOutput
source#### fn default() -> ListVirtualGatewaysOutput
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListVirtualGatewaysOutput
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListVirtualGatewaysOutput> for ListVirtualGatewaysOutput
source#### fn eq(&self, other: &ListVirtualGatewaysOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListVirtualGatewaysOutput) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListVirtualGatewaysOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualGatewaysOutput
### impl Send for ListVirtualGatewaysOutput
### impl Sync for ListVirtualGatewaysOutput
### impl Unpin for ListVirtualGatewaysOutput
### impl UnwindSafe for ListVirtualGatewaysOutput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ListVirtualNodesInput
===
```
pub struct ListVirtualNodesInput {
pub limit: Option<i64>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub next_token: Option<String>,
}
```
Fields
---
`limit: Option<i64>`The maximum number of results returned by `ListVirtualNodes` in paginated output. When you use this parameter, `ListVirtualNodes` returns only `limit` results in a single page along with a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListVirtualNodes` request with the returned `nextToken` value. This value can be between 1 and 100. If you don't use this parameter, `ListVirtualNodes` returns up to 100 results and a `nextToken` value if applicable.
`mesh_name: String`The name of the service mesh to list virtual nodes in.
`mesh_owner: Option<String>`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`next_token: Option<String>`The `nextToken` value returned from a previous paginated `ListVirtualNodes` request where `limit` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value.
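A sketch of the shared-mesh case (same rusoto async assumptions as above): when the mesh lives in another account, `mesh_owner` carries that account's ID; the values below are placeholders:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, ListVirtualNodesInput};

// List virtual nodes in a mesh shared into this account by its owner.
async fn shared_mesh_nodes(client: &AppMeshClient) -> Result<(), Box<dyn std::error::Error>> {
    let input = ListVirtualNodesInput {
        mesh_name: "shared-mesh".to_string(),         // placeholder
        mesh_owner: Some("111122223333".to_string()), // owner account ID
        ..Default::default()
    };
    for node in client.list_virtual_nodes(input).await?.virtual_nodes {
        println!("{:?}", node);
    }
    Ok(())
}
```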
Trait Implementations
---
### impl Clone for ListVirtualNodesInput
#### fn clone(&self) -> ListVirtualNodesInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListVirtualNodesInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListVirtualNodesInput
#### fn default() -> ListVirtualNodesInput
Returns the “default value” for a type.
### impl PartialEq<ListVirtualNodesInput> for ListVirtualNodesInput
#### fn eq(&self, other: &ListVirtualNodesInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualNodesInput) -> bool
This method tests for `!=`.
### impl Serialize for ListVirtualNodesInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ListVirtualNodesInput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualNodesInput
### impl Send for ListVirtualNodesInput
### impl Sync for ListVirtualNodesInput
### impl Unpin for ListVirtualNodesInput
### impl UnwindSafe for ListVirtualNodesInput
Struct rusoto_appmesh::ListVirtualNodesOutput
===
```
pub struct ListVirtualNodesOutput {
pub next_token: Option<String>,
pub virtual_nodes: Vec<VirtualNodeRef>,
}
```
Fields
---
`next_token: Option<String>`
The `nextToken` value to include in a future `ListVirtualNodes` request. When the results of a `ListVirtualNodes` request exceed `limit`, you can use this value to retrieve the next page of results. This value is `null` when there are no more results to return.
`virtual_nodes: Vec<VirtualNodeRef>`
The list of existing virtual nodes for the specified service mesh.
Trait Implementations
---
### impl Clone for ListVirtualNodesOutput
#### fn clone(&self) -> ListVirtualNodesOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListVirtualNodesOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListVirtualNodesOutput
#### fn default() -> ListVirtualNodesOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ListVirtualNodesOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ListVirtualNodesOutput> for ListVirtualNodesOutput
#### fn eq(&self, other: &ListVirtualNodesOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualNodesOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListVirtualNodesOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualNodesOutput
### impl Send for ListVirtualNodesOutput
### impl Sync for ListVirtualNodesOutput
### impl Unpin for ListVirtualNodesOutput
### impl UnwindSafe for ListVirtualNodesOutput
Struct rusoto_appmesh::ListVirtualRoutersInput
===
```
pub struct ListVirtualRoutersInput {
pub limit: Option<i64>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub next_token: Option<String>,
}
```
Fields
---
`limit: Option<i64>`
The maximum number of results returned by `ListVirtualRouters` in paginated output. When you use this parameter, `ListVirtualRouters` returns only `limit` results in a single page along with a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListVirtualRouters` request with the returned `nextToken` value. This value can be between 1 and 100. If you don't use this parameter, `ListVirtualRouters` returns up to 100 results and a `nextToken` value if applicable.
`mesh_name: String`
The name of the service mesh to list virtual routers in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`next_token: Option<String>`
The `nextToken` value returned from a previous paginated `ListVirtualRouters` request where `limit` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value.
Trait Implementations
---
### impl Clone for ListVirtualRoutersInput
#### fn clone(&self) -> ListVirtualRoutersInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListVirtualRoutersInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListVirtualRoutersInput
#### fn default() -> ListVirtualRoutersInput
Returns the “default value” for a type.
### impl PartialEq<ListVirtualRoutersInput> for ListVirtualRoutersInput
#### fn eq(&self, other: &ListVirtualRoutersInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualRoutersInput) -> bool
This method tests for `!=`.
### impl Serialize for ListVirtualRoutersInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ListVirtualRoutersInput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualRoutersInput
### impl Send for ListVirtualRoutersInput
### impl Sync for ListVirtualRoutersInput
### impl Unpin for ListVirtualRoutersInput
### impl UnwindSafe for ListVirtualRoutersInput
Struct rusoto_appmesh::ListVirtualRoutersOutput
===
```
pub struct ListVirtualRoutersOutput {
pub next_token: Option<String>,
pub virtual_routers: Vec<VirtualRouterRef>,
}
```
Fields
---
`next_token: Option<String>`
The `nextToken` value to include in a future `ListVirtualRouters` request. When the results of a `ListVirtualRouters` request exceed `limit`, you can use this value to retrieve the next page of results. This value is `null` when there are no more results to return.
`virtual_routers: Vec<VirtualRouterRef>`
The list of existing virtual routers for the specified service mesh.
Trait Implementations
---
### impl Clone for ListVirtualRoutersOutput
#### fn clone(&self) -> ListVirtualRoutersOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListVirtualRoutersOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListVirtualRoutersOutput
#### fn default() -> ListVirtualRoutersOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ListVirtualRoutersOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ListVirtualRoutersOutput> for ListVirtualRoutersOutput
#### fn eq(&self, other: &ListVirtualRoutersOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualRoutersOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListVirtualRoutersOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualRoutersOutput
### impl Send for ListVirtualRoutersOutput
### impl Sync for ListVirtualRoutersOutput
### impl Unpin for ListVirtualRoutersOutput
### impl UnwindSafe for ListVirtualRoutersOutput
Struct rusoto_appmesh::ListVirtualServicesInput
===
```
pub struct ListVirtualServicesInput {
pub limit: Option<i64>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub next_token: Option<String>,
}
```
Fields
---
`limit: Option<i64>`
The maximum number of results returned by `ListVirtualServices` in paginated output. When you use this parameter, `ListVirtualServices` returns only `limit` results in a single page along with a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListVirtualServices` request with the returned `nextToken` value. This value can be between 1 and 100. If you don't use this parameter, `ListVirtualServices` returns up to 100 results and a `nextToken` value if applicable.
`mesh_name: String`
The name of the service mesh to list virtual services in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`next_token: Option<String>`
The `nextToken` value returned from a previous paginated `ListVirtualServices` request where `limit` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value.
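The `mesh_owner` field is what makes cross-account (shared) meshes work. Below is a short sketch of listing services in a mesh that another account shared with you; the mesh name, account ID, and region are placeholders, and it assumes the async `AppMesh` trait and a `virtual_service_name` field on `VirtualServiceRef`.

```rust
use rusoto_appmesh::{AppMesh, AppMeshClient, ListVirtualServicesInput};
use rusoto_core::Region;

async fn print_shared_services() -> Result<(), Box<dyn std::error::Error>> {
    let client = AppMeshClient::new(Region::UsWest2);
    let page = client
        .list_virtual_services(ListVirtualServicesInput {
            mesh_name: "shared-mesh".to_owned(),
            mesh_owner: Some("111122223333".to_owned()), // account that shared the mesh
            ..Default::default() // limit and next_token unset: up to 100 results
        })
        .await?;
    for service in page.virtual_services {
        println!("{}", service.virtual_service_name);
    }
    Ok(())
}
```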
Trait Implementations
---
### impl Clone for ListVirtualServicesInput
#### fn clone(&self) -> ListVirtualServicesInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListVirtualServicesInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListVirtualServicesInput
#### fn default() -> ListVirtualServicesInput
Returns the “default value” for a type.
### impl PartialEq<ListVirtualServicesInput> for ListVirtualServicesInput
#### fn eq(&self, other: &ListVirtualServicesInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualServicesInput) -> bool
This method tests for `!=`.
### impl Serialize for ListVirtualServicesInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ListVirtualServicesInput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualServicesInput
### impl Send for ListVirtualServicesInput
### impl Sync for ListVirtualServicesInput
### impl Unpin for ListVirtualServicesInput
### impl UnwindSafe for ListVirtualServicesInput
Struct rusoto_appmesh::ListVirtualServicesOutput
===
```
pub struct ListVirtualServicesOutput {
pub next_token: Option<String>,
pub virtual_services: Vec<VirtualServiceRef>,
}
```
Fields
---
`next_token: Option<String>`
The `nextToken` value to include in a future `ListVirtualServices` request. When the results of a `ListVirtualServices` request exceed `limit`, you can use this value to retrieve the next page of results. This value is `null` when there are no more results to return.
`virtual_services: Vec<VirtualServiceRef>`
The list of existing virtual services for the specified service mesh.
Trait Implementations
---
### impl Clone for ListVirtualServicesOutput
#### fn clone(&self) -> ListVirtualServicesOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListVirtualServicesOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListVirtualServicesOutput
#### fn default() -> ListVirtualServicesOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ListVirtualServicesOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ListVirtualServicesOutput> for ListVirtualServicesOutput
#### fn eq(&self, other: &ListVirtualServicesOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualServicesOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListVirtualServicesOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualServicesOutput
### impl Send for ListVirtualServicesOutput
### impl Sync for ListVirtualServicesOutput
### impl Unpin for ListVirtualServicesOutput
### impl UnwindSafe for ListVirtualServicesOutput
Struct rusoto_appmesh::Listener
===
```
pub struct Listener {
pub connection_pool: Option<VirtualNodeConnectionPool>,
pub health_check: Option<HealthCheckPolicy>,
pub outlier_detection: Option<OutlierDetection>,
pub port_mapping: PortMapping,
pub timeout: Option<ListenerTimeout>,
pub tls: Option<ListenerTls>,
}
```
An object that represents a listener for a virtual node.
Fields
---
`connection_pool: Option<VirtualNodeConnectionPool>`
The connection pool information for the listener.
`health_check: Option<HealthCheckPolicy>`
The health check information for the listener.
`outlier_detection: Option<OutlierDetection>`
The outlier detection information for the listener.
`port_mapping: PortMapping`
The port mapping information for the listener.
`timeout: Option<ListenerTimeout>`
An object that represents timeouts for different protocols.
`tls: Option<ListenerTls>`
A reference to an object that represents the Transport Layer Security (TLS) properties for a listener.
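To make the field layout concrete, here is a hedged sketch that assembles a `Listener` with an HTTP port mapping and health check. The `PortMapping` and `HealthCheckPolicy` field names are assumed from this crate's model types, and all values are illustrative.

```rust
use rusoto_appmesh::{HealthCheckPolicy, Listener, PortMapping};

fn http_listener() -> Listener {
    Listener {
        port_mapping: PortMapping {
            port: 8080,
            protocol: "http".to_owned(),
        },
        health_check: Some(HealthCheckPolicy {
            protocol: "http".to_owned(),
            path: Some("/health".to_owned()),
            port: Some(8080),
            healthy_threshold: 2,    // consecutive successes to mark healthy
            unhealthy_threshold: 3,  // consecutive failures to mark unhealthy
            interval_millis: 5000,
            timeout_millis: 2000,
        }),
        // connection_pool, outlier_detection, timeout, and tls stay None
        ..Default::default()
    }
}
```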
Trait Implementations
---
### impl Clone for Listener
#### fn clone(&self) -> Listener
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Listener
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Listener
#### fn default() -> Listener
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for Listener
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<Listener> for Listener
#### fn eq(&self, other: &Listener) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Listener) -> bool
This method tests for `!=`.
### impl Serialize for Listener
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for Listener
Auto Trait Implementations
---
### impl RefUnwindSafe for Listener
### impl Send for Listener
### impl Sync for Listener
### impl Unpin for Listener
### impl UnwindSafe for Listener
Struct rusoto_appmesh::ListenerTimeout
===
```
pub struct ListenerTimeout {
pub grpc: Option<GrpcTimeout>,
pub http: Option<HttpTimeout>,
pub http_2: Option<HttpTimeout>,
pub tcp: Option<TcpTimeout>,
}
```
An object that represents timeouts for different protocols.
Fields
---
`grpc: Option<GrpcTimeout>`
An object that represents types of timeouts.
`http: Option<HttpTimeout>`
An object that represents types of timeouts.
`http_2: Option<HttpTimeout>`
An object that represents types of timeouts.
`tcp: Option<TcpTimeout>`
An object that represents types of timeouts.
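A minimal sketch of a `ListenerTimeout` that gives plain HTTP traffic a 15-second idle timeout while leaving the other protocols unset. The `Duration` shape (`unit`/`value`) and `HttpTimeout` fields are assumed from this crate's model types.

```rust
use rusoto_appmesh::{Duration, HttpTimeout, ListenerTimeout};

fn http_idle_timeout() -> ListenerTimeout {
    ListenerTimeout {
        http: Some(HttpTimeout {
            idle: Some(Duration {
                unit: Some("s".to_owned()), // seconds; "ms" is the other unit
                value: Some(15),
            }),
            per_request: None,
        }),
        ..Default::default() // grpc, http_2, and tcp timeouts left unset
    }
}
```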
Trait Implementations
---
### impl Clone for ListenerTimeout
#### fn clone(&self) -> ListenerTimeout
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListenerTimeout
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListenerTimeout
#### fn default() -> ListenerTimeout
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ListenerTimeout
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ListenerTimeout> for ListenerTimeout
#### fn eq(&self, other: &ListenerTimeout) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListenerTimeout) -> bool
This method tests for `!=`.
### impl Serialize for ListenerTimeout
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ListenerTimeout
Auto Trait Implementations
---
### impl RefUnwindSafe for ListenerTimeout
### impl Send for ListenerTimeout
### impl Sync for ListenerTimeout
### impl Unpin for ListenerTimeout
### impl UnwindSafe for ListenerTimeout
Struct rusoto_appmesh::ListenerTls
===
```
pub struct ListenerTls {
pub certificate: ListenerTlsCertificate,
pub mode: String,
pub validation: Option<ListenerTlsValidationContext>,
}
```
An object that represents the Transport Layer Security (TLS) properties for a listener.
Fields
---
`certificate: ListenerTlsCertificate`
A reference to an object that represents a listener's Transport Layer Security (TLS) certificate.
`mode: String`
Specify one of the following modes.
* STRICT – Listener only accepts connections with TLS enabled.
* PERMISSIVE – Listener accepts connections with or without TLS enabled.
* DISABLED – Listener only accepts connections without TLS.
`validation: Option<ListenerTlsValidationContext>`
A reference to an object that represents a listener's Transport Layer Security (TLS) validation context.
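A sketch of a strict-mode `ListenerTls` backed by an ACM certificate, using only the structs documented on this page; pass your own certificate ARN. `PERMISSIVE` and `DISABLED` work the same way, only `mode` changes.

```rust
use rusoto_appmesh::{ListenerTls, ListenerTlsAcmCertificate, ListenerTlsCertificate};

fn strict_acm_tls(certificate_arn: &str) -> ListenerTls {
    ListenerTls {
        mode: "STRICT".to_owned(), // or "PERMISSIVE" / "DISABLED"
        certificate: ListenerTlsCertificate {
            acm: Some(ListenerTlsAcmCertificate {
                certificate_arn: certificate_arn.to_owned(),
            }),
            ..Default::default() // file and sds sources unused here
        },
        validation: None,
    }
}
```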
Trait Implementations
---
### impl Clone for ListenerTls
#### fn clone(&self) -> ListenerTls
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListenerTls
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListenerTls
#### fn default() -> ListenerTls
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ListenerTls
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ListenerTls> for ListenerTls
#### fn eq(&self, other: &ListenerTls) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListenerTls) -> bool
This method tests for `!=`.
### impl Serialize for ListenerTls
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ListenerTls
Auto Trait Implementations
---
### impl RefUnwindSafe for ListenerTls
### impl Send for ListenerTls
### impl Sync for ListenerTls
### impl Unpin for ListenerTls
### impl UnwindSafe for ListenerTls
Struct rusoto_appmesh::ListenerTlsAcmCertificate
===
```
pub struct ListenerTlsAcmCertificate {
pub certificate_arn: String,
}
```
An object that represents an AWS Certificate Manager (ACM) certificate.
Fields
---
`certificate_arn: String`
The Amazon Resource Name (ARN) for the certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
Trait Implementations
---
### impl Clone for ListenerTlsAcmCertificate
#### fn clone(&self) -> ListenerTlsAcmCertificate
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListenerTlsAcmCertificate
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListenerTlsAcmCertificate
#### fn default() -> ListenerTlsAcmCertificate
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ListenerTlsAcmCertificate
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ListenerTlsAcmCertificate> for ListenerTlsAcmCertificate
#### fn eq(&self, other: &ListenerTlsAcmCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListenerTlsAcmCertificate) -> bool
This method tests for `!=`.
### impl Serialize for ListenerTlsAcmCertificate
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ListenerTlsAcmCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for ListenerTlsAcmCertificate
### impl Send for ListenerTlsAcmCertificate
### impl Sync for ListenerTlsAcmCertificate
### impl Unpin for ListenerTlsAcmCertificate
### impl UnwindSafe for ListenerTlsAcmCertificate
Struct rusoto_appmesh::ListenerTlsCertificate
===
```
pub struct ListenerTlsCertificate {
pub acm: Option<ListenerTlsAcmCertificate>,
pub file: Option<ListenerTlsFileCertificate>,
pub sds: Option<ListenerTlsSdsCertificate>,
}
```
An object that represents a listener's Transport Layer Security (TLS) certificate.
Fields
---
`acm: Option<ListenerTlsAcmCertificate>`
A reference to an object that represents an AWS Certificate Manager (ACM) certificate.
`file: Option<ListenerTlsFileCertificate>`
A reference to an object that represents a local file certificate.
`sds: Option<ListenerTlsSdsCertificate>`
A reference to an object that represents a listener's Secret Discovery Service certificate.
Trait Implementations
---
### impl Clone for ListenerTlsCertificate
#### fn clone(&self) -> ListenerTlsCertificate
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListenerTlsCertificate
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListenerTlsCertificate
#### fn default() -> ListenerTlsCertificate
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ListenerTlsCertificate
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ListenerTlsCertificate> for ListenerTlsCertificate
#### fn eq(&self, other: &ListenerTlsCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListenerTlsCertificate) -> bool
This method tests for `!=`.
### impl Serialize for ListenerTlsCertificate
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ListenerTlsCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for ListenerTlsCertificate
### impl Send for ListenerTlsCertificate
### impl Sync for ListenerTlsCertificate
### impl Unpin for ListenerTlsCertificate
### impl UnwindSafe for ListenerTlsCertificate
Struct rusoto_appmesh::ListenerTlsFileCertificate
===
```
pub struct ListenerTlsFileCertificate {
pub certificate_chain: String,
pub private_key: String,
}
```
An object that represents a local file certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
Fields
---
`certificate_chain: String`
The certificate chain for the certificate.
`private_key: String`
The private key for a certificate stored on the file system of the virtual node that the proxy is running on.
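For comparison with the ACM variant shown earlier, here is a sketch of a file-based certificate, such as one mounted into the Envoy proxy container; the paths are illustrative.

```rust
use rusoto_appmesh::{ListenerTlsCertificate, ListenerTlsFileCertificate};

fn file_certificate() -> ListenerTlsCertificate {
    ListenerTlsCertificate {
        file: Some(ListenerTlsFileCertificate {
            certificate_chain: "/etc/certs/cert_chain.pem".to_owned(),
            private_key: "/etc/certs/private_key.pem".to_owned(),
        }),
        ..Default::default() // acm and sds variants unset
    }
}
```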
Trait Implementations
---
### impl Clone for ListenerTlsFileCertificate
#### fn clone(&self) -> ListenerTlsFileCertificate
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ListenerTlsFileCertificate
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ListenerTlsFileCertificate
#### fn default() -> ListenerTlsFileCertificate
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ListenerTlsFileCertificate
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ListenerTlsFileCertificate> for ListenerTlsFileCertificate
#### fn eq(&self, other: &ListenerTlsFileCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListenerTlsFileCertificate) -> bool
This method tests for `!=`.
### impl Serialize for ListenerTlsFileCertificate
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ListenerTlsFileCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for ListenerTlsFileCertificate
### impl Send for ListenerTlsFileCertificate
### impl Sync for ListenerTlsFileCertificate
### impl Unpin for ListenerTlsFileCertificate
### impl UnwindSafe for ListenerTlsFileCertificate
Struct rusoto_appmesh::ListenerTlsSdsCertificate
===
```
pub struct ListenerTlsSdsCertificate {
pub secret_name: String,
}
```
An object that represents the listener's Secret Discovery Service certificate. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
Fields
---
`secret_name: String`A reference to an object that represents the name of the secret requested from the Secret Discovery Service provider representing Transport Layer Security (TLS) materials like a certificate or certificate chain.
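A minimal construction sketch; the secret name is illustrative:

```
use rusoto_appmesh::ListenerTlsSdsCertificate;

// Ask the local SDS provider (reachable over a Unix Domain Socket)
// for TLS materials stored under this secret name.
let certificate = ListenerTlsSdsCertificate {
    secret_name: "my-listener-tls-secret".to_string(),
};
```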
Trait Implementations
---
source### impl Clone for ListenerTlsSdsCertificate
source#### fn clone(&self) -> ListenerTlsSdsCertificate
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListenerTlsSdsCertificate
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListenerTlsSdsCertificate
source#### fn default() -> ListenerTlsSdsCertificate
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListenerTlsSdsCertificate
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListenerTlsSdsCertificate> for ListenerTlsSdsCertificate
source#### fn eq(&self, other: &ListenerTlsSdsCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListenerTlsSdsCertificate) -> bool
This method tests for `!=`.
source### impl Serialize for ListenerTlsSdsCertificate
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListenerTlsSdsCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for ListenerTlsSdsCertificate
### impl Send for ListenerTlsSdsCertificate
### impl Sync for ListenerTlsSdsCertificate
### impl Unpin for ListenerTlsSdsCertificate
### impl UnwindSafe for ListenerTlsSdsCertificate
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ListenerTlsValidationContext
===
```
pub struct ListenerTlsValidationContext {
pub subject_alternative_names: Option<SubjectAlternativeNames>,
pub trust: ListenerTlsValidationContextTrust,
}
```
An object that represents a listener's Transport Layer Security (TLS) validation context.
Fields
---
`subject_alternative_names: Option<SubjectAlternativeNames>`A reference to an object that represents the SANs for a listener's Transport Layer Security (TLS) validation context.
`trust: ListenerTlsValidationContextTrust`A reference to where to retrieve the trust chain when validating a peer’s Transport Layer Security (TLS) certificate.
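A construction sketch using file-based trust; the `certificate_chain` field follows the `TlsValidationContextFileTrust` page elsewhere in this crate, and the path is illustrative:

```
use rusoto_appmesh::{
    ListenerTlsValidationContext, ListenerTlsValidationContextTrust,
    TlsValidationContextFileTrust,
};

// Validate peer certificates against a local CA bundle, with no
// restriction on subject alternative names.
let validation = ListenerTlsValidationContext {
    subject_alternative_names: None,
    trust: ListenerTlsValidationContextTrust {
        file: Some(TlsValidationContextFileTrust {
            certificate_chain: "/etc/ssl/ca_bundle.pem".to_string(),
        }),
        sds: None,
    },
};
```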
Trait Implementations
---
source### impl Clone for ListenerTlsValidationContext
source#### fn clone(&self) -> ListenerTlsValidationContext
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListenerTlsValidationContext
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListenerTlsValidationContext
source#### fn default() -> ListenerTlsValidationContext
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListenerTlsValidationContext
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListenerTlsValidationContext> for ListenerTlsValidationContext
source#### fn eq(&self, other: &ListenerTlsValidationContext) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListenerTlsValidationContext) -> bool
This method tests for `!=`.
source### impl Serialize for ListenerTlsValidationContext
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListenerTlsValidationContext
Auto Trait Implementations
---
### impl RefUnwindSafe for ListenerTlsValidationContext
### impl Send for ListenerTlsValidationContext
### impl Sync for ListenerTlsValidationContext
### impl Unpin for ListenerTlsValidationContext
### impl UnwindSafe for ListenerTlsValidationContext
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ListenerTlsValidationContextTrust
===
```
pub struct ListenerTlsValidationContextTrust {
pub file: Option<TlsValidationContextFileTrust>,
pub sds: Option<TlsValidationContextSdsTrust>,
}
```
An object that represents a listener's Transport Layer Security (TLS) validation context trust.
Fields
---
`file: Option<TlsValidationContextFileTrust>`An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
`sds: Option<TlsValidationContextSdsTrust>`A reference to an object that represents a listener's Transport Layer Security (TLS) Secret Discovery Service validation context trust.
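A sketch of the SDS variant; exactly one of `file` or `sds` should be set, and the `secret_name` field follows the SDS structs in this crate:

```
use rusoto_appmesh::{ListenerTlsValidationContextTrust, TlsValidationContextSdsTrust};

// SDS-provided trust; leave the file-based variant unset.
let trust = ListenerTlsValidationContextTrust {
    file: None,
    sds: Some(TlsValidationContextSdsTrust {
        secret_name: "listener-validation-secret".to_string(),
    }),
};
```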
Trait Implementations
---
source### impl Clone for ListenerTlsValidationContextTrust
source#### fn clone(&self) -> ListenerTlsValidationContextTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListenerTlsValidationContextTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListenerTlsValidationContextTrust
source#### fn default() -> ListenerTlsValidationContextTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListenerTlsValidationContextTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListenerTlsValidationContextTrust> for ListenerTlsValidationContextTrust
source#### fn eq(&self, other: &ListenerTlsValidationContextTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListenerTlsValidationContextTrust) -> bool
This method tests for `!=`.
source### impl Serialize for ListenerTlsValidationContextTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListenerTlsValidationContextTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for ListenerTlsValidationContextTrust
### impl Send for ListenerTlsValidationContextTrust
### impl Sync for ListenerTlsValidationContextTrust
### impl Unpin for ListenerTlsValidationContextTrust
### impl UnwindSafe for ListenerTlsValidationContextTrust
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::Logging
===
```
pub struct Logging {
pub access_log: Option<AccessLog>,
}
```
An object that represents the logging information for a virtual node.
Fields
---
`access_log: Option<AccessLog>`The access log configuration for a virtual node.
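A sketch that routes Envoy access logs to stdout; the `AccessLog`/`FileAccessLog` shapes follow their own pages in this crate, so treat the nested field names as assumptions:

```
use rusoto_appmesh::{AccessLog, FileAccessLog, Logging};

// Write access logs to stdout so they appear in the container logs.
// The `..Default::default()` guards cover any optional fields not set here.
let logging = Logging {
    access_log: Some(AccessLog {
        file: Some(FileAccessLog {
            path: "/dev/stdout".to_string(),
            ..Default::default()
        }),
        ..Default::default()
    }),
};
```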
Trait Implementations
---
source### impl Clone for Logging
source#### fn clone(&self) -> Logging
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Logging
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for Logging
source#### fn default() -> Logging
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for Logging
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<Logging> for Logging
source#### fn eq(&self, other: &Logging) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Logging) -> bool
This method tests for `!=`.
source### impl Serialize for Logging
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for Logging
Auto Trait Implementations
---
### impl RefUnwindSafe for Logging
### impl Send for Logging
### impl Sync for Logging
### impl Unpin for Logging
### impl UnwindSafe for Logging
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::MatchRange
===
```
pub struct MatchRange {
pub end: i64,
pub start: i64,
}
```
An object that represents the range of values to match on. The first character of the range is included in the range, though the last character is not. For example, if the range specified were 1-100, only values 1-99 would be matched.
Fields
---
`end: i64`The end of the range.
`start: i64`The start of the range.
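A short sketch of the half-open semantics described above:

```
use rusoto_appmesh::MatchRange;

// start is inclusive, end is exclusive: this matches 1 through 99.
let range = MatchRange { start: 1, end: 100 };
```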
Trait Implementations
---
source### impl Clone for MatchRange
source#### fn clone(&self) -> MatchRange
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for MatchRange
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for MatchRange
source#### fn default() -> MatchRange
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for MatchRange
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<MatchRange> for MatchRange
source#### fn eq(&self, other: &MatchRange) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &MatchRange) -> bool
This method tests for `!=`.
source### impl Serialize for MatchRange
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for MatchRange
Auto Trait Implementations
---
### impl RefUnwindSafe for MatchRange
### impl Send for MatchRange
### impl Sync for MatchRange
### impl Unpin for MatchRange
### impl UnwindSafe for MatchRange
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::MeshData
===
```
pub struct MeshData {
pub mesh_name: String,
pub metadata: ResourceMetadata,
pub spec: MeshSpec,
pub status: MeshStatus,
}
```
An object that represents a service mesh returned by a describe operation.
Fields
---
`mesh_name: String`The name of the service mesh.
`metadata: ResourceMetadata`The associated metadata for the service mesh.
`spec: MeshSpec`The associated specification for the service mesh.
`status: MeshStatus`The status of the service mesh.
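MeshData is returned by describe operations rather than built by callers; a small consumption sketch:

```
use rusoto_appmesh::MeshData;

// Summarize a mesh returned by a describe call.
fn summarize(mesh: &MeshData) {
    println!(
        "mesh {} (v{}) status: {:?}",
        mesh.mesh_name, mesh.metadata.version, mesh.status.status
    );
}
```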
Trait Implementations
---
source### impl Clone for MeshData
source#### fn clone(&self) -> MeshData
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for MeshData
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for MeshData
source#### fn default() -> MeshData
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for MeshData
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<MeshData> for MeshData
source#### fn eq(&self, other: &MeshData) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &MeshData) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for MeshData
Auto Trait Implementations
---
### impl RefUnwindSafe for MeshData
### impl Send for MeshData
### impl Sync for MeshData
### impl Unpin for MeshData
### impl UnwindSafe for MeshData
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::MeshRef
===
```
pub struct MeshRef {
pub arn: String,
pub created_at: f64,
pub last_updated_at: f64,
pub mesh_name: String,
pub mesh_owner: String,
pub resource_owner: String,
pub version: i64,
}
```
An object that represents a service mesh returned by a list operation.
Fields
---
`arn: String`The full Amazon Resource Name (ARN) of the service mesh.
`created_at: f64`The Unix epoch timestamp in seconds for when the resource was created.
`last_updated_at: f64`The Unix epoch timestamp in seconds for when the resource was last updated.
`mesh_name: String`The name of the service mesh.
`mesh_owner: String`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`resource_owner: String`The AWS IAM account ID of the resource owner. If the account ID is not your own, then it's the ID of the mesh owner or of another account that the mesh is shared with. For more information about mesh sharing, see Working with shared meshes.
`version: i64`The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
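MeshRef values come back from list operations; an illustrative consumption sketch:

```
use rusoto_appmesh::MeshRef;

// Print one line per mesh returned by a list call.
fn print_meshes(meshes: &[MeshRef]) {
    for mesh in meshes {
        println!("{} (v{}), owner {}", mesh.mesh_name, mesh.version, mesh.mesh_owner);
    }
}
```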
Trait Implementations
---
source### impl Clone for MeshRef
source#### fn clone(&self) -> MeshRef
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for MeshRef
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for MeshRef
source#### fn default() -> MeshRef
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for MeshRef
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<MeshRef> for MeshRef
source#### fn eq(&self, other: &MeshRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &MeshRef) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for MeshRef
Auto Trait Implementations
---
### impl RefUnwindSafe for MeshRef
### impl Send for MeshRef
### impl Sync for MeshRef
### impl Unpin for MeshRef
### impl UnwindSafe for MeshRef
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::MeshSpec
===
```
pub struct MeshSpec {
pub egress_filter: Option<EgressFilter>,
}
```
An object that represents the specification of a service mesh.
Fields
---
`egress_filter: Option<EgressFilter>`The egress filter rules for the service mesh.
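A minimal sketch; leaving `egress_filter` unset keeps the service mesh's default egress behavior:

```
use rusoto_appmesh::MeshSpec;

// No egress filter configured; see the EgressFilter page for the
// available filter types.
let spec = MeshSpec { egress_filter: None };
```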
Trait Implementations
---
source### impl Clone for MeshSpec
source#### fn clone(&self) -> MeshSpec
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for MeshSpec
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for MeshSpec
source#### fn default() -> MeshSpec
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for MeshSpec
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<MeshSpec> for MeshSpec
source#### fn eq(&self, other: &MeshSpec) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &MeshSpec) -> bool
This method tests for `!=`.
source### impl Serialize for MeshSpec
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for MeshSpec
Auto Trait Implementations
---
### impl RefUnwindSafe for MeshSpec
### impl Send for MeshSpec
### impl Sync for MeshSpec
### impl Unpin for MeshSpec
### impl UnwindSafe for MeshSpec
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::MeshStatus
===
```
pub struct MeshStatus {
pub status: Option<String>,
}
```
An object that represents the status of a service mesh.
Fields
---
`status: Option<String>`The current mesh status.
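The status is a plain string here; a small helper sketch, assuming the App Mesh status code `ACTIVE`:

```
use rusoto_appmesh::MeshStatus;

// True when the mesh reports the ACTIVE status code.
fn is_active(status: &MeshStatus) -> bool {
    status.status.as_deref() == Some("ACTIVE")
}
```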
Trait Implementations
---
source### impl Clone for MeshStatus
source#### fn clone(&self) -> MeshStatus
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for MeshStatus
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for MeshStatus
source#### fn default() -> MeshStatus
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for MeshStatus
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<MeshStatus> for MeshStatus
source#### fn eq(&self, other: &MeshStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &MeshStatus) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for MeshStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for MeshStatus
### impl Send for MeshStatus
### impl Sync for MeshStatus
### impl Unpin for MeshStatus
### impl UnwindSafe for MeshStatus
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::OutlierDetection
===
```
pub struct OutlierDetection {
pub base_ejection_duration: Duration,
pub interval: Duration,
pub max_ejection_percent: i64,
pub max_server_errors: i64,
}
```
An object that represents the outlier detection for a virtual node's listener.
Fields
---
`base_ejection_duration: Duration`The base amount of time for which a host is ejected.
`interval: Duration`The time interval between ejection sweep analysis.
`max_ejection_percent: i64`Maximum percentage of hosts in load balancing pool for upstream service that can be ejected. Will eject at least one host regardless of the value.
`max_server_errors: i64`Number of consecutive `5xx` errors required for ejection.
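An illustrative policy sketch; the `Duration` unit/value fields follow that struct's page elsewhere in this crate:

```
use rusoto_appmesh::{Duration, OutlierDetection};

// After 5 consecutive 5xx responses, eject a host for 30 seconds,
// sweeping every 10 seconds and ejecting at most half of the pool.
let outlier_detection = OutlierDetection {
    base_ejection_duration: Duration { unit: Some("s".to_string()), value: Some(30) },
    interval: Duration { unit: Some("s".to_string()), value: Some(10) },
    max_ejection_percent: 50,
    max_server_errors: 5,
};
```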
Trait Implementations
---
source### impl Clone for OutlierDetection
source#### fn clone(&self) -> OutlierDetection
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for OutlierDetection
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for OutlierDetection
source#### fn default() -> OutlierDetection
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for OutlierDetection
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<OutlierDetection> for OutlierDetection
source#### fn eq(&self, other: &OutlierDetection) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &OutlierDetection) -> bool
This method tests for `!=`.
source### impl Serialize for OutlierDetection
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for OutlierDetection
Auto Trait Implementations
---
### impl RefUnwindSafe for OutlierDetection
### impl Send for OutlierDetection
### impl Sync for OutlierDetection
### impl Unpin for OutlierDetection
### impl UnwindSafe for OutlierDetection
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::PortMapping
===
```
pub struct PortMapping {
pub port: i64,
pub protocol: String,
}
```
An object that represents a port mapping.
Fields
---
`port: i64`The port used for the port mapping.
`protocol: String`The protocol used for the port mapping. Specify one protocol.
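A minimal construction sketch:

```
use rusoto_appmesh::PortMapping;

// Listen for plain HTTP traffic on port 8080.
let port_mapping = PortMapping {
    port: 8080,
    protocol: "http".to_string(),
};
```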
Trait Implementations
---
source### impl Clone for PortMapping
source#### fn clone(&self) -> PortMapping
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PortMapping
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PortMapping
source#### fn default() -> PortMapping
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for PortMapping
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<PortMapping> for PortMapping
source#### fn eq(&self, other: &PortMapping) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PortMapping) -> bool
This method tests for `!=`.
source### impl Serialize for PortMapping
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for PortMapping
Auto Trait Implementations
---
### impl RefUnwindSafe for PortMapping
### impl Send for PortMapping
### impl Sync for PortMapping
### impl Unpin for PortMapping
### impl UnwindSafe for PortMapping
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::QueryParameterMatch
===
```
pub struct QueryParameterMatch {
pub exact: Option<String>,
}
```
An object representing the query parameter to match.
Fields
---
`exact: Option<String>`The exact query parameter to match on.
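A minimal sketch matching an exact query parameter value:

```
use rusoto_appmesh::QueryParameterMatch;

// Match requests whose query parameter value is exactly "blue".
let query_match = QueryParameterMatch {
    exact: Some("blue".to_string()),
};
```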
Trait Implementations
---
source### impl Clone for QueryParameterMatch
source#### fn clone(&self) -> QueryParameterMatch
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for QueryParameterMatch
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for QueryParameterMatch
source#### fn default() -> QueryParameterMatch
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for QueryParameterMatch
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<QueryParameterMatch> for QueryParameterMatch
source#### fn eq(&self, other: &QueryParameterMatch) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &QueryParameterMatch) -> bool
This method tests for `!=`.
source### impl Serialize for QueryParameterMatch
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for QueryParameterMatch
Auto Trait Implementations
---
### impl RefUnwindSafe for QueryParameterMatch
### impl Send for QueryParameterMatch
### impl Sync for QueryParameterMatch
### impl Unpin for QueryParameterMatch
### impl UnwindSafe for QueryParameterMatch
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::ResourceMetadata
===
```
pub struct ResourceMetadata {
pub arn: String,
pub created_at: f64,
pub last_updated_at: f64,
pub mesh_owner: String,
pub resource_owner: String,
pub uid: String,
pub version: i64,
}
```
An object that represents metadata for a resource.
Fields
---
`arn: String`The full Amazon Resource Name (ARN) for the resource.
`created_at: f64`The Unix epoch timestamp in seconds for when the resource was created.
`last_updated_at: f64`The Unix epoch timestamp in seconds for when the resource was last updated.
`mesh_owner: String`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`resource_owner: String`The AWS IAM account ID of the resource owner. If the account ID is not your own, then it's the ID of the mesh owner or of another account that the mesh is shared with. For more information about mesh sharing, see Working with shared meshes.
`uid: String`The unique identifier for the resource.
`version: i64`The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
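Because the owner fields distinguish who owns the mesh from who owns the resource, comparing them tells you whether you are working with a shared mesh. A minimal sketch (the metadata value would come from a describe call; the function name is illustrative):

```
use rusoto_appmesh::ResourceMetadata;

fn is_shared(metadata: &ResourceMetadata) -> bool {
    // When the mesh owner differs from the resource owner, the mesh
    // has been shared with (or by) another account.
    metadata.mesh_owner != metadata.resource_owner
}
```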
Trait Implementations
---
### impl Clone for ResourceMetadata
#### fn clone(&self) -> ResourceMetadata
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ResourceMetadata
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ResourceMetadata
#### fn default() -> ResourceMetadata
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ResourceMetadata
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ResourceMetadata> for ResourceMetadata
#### fn eq(&self, other: &ResourceMetadata) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ResourceMetadata) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ResourceMetadata
Auto Trait Implementations
---
### impl RefUnwindSafe for ResourceMetadata
### impl Send for ResourceMetadata
### impl Sync for ResourceMetadata
### impl Unpin for ResourceMetadata
### impl UnwindSafe for ResourceMetadata
Struct rusoto_appmesh::RouteData
===
```
pub struct RouteData {
pub mesh_name: String,
pub metadata: ResourceMetadata,
pub route_name: String,
pub spec: RouteSpec,
pub status: RouteStatus,
pub virtual_router_name: String,
}
```
An object that represents a route returned by a describe operation.
Fields
---
`mesh_name: String`The name of the service mesh that the route resides in.
`metadata: ResourceMetadata`The associated metadata for the route.
`route_name: String`The name of the route.
`spec: RouteSpec`The specifications of the route.
`status: RouteStatus`The status of the route.
`virtual_router_name: String`The virtual router that the route is associated with.
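A describe call returns this whole tree at once, so the spec and status can be read directly off the struct. A small sketch of pulling the pieces apart (the `route` value is assumed to come from a describe response):

```
use rusoto_appmesh::RouteData;

fn summarize(route: &RouteData) -> String {
    // Every field here is populated on a describe response; only the
    // spec's individual route types are optional.
    format!(
        "{}/{}/{}: status={}, version={}",
        route.mesh_name,
        route.virtual_router_name,
        route.route_name,
        route.status.status,
        route.metadata.version,
    )
}
```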
Trait Implementations
---
### impl Clone for RouteData
#### fn clone(&self) -> RouteData
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for RouteData
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for RouteData
#### fn default() -> RouteData
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for RouteData
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<RouteData> for RouteData
#### fn eq(&self, other: &RouteData) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &RouteData) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for RouteData
Auto Trait Implementations
---
### impl RefUnwindSafe for RouteData
### impl Send for RouteData
### impl Sync for RouteData
### impl Unpin for RouteData
### impl UnwindSafe for RouteData
Struct rusoto_appmesh::RouteRef
===
```
pub struct RouteRef {
pub arn: String,
pub created_at: f64,
pub last_updated_at: f64,
pub mesh_name: String,
pub mesh_owner: String,
pub resource_owner: String,
pub route_name: String,
pub version: i64,
pub virtual_router_name: String,
}
```
An object that represents a route returned by a list operation.
Fields
---
`arn: String`The full Amazon Resource Name (ARN) for the route.
`created_at: f64`The Unix epoch timestamp in seconds for when the resource was created.
`last_updated_at: f64`The Unix epoch timestamp in seconds for when the resource was last updated.
`mesh_name: String`The name of the service mesh that the route resides in.
`mesh_owner: String`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`resource_owner: String`The AWS IAM account ID of the resource owner. If the account ID is not your own, then it's the ID of the mesh owner or of another account that the mesh is shared with. For more information about mesh sharing, see Working with shared meshes.
`route_name: String`The name of the route.
`version: i64`The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
`virtual_router_name: String`The virtual router that the route is associated with.
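List operations return these lightweight references rather than full route data, which makes them convenient to filter client-side before issuing describe calls. A sketch (function name is illustrative):

```
use rusoto_appmesh::RouteRef;

/// Keep only the routes that belong to one virtual router.
fn routes_for_router<'a>(refs: &'a [RouteRef], router: &str) -> Vec<&'a RouteRef> {
    refs.iter()
        .filter(|r| r.virtual_router_name == router)
        .collect()
}
```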
Trait Implementations
---
### impl Clone for RouteRef
#### fn clone(&self) -> RouteRef
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for RouteRef
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for RouteRef
#### fn default() -> RouteRef
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for RouteRef
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<RouteRef> for RouteRef
#### fn eq(&self, other: &RouteRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &RouteRef) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for RouteRef
Auto Trait Implementations
---
### impl RefUnwindSafe for RouteRef
### impl Send for RouteRef
### impl Sync for RouteRef
### impl Unpin for RouteRef
### impl UnwindSafe for RouteRef
Struct rusoto_appmesh::RouteSpec
===
```
pub struct RouteSpec {
pub grpc_route: Option<GrpcRoute>,
pub http_2_route: Option<HttpRoute>,
pub http_route: Option<HttpRoute>,
pub priority: Option<i64>,
pub tcp_route: Option<TcpRoute>,
}
```
An object that represents a route specification. Specify one route type.
Fields
---
`grpc_route: Option<GrpcRoute>`An object that represents the specification of a gRPC route.
`http_2_route: Option<HttpRoute>`An object that represents the specification of an HTTP/2 route.
`http_route: Option<HttpRoute>`An object that represents the specification of an HTTP route.
`priority: Option<i64>`The priority for the route. Routes are matched based on the specified value, where 0 is the highest priority.
`tcp_route: Option<TcpRoute>`An object that represents the specification of a TCP route.
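Exactly one of the four route types should be set; the others stay `None`, which the derived `Default` makes convenient. A minimal sketch building an HTTP route spec (the nested `HttpRoute` is left at its default purely for brevity; a real spec needs its match and action filled in):

```
use rusoto_appmesh::{HttpRoute, RouteSpec};

fn http_spec() -> RouteSpec {
    RouteSpec {
        http_route: Some(HttpRoute::default()),
        priority: Some(0), // 0 is the highest priority
        ..Default::default() // grpc_route, http_2_route, tcp_route stay None
    }
}
```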
Trait Implementations
---
### impl Clone for RouteSpec
#### fn clone(&self) -> RouteSpec
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for RouteSpec
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for RouteSpec
#### fn default() -> RouteSpec
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for RouteSpec
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<RouteSpec> for RouteSpec
#### fn eq(&self, other: &RouteSpec) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &RouteSpec) -> bool
This method tests for `!=`.
### impl Serialize for RouteSpec
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for RouteSpec
Auto Trait Implementations
---
### impl RefUnwindSafe for RouteSpec
### impl Send for RouteSpec
### impl Sync for RouteSpec
### impl Unpin for RouteSpec
### impl UnwindSafe for RouteSpec
Struct rusoto_appmesh::RouteStatus
===
```
pub struct RouteStatus {
pub status: String,
}
```
An object that represents the current status of a route.
Fields
---
`status: String`The current status for the route.
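The status is a plain string rather than an enum, so checks are simple string comparisons. A sketch, assuming `"ACTIVE"` is one of the values the service reports (the exact value set is defined by the service, not by this struct):

```
use rusoto_appmesh::RouteStatus;

fn is_active(status: &RouteStatus) -> bool {
    // "ACTIVE" is assumed here; other service-defined values exist.
    status.status == "ACTIVE"
}
```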
Trait Implementations
---
### impl Clone for RouteStatus
#### fn clone(&self) -> RouteStatus
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for RouteStatus
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for RouteStatus
#### fn default() -> RouteStatus
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for RouteStatus
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<RouteStatus> for RouteStatus
#### fn eq(&self, other: &RouteStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &RouteStatus) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for RouteStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for RouteStatus
### impl Send for RouteStatus
### impl Sync for RouteStatus
### impl Unpin for RouteStatus
### impl UnwindSafe for RouteStatus
Struct rusoto_appmesh::ServiceDiscovery
===
```
pub struct ServiceDiscovery {
pub aws_cloud_map: Option<AwsCloudMapServiceDiscovery>,
pub dns: Option<DnsServiceDiscovery>,
}
```
An object that represents the service discovery information for a virtual node.
Fields
---
`aws_cloud_map: Option<AwsCloudMapServiceDiscovery>`Specifies any Cloud Map information for the virtual node.
`dns: Option<DnsServiceDiscovery>`Specifies the DNS information for the virtual node.
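The two fields are alternatives: a virtual node is discovered either through DNS or through Cloud Map, not both. A sketch of the DNS variant (this assumes `DnsServiceDiscovery` carries a `hostname` field, as its name suggests; any other fields are left at their defaults):

```
use rusoto_appmesh::{DnsServiceDiscovery, ServiceDiscovery};

fn dns_discovery(hostname: &str) -> ServiceDiscovery {
    ServiceDiscovery {
        dns: Some(DnsServiceDiscovery {
            hostname: hostname.to_string(), // assumed field name
            ..Default::default()
        }),
        aws_cloud_map: None, // mutually exclusive with dns
    }
}
```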
Trait Implementations
---
### impl Clone for ServiceDiscovery
#### fn clone(&self) -> ServiceDiscovery
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ServiceDiscovery
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ServiceDiscovery
#### fn default() -> ServiceDiscovery
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ServiceDiscovery
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<ServiceDiscovery> for ServiceDiscovery
#### fn eq(&self, other: &ServiceDiscovery) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ServiceDiscovery) -> bool
This method tests for `!=`.
### impl Serialize for ServiceDiscovery
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for ServiceDiscovery
Auto Trait Implementations
---
### impl RefUnwindSafe for ServiceDiscovery
### impl Send for ServiceDiscovery
### impl Sync for ServiceDiscovery
### impl Unpin for ServiceDiscovery
### impl UnwindSafe for ServiceDiscovery
Struct rusoto_appmesh::SubjectAlternativeNameMatchers
===
```
pub struct SubjectAlternativeNameMatchers {
pub exact: Vec<String>,
}
```
An object that represents the methods by which a subject alternative name on a peer Transport Layer Security (TLS) certificate can be matched.
Fields
---
`exact: Vec<String>`The values sent must match the specified values exactly.
Trait Implementations
---
### impl Clone for SubjectAlternativeNameMatchers
#### fn clone(&self) -> SubjectAlternativeNameMatchers
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for SubjectAlternativeNameMatchers
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for SubjectAlternativeNameMatchers
#### fn default() -> SubjectAlternativeNameMatchers
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for SubjectAlternativeNameMatchers
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<SubjectAlternativeNameMatchers> for SubjectAlternativeNameMatchers
#### fn eq(&self, other: &SubjectAlternativeNameMatchers) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &SubjectAlternativeNameMatchers) -> bool
This method tests for `!=`.
### impl Serialize for SubjectAlternativeNameMatchers
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for SubjectAlternativeNameMatchers
Auto Trait Implementations
---
### impl RefUnwindSafe for SubjectAlternativeNameMatchers
### impl Send for SubjectAlternativeNameMatchers
### impl Sync for SubjectAlternativeNameMatchers
### impl Unpin for SubjectAlternativeNameMatchers
### impl UnwindSafe for SubjectAlternativeNameMatchers
Struct rusoto_appmesh::SubjectAlternativeNames
===
```
pub struct SubjectAlternativeNames {
pub route_match: Option<SubjectAlternativeNameMatchers>,
}
```
An object that represents the subject alternative names secured by the certificate.
Fields
---
`route_match: Option<SubjectAlternativeNameMatchers>`An object that represents the criteria for determining a SANs match.
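Together with `SubjectAlternativeNameMatchers` above, this is a thin wrapper around a list of exact-match strings for the peer certificate's SANs. A minimal sketch (the helper name is illustrative):

```
use rusoto_appmesh::{SubjectAlternativeNameMatchers, SubjectAlternativeNames};

fn sans_for(names: &[&str]) -> SubjectAlternativeNames {
    SubjectAlternativeNames {
        route_match: Some(SubjectAlternativeNameMatchers {
            // The peer certificate's SAN must equal one of these exactly.
            exact: names.iter().map(|n| n.to_string()).collect(),
        }),
    }
}
```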
Trait Implementations
---
### impl Clone for SubjectAlternativeNames
#### fn clone(&self) -> SubjectAlternativeNames
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for SubjectAlternativeNames
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for SubjectAlternativeNames
#### fn default() -> SubjectAlternativeNames
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for SubjectAlternativeNames
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<SubjectAlternativeNames> for SubjectAlternativeNames
#### fn eq(&self, other: &SubjectAlternativeNames) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &SubjectAlternativeNames) -> bool
This method tests for `!=`.
### impl Serialize for SubjectAlternativeNames
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for SubjectAlternativeNames
Auto Trait Implementations
---
### impl RefUnwindSafe for SubjectAlternativeNames
### impl Send for SubjectAlternativeNames
### impl Sync for SubjectAlternativeNames
### impl Unpin for SubjectAlternativeNames
### impl UnwindSafe for SubjectAlternativeNames
Struct rusoto_appmesh::TagRef
===
```
pub struct TagRef {
pub key: String,
pub value: String,
}
```
Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
Fields
---
`key: String`One part of a key-value pair that makes up a tag. A `key` is a general label that acts like a category for more specific tag values.
`value: String`The optional part of a key-value pair that makes up a tag. A `value` acts as a descriptor within a tag category (key).
Trait Implementations
---
### impl Clone for TagRef
#### fn clone(&self) -> TagRef
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for TagRef
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for TagRef
#### fn default() -> TagRef
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for TagRef
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<TagRef> for TagRef
#### fn eq(&self, other: &TagRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &TagRef) -> bool
This method tests for `!=`.
### impl Serialize for TagRef
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for TagRef
Auto Trait Implementations
---
### impl RefUnwindSafe for TagRef
### impl Send for TagRef
### impl Sync for TagRef
### impl Unpin for TagRef
### impl UnwindSafe for TagRef
Struct rusoto_appmesh::TagResourceInput
===
```
pub struct TagResourceInput {
pub resource_arn: String,
pub tags: Vec<TagRef>,
}
```
Fields
---
`resource_arn: String`The Amazon Resource Name (ARN) of the resource to add tags to.
`tags: Vec<TagRef>`The tags to add to the resource. A tag is an array of key-value pairs. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
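A sketch of assembling the input for a tagging call; the key/value strings are purely illustrative:

```
use rusoto_appmesh::{TagRef, TagResourceInput};

fn tag_input(arn: &str) -> TagResourceInput {
    TagResourceInput {
        resource_arn: arn.to_string(),
        tags: vec![TagRef {
            key: "team".to_string(),       // keys: up to 128 characters
            value: "payments".to_string(), // values: up to 256 characters
        }],
    }
}
```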
Trait Implementations
---
### impl Clone for TagResourceInput
#### fn clone(&self) -> TagResourceInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for TagResourceInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for TagResourceInput
#### fn default() -> TagResourceInput
Returns the “default value” for a type.
### impl PartialEq<TagResourceInput> for TagResourceInput
#### fn eq(&self, other: &TagResourceInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &TagResourceInput) -> bool
This method tests for `!=`.
### impl Serialize for TagResourceInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for TagResourceInput
Auto Trait Implementations
---
### impl RefUnwindSafe for TagResourceInput
### impl Send for TagResourceInput
### impl Sync for TagResourceInput
### impl Unpin for TagResourceInput
### impl UnwindSafe for TagResourceInput
Struct rusoto_appmesh::TagResourceOutput
===
```
pub struct TagResourceOutput {}
```
Trait Implementations
---
### impl Clone for TagResourceOutput
#### fn clone(&self) -> TagResourceOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for TagResourceOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for TagResourceOutput
#### fn default() -> TagResourceOutput
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for TagResourceOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<TagResourceOutput> for TagResourceOutput
#### fn eq(&self, other: &TagResourceOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for TagResourceOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for TagResourceOutput
### impl Send for TagResourceOutput
### impl Sync for TagResourceOutput
### impl Unpin for TagResourceOutput
### impl UnwindSafe for TagResourceOutput
Struct rusoto_appmesh::TcpRoute
===
```
pub struct TcpRoute {
pub action: TcpRouteAction,
pub timeout: Option<TcpTimeout>,
}
```
An object that represents a TCP route type.
Fields
---
`action: TcpRouteAction`The action to take if a match is determined.
`timeout: Option<TcpTimeout>`An object that represents types of timeouts.
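Unlike the HTTP route types, a TCP route has no match criteria: it only needs an action and, optionally, a timeout. A minimal sketch (this assumes `WeightedTarget` carries `virtual_node` and `weight` fields; anything else is left at its default):

```
use rusoto_appmesh::{TcpRoute, TcpRouteAction, WeightedTarget};

fn tcp_route_to(node: &str) -> TcpRoute {
    TcpRoute {
        action: TcpRouteAction {
            weighted_targets: vec![WeightedTarget {
                virtual_node: node.to_string(), // assumed field names
                weight: 1,
                ..Default::default()
            }],
        },
        timeout: None, // keep the service's default idle timeout
    }
}
```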
Trait Implementations
---
### impl Clone for TcpRoute
#### fn clone(&self) -> TcpRoute
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for TcpRoute
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for TcpRoute
#### fn default() -> TcpRoute
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for TcpRoute
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<TcpRoute> for TcpRoute
#### fn eq(&self, other: &TcpRoute) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &TcpRoute) -> bool
This method tests for `!=`.
### impl Serialize for TcpRoute
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for TcpRoute
Auto Trait Implementations
---
### impl RefUnwindSafe for TcpRoute
### impl Send for TcpRoute
### impl Sync for TcpRoute
### impl Unpin for TcpRoute
### impl UnwindSafe for TcpRoute
Struct rusoto_appmesh::TcpRouteAction
===
```
pub struct TcpRouteAction {
pub weighted_targets: Vec<WeightedTarget>,
}
```
An object that represents the action to take if a match is determined.
Fields
---
`weighted_targets: Vec<WeightedTarget>`An object that represents the targets that traffic is routed to when a request matches the route.
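Multiple targets let you split traffic across virtual nodes by relative weight, which is the usual way to do canary shifts. A sketch of a 90/10 split (again assuming `WeightedTarget` carries `virtual_node` and `weight` fields):

```
use rusoto_appmesh::{TcpRouteAction, WeightedTarget};

// Relative weights decide each target's share of matched traffic.
fn canary_split(stable: &str, canary: &str) -> TcpRouteAction {
    let target = |node: &str, weight: i64| WeightedTarget {
        virtual_node: node.to_string(), // assumed field names
        weight,
        ..Default::default()
    };
    TcpRouteAction {
        weighted_targets: vec![target(stable, 90), target(canary, 10)],
    }
}
```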
Trait Implementations
---
### impl Clone for TcpRouteAction
#### fn clone(&self) -> TcpRouteAction
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for TcpRouteAction
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for TcpRouteAction
#### fn default() -> TcpRouteAction
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for TcpRouteAction
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<TcpRouteAction> for TcpRouteAction
#### fn eq(&self, other: &TcpRouteAction) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &TcpRouteAction) -> bool
This method tests for `!=`.
### impl Serialize for TcpRouteAction
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for TcpRouteAction
Auto Trait Implementations
---
### impl RefUnwindSafe for TcpRouteAction
### impl Send for TcpRouteAction
### impl Sync for TcpRouteAction
### impl Unpin for TcpRouteAction
### impl UnwindSafe for TcpRouteAction
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::TcpTimeout
===
```
pub struct TcpTimeout {
pub idle: Option<Duration>,
}
```
An object that represents types of timeouts.
Fields
---
`idle: Option<Duration>`An object that represents an idle timeout. An idle timeout bounds the amount of time that a connection may be idle. The default value is none.
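A hedged sketch of a five-minute idle timeout. This crate's `Duration` shape is assumed to carry optional `unit` and `value` members, per the App Mesh API:
```
use rusoto_appmesh::{Duration, TcpTimeout};

fn main() {
    // Bound idle TCP connections to 300 seconds. Duration's optional `unit`
    // ("s" or "ms") and `value` members are assumed from the App Mesh API.
    let timeout = TcpTimeout {
        idle: Some(Duration {
            unit: Some("s".to_string()),
            value: Some(300),
            ..Default::default()
        }),
    };
    println!("{:?}", timeout);
}
```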
Trait Implementations
---
source### impl Clone for TcpTimeout
source#### fn clone(&self) -> TcpTimeout
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TcpTimeout
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TcpTimeout
source#### fn default() -> TcpTimeout
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for TcpTimeout
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<TcpTimeout> for TcpTimeout
source#### fn eq(&self, other: &TcpTimeout) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TcpTimeout) -> bool
This method tests for `!=`.
source### impl Serialize for TcpTimeout
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for TcpTimeout
Auto Trait Implementations
---
### impl RefUnwindSafe for TcpTimeout
### impl Send for TcpTimeout
### impl Sync for TcpTimeout
### impl Unpin for TcpTimeout
### impl UnwindSafe for TcpTimeout
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::TlsValidationContext
===
```
pub struct TlsValidationContext {
pub subject_alternative_names: Option<SubjectAlternativeNames>,
pub trust: TlsValidationContextTrust,
}
```
An object that represents how the proxy will validate its peer during Transport Layer Security (TLS) negotiation.
Fields
---
`subject_alternative_names: Option<SubjectAlternativeNames>`A reference to an object that represents the SANs for a Transport Layer Security (TLS) validation context.
`trust: TlsValidationContextTrust`A reference to where to retrieve the trust chain when validating a peer’s Transport Layer Security (TLS) certificate.
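Since the trust types are documented below on this page, a file-based validation context can be sketched directly; the PEM path is a placeholder:
```
use rusoto_appmesh::{
    TlsValidationContext, TlsValidationContextFileTrust, TlsValidationContextTrust,
};

fn main() {
    // Validate the peer against a CA chain on the proxy's file system; no
    // extra subject alternative names are supplied.
    let validation = TlsValidationContext {
        subject_alternative_names: None,
        trust: TlsValidationContextTrust {
            file: Some(TlsValidationContextFileTrust {
                certificate_chain: "/etc/ssl/certs/ca_chain.pem".to_string(),
            }),
            ..Default::default()
        },
    };
    println!("{:?}", validation);
}
```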
Trait Implementations
---
source### impl Clone for TlsValidationContext
source#### fn clone(&self) -> TlsValidationContext
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TlsValidationContext
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TlsValidationContext
source#### fn default() -> TlsValidationContext
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for TlsValidationContext
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<TlsValidationContext> for TlsValidationContext
source#### fn eq(&self, other: &TlsValidationContext) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TlsValidationContext) -> bool
This method tests for `!=`.
source### impl Serialize for TlsValidationContext
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for TlsValidationContext
Auto Trait Implementations
---
### impl RefUnwindSafe for TlsValidationContext
### impl Send for TlsValidationContext
### impl Sync for TlsValidationContext
### impl Unpin for TlsValidationContext
### impl UnwindSafe for TlsValidationContext
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::TlsValidationContextAcmTrust
===
```
pub struct TlsValidationContextAcmTrust {
pub certificate_authority_arns: Vec<String>,
}
```
An object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager certificate.
Fields
---
`certificate_authority_arns: Vec<String>`One or more ACM Amazon Resource Name (ARN)s.
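A short sketch with a placeholder ARN:
```
use rusoto_appmesh::TlsValidationContextAcmTrust;

fn main() {
    // Trust certificates issued by one private CA; the ARN is a placeholder.
    let trust = TlsValidationContextAcmTrust {
        certificate_authority_arns: vec![
            "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/example"
                .to_string(),
        ],
    };
    println!("{:?}", trust);
}
```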
Trait Implementations
---
source### impl Clone for TlsValidationContextAcmTrust
source#### fn clone(&self) -> TlsValidationContextAcmTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TlsValidationContextAcmTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TlsValidationContextAcmTrust
source#### fn default() -> TlsValidationContextAcmTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for TlsValidationContextAcmTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<TlsValidationContextAcmTrust> for TlsValidationContextAcmTrust
source#### fn eq(&self, other: &TlsValidationContextAcmTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TlsValidationContextAcmTrust) -> bool
This method tests for `!=`.
source### impl Serialize for TlsValidationContextAcmTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for TlsValidationContextAcmTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for TlsValidationContextAcmTrust
### impl Send for TlsValidationContextAcmTrust
### impl Sync for TlsValidationContextAcmTrust
### impl Unpin for TlsValidationContextAcmTrust
### impl UnwindSafe for TlsValidationContextAcmTrust
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::TlsValidationContextFileTrust
===
```
pub struct TlsValidationContextFileTrust {
pub certificate_chain: String,
}
```
An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
Fields
---
`certificate_chain: String`The certificate trust chain for a certificate stored on the file system of the virtual node that the proxy is running on.
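A minimal sketch; the path is a placeholder for a chain that exists on the node's file system:
```
use rusoto_appmesh::TlsValidationContextFileTrust;

fn main() {
    // The chain must already exist on the virtual node's file system; the
    // path here is a placeholder.
    let trust = TlsValidationContextFileTrust {
        certificate_chain: "/etc/ssl/certs/ca_bundle.pem".to_string(),
    };
    println!("{:?}", trust);
}
```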
Trait Implementations
---
source### impl Clone for TlsValidationContextFileTrust
source#### fn clone(&self) -> TlsValidationContextFileTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TlsValidationContextFileTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TlsValidationContextFileTrust
source#### fn default() -> TlsValidationContextFileTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for TlsValidationContextFileTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<TlsValidationContextFileTrust> for TlsValidationContextFileTrust
source#### fn eq(&self, other: &TlsValidationContextFileTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TlsValidationContextFileTrust) -> bool
This method tests for `!=`.
source### impl Serialize for TlsValidationContextFileTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for TlsValidationContextFileTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for TlsValidationContextFileTrust
### impl Send for TlsValidationContextFileTrust
### impl Sync for TlsValidationContextFileTrust
### impl Unpin for TlsValidationContextFileTrust
### impl UnwindSafe for TlsValidationContextFileTrust
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::TlsValidationContextSdsTrust
===
```
pub struct TlsValidationContextSdsTrust {
pub secret_name: String,
}
```
An object that represents a Transport Layer Security (TLS) Secret Discovery Service validation context trust. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
Fields
---
`secret_name: String`A reference to an object that represents the name of the secret for a Transport Layer Security (TLS) Secret Discovery Service validation context trust.
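A minimal sketch; the secret name is a placeholder for whatever the local SDS provider serves:
```
use rusoto_appmesh::TlsValidationContextSdsTrust;

fn main() {
    // The secret name is whatever the local SDS provider (for example, a
    // SPIFFE/SPIRE agent) serves over its Unix domain socket; this value is
    // a placeholder.
    let trust = TlsValidationContextSdsTrust {
        secret_name: "spiffe://example.org/ca".to_string(),
    };
    println!("{:?}", trust);
}
```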
Trait Implementations
---
source### impl Clone for TlsValidationContextSdsTrust
source#### fn clone(&self) -> TlsValidationContextSdsTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TlsValidationContextSdsTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TlsValidationContextSdsTrust
source#### fn default() -> TlsValidationContextSdsTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for TlsValidationContextSdsTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<TlsValidationContextSdsTrust> for TlsValidationContextSdsTrust
source#### fn eq(&self, other: &TlsValidationContextSdsTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TlsValidationContextSdsTrust) -> bool
This method tests for `!=`.
source### impl Serialize for TlsValidationContextSdsTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for TlsValidationContextSdsTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for TlsValidationContextSdsTrust
### impl Send for TlsValidationContextSdsTrust
### impl Sync for TlsValidationContextSdsTrust
### impl Unpin for TlsValidationContextSdsTrust
### impl UnwindSafe for TlsValidationContextSdsTrust
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::TlsValidationContextTrust
===
```
pub struct TlsValidationContextTrust {
pub acm: Option<TlsValidationContextAcmTrust>,
pub file: Option<TlsValidationContextFileTrust>,
pub sds: Option<TlsValidationContextSdsTrust>,
}
```
An object that represents a Transport Layer Security (TLS) validation context trust.
Fields
---
`acm: Option<TlsValidationContextAcmTrust>`A reference to an object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager certificate.
`file: Option<TlsValidationContextFileTrust>`An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
`sds: Option<TlsValidationContextSdsTrust>`A reference to an object that represents a Transport Layer Security (TLS) Secret Discovery Service validation context trust.
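The three fields are alternative trust sources, so a sketch typically sets exactly one of them; here the SDS source, with a placeholder secret name:
```
use rusoto_appmesh::{TlsValidationContextSdsTrust, TlsValidationContextTrust};

fn main() {
    // Pick one trust source and leave the other two unset.
    let trust = TlsValidationContextTrust {
        acm: None,
        file: None,
        sds: Some(TlsValidationContextSdsTrust {
            secret_name: "spiffe://example.org/ca".to_string(),
        }),
    };
    println!("{:?}", trust);
}
```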
Trait Implementations
---
source### impl Clone for TlsValidationContextTrust
source#### fn clone(&self) -> TlsValidationContextTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TlsValidationContextTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TlsValidationContextTrust
source#### fn default() -> TlsValidationContextTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for TlsValidationContextTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<TlsValidationContextTrust> for TlsValidationContextTrust
source#### fn eq(&self, other: &TlsValidationContextTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TlsValidationContextTrust) -> bool
This method tests for `!=`.
source### impl Serialize for TlsValidationContextTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for TlsValidationContextTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for TlsValidationContextTrust
### impl Send for TlsValidationContextTrust
### impl Sync for TlsValidationContextTrust
### impl Unpin for TlsValidationContextTrust
### impl UnwindSafe for TlsValidationContextTrust
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::UntagResourceInput
===
```
pub struct UntagResourceInput {
pub resource_arn: String,
pub tag_keys: Vec<String>,
}
```
Fields
---
`resource_arn: String`The Amazon Resource Name (ARN) of the resource to delete tags from.
`tag_keys: Vec<String>`The keys of the tags to be removed.
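A sketch of an input that removes two tag keys; the ARN is a placeholder, and in practice the value would be passed to this crate's App Mesh client (its `untag_resource` operation):
```
use rusoto_appmesh::UntagResourceInput;

fn main() {
    // Remove the `team` and `stage` tags from a mesh; the ARN is a
    // placeholder for a real App Mesh resource ARN.
    let input = UntagResourceInput {
        resource_arn: "arn:aws:appmesh:us-east-1:111122223333:mesh/my-mesh"
            .to_string(),
        tag_keys: vec!["team".to_string(), "stage".to_string()],
    };
    println!("{:?}", input);
}
```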
Trait Implementations
---
source### impl Clone for UntagResourceInput
source#### fn clone(&self) -> UntagResourceInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UntagResourceInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UntagResourceInput
source#### fn default() -> UntagResourceInput
Returns the “default value” for a type. Read more
source### impl PartialEq<UntagResourceInput> for UntagResourceInput
source#### fn eq(&self, other: &UntagResourceInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UntagResourceInput) -> bool
This method tests for `!=`.
source### impl Serialize for UntagResourceInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for UntagResourceInput
Auto Trait Implementations
---
### impl RefUnwindSafe for UntagResourceInput
### impl Send for UntagResourceInput
### impl Sync for UntagResourceInput
### impl Unpin for UntagResourceInput
### impl UnwindSafe for UntagResourceInput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_appmesh::UntagResourceOutput
===
```
pub struct UntagResourceOutput {}
```
Trait Implementations
---
source### impl Clone for UntagResourceOutput
source#### fn clone(&self) -> UntagResourceOutput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UntagResourceOutput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UntagResourceOutput
source#### fn default() -> UntagResourceOutput
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for UntagResourceOutput
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<UntagResourceOutput> for UntagResourceOutput
source#### fn eq(&self, other: &UntagResourceOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UntagResourceOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for UntagResourceOutput
### impl Send for UntagResourceOutput
### impl Sync for UntagResourceOutput
### impl Unpin for UntagResourceOutput
### impl UnwindSafe for UntagResourceOutput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::UpdateGatewayRouteInput
===
```
pub struct UpdateGatewayRouteInput {
pub client_token: Option<String>,
pub gateway_route_name: String,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: GatewayRouteSpec,
pub virtual_gateway_name: String,
}
```
Fields
---
`client_token: Option<String>`Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`gateway_route_name: String`The name of the gateway route to update.
`mesh_name: String`The name of the service mesh that the gateway route resides in.
`mesh_owner: Option<String>`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`spec: GatewayRouteSpec`The new gateway route specification to apply. This overwrites the existing data.
`virtual_gateway_name: String`The name of the virtual gateway that the gateway route is associated with.
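A hedged sketch of a complete input. The resource names are placeholders, and `GatewayRouteSpec::default()` (assuming the spec type derives `Default`, as the shapes on this page do) stands in for a fully built route spec:
```
use rusoto_appmesh::{GatewayRouteSpec, UpdateGatewayRouteInput};

fn main() {
    // Overwrite a gateway route's spec. Default::default() is only a
    // placeholder; a real update needs a fully built spec, and the names
    // here are hypothetical.
    let input = UpdateGatewayRouteInput {
        client_token: None,
        gateway_route_name: "my-gateway-route".to_string(),
        mesh_name: "my-mesh".to_string(),
        mesh_owner: None,
        spec: GatewayRouteSpec::default(),
        virtual_gateway_name: "my-virtual-gateway".to_string(),
    };
    println!("{:?}", input);
}
```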
Trait Implementations
---
source### impl Clone for UpdateGatewayRouteInput
source#### fn clone(&self) -> UpdateGatewayRouteInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateGatewayRouteInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateGatewayRouteInput
source#### fn default() -> UpdateGatewayRouteInput
Returns the “default value” for a type. Read more
source### impl PartialEq<UpdateGatewayRouteInput> for UpdateGatewayRouteInput
source#### fn eq(&self, other: &UpdateGatewayRouteInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateGatewayRouteInput) -> bool
This method tests for `!=`.
source### impl Serialize for UpdateGatewayRouteInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for UpdateGatewayRouteInput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateGatewayRouteInput
### impl Send for UpdateGatewayRouteInput
### impl Sync for UpdateGatewayRouteInput
### impl Unpin for UpdateGatewayRouteInput
### impl UnwindSafe for UpdateGatewayRouteInput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_appmesh::UpdateGatewayRouteOutput
===
```
pub struct UpdateGatewayRouteOutput {
pub gateway_route: GatewayRouteData,
}
```
Fields
---
`gateway_route: GatewayRouteData`A full description of the gateway route that was updated.
Trait Implementations
---
source### impl Clone for UpdateGatewayRouteOutput
source#### fn clone(&self) -> UpdateGatewayRouteOutput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateGatewayRouteOutput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateGatewayRouteOutput
source#### fn default() -> UpdateGatewayRouteOutput
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for UpdateGatewayRouteOutput
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<UpdateGatewayRouteOutput> for UpdateGatewayRouteOutput
source#### fn eq(&self, other: &UpdateGatewayRouteOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateGatewayRouteOutput) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateGatewayRouteOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateGatewayRouteOutput
### impl Send for UpdateGatewayRouteOutput
### impl Sync for UpdateGatewayRouteOutput
### impl Unpin for UpdateGatewayRouteOutput
### impl UnwindSafe for UpdateGatewayRouteOutput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::UpdateMeshInput
===
```
pub struct UpdateMeshInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub spec: Option<MeshSpec>,
}
```
Fields
---
`client_token: Option<String>`Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`The name of the service mesh to update.
`spec: Option<MeshSpec>`The service mesh specification to apply.
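A sketch of an update input; `MeshSpec::default()` is a placeholder for a real spec, and the client token merely satisfies the 36-character constraint noted above:
```
use rusoto_appmesh::{MeshSpec, UpdateMeshInput};

fn main() {
    // Apply a spec to an existing mesh. MeshSpec::default() is a placeholder,
    // and the client token just satisfies the documented format constraint.
    let input = UpdateMeshInput {
        client_token: Some("update-mesh-0001".to_string()),
        mesh_name: "my-mesh".to_string(),
        spec: Some(MeshSpec::default()),
    };
    println!("{:?}", input);
}
```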
Trait Implementations
---
source### impl Clone for UpdateMeshInput
source#### fn clone(&self) -> UpdateMeshInput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateMeshInput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateMeshInput
source#### fn default() -> UpdateMeshInput
Returns the “default value” for a type. Read more
source### impl PartialEq<UpdateMeshInput> for UpdateMeshInput
source#### fn eq(&self, other: &UpdateMeshInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateMeshInput) -> bool
This method tests for `!=`.
source### impl Serialize for UpdateMeshInput
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for UpdateMeshInput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateMeshInput
### impl Send for UpdateMeshInput
### impl Sync for UpdateMeshInput
### impl Unpin for UpdateMeshInput
### impl UnwindSafe for UpdateMeshInput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_appmesh::UpdateMeshOutput
===
```
pub struct UpdateMeshOutput {
pub mesh: MeshData,
}
```
Fields
---
`mesh: MeshData`
Trait Implementations
---
source### impl Clone for UpdateMeshOutput
source#### fn clone(&self) -> UpdateMeshOutput
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateMeshOutput
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateMeshOutput
source#### fn default() -> UpdateMeshOutput
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for UpdateMeshOutput
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<UpdateMeshOutput> for UpdateMeshOutput
source#### fn eq(&self, other: &UpdateMeshOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateMeshOutput) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateMeshOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateMeshOutput
### impl Send for UpdateMeshOutput
### impl Sync for UpdateMeshOutput
### impl Unpin for UpdateMeshOutput
### impl UnwindSafe for UpdateMeshOutput
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::UpdateRouteInput
===
```
pub struct UpdateRouteInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub route_name: String,
pub spec: RouteSpec,
pub virtual_router_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh that the route resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`route_name: String`
The name of the route to update.
`spec: RouteSpec`
The new route specification to apply. This overwrites the existing data.
`virtual_router_name: String`
The name of the virtual router that the route is associated with.
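As a usage sketch, the optional fields can be left to `Default` and the input handed to the service client. The `AppMesh` trait, `AppMeshClient`, and the `update_route` method below follow rusoto's usual generated-client conventions but are assumed rather than documented on this page:

```
use rusoto_core::Region;
use rusoto_appmesh::{AppMesh, AppMeshClient, RouteSpec, UpdateRouteInput};

// Sketch: apply a new spec to an existing route. The client names and the
// `update_route` method are assumed from rusoto conventions.
async fn apply_route_spec(spec: RouteSpec) -> Result<(), Box<dyn std::error::Error>> {
    let client = AppMeshClient::new(Region::UsEast1);
    let input = UpdateRouteInput {
        mesh_name: "example-mesh".to_string(),
        virtual_router_name: "example-router".to_string(),
        route_name: "example-route".to_string(),
        spec,
        // Optional idempotency token: up to 36 letters, numbers, hyphens, underscores.
        client_token: Some("example-token-0001".to_string()),
        // mesh_owner stays None for a mesh owned by this account.
        ..Default::default()
    };
    let output = client.update_route(input).await?;
    println!("{:?}", output); // UpdateRouteOutput implements Debug
    Ok(())
}
```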
Trait Implementations
---
### impl Clone for UpdateRouteInput
#### fn clone(&self) -> UpdateRouteInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateRouteInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateRouteInput
#### fn default() -> UpdateRouteInput
Returns the “default value” for a type. Read more
### impl PartialEq<UpdateRouteInput> for UpdateRouteInput
#### fn eq(&self, other: &UpdateRouteInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateRouteInput) -> bool
This method tests for `!=`.
### impl Serialize for UpdateRouteInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for UpdateRouteInput
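Because the input implements `Serialize`, a request can be snapshotted to JSON for logging or golden tests. A minimal sketch, assuming `serde_json` as an extra dependency:

```
use rusoto_appmesh::UpdateRouteInput;

// Sketch: dump the request as pretty JSON; useful for logging or test fixtures.
// `serde_json` is an assumed dev-dependency, not part of this crate.
fn snapshot(input: &UpdateRouteInput) -> serde_json::Result<String> {
    serde_json::to_string_pretty(input)
}
```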
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateRouteInput
### impl Send for UpdateRouteInput
### impl Sync for UpdateRouteInput
### impl Unpin for UpdateRouteInput
### impl UnwindSafe for UpdateRouteInput
Struct rusoto_appmesh::UpdateRouteOutput
===
```
pub struct UpdateRouteOutput {
pub route: RouteData,
}
```
Fields
---
`route: RouteData`
A full description of the route that was updated.
Trait Implementations
---
### impl Clone for UpdateRouteOutput
#### fn clone(&self) -> UpdateRouteOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateRouteOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateRouteOutput
#### fn default() -> UpdateRouteOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for UpdateRouteOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<UpdateRouteOutput> for UpdateRouteOutput
#### fn eq(&self, other: &UpdateRouteOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateRouteOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for UpdateRouteOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateRouteOutput
### impl Send for UpdateRouteOutput
### impl Sync for UpdateRouteOutput
### impl Unpin for UpdateRouteOutput
### impl UnwindSafe for UpdateRouteOutput
Struct rusoto_appmesh::UpdateVirtualGatewayInput
===
```
pub struct UpdateVirtualGatewayInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: VirtualGatewaySpec,
pub virtual_gateway_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh that the virtual gateway resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`spec: VirtualGatewaySpec`
The new virtual gateway specification to apply. This overwrites the existing data.
`virtual_gateway_name: String`
The name of the virtual gateway to update.
Trait Implementations
---
### impl Clone for UpdateVirtualGatewayInput
#### fn clone(&self) -> UpdateVirtualGatewayInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateVirtualGatewayInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateVirtualGatewayInput
#### fn default() -> UpdateVirtualGatewayInput
Returns the “default value” for a type. Read more
### impl PartialEq<UpdateVirtualGatewayInput> for UpdateVirtualGatewayInput
#### fn eq(&self, other: &UpdateVirtualGatewayInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateVirtualGatewayInput) -> bool
This method tests for `!=`.
### impl Serialize for UpdateVirtualGatewayInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for UpdateVirtualGatewayInput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualGatewayInput
### impl Send for UpdateVirtualGatewayInput
### impl Sync for UpdateVirtualGatewayInput
### impl Unpin for UpdateVirtualGatewayInput
### impl UnwindSafe for UpdateVirtualGatewayInput
Struct rusoto_appmesh::UpdateVirtualGatewayOutput
===
```
pub struct UpdateVirtualGatewayOutput {
pub virtual_gateway: VirtualGatewayData,
}
```
Fields
---
`virtual_gateway: VirtualGatewayData`
A full description of the virtual gateway that was updated.
Trait Implementations
---
### impl Clone for UpdateVirtualGatewayOutput
#### fn clone(&self) -> UpdateVirtualGatewayOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateVirtualGatewayOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateVirtualGatewayOutput
#### fn default() -> UpdateVirtualGatewayOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for UpdateVirtualGatewayOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<UpdateVirtualGatewayOutput> for UpdateVirtualGatewayOutput
#### fn eq(&self, other: &UpdateVirtualGatewayOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateVirtualGatewayOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for UpdateVirtualGatewayOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualGatewayOutput
### impl Send for UpdateVirtualGatewayOutput
### impl Sync for UpdateVirtualGatewayOutput
### impl Unpin for UpdateVirtualGatewayOutput
### impl UnwindSafe for UpdateVirtualGatewayOutput
Struct rusoto_appmesh::UpdateVirtualNodeInput
===
```
pub struct UpdateVirtualNodeInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: VirtualNodeSpec,
pub virtual_node_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh that the virtual node resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`spec: VirtualNodeSpec`
The new virtual node specification to apply. This overwrites the existing data.
`virtual_node_name: String`
The name of the virtual node to update.
Trait Implementations
---
### impl Clone for UpdateVirtualNodeInput
#### fn clone(&self) -> UpdateVirtualNodeInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateVirtualNodeInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateVirtualNodeInput
#### fn default() -> UpdateVirtualNodeInput
Returns the “default value” for a type. Read more
### impl PartialEq<UpdateVirtualNodeInput> for UpdateVirtualNodeInput
#### fn eq(&self, other: &UpdateVirtualNodeInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateVirtualNodeInput) -> bool
This method tests for `!=`.
### impl Serialize for UpdateVirtualNodeInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for UpdateVirtualNodeInput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualNodeInput
### impl Send for UpdateVirtualNodeInput
### impl Sync for UpdateVirtualNodeInput
### impl Unpin for UpdateVirtualNodeInput
### impl UnwindSafe for UpdateVirtualNodeInput
Struct rusoto_appmesh::UpdateVirtualNodeOutput
===
```
pub struct UpdateVirtualNodeOutput {
pub virtual_node: VirtualNodeData,
}
```
Fields
---
`virtual_node: VirtualNodeData`
A full description of the virtual node that was updated.
Trait Implementations
---
### impl Clone for UpdateVirtualNodeOutput
#### fn clone(&self) -> UpdateVirtualNodeOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateVirtualNodeOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateVirtualNodeOutput
#### fn default() -> UpdateVirtualNodeOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for UpdateVirtualNodeOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<UpdateVirtualNodeOutput> for UpdateVirtualNodeOutput
#### fn eq(&self, other: &UpdateVirtualNodeOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateVirtualNodeOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for UpdateVirtualNodeOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualNodeOutput
### impl Send for UpdateVirtualNodeOutput
### impl Sync for UpdateVirtualNodeOutput
### impl Unpin for UpdateVirtualNodeOutput
### impl UnwindSafe for UpdateVirtualNodeOutput
Struct rusoto_appmesh::UpdateVirtualRouterInput
===
```
pub struct UpdateVirtualRouterInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: VirtualRouterSpec,
pub virtual_router_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh that the virtual router resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`spec: VirtualRouterSpec`
The new virtual router specification to apply. This overwrites the existing data.
`virtual_router_name: String`
The name of the virtual router to update.
Trait Implementations
---
### impl Clone for UpdateVirtualRouterInput
#### fn clone(&self) -> UpdateVirtualRouterInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateVirtualRouterInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateVirtualRouterInput
#### fn default() -> UpdateVirtualRouterInput
Returns the “default value” for a type. Read more
### impl PartialEq<UpdateVirtualRouterInput> for UpdateVirtualRouterInput
#### fn eq(&self, other: &UpdateVirtualRouterInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateVirtualRouterInput) -> bool
This method tests for `!=`.
### impl Serialize for UpdateVirtualRouterInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for UpdateVirtualRouterInput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualRouterInput
### impl Send for UpdateVirtualRouterInput
### impl Sync for UpdateVirtualRouterInput
### impl Unpin for UpdateVirtualRouterInput
### impl UnwindSafe for UpdateVirtualRouterInput
Struct rusoto_appmesh::UpdateVirtualRouterOutput
===
```
pub struct UpdateVirtualRouterOutput {
pub virtual_router: VirtualRouterData,
}
```
Fields
---
`virtual_router: VirtualRouterData`
A full description of the virtual router that was updated.
Trait Implementations
---
### impl Clone for UpdateVirtualRouterOutput
#### fn clone(&self) -> UpdateVirtualRouterOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateVirtualRouterOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateVirtualRouterOutput
#### fn default() -> UpdateVirtualRouterOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for UpdateVirtualRouterOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<UpdateVirtualRouterOutput> for UpdateVirtualRouterOutput
#### fn eq(&self, other: &UpdateVirtualRouterOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateVirtualRouterOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for UpdateVirtualRouterOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualRouterOutput
### impl Send for UpdateVirtualRouterOutput
### impl Sync for UpdateVirtualRouterOutput
### impl Unpin for UpdateVirtualRouterOutput
### impl UnwindSafe for UpdateVirtualRouterOutput
Struct rusoto_appmesh::UpdateVirtualServiceInput
===
```
pub struct UpdateVirtualServiceInput {
pub client_token: Option<String>,
pub mesh_name: String,
pub mesh_owner: Option<String>,
pub spec: VirtualServiceSpec,
pub virtual_service_name: String,
}
```
Fields
---
`client_token: Option<String>`
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
`mesh_name: String`
The name of the service mesh that the virtual service resides in.
`mesh_owner: Option<String>`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`spec: VirtualServiceSpec`
The new virtual service specification to apply. This overwrites the existing data.
`virtual_service_name: String`
The name of the virtual service to update.
Trait Implementations
---
### impl Clone for UpdateVirtualServiceInput
#### fn clone(&self) -> UpdateVirtualServiceInput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateVirtualServiceInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateVirtualServiceInput
#### fn default() -> UpdateVirtualServiceInput
Returns the “default value” for a type. Read more
### impl PartialEq<UpdateVirtualServiceInput> for UpdateVirtualServiceInput
#### fn eq(&self, other: &UpdateVirtualServiceInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateVirtualServiceInput) -> bool
This method tests for `!=`.
### impl Serialize for UpdateVirtualServiceInput
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for UpdateVirtualServiceInput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualServiceInput
### impl Send for UpdateVirtualServiceInput
### impl Sync for UpdateVirtualServiceInput
### impl Unpin for UpdateVirtualServiceInput
### impl UnwindSafe for UpdateVirtualServiceInput
Struct rusoto_appmesh::UpdateVirtualServiceOutput
===
```
pub struct UpdateVirtualServiceOutput {
pub virtual_service: VirtualServiceData,
}
```
Fields
---
`virtual_service: VirtualServiceData`
A full description of the virtual service that was updated.
Trait Implementations
---
### impl Clone for UpdateVirtualServiceOutput
#### fn clone(&self) -> UpdateVirtualServiceOutput
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for UpdateVirtualServiceOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for UpdateVirtualServiceOutput
#### fn default() -> UpdateVirtualServiceOutput
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for UpdateVirtualServiceOutput
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<UpdateVirtualServiceOutput> for UpdateVirtualServiceOutput
#### fn eq(&self, other: &UpdateVirtualServiceOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &UpdateVirtualServiceOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for UpdateVirtualServiceOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualServiceOutput
### impl Send for UpdateVirtualServiceOutput
### impl Sync for UpdateVirtualServiceOutput
### impl Unpin for UpdateVirtualServiceOutput
### impl UnwindSafe for UpdateVirtualServiceOutput
Struct rusoto_appmesh::VirtualGatewayAccessLog
===
```
pub struct VirtualGatewayAccessLog {
pub file: Option<VirtualGatewayFileAccessLog>,
}
```
The access log configuration for a virtual gateway.
Fields
---
`file: Option<VirtualGatewayFileAccessLog>`
The file object to send virtual gateway access logs to.
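As a small construction sketch; the `path` field on `VirtualGatewayFileAccessLog` is assumed from AWS's model and is not documented on this page:

```
use rusoto_appmesh::{VirtualGatewayAccessLog, VirtualGatewayFileAccessLog};

// Sketch: send gateway access logs to the Envoy container's stdout.
// The `path` field is an assumption; check VirtualGatewayFileAccessLog's docs.
fn stdout_access_log() -> VirtualGatewayAccessLog {
    VirtualGatewayAccessLog {
        file: Some(VirtualGatewayFileAccessLog {
            path: "/dev/stdout".to_string(),
            ..Default::default()
        }),
    }
}
```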
Trait Implementations
---
### impl Clone for VirtualGatewayAccessLog
#### fn clone(&self) -> VirtualGatewayAccessLog
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for VirtualGatewayAccessLog
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for VirtualGatewayAccessLog
#### fn default() -> VirtualGatewayAccessLog
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for VirtualGatewayAccessLog
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<VirtualGatewayAccessLog> for VirtualGatewayAccessLog
#### fn eq(&self, other: &VirtualGatewayAccessLog) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &VirtualGatewayAccessLog) -> bool
This method tests for `!=`.
### impl Serialize for VirtualGatewayAccessLog
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for VirtualGatewayAccessLog
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayAccessLog
### impl Send for VirtualGatewayAccessLog
### impl Sync for VirtualGatewayAccessLog
### impl Unpin for VirtualGatewayAccessLog
### impl UnwindSafe for VirtualGatewayAccessLog
Struct rusoto_appmesh::VirtualGatewayBackendDefaults
===
```
pub struct VirtualGatewayBackendDefaults {
pub client_policy: Option<VirtualGatewayClientPolicy>,
}
```
An object that represents the default properties for a backend.
Fields
---
`client_policy: Option<VirtualGatewayClientPolicy>`
A reference to an object that represents a client policy.
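Every layer of this configuration is optional and implements `Default`, so a minimal sketch of backend defaults that opt into a default-configured TLS client policy looks like this; `VirtualGatewayClientPolicyTls`'s own fields are not shown on this page, so its `Default` is used:

```
use rusoto_appmesh::{
    VirtualGatewayBackendDefaults, VirtualGatewayClientPolicy, VirtualGatewayClientPolicyTls,
};

// Sketch: attach a default-configured TLS client policy to all backends.
// A real policy would set validation, enforcement, and ports explicitly.
fn default_tls_backends() -> VirtualGatewayBackendDefaults {
    VirtualGatewayBackendDefaults {
        client_policy: Some(VirtualGatewayClientPolicy {
            tls: Some(VirtualGatewayClientPolicyTls::default()),
        }),
    }
}
```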
Trait Implementations
---
### impl Clone for VirtualGatewayBackendDefaults
#### fn clone(&self) -> VirtualGatewayBackendDefaults
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for VirtualGatewayBackendDefaults
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for VirtualGatewayBackendDefaults
#### fn default() -> VirtualGatewayBackendDefaults
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for VirtualGatewayBackendDefaults
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<VirtualGatewayBackendDefaults> for VirtualGatewayBackendDefaults
#### fn eq(&self, other: &VirtualGatewayBackendDefaults) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &VirtualGatewayBackendDefaults) -> bool
This method tests for `!=`.
### impl Serialize for VirtualGatewayBackendDefaults
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for VirtualGatewayBackendDefaults
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayBackendDefaults
### impl Send for VirtualGatewayBackendDefaults
### impl Sync for VirtualGatewayBackendDefaults
### impl Unpin for VirtualGatewayBackendDefaults
### impl UnwindSafe for VirtualGatewayBackendDefaults
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mutT)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayClientPolicy
===
```
pub struct VirtualGatewayClientPolicy {
pub tls: Option<VirtualGatewayClientPolicyTls>,
}
```
An object that represents a client policy.
Fields
---
`tls: Option<VirtualGatewayClientPolicyTls>`A reference to an object that represents a Transport Layer Security (TLS) client policy.
Trait Implementations
---
source### impl Clone for VirtualGatewayClientPolicy
source#### fn clone(&self) -> VirtualGatewayClientPolicy
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayClientPolicy
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayClientPolicy
source#### fn default() -> VirtualGatewayClientPolicy
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayClientPolicy
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayClientPolicy> for VirtualGatewayClientPolicy
source#### fn eq(&self, other: &VirtualGatewayClientPolicy) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayClientPolicy) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayClientPolicy
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayClientPolicy
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayClientPolicy
### impl Send for VirtualGatewayClientPolicy
### impl Sync for VirtualGatewayClientPolicy
### impl Unpin for VirtualGatewayClientPolicy
### impl UnwindSafe for VirtualGatewayClientPolicy
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayClientPolicyTls
===
```
pub struct VirtualGatewayClientPolicyTls {
pub certificate: Option<VirtualGatewayClientTlsCertificate>,
pub enforce: Option<bool>,
pub ports: Option<Vec<i64>>,
pub validation: VirtualGatewayTlsValidationContext,
}
```
An object that represents a Transport Layer Security (TLS) client policy.
Fields
---
`certificate: Option<VirtualGatewayClientTlsCertificate>`A reference to an object that represents a virtual gateway's client's Transport Layer Security (TLS) certificate.
`enforce: Option<bool>`Whether the policy is enforced. The default is `true` if a value isn't specified.
`ports: Option<Vec<i64>>`One or more ports that the policy is enforced for.
`validation: VirtualGatewayTlsValidationContext`A reference to an object that represents a Transport Layer Security (TLS) validation context.
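As a minimal construction sketch (the `Default` value of `VirtualGatewayTlsValidationContext` stands in for a real trust source, which would normally reference a file, ACM, or SDS):

```
use rusoto_appmesh::{
    VirtualGatewayClientPolicy, VirtualGatewayClientPolicyTls,
    VirtualGatewayTlsValidationContext,
};

// Enforce TLS on ports 443 and 8443 for traffic leaving the gateway.
// The validation context is a Default placeholder, not a working trust source.
let tls = VirtualGatewayClientPolicyTls {
    enforce: Some(true),
    ports: Some(vec![443, 8443]),
    certificate: None, // only needed for mutual TLS
    validation: VirtualGatewayTlsValidationContext::default(),
};
let client_policy = VirtualGatewayClientPolicy { tls: Some(tls) };
```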
Trait Implementations
---
source### impl Clone for VirtualGatewayClientPolicyTls
source#### fn clone(&self) -> VirtualGatewayClientPolicyTls
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayClientPolicyTls
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayClientPolicyTls
source#### fn default() -> VirtualGatewayClientPolicyTls
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayClientPolicyTls
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayClientPolicyTls> for VirtualGatewayClientPolicyTls
source#### fn eq(&self, other: &VirtualGatewayClientPolicyTls) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayClientPolicyTls) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayClientPolicyTls
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayClientPolicyTls
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayClientPolicyTls
### impl Send for VirtualGatewayClientPolicyTls
### impl Sync for VirtualGatewayClientPolicyTls
### impl Unpin for VirtualGatewayClientPolicyTls
### impl UnwindSafe for VirtualGatewayClientPolicyTls
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayClientTlsCertificate
===
```
pub struct VirtualGatewayClientTlsCertificate {
pub file: Option<VirtualGatewayListenerTlsFileCertificate>,
pub sds: Option<VirtualGatewayListenerTlsSdsCertificate>,
}
```
An object that represents the virtual gateway's client's Transport Layer Security (TLS) certificate.
Fields
---
`file: Option<VirtualGatewayListenerTlsFileCertificate>`An object that represents a local file certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
`sds: Option<VirtualGatewayListenerTlsSdsCertificate>`A reference to an object that represents a virtual gateway's client's Secret Discovery Service certificate.
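A hedged sketch of a file-based certificate for mutual TLS; the `certificate_chain` and `private_key` field names on `VirtualGatewayListenerTlsFileCertificate` are assumed from the App Mesh API, and the paths are placeholders:

```
use rusoto_appmesh::{
    VirtualGatewayClientTlsCertificate, VirtualGatewayListenerTlsFileCertificate,
};

// A file-based client certificate for mutual TLS. Paths are placeholders only.
let client_cert = VirtualGatewayClientTlsCertificate {
    file: Some(VirtualGatewayListenerTlsFileCertificate {
        certificate_chain: "/certs/client-chain.pem".to_string(),
        private_key: "/certs/client-key.pem".to_string(),
    }),
    sds: None,
};
```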
Trait Implementations
---
source### impl Clone for VirtualGatewayClientTlsCertificate
source#### fn clone(&self) -> VirtualGatewayClientTlsCertificate
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayClientTlsCertificate
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayClientTlsCertificate
source#### fn default() -> VirtualGatewayClientTlsCertificate
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayClientTlsCertificate
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayClientTlsCertificate> for VirtualGatewayClientTlsCertificate
source#### fn eq(&self, other: &VirtualGatewayClientTlsCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayClientTlsCertificate) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayClientTlsCertificate
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayClientTlsCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayClientTlsCertificate
### impl Send for VirtualGatewayClientTlsCertificate
### impl Sync for VirtualGatewayClientTlsCertificate
### impl Unpin for VirtualGatewayClientTlsCertificate
### impl UnwindSafe for VirtualGatewayClientTlsCertificate
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayConnectionPool
===
```
pub struct VirtualGatewayConnectionPool {
pub grpc: Option<VirtualGatewayGrpcConnectionPool>,
pub http: Option<VirtualGatewayHttpConnectionPool>,
pub http_2: Option<VirtualGatewayHttp2ConnectionPool>,
}
```
An object that represents the type of virtual gateway connection pool.
Only one protocol is used at a time and should be the same protocol as the one chosen under port mapping.
If not present, the default value for `maxPendingRequests` is `2147483647`.
Fields
---
`grpc: Option<VirtualGatewayGrpcConnectionPool>`An object that represents a type of connection pool.
`http: Option<VirtualGatewayHttpConnectionPool>`An object that represents a type of connection pool.
`http_2: Option<VirtualGatewayHttp2ConnectionPool>`An object that represents a type of connection pool.
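A sketch illustrating the one-protocol-at-a-time rule, populating only the `http` slot to match a plain-HTTP port mapping:

```
use rusoto_appmesh::{VirtualGatewayConnectionPool, VirtualGatewayHttpConnectionPool};

// Exactly one protocol slot is populated; the others stay None.
let pool = VirtualGatewayConnectionPool {
    http: Some(VirtualGatewayHttpConnectionPool {
        max_connections: 100,
        max_pending_requests: Some(512),
    }),
    grpc: None,
    http_2: None,
};
```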
Trait Implementations
---
source### impl Clone for VirtualGatewayConnectionPool
source#### fn clone(&self) -> VirtualGatewayConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayConnectionPool
source#### fn default() -> VirtualGatewayConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayConnectionPool> for VirtualGatewayConnectionPool
source#### fn eq(&self, other: &VirtualGatewayConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayConnectionPool
### impl Send for VirtualGatewayConnectionPool
### impl Sync for VirtualGatewayConnectionPool
### impl Unpin for VirtualGatewayConnectionPool
### impl UnwindSafe for VirtualGatewayConnectionPool
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayData
===
```
pub struct VirtualGatewayData {
pub mesh_name: String,
pub metadata: ResourceMetadata,
pub spec: VirtualGatewaySpec,
pub status: VirtualGatewayStatus,
pub virtual_gateway_name: String,
}
```
An object that represents a virtual gateway returned by a describe operation.
Fields
---
`mesh_name: String`The name of the service mesh that the virtual gateway resides in.
`metadata: ResourceMetadata`
`spec: VirtualGatewaySpec`The specifications of the virtual gateway.
`status: VirtualGatewayStatus`The current status of the virtual gateway.
`virtual_gateway_name: String`The name of the virtual gateway.
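A small sketch of reading a describe result; `spec.listeners` assumes the listener vector on `VirtualGatewaySpec`, which is documented elsewhere in this crate:

```
use rusoto_appmesh::VirtualGatewayData;

// Summarize a describe result. The `listeners` field on VirtualGatewaySpec
// is an assumption here, following the App Mesh API shape.
fn summarize(data: &VirtualGatewayData) -> String {
    format!(
        "{}/{}: {} listener(s)",
        data.mesh_name,
        data.virtual_gateway_name,
        data.spec.listeners.len(),
    )
}
```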
Trait Implementations
---
source### impl Clone for VirtualGatewayData
source#### fn clone(&self) -> VirtualGatewayData
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayData
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayData
source#### fn default() -> VirtualGatewayData
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayData
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayData> for VirtualGatewayData
source#### fn eq(&self, other: &VirtualGatewayData) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayData) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualGatewayData
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayData
### impl Send for VirtualGatewayData
### impl Sync for VirtualGatewayData
### impl Unpin for VirtualGatewayData
### impl UnwindSafe for VirtualGatewayData
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayFileAccessLog
===
```
pub struct VirtualGatewayFileAccessLog {
pub path: String,
}
```
An object that represents an access log file.
Fields
---
`path: String`The file path to write access logs to. You can use `/dev/stdout` to send access logs to standard out and configure your Envoy container to use a log driver, such as `awslogs`, to export the access logs to a log storage service such as Amazon CloudWatch Logs. You can also specify a path in the Envoy container's file system to write the files to disk.
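For example, routing access logs to standard out:

```
use rusoto_appmesh::VirtualGatewayFileAccessLog;

// Write Envoy access logs to stdout so a container log driver such as
// `awslogs` can forward them to a log store like CloudWatch Logs.
let access_log = VirtualGatewayFileAccessLog {
    path: "/dev/stdout".to_string(),
};
```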
Trait Implementations
---
source### impl Clone for VirtualGatewayFileAccessLog
source#### fn clone(&self) -> VirtualGatewayFileAccessLog
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayFileAccessLog
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayFileAccessLog
source#### fn default() -> VirtualGatewayFileAccessLog
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayFileAccessLog
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayFileAccessLog> for VirtualGatewayFileAccessLog
source#### fn eq(&self, other: &VirtualGatewayFileAccessLog) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayFileAccessLog) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayFileAccessLog
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayFileAccessLog
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayFileAccessLog
### impl Send for VirtualGatewayFileAccessLog
### impl Sync for VirtualGatewayFileAccessLog
### impl Unpin for VirtualGatewayFileAccessLog
### impl UnwindSafe for VirtualGatewayFileAccessLog
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayGrpcConnectionPool
===
```
pub struct VirtualGatewayGrpcConnectionPool {
pub max_requests: i64,
}
```
An object that represents a type of connection pool.
Fields
---
`max_requests: i64`Maximum number of in-flight requests Envoy can concurrently support across all hosts in the upstream cluster.
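A one-line construction sketch with an illustrative cap:

```
use rusoto_appmesh::VirtualGatewayGrpcConnectionPool;

// Cap concurrent in-flight gRPC requests; 200 is illustrative, not a recommendation.
let grpc_pool = VirtualGatewayGrpcConnectionPool { max_requests: 200 };
```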
Trait Implementations
---
source### impl Clone for VirtualGatewayGrpcConnectionPool
source#### fn clone(&self) -> VirtualGatewayGrpcConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayGrpcConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayGrpcConnectionPool
source#### fn default() -> VirtualGatewayGrpcConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayGrpcConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayGrpcConnectionPool> for VirtualGatewayGrpcConnectionPool
source#### fn eq(&self, other: &VirtualGatewayGrpcConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayGrpcConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayGrpcConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayGrpcConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayGrpcConnectionPool
### impl Send for VirtualGatewayGrpcConnectionPool
### impl Sync for VirtualGatewayGrpcConnectionPool
### impl Unpin for VirtualGatewayGrpcConnectionPool
### impl UnwindSafe for VirtualGatewayGrpcConnectionPool
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayHealthCheckPolicy
===
```
pub struct VirtualGatewayHealthCheckPolicy {
pub healthy_threshold: i64,
pub interval_millis: i64,
pub path: Option<String>,
pub port: Option<i64>,
pub protocol: String,
pub timeout_millis: i64,
pub unhealthy_threshold: i64,
}
```
An object that represents the health check policy for a virtual gateway's listener.
Fields
---
`healthy_threshold: i64`The number of consecutive successful health checks that must occur before declaring the listener healthy.
`interval_millis: i64`The time period in milliseconds between each health check execution.
`path: Option<String>`The destination path for the health check request. This value is only used if the specified protocol is HTTP or HTTP/2. For any other protocol, this value is ignored.
`port: Option<i64>`The destination port for the health check request. This port must match the port defined in the PortMapping for the listener.
`protocol: String`The protocol for the health check request. If you specify `grpc`, then your service must conform to the GRPC Health Checking Protocol.
`timeout_millis: i64`The amount of time to wait when receiving a response from the health check, in milliseconds.
`unhealthy_threshold: i64`The number of consecutive failed health checks that must occur before declaring a virtual gateway unhealthy.
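A construction sketch with illustrative values (not recommendations); every field used here is documented above:

```
use rusoto_appmesh::VirtualGatewayHealthCheckPolicy;

// Probe /health over HTTP every 5s; 3 straight successes mark the listener
// healthy, 2 straight failures mark the gateway unhealthy.
let health_check = VirtualGatewayHealthCheckPolicy {
    protocol: "http".to_string(),
    path: Some("/health".to_string()),
    port: None, // defaults to the listener's port mapping
    interval_millis: 5_000,
    timeout_millis: 2_000,
    healthy_threshold: 3,
    unhealthy_threshold: 2,
};
```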
Trait Implementations
---
source### impl Clone for VirtualGatewayHealthCheckPolicy
source#### fn clone(&self) -> VirtualGatewayHealthCheckPolicy
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayHealthCheckPolicy
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayHealthCheckPolicy
source#### fn default() -> VirtualGatewayHealthCheckPolicy
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayHealthCheckPolicy
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayHealthCheckPolicy> for VirtualGatewayHealthCheckPolicy
source#### fn eq(&self, other: &VirtualGatewayHealthCheckPolicy) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayHealthCheckPolicy) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayHealthCheckPolicy
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayHealthCheckPolicy
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayHealthCheckPolicy
### impl Send for VirtualGatewayHealthCheckPolicy
### impl Sync for VirtualGatewayHealthCheckPolicy
### impl Unpin for VirtualGatewayHealthCheckPolicy
### impl UnwindSafe for VirtualGatewayHealthCheckPolicy
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayHttp2ConnectionPool
===
```
pub struct VirtualGatewayHttp2ConnectionPool {
pub max_requests: i64,
}
```
An object that represents a type of connection pool.
Fields
---
`max_requests: i64`Maximum number of in-flight requests Envoy can concurrently support across all hosts in the upstream cluster.
Trait Implementations
---
source### impl Clone for VirtualGatewayHttp2ConnectionPool
source#### fn clone(&self) -> VirtualGatewayHttp2ConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayHttp2ConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayHttp2ConnectionPool
source#### fn default() -> VirtualGatewayHttp2ConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayHttp2ConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayHttp2ConnectionPool> for VirtualGatewayHttp2ConnectionPool
source#### fn eq(&self, other: &VirtualGatewayHttp2ConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayHttp2ConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayHttp2ConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayHttp2ConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayHttp2ConnectionPool
### impl Send for VirtualGatewayHttp2ConnectionPool
### impl Sync for VirtualGatewayHttp2ConnectionPool
### impl Unpin for VirtualGatewayHttp2ConnectionPool
### impl UnwindSafe for VirtualGatewayHttp2ConnectionPool
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayHttpConnectionPool
===
```
pub struct VirtualGatewayHttpConnectionPool {
pub max_connections: i64,
pub max_pending_requests: Option<i64>,
}
```
An object that represents a type of connection pool.
Fields
---
`max_connections: i64`Maximum number of outbound TCP connections Envoy can establish concurrently with all hosts in the upstream cluster.
`max_pending_requests: Option<i64>`Number of overflowing requests after `max_connections` that Envoy will queue to the upstream cluster.
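A sketch that leaves `max_pending_requests` unset, falling back to the `2147483647` default noted under `VirtualGatewayConnectionPool`:

```
use rusoto_appmesh::VirtualGatewayHttpConnectionPool;

// Allow up to 50 upstream connections; an unset max_pending_requests falls
// back to the service default of 2147483647 noted earlier.
let http_pool = VirtualGatewayHttpConnectionPool {
    max_connections: 50,
    max_pending_requests: None,
};
```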
Trait Implementations
---
source### impl Clone for VirtualGatewayHttpConnectionPool
source#### fn clone(&self) -> VirtualGatewayHttpConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayHttpConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayHttpConnectionPool
source#### fn default() -> VirtualGatewayHttpConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayHttpConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayHttpConnectionPool> for VirtualGatewayHttpConnectionPool
source#### fn eq(&self, other: &VirtualGatewayHttpConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayHttpConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayHttpConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayHttpConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayHttpConnectionPool
### impl Send for VirtualGatewayHttpConnectionPool
### impl Sync for VirtualGatewayHttpConnectionPool
### impl Unpin for VirtualGatewayHttpConnectionPool
### impl UnwindSafe for VirtualGatewayHttpConnectionPool
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayListener
===
```
pub struct VirtualGatewayListener {
pub connection_pool: Option<VirtualGatewayConnectionPool>,
pub health_check: Option<VirtualGatewayHealthCheckPolicy>,
pub port_mapping: VirtualGatewayPortMapping,
pub tls: Option<VirtualGatewayListenerTls>,
}
```
An object that represents a listener for a virtual gateway.
Fields
---
`connection_pool: Option<VirtualGatewayConnectionPool>`The connection pool information for the virtual gateway listener.
`health_check: Option<VirtualGatewayHealthCheckPolicy>`The health check information for the listener.
`port_mapping: VirtualGatewayPortMapping`The port mapping information for the listener.
`tls: Option<VirtualGatewayListenerTls>`A reference to an object that represents the Transport Layer Security (TLS) properties for the listener.
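A sketch of a plain-HTTP listener; the `port` and `protocol` field names on `VirtualGatewayPortMapping` are assumed from the App Mesh API:

```
use rusoto_appmesh::{VirtualGatewayListener, VirtualGatewayPortMapping};

// A plain-HTTP listener on port 9080; health check, connection pool, and
// TLS are optional and omitted here.
let listener = VirtualGatewayListener {
    port_mapping: VirtualGatewayPortMapping {
        port: 9080,
        protocol: "http".to_string(),
    },
    connection_pool: None,
    health_check: None,
    tls: None,
};
```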
Trait Implementations
---
source### impl Clone for VirtualGatewayListener
source#### fn clone(&self) -> VirtualGatewayListener
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayListener
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayListener
source#### fn default() -> VirtualGatewayListener
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayListener
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayListener> for VirtualGatewayListener
source#### fn eq(&self, other: &VirtualGatewayListener) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayListener) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayListener
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayListener
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayListener
### impl Send for VirtualGatewayListener
### impl Sync for VirtualGatewayListener
### impl Unpin for VirtualGatewayListener
### impl UnwindSafe for VirtualGatewayListener
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayListenerTls
===
```
pub struct VirtualGatewayListenerTls {
pub certificate: VirtualGatewayListenerTlsCertificate,
pub mode: String,
pub validation: Option<VirtualGatewayListenerTlsValidationContext>,
}
```
An object that represents the Transport Layer Security (TLS) properties for a listener.
Fields
---
`certificate: VirtualGatewayListenerTlsCertificate`An object that represents a Transport Layer Security (TLS) certificate.
`mode: String`Specify one of the following modes.
* STRICT – Listener only accepts connections with TLS enabled.
* PERMISSIVE – Listener accepts connections with or without TLS enabled.
* DISABLED – Listener only accepts connections without TLS.
`validation: Option<VirtualGatewayListenerTlsValidationContext>`A reference to an object that represents a virtual gateway's listener's Transport Layer Security (TLS) validation context.
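A sketch of a STRICT-mode listener TLS configuration with a file-based certificate; paths are placeholders, and `..Default::default()` assumes the remaining certificate sources on `VirtualGatewayListenerTlsCertificate` (such as ACM or SDS) default to `None`:

```
use rusoto_appmesh::{
    VirtualGatewayListenerTls, VirtualGatewayListenerTlsCertificate,
    VirtualGatewayListenerTlsFileCertificate,
};

// STRICT mode: the listener only accepts TLS connections, terminated with a
// file-based certificate. Paths below are placeholders.
let listener_tls = VirtualGatewayListenerTls {
    mode: "STRICT".to_string(),
    certificate: VirtualGatewayListenerTlsCertificate {
        file: Some(VirtualGatewayListenerTlsFileCertificate {
            certificate_chain: "/certs/server-chain.pem".to_string(),
            private_key: "/certs/server-key.pem".to_string(),
        }),
        ..Default::default() // other certificate sources left unset
    },
    validation: None,
};
```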
Trait Implementations
---
source### impl Clone for VirtualGatewayListenerTls
source#### fn clone(&self) -> VirtualGatewayListenerTls
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayListenerTls
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayListenerTls
source#### fn default() -> VirtualGatewayListenerTls
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayListenerTls
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayListenerTls> for VirtualGatewayListenerTls
source#### fn eq(&self, other: &VirtualGatewayListenerTls) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayListenerTls) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayListenerTls
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayListenerTls
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayListenerTls
### impl Send for VirtualGatewayListenerTls
### impl Sync for VirtualGatewayListenerTls
### impl Unpin for VirtualGatewayListenerTls
### impl UnwindSafe for VirtualGatewayListenerTls
Struct rusoto_appmesh::VirtualGatewayListenerTlsAcmCertificate
===
```
pub struct VirtualGatewayListenerTlsAcmCertificate {
pub certificate_arn: String,
}
```
An object that represents an AWS Certificate Manager (ACM) certificate.
Fields
---
`certificate_arn: String`The Amazon Resource Name (ARN) for the certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
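For example, a sketch of pointing a listener certificate at ACM; the ARN below is a made-up placeholder.
```
use rusoto_appmesh::VirtualGatewayListenerTlsAcmCertificate;

fn main() {
    let acm_cert = VirtualGatewayListenerTlsAcmCertificate {
        // Placeholder ARN; a real one comes from AWS Certificate Manager.
        certificate_arn: "arn:aws:acm:us-east-1:123456789012:certificate/example"
            .to_string(),
    };
    println!("{:?}", acm_cert);
}
```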
Trait Implementations
---
source### impl Clone for VirtualGatewayListenerTlsAcmCertificate
source#### fn clone(&self) -> VirtualGatewayListenerTlsAcmCertificate
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayListenerTlsAcmCertificate
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayListenerTlsAcmCertificate
source#### fn default() -> VirtualGatewayListenerTlsAcmCertificate
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayListenerTlsAcmCertificate
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayListenerTlsAcmCertificate> for VirtualGatewayListenerTlsAcmCertificate
source#### fn eq(&self, other: &VirtualGatewayListenerTlsAcmCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayListenerTlsAcmCertificate) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayListenerTlsAcmCertificate
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayListenerTlsAcmCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayListenerTlsAcmCertificate
### impl Send for VirtualGatewayListenerTlsAcmCertificate
### impl Sync for VirtualGatewayListenerTlsAcmCertificate
### impl Unpin for VirtualGatewayListenerTlsAcmCertificate
### impl UnwindSafe for VirtualGatewayListenerTlsAcmCertificate
Struct rusoto_appmesh::VirtualGatewayListenerTlsCertificate
===
```
pub struct VirtualGatewayListenerTlsCertificate {
pub acm: Option<VirtualGatewayListenerTlsAcmCertificate>,
pub file: Option<VirtualGatewayListenerTlsFileCertificate>,
pub sds: Option<VirtualGatewayListenerTlsSdsCertificate>,
}
```
An object that represents a listener's Transport Layer Security (TLS) certificate.
Fields
---
`acm: Option<VirtualGatewayListenerTlsAcmCertificate>`A reference to an object that represents an AWS Certificate Manager (ACM) certificate.
`file: Option<VirtualGatewayListenerTlsFileCertificate>`A reference to an object that represents a local file certificate.
`sds: Option<VirtualGatewayListenerTlsSdsCertificate>`A reference to an object that represents a virtual gateway's listener's Secret Discovery Service certificate.
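Only one of the three certificate sources should be populated; the others stay `None`. A minimal sketch using the ACM variant (placeholder ARN):
```
use rusoto_appmesh::{
    VirtualGatewayListenerTlsAcmCertificate, VirtualGatewayListenerTlsCertificate,
};

fn main() {
    let cert = VirtualGatewayListenerTlsCertificate {
        acm: Some(VirtualGatewayListenerTlsAcmCertificate {
            certificate_arn: "arn:aws:acm:us-east-1:123456789012:certificate/example"
                .to_string(),
        }),
        // Leaves `file` and `sds` as None.
        ..Default::default()
    };
    println!("{:?}", cert);
}
```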
Trait Implementations
---
source### impl Clone for VirtualGatewayListenerTlsCertificate
source#### fn clone(&self) -> VirtualGatewayListenerTlsCertificate
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayListenerTlsCertificate
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayListenerTlsCertificate
source#### fn default() -> VirtualGatewayListenerTlsCertificate
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayListenerTlsCertificate
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayListenerTlsCertificate> for VirtualGatewayListenerTlsCertificate
source#### fn eq(&self, other: &VirtualGatewayListenerTlsCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayListenerTlsCertificate) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayListenerTlsCertificate
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayListenerTlsCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayListenerTlsCertificate
### impl Send for VirtualGatewayListenerTlsCertificate
### impl Sync for VirtualGatewayListenerTlsCertificate
### impl Unpin for VirtualGatewayListenerTlsCertificate
### impl UnwindSafe for VirtualGatewayListenerTlsCertificate
Struct rusoto_appmesh::VirtualGatewayListenerTlsFileCertificate
===
```
pub struct VirtualGatewayListenerTlsFileCertificate {
pub certificate_chain: String,
pub private_key: String,
}
```
An object that represents a local file certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
Fields
---
`certificate_chain: String`The certificate chain for the certificate.
`private_key: String`The private key for a certificate stored on the file system of the mesh endpoint that the proxy is running on.
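Because the type implements `Serialize` (see Trait Implementations below), it can be rendered to JSON. A sketch assuming `serde_json` is also a dependency; the paths are placeholders.
```
use rusoto_appmesh::VirtualGatewayListenerTlsFileCertificate;

fn main() {
    let cert = VirtualGatewayListenerTlsFileCertificate {
        certificate_chain: "/etc/ssl/certs/chain.pem".to_string(), // placeholder path
        private_key: "/etc/ssl/private/key.pem".to_string(),       // placeholder path
    };
    // Serialize is derived for this type, so serde_json can render it directly.
    let json = serde_json::to_string(&cert).expect("serialization failed");
    println!("{}", json);
}
```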
Trait Implementations
---
source### impl Clone for VirtualGatewayListenerTlsFileCertificate
source#### fn clone(&self) -> VirtualGatewayListenerTlsFileCertificate
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayListenerTlsFileCertificate
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayListenerTlsFileCertificate
source#### fn default() -> VirtualGatewayListenerTlsFileCertificate
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayListenerTlsFileCertificate
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayListenerTlsFileCertificate> for VirtualGatewayListenerTlsFileCertificate
source#### fn eq(&self, other: &VirtualGatewayListenerTlsFileCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayListenerTlsFileCertificate) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayListenerTlsFileCertificate
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayListenerTlsFileCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayListenerTlsFileCertificate
### impl Send for VirtualGatewayListenerTlsFileCertificate
### impl Sync for VirtualGatewayListenerTlsFileCertificate
### impl Unpin for VirtualGatewayListenerTlsFileCertificate
### impl UnwindSafe for VirtualGatewayListenerTlsFileCertificate
Struct rusoto_appmesh::VirtualGatewayListenerTlsSdsCertificate
===
```
pub struct VirtualGatewayListenerTlsSdsCertificate {
pub secret_name: String,
}
```
An object that represents the virtual gateway's listener's Secret Discovery Service certificate. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
Fields
---
`secret_name: String`A reference to an object that represents the name of the secret requested from the Secret Discovery Service provider representing Transport Layer Security (TLS) materials like a certificate or certificate chain.
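A minimal sketch naming the secret served by the local SDS provider; the secret name is a placeholder.
```
use rusoto_appmesh::VirtualGatewayListenerTlsSdsCertificate;

fn main() {
    let sds_cert = VirtualGatewayListenerTlsSdsCertificate {
        // Placeholder secret name served by the local SDS provider.
        secret_name: "gateway-tls-cert".to_string(),
    };
    println!("{:?}", sds_cert);
}
```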
Trait Implementations
---
source### impl Clone for VirtualGatewayListenerTlsSdsCertificate
source#### fn clone(&self) -> VirtualGatewayListenerTlsSdsCertificate
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayListenerTlsSdsCertificate
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayListenerTlsSdsCertificate
source#### fn default() -> VirtualGatewayListenerTlsSdsCertificate
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayListenerTlsSdsCertificate
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayListenerTlsSdsCertificate> for VirtualGatewayListenerTlsSdsCertificate
source#### fn eq(&self, other: &VirtualGatewayListenerTlsSdsCertificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayListenerTlsSdsCertificate) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayListenerTlsSdsCertificate
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayListenerTlsSdsCertificate
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayListenerTlsSdsCertificate
### impl Send for VirtualGatewayListenerTlsSdsCertificate
### impl Sync for VirtualGatewayListenerTlsSdsCertificate
### impl Unpin for VirtualGatewayListenerTlsSdsCertificate
### impl UnwindSafe for VirtualGatewayListenerTlsSdsCertificate
Struct rusoto_appmesh::VirtualGatewayListenerTlsValidationContext
===
```
pub struct VirtualGatewayListenerTlsValidationContext {
pub subject_alternative_names: Option<SubjectAlternativeNames>,
pub trust: VirtualGatewayListenerTlsValidationContextTrust,
}
```
An object that represents a virtual gateway's listener's Transport Layer Security (TLS) validation context.
Fields
---
`subject_alternative_names: Option<SubjectAlternativeNames>`A reference to an object that represents the SANs for a virtual gateway listener's Transport Layer Security (TLS) validation context.
`trust: VirtualGatewayListenerTlsValidationContextTrust`A reference to where to retrieve the trust chain when validating a peer’s Transport Layer Security (TLS) certificate.
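A sketch wiring a file-based trust into the validation context. It assumes `VirtualGatewayTlsValidationContextFileTrust` exposes a `certificate_chain` field, which is not shown in this excerpt; the path is a placeholder.
```
use rusoto_appmesh::{
    VirtualGatewayListenerTlsValidationContext,
    VirtualGatewayListenerTlsValidationContextTrust,
    VirtualGatewayTlsValidationContextFileTrust,
};

fn main() {
    let validation = VirtualGatewayListenerTlsValidationContext {
        subject_alternative_names: None,
        trust: VirtualGatewayListenerTlsValidationContextTrust {
            file: Some(VirtualGatewayTlsValidationContextFileTrust {
                // Placeholder CA bundle path (field assumed, not shown above).
                certificate_chain: "/etc/ssl/certs/ca-bundle.pem".to_string(),
            }),
            ..Default::default()
        },
    };
    println!("{:?}", validation);
}
```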
Trait Implementations
---
source### impl Clone for VirtualGatewayListenerTlsValidationContext
source#### fn clone(&self) -> VirtualGatewayListenerTlsValidationContext
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayListenerTlsValidationContext
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayListenerTlsValidationContext
source#### fn default() -> VirtualGatewayListenerTlsValidationContext
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayListenerTlsValidationContext
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayListenerTlsValidationContext> for VirtualGatewayListenerTlsValidationContext
source#### fn eq(&self, other: &VirtualGatewayListenerTlsValidationContext) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayListenerTlsValidationContext) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayListenerTlsValidationContext
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayListenerTlsValidationContext
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayListenerTlsValidationContext
### impl Send for VirtualGatewayListenerTlsValidationContext
### impl Sync for VirtualGatewayListenerTlsValidationContext
### impl Unpin for VirtualGatewayListenerTlsValidationContext
### impl UnwindSafe for VirtualGatewayListenerTlsValidationContext
Struct rusoto_appmesh::VirtualGatewayListenerTlsValidationContextTrust
===
```
pub struct VirtualGatewayListenerTlsValidationContextTrust {
pub file: Option<VirtualGatewayTlsValidationContextFileTrust>,
pub sds: Option<VirtualGatewayTlsValidationContextSdsTrust>,
}
```
An object that represents a virtual gateway's listener's Transport Layer Security (TLS) validation context trust.
Fields
---
`file: Option<VirtualGatewayTlsValidationContextFileTrust>`An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
`sds: Option<VirtualGatewayTlsValidationContextSdsTrust>`A reference to an object that represents a virtual gateway's listener's Transport Layer Security (TLS) Secret Discovery Service validation context trust.
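Because `file` and `sds` are both optional, a consumer typically checks which one is populated. A hypothetical helper:
```
use rusoto_appmesh::VirtualGatewayListenerTlsValidationContextTrust;

// Hypothetical helper: report which trust source a context is configured with.
fn trust_source(trust: &VirtualGatewayListenerTlsValidationContextTrust) -> &'static str {
    match (&trust.file, &trust.sds) {
        (Some(_), None) => "file",
        (None, Some(_)) => "sds",
        (None, None) => "none",
        (Some(_), Some(_)) => "both (likely invalid)",
    }
}

fn main() {
    let trust = VirtualGatewayListenerTlsValidationContextTrust::default();
    println!("{}", trust_source(&trust)); // prints "none"
}
```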
Trait Implementations
---
source### impl Clone for VirtualGatewayListenerTlsValidationContextTrust
source#### fn clone(&self) -> VirtualGatewayListenerTlsValidationContextTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayListenerTlsValidationContextTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayListenerTlsValidationContextTrust
source#### fn default() -> VirtualGatewayListenerTlsValidationContextTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayListenerTlsValidationContextTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayListenerTlsValidationContextTrust> for VirtualGatewayListenerTlsValidationContextTrust
source#### fn eq(&self, other: &VirtualGatewayListenerTlsValidationContextTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayListenerTlsValidationContextTrust) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayListenerTlsValidationContextTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayListenerTlsValidationContextTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayListenerTlsValidationContextTrust
### impl Send for VirtualGatewayListenerTlsValidationContextTrust
### impl Sync for VirtualGatewayListenerTlsValidationContextTrust
### impl Unpin for VirtualGatewayListenerTlsValidationContextTrust
### impl UnwindSafe for VirtualGatewayListenerTlsValidationContextTrust
Struct rusoto_appmesh::VirtualGatewayLogging
===
```
pub struct VirtualGatewayLogging {
pub access_log: Option<VirtualGatewayAccessLog>,
}
```
An object that represents logging information.
Fields
---
`access_log: Option<VirtualGatewayAccessLog>`The access log configuration.
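Both `Default` and the optional field make the empty configuration easy to express; a minimal sketch:
```
use rusoto_appmesh::VirtualGatewayLogging;

fn main() {
    // Default for VirtualGatewayLogging leaves access_log unset.
    let logging = VirtualGatewayLogging::default();
    assert!(logging.access_log.is_none());
}
```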
Trait Implementations
---
source### impl Clone for VirtualGatewayLogging
source#### fn clone(&self) -> VirtualGatewayLogging
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayLogging
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayLogging
source#### fn default() -> VirtualGatewayLogging
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayLogging
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayLogging> for VirtualGatewayLogging
source#### fn eq(&self, other: &VirtualGatewayLogging) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayLogging) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayLogging
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayLogging
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayLogging
### impl Send for VirtualGatewayLogging
### impl Sync for VirtualGatewayLogging
### impl Unpin for VirtualGatewayLogging
### impl UnwindSafe for VirtualGatewayLogging
Struct rusoto_appmesh::VirtualGatewayPortMapping
===
```
pub struct VirtualGatewayPortMapping {
pub port: i64,
pub protocol: String,
}
```
An object that represents a port mapping.
Fields
---
`port: i64`The port used for the port mapping. Specify one protocol.
`protocol: String`The protocol used for the port mapping.
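A sketch of a listener mapping on port 443. The protocol string is assumed to be one of the values App Mesh accepts for virtual gateways, e.g. `http`, `http2`, or `grpc`.
```
use rusoto_appmesh::VirtualGatewayPortMapping;

fn main() {
    let mapping = VirtualGatewayPortMapping {
        port: 443,
        protocol: "http".to_string(), // assumed valid protocol value
    };
    println!("{:?}", mapping);
}
```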
Trait Implementations
---
source### impl Clone for VirtualGatewayPortMapping
source#### fn clone(&self) -> VirtualGatewayPortMapping
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayPortMapping
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayPortMapping
source#### fn default() -> VirtualGatewayPortMapping
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayPortMapping
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayPortMapping> for VirtualGatewayPortMapping
source#### fn eq(&self, other: &VirtualGatewayPortMapping) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayPortMapping) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayPortMapping
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayPortMapping
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayPortMapping
### impl Send for VirtualGatewayPortMapping
### impl Sync for VirtualGatewayPortMapping
### impl Unpin for VirtualGatewayPortMapping
### impl UnwindSafe for VirtualGatewayPortMapping
Struct rusoto_appmesh::VirtualGatewayRef
===
```
pub struct VirtualGatewayRef {
pub arn: String,
pub created_at: f64,
pub last_updated_at: f64,
pub mesh_name: String,
pub mesh_owner: String,
pub resource_owner: String,
pub version: i64,
pub virtual_gateway_name: String,
}
```
An object that represents a virtual gateway returned by a list operation.
Fields
---
`arn: String`The full Amazon Resource Name (ARN) for the resource.
`created_at: f64`The Unix epoch timestamp in seconds for when the resource was created.
`last_updated_at: f64`The Unix epoch timestamp in seconds for when the resource was last updated.
`mesh_name: String`The name of the service mesh that the resource resides in.
`mesh_owner: String`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`resource_owner: String`The AWS IAM account ID of the resource owner. If the account ID is not your own, then it's the ID of the mesh owner or of another account that the mesh is shared with. For more information about mesh sharing, see Working with shared meshes.
`version: i64`The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
`virtual_gateway_name: String`The name of the resource.
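Since this type is what list operations return, a caller usually iterates over the returned collection. A sketch over a pre-fetched list:
```
use rusoto_appmesh::VirtualGatewayRef;

// Hypothetical: `refs` would come from a ListVirtualGateways response.
fn summarize(refs: &[VirtualGatewayRef]) {
    for r in refs {
        println!("{} (v{}) in mesh {}", r.virtual_gateway_name, r.version, r.mesh_name);
    }
}

fn main() {
    summarize(&[VirtualGatewayRef::default()]);
}
```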
Trait Implementations
---
source### impl Clone for VirtualGatewayRef
source#### fn clone(&self) -> VirtualGatewayRef
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayRef
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayRef
source#### fn default() -> VirtualGatewayRef
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayRef
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayRef> for VirtualGatewayRef
source#### fn eq(&self, other: &VirtualGatewayRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayRef) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualGatewayRef
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayRef
### impl Send for VirtualGatewayRef
### impl Sync for VirtualGatewayRef
### impl Unpin for VirtualGatewayRef
### impl UnwindSafe for VirtualGatewayRef
Struct rusoto_appmesh::VirtualGatewaySpec
===
```
pub struct VirtualGatewaySpec {
pub backend_defaults: Option<VirtualGatewayBackendDefaults>,
pub listeners: Vec<VirtualGatewayListener>,
pub logging: Option<VirtualGatewayLogging>,
}
```
An object that represents the specification of a service mesh resource.
Fields
---
`backend_defaults: Option<VirtualGatewayBackendDefaults>`A reference to an object that represents the defaults for backends.
`listeners: Vec<VirtualGatewayListener>`The listeners that the mesh endpoint is expected to receive inbound traffic from. You can specify one listener.
`logging: Option<VirtualGatewayLogging>`An object that represents logging information.
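Pulling the pieces together, a minimal sketch of a spec with a single HTTP listener. It assumes `VirtualGatewayListener` derives `Default` like the other types here, so its optional fields can be elided.
```
use rusoto_appmesh::{VirtualGatewayListener, VirtualGatewayPortMapping, VirtualGatewaySpec};

fn main() {
    let spec = VirtualGatewaySpec {
        listeners: vec![VirtualGatewayListener {
            port_mapping: VirtualGatewayPortMapping {
                port: 8080,
                protocol: "http".to_string(),
            },
            // No TLS, health check, or connection pool (Default assumed).
            ..Default::default()
        }],
        // No backend defaults or logging.
        ..Default::default()
    };
    println!("{:?}", spec);
}
```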
Trait Implementations
---
source### impl Clone for VirtualGatewaySpec
source#### fn clone(&self) -> VirtualGatewaySpec
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewaySpec
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewaySpec
source#### fn default() -> VirtualGatewaySpec
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewaySpec
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewaySpec> for VirtualGatewaySpec
source#### fn eq(&self, other: &VirtualGatewaySpec) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewaySpec) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewaySpec
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewaySpec
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewaySpec
### impl Send for VirtualGatewaySpec
### impl Sync for VirtualGatewaySpec
### impl Unpin for VirtualGatewaySpec
### impl UnwindSafe for VirtualGatewaySpec
Struct rusoto_appmesh::VirtualGatewayStatus
===
```
pub struct VirtualGatewayStatus {
pub status: String,
}
```
An object that represents the status of the mesh resource.
Fields
---
`status: String`The current status.
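The status arrives as a plain string, so callers typically match on the values the API documents; `ACTIVE`, `INACTIVE`, and `DELETED` are assumed here.
```
use rusoto_appmesh::VirtualGatewayStatus;

fn main() {
    let status = VirtualGatewayStatus { status: "ACTIVE".to_string() };
    match status.status.as_str() {
        "ACTIVE" => println!("gateway is serving traffic"),
        "INACTIVE" | "DELETED" => println!("gateway is not available"),
        other => println!("unrecognized status: {}", other),
    }
}
```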
Trait Implementations
---
source### impl Clone for VirtualGatewayStatus
source#### fn clone(&self) -> VirtualGatewayStatus
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayStatus
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayStatus
source#### fn default() -> VirtualGatewayStatus
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayStatus
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayStatus> for VirtualGatewayStatus
source#### fn eq(&self, other: &VirtualGatewayStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayStatus) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualGatewayStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayStatus
### impl Send for VirtualGatewayStatus
### impl Sync for VirtualGatewayStatus
### impl Unpin for VirtualGatewayStatus
### impl UnwindSafe for VirtualGatewayStatus
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualGatewayTlsValidationContext
===
```
pub struct VirtualGatewayTlsValidationContext {
pub subject_alternative_names: Option<SubjectAlternativeNames>,
pub trust: VirtualGatewayTlsValidationContextTrust,
}
```
An object that represents a Transport Layer Security (TLS) validation context.
Fields
---
`subject_alternative_names: Option<SubjectAlternativeNames>`A reference to an object that represents the SANs for a virtual gateway's listener's Transport Layer Security (TLS) validation context.
`trust: VirtualGatewayTlsValidationContextTrust`A reference to where to retrieve the trust chain when validating a peer’s Transport Layer Security (TLS) certificate.
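As a minimal sketch (not from the crate's docs), the context can be assembled from the related structs documented later on this page; the CA bundle path is a hypothetical placeholder:

```
use rusoto_appmesh::{
    VirtualGatewayTlsValidationContext, VirtualGatewayTlsValidationContextFileTrust,
    VirtualGatewayTlsValidationContextTrust,
};

// Build a validation context that trusts a CA bundle on the gateway's
// file system; the path is a hypothetical placeholder.
fn file_validation_context() -> VirtualGatewayTlsValidationContext {
    VirtualGatewayTlsValidationContext {
        // No SAN overrides; rely on the peer certificate's own SANs.
        subject_alternative_names: None,
        trust: VirtualGatewayTlsValidationContextTrust {
            file: Some(VirtualGatewayTlsValidationContextFileTrust {
                certificate_chain: "/etc/ssl/certs/ca_bundle.pem".to_string(),
            }),
            ..Default::default()
        },
    }
}
```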
Trait Implementations
---
source### impl Clone for VirtualGatewayTlsValidationContext
source#### fn clone(&self) -> VirtualGatewayTlsValidationContext
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayTlsValidationContext
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayTlsValidationContext
source#### fn default() -> VirtualGatewayTlsValidationContext
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayTlsValidationContext
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayTlsValidationContext> for VirtualGatewayTlsValidationContext
source#### fn eq(&self, other: &VirtualGatewayTlsValidationContext) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayTlsValidationContext) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayTlsValidationContext
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayTlsValidationContext
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayTlsValidationContext
### impl Send for VirtualGatewayTlsValidationContext
### impl Sync for VirtualGatewayTlsValidationContext
### impl Unpin for VirtualGatewayTlsValidationContext
### impl UnwindSafe for VirtualGatewayTlsValidationContext
Struct rusoto_appmesh::VirtualGatewayTlsValidationContextAcmTrust
===
```
pub struct VirtualGatewayTlsValidationContextAcmTrust {
pub certificate_authority_arns: Vec<String>,
}
```
An object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager (ACM) certificate.
Fields
---
`certificate_authority_arns: Vec<String>`One or more ACM Amazon Resource Names (ARNs).
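A minimal construction sketch; the private CA ARN is a hypothetical placeholder:

```
use rusoto_appmesh::VirtualGatewayTlsValidationContextAcmTrust;

// Trust certificates issued by one ACM private CA; the ARN is a placeholder.
fn acm_trust() -> VirtualGatewayTlsValidationContextAcmTrust {
    VirtualGatewayTlsValidationContextAcmTrust {
        certificate_authority_arns: vec![
            "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/example".to_string(),
        ],
    }
}
```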
Trait Implementations
---
source### impl Clone for VirtualGatewayTlsValidationContextAcmTrust
source#### fn clone(&self) -> VirtualGatewayTlsValidationContextAcmTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayTlsValidationContextAcmTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayTlsValidationContextAcmTrust
source#### fn default() -> VirtualGatewayTlsValidationContextAcmTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayTlsValidationContextAcmTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayTlsValidationContextAcmTrust> for VirtualGatewayTlsValidationContextAcmTrust
source#### fn eq(&self, other: &VirtualGatewayTlsValidationContextAcmTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayTlsValidationContextAcmTrust) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayTlsValidationContextAcmTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayTlsValidationContextAcmTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayTlsValidationContextAcmTrust
### impl Send for VirtualGatewayTlsValidationContextAcmTrust
### impl Sync for VirtualGatewayTlsValidationContextAcmTrust
### impl Unpin for VirtualGatewayTlsValidationContextAcmTrust
### impl UnwindSafe for VirtualGatewayTlsValidationContextAcmTrust
Struct rusoto_appmesh::VirtualGatewayTlsValidationContextFileTrust
===
```
pub struct VirtualGatewayTlsValidationContextFileTrust {
pub certificate_chain: String,
}
```
An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
Fields
---
`certificate_chain: String`The certificate trust chain for a certificate stored on the file system of the virtual node that the proxy is running on.
Trait Implementations
---
source### impl Clone for VirtualGatewayTlsValidationContextFileTrust
source#### fn clone(&self) -> VirtualGatewayTlsValidationContextFileTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayTlsValidationContextFileTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayTlsValidationContextFileTrust
source#### fn default() -> VirtualGatewayTlsValidationContextFileTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayTlsValidationContextFileTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayTlsValidationContextFileTrust> for VirtualGatewayTlsValidationContextFileTrust
source#### fn eq(&self, other: &VirtualGatewayTlsValidationContextFileTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayTlsValidationContextFileTrust) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayTlsValidationContextFileTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayTlsValidationContextFileTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayTlsValidationContextFileTrust
### impl Send for VirtualGatewayTlsValidationContextFileTrust
### impl Sync for VirtualGatewayTlsValidationContextFileTrust
### impl Unpin for VirtualGatewayTlsValidationContextFileTrust
### impl UnwindSafe for VirtualGatewayTlsValidationContextFileTrust
Struct rusoto_appmesh::VirtualGatewayTlsValidationContextSdsTrust
===
```
pub struct VirtualGatewayTlsValidationContextSdsTrust {
pub secret_name: String,
}
```
An object that represents a virtual gateway's listener's Transport Layer Security (TLS) Secret Discovery Service validation context trust. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
Fields
---
`secret_name: String`A reference to an object that represents the name of the secret for a virtual gateway's Transport Layer Security (TLS) Secret Discovery Service validation context trust.
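A minimal sketch pairing this struct with the trust wrapper documented later on this page; the secret name is a hypothetical value that must match what the local SDS provider serves:

```
use rusoto_appmesh::{
    VirtualGatewayTlsValidationContextSdsTrust, VirtualGatewayTlsValidationContextTrust,
};

// Wrap an SDS-served secret as the trust source; `acm` and `file` stay None.
fn sds_trust(secret_name: &str) -> VirtualGatewayTlsValidationContextTrust {
    VirtualGatewayTlsValidationContextTrust {
        sds: Some(VirtualGatewayTlsValidationContextSdsTrust {
            secret_name: secret_name.to_string(),
        }),
        ..Default::default()
    }
}
```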
Trait Implementations
---
source### impl Clone for VirtualGatewayTlsValidationContextSdsTrust
source#### fn clone(&self) -> VirtualGatewayTlsValidationContextSdsTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayTlsValidationContextSdsTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayTlsValidationContextSdsTrust
source#### fn default() -> VirtualGatewayTlsValidationContextSdsTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayTlsValidationContextSdsTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayTlsValidationContextSdsTrust> for VirtualGatewayTlsValidationContextSdsTrust
source#### fn eq(&self, other: &VirtualGatewayTlsValidationContextSdsTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayTlsValidationContextSdsTrust) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayTlsValidationContextSdsTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayTlsValidationContextSdsTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayTlsValidationContextSdsTrust
### impl Send for VirtualGatewayTlsValidationContextSdsTrust
### impl Sync for VirtualGatewayTlsValidationContextSdsTrust
### impl Unpin for VirtualGatewayTlsValidationContextSdsTrust
### impl UnwindSafe for VirtualGatewayTlsValidationContextSdsTrust
Struct rusoto_appmesh::VirtualGatewayTlsValidationContextTrust
===
```
pub struct VirtualGatewayTlsValidationContextTrust {
pub acm: Option<VirtualGatewayTlsValidationContextAcmTrust>,
pub file: Option<VirtualGatewayTlsValidationContextFileTrust>,
pub sds: Option<VirtualGatewayTlsValidationContextSdsTrust>,
}
```
An object that represents a Transport Layer Security (TLS) validation context trust.
Fields
---
`acm: Option<VirtualGatewayTlsValidationContextAcmTrust>`A reference to an object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager (ACM) certificate.
`file: Option<VirtualGatewayTlsValidationContextFileTrust>`An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
`sds: Option<VirtualGatewayTlsValidationContextSdsTrust>`A reference to an object that represents a virtual gateway's Transport Layer Security (TLS) Secret Discovery Service validation context trust.
Trait Implementations
---
source### impl Clone for VirtualGatewayTlsValidationContextTrust
source#### fn clone(&self) -> VirtualGatewayTlsValidationContextTrust
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualGatewayTlsValidationContextTrust
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualGatewayTlsValidationContextTrust
source#### fn default() -> VirtualGatewayTlsValidationContextTrust
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualGatewayTlsValidationContextTrust
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualGatewayTlsValidationContextTrust> for VirtualGatewayTlsValidationContextTrust
source#### fn eq(&self, other: &VirtualGatewayTlsValidationContextTrust) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualGatewayTlsValidationContextTrust) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualGatewayTlsValidationContextTrust
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualGatewayTlsValidationContextTrust
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualGatewayTlsValidationContextTrust
### impl Send for VirtualGatewayTlsValidationContextTrust
### impl Sync for VirtualGatewayTlsValidationContextTrust
### impl Unpin for VirtualGatewayTlsValidationContextTrust
### impl UnwindSafe for VirtualGatewayTlsValidationContextTrust
Struct rusoto_appmesh::VirtualNodeConnectionPool
===
```
pub struct VirtualNodeConnectionPool {
pub grpc: Option<VirtualNodeGrpcConnectionPool>,
pub http: Option<VirtualNodeHttpConnectionPool>,
pub http_2: Option<VirtualNodeHttp2ConnectionPool>,
pub tcp: Option<VirtualNodeTcpConnectionPool>,
}
```
An object that represents the type of virtual node connection pool.
Only one protocol is used at a time and should be the same protocol as the one chosen under port mapping.
If not present, the default value for `maxPendingRequests` is `2147483647`.
Fields
---
`grpc: Option<VirtualNodeGrpcConnectionPool>`An object that represents a type of connection pool.
`http: Option<VirtualNodeHttpConnectionPool>`An object that represents a type of connection pool.
`http_2: Option<VirtualNodeHttp2ConnectionPool>`An object that represents a type of connection pool.
`tcp: Option<VirtualNodeTcpConnectionPool>`An object that represents a type of connection pool.
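Echoing the note above that only one protocol should be populated (matching the listener's port mapping), here is a minimal sketch with an arbitrary example limit:

```
use rusoto_appmesh::{VirtualNodeConnectionPool, VirtualNodeGrpcConnectionPool};

// Configure a gRPC pool only; the other protocol slots stay None.
// 1024 is an arbitrary example limit.
fn grpc_pool() -> VirtualNodeConnectionPool {
    VirtualNodeConnectionPool {
        grpc: Some(VirtualNodeGrpcConnectionPool { max_requests: 1024 }),
        ..Default::default()
    }
}
```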
Trait Implementations
---
source### impl Clone for VirtualNodeConnectionPool
source#### fn clone(&self) -> VirtualNodeConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeConnectionPool
source#### fn default() -> VirtualNodeConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeConnectionPool> for VirtualNodeConnectionPool
source#### fn eq(&self, other: &VirtualNodeConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualNodeConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualNodeConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeConnectionPool
### impl Send for VirtualNodeConnectionPool
### impl Sync for VirtualNodeConnectionPool
### impl Unpin for VirtualNodeConnectionPool
### impl UnwindSafe for VirtualNodeConnectionPool
Struct rusoto_appmesh::VirtualNodeData
===
```
pub struct VirtualNodeData {
pub mesh_name: String,
pub metadata: ResourceMetadata,
pub spec: VirtualNodeSpec,
pub status: VirtualNodeStatus,
pub virtual_node_name: String,
}
```
An object that represents a virtual node returned by a describe operation.
Fields
---
`mesh_name: String`The name of the service mesh that the virtual node resides in.
`metadata: ResourceMetadata`The associated metadata for the virtual node.
`spec: VirtualNodeSpec`The specifications of the virtual node.
`status: VirtualNodeStatus`The current status for the virtual node.
`virtual_node_name: String`The name of the virtual node.
Trait Implementations
---
source### impl Clone for VirtualNodeData
source#### fn clone(&self) -> VirtualNodeData
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeData
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeData
source#### fn default() -> VirtualNodeData
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeData
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeData> for VirtualNodeData
source#### fn eq(&self, other: &VirtualNodeData) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeData) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualNodeData
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeData
### impl Send for VirtualNodeData
### impl Sync for VirtualNodeData
### impl Unpin for VirtualNodeData
### impl UnwindSafe for VirtualNodeData
Struct rusoto_appmesh::VirtualNodeGrpcConnectionPool
===
```
pub struct VirtualNodeGrpcConnectionPool {
pub max_requests: i64,
}
```
An object that represents a type of connection pool.
Fields
---
`max_requests: i64`The maximum number of in-flight requests that Envoy can concurrently support across all hosts in the upstream cluster.
Trait Implementations
---
source### impl Clone for VirtualNodeGrpcConnectionPool
source#### fn clone(&self) -> VirtualNodeGrpcConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeGrpcConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeGrpcConnectionPool
source#### fn default() -> VirtualNodeGrpcConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeGrpcConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeGrpcConnectionPool> for VirtualNodeGrpcConnectionPool
source#### fn eq(&self, other: &VirtualNodeGrpcConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeGrpcConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualNodeGrpcConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualNodeGrpcConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeGrpcConnectionPool
### impl Send for VirtualNodeGrpcConnectionPool
### impl Sync for VirtualNodeGrpcConnectionPool
### impl Unpin for VirtualNodeGrpcConnectionPool
### impl UnwindSafe for VirtualNodeGrpcConnectionPool
Struct rusoto_appmesh::VirtualNodeHttp2ConnectionPool
===
```
pub struct VirtualNodeHttp2ConnectionPool {
pub max_requests: i64,
}
```
An object that represents a type of connection pool.
Fields
---
`max_requests: i64`The maximum number of in-flight requests that Envoy can concurrently support across all hosts in the upstream cluster.
Trait Implementations
---
source### impl Clone for VirtualNodeHttp2ConnectionPool
source#### fn clone(&self) -> VirtualNodeHttp2ConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeHttp2ConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeHttp2ConnectionPool
source#### fn default() -> VirtualNodeHttp2ConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeHttp2ConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeHttp2ConnectionPool> for VirtualNodeHttp2ConnectionPool
source#### fn eq(&self, other: &VirtualNodeHttp2ConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeHttp2ConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualNodeHttp2ConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualNodeHttp2ConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeHttp2ConnectionPool
### impl Send for VirtualNodeHttp2ConnectionPool
### impl Sync for VirtualNodeHttp2ConnectionPool
### impl Unpin for VirtualNodeHttp2ConnectionPool
### impl UnwindSafe for VirtualNodeHttp2ConnectionPool
Struct rusoto_appmesh::VirtualNodeHttpConnectionPool
===
```
pub struct VirtualNodeHttpConnectionPool {
pub max_connections: i64,
pub max_pending_requests: Option<i64>,
}
```
An object that represents a type of connection pool.
Fields
---
`max_connections: i64`The maximum number of outbound TCP connections that Envoy can establish concurrently with all hosts in the upstream cluster.
`max_pending_requests: Option<i64>`The number of overflowing requests that Envoy will queue to the upstream cluster once `max_connections` is reached.
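A minimal sketch with arbitrary example limits; leaving `max_pending_requests` as `None` falls back to the `2147483647` default noted under `VirtualNodeConnectionPool` above:

```
use rusoto_appmesh::VirtualNodeHttpConnectionPool;

// Cap concurrent connections and queue a bounded overflow of requests.
// Both limits are arbitrary example values.
fn http_pool() -> VirtualNodeHttpConnectionPool {
    VirtualNodeHttpConnectionPool {
        max_connections: 50,
        max_pending_requests: Some(200),
    }
}
```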
Trait Implementations
---
source### impl Clone for VirtualNodeHttpConnectionPool
source#### fn clone(&self) -> VirtualNodeHttpConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeHttpConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeHttpConnectionPool
source#### fn default() -> VirtualNodeHttpConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeHttpConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeHttpConnectionPool> for VirtualNodeHttpConnectionPool
source#### fn eq(&self, other: &VirtualNodeHttpConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeHttpConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualNodeHttpConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualNodeHttpConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeHttpConnectionPool
### impl Send for VirtualNodeHttpConnectionPool
### impl Sync for VirtualNodeHttpConnectionPool
### impl Unpin for VirtualNodeHttpConnectionPool
### impl UnwindSafe for VirtualNodeHttpConnectionPool
Struct rusoto_appmesh::VirtualNodeRef
===
```
pub struct VirtualNodeRef {
pub arn: String,
pub created_at: f64,
pub last_updated_at: f64,
pub mesh_name: String,
pub mesh_owner: String,
pub resource_owner: String,
pub version: i64,
pub virtual_node_name: String,
}
```
An object that represents a virtual node returned by a list operation.
Fields
---
`arn: String`The full Amazon Resource Name (ARN) for the virtual node.
`created_at: f64`The Unix epoch timestamp in seconds for when the resource was created.
`last_updated_at: f64`The Unix epoch timestamp in seconds for when the resource was last updated.
`mesh_name: String`The name of the service mesh that the virtual node resides in.
`mesh_owner: String`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`resource_owner: String`The AWS IAM account ID of the resource owner. If the account ID is not your own, then it's the ID of the mesh owner or of another account that the mesh is shared with. For more information about mesh sharing, see Working with shared meshes.
`version: i64`The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
`virtual_node_name: String`The name of the virtual node.
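As a small usage sketch, a one-line summary built from the fields above, for example when iterating the refs returned by a list operation:

```
use rusoto_appmesh::VirtualNodeRef;

// Format one summary line per node reference from a list call.
fn summarize(node: &VirtualNodeRef) -> String {
    format!(
        "{} in mesh {} (owner {}, version {})",
        node.virtual_node_name, node.mesh_name, node.mesh_owner, node.version
    )
}
```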
Trait Implementations
---
source### impl Clone for VirtualNodeRef
source#### fn clone(&self) -> VirtualNodeRef
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeRef
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeRef
source#### fn default() -> VirtualNodeRef
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeRef
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeRef> for VirtualNodeRef
source#### fn eq(&self, other: &VirtualNodeRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeRef) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualNodeRef
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeRef
### impl Send for VirtualNodeRef
### impl Sync for VirtualNodeRef
### impl Unpin for VirtualNodeRef
### impl UnwindSafe for VirtualNodeRef
Struct rusoto_appmesh::VirtualNodeServiceProvider
===
```
pub struct VirtualNodeServiceProvider {
pub virtual_node_name: String,
}
```
An object that represents a virtual node service provider.
Fields
---
`virtual_node_name: String`The name of the virtual node that is acting as a service provider.
Trait Implementations
---
source### impl Clone for VirtualNodeServiceProvider
source#### fn clone(&self) -> VirtualNodeServiceProvider
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeServiceProvider
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeServiceProvider
source#### fn default() -> VirtualNodeServiceProvider
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeServiceProvider
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeServiceProvider> for VirtualNodeServiceProvider
source#### fn eq(&self, other: &VirtualNodeServiceProvider) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeServiceProvider) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualNodeServiceProvider
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualNodeServiceProvider
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeServiceProvider
### impl Send for VirtualNodeServiceProvider
### impl Sync for VirtualNodeServiceProvider
### impl Unpin for VirtualNodeServiceProvider
### impl UnwindSafe for VirtualNodeServiceProvider
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualNodeSpec
===
```
pub struct VirtualNodeSpec {
pub backend_defaults: Option<BackendDefaults>,
pub backends: Option<Vec<Backend>>,
pub listeners: Option<Vec<Listener>>,
pub logging: Option<Logging>,
pub service_discovery: Option<ServiceDiscovery>,
}
```
An object that represents the specification of a virtual node.
Fields
---
`backend_defaults: Option<BackendDefaults>`A reference to an object that represents the defaults for backends.
`backends: Option<Vec<Backend>>`The backends that the virtual node is expected to send outbound traffic to.
`listeners: Option<Vec<Listener>>`The listener that the virtual node is expected to receive inbound traffic from. You can specify one listener.
`logging: Option<Logging>`The inbound and outbound access logging information for the virtual node.
`service_discovery: Option<ServiceDiscovery>`The service discovery information for the virtual node. If your virtual node does not expect ingress traffic, you can omit this parameter. If you specify a `listener`, then you must specify service discovery information.
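As a usage sketch (not part of the generated documentation): building a spec that honors the listener/service-discovery constraint above. The `Listener`, `PortMapping`, `ServiceDiscovery`, and `DnsServiceDiscovery` shapes are assumed from this crate; the hostname, port, and protocol values are placeholders.

```
use rusoto_appmesh::{
    DnsServiceDiscovery, Listener, PortMapping, ServiceDiscovery, VirtualNodeSpec,
};

fn main() {
    // One listener on HTTP/8080; because a listener is specified, service
    // discovery (here DNS) must be specified as well. Values are placeholders.
    let spec = VirtualNodeSpec {
        listeners: Some(vec![Listener {
            port_mapping: PortMapping {
                port: 8080,
                protocol: "http".to_string(),
            },
            // Remaining listener options (health check, TLS, ...) left unset.
            ..Default::default()
        }]),
        service_discovery: Some(ServiceDiscovery {
            dns: Some(DnsServiceDiscovery {
                hostname: "my-service.default.svc.cluster.local".to_string(),
                ..Default::default()
            }),
            ..Default::default()
        }),
        ..Default::default()
    };
    println!("{:?}", spec);
}
```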
Trait Implementations
---
source### impl Clone for VirtualNodeSpec
source#### fn clone(&self) -> VirtualNodeSpec
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeSpec
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeSpec
source#### fn default() -> VirtualNodeSpec
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeSpec
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeSpec> for VirtualNodeSpec
source#### fn eq(&self, other: &VirtualNodeSpec) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeSpec) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualNodeSpec
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualNodeSpec
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeSpec
### impl Send for VirtualNodeSpec
### impl Sync for VirtualNodeSpec
### impl Unpin for VirtualNodeSpec
### impl UnwindSafe for VirtualNodeSpec
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualNodeStatus
===
```
pub struct VirtualNodeStatus {
pub status: String,
}
```
An object that represents the current status of the virtual node.
Fields
---
`status: String`The current status of the virtual node.
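As a usage sketch: the status is exposed as a plain `String` in this crate rather than an enum, so callers match on the raw code. The ACTIVE/INACTIVE/DELETED codes assumed below come from the App Mesh API reference, not from this struct's definition.

```
use rusoto_appmesh::VirtualNodeStatus;

// Illustrative helper; the status codes are an assumption taken from the
// App Mesh API reference (ACTIVE | INACTIVE | DELETED).
fn is_active(status: &VirtualNodeStatus) -> bool {
    status.status.as_str() == "ACTIVE"
}

fn main() {
    let status = VirtualNodeStatus { status: "ACTIVE".to_string() };
    assert!(is_active(&status));
}
```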
Trait Implementations
---
source### impl Clone for VirtualNodeStatus
source#### fn clone(&self) -> VirtualNodeStatus
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeStatus
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeStatus
source#### fn default() -> VirtualNodeStatus
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeStatus
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeStatus> for VirtualNodeStatus
source#### fn eq(&self, other: &VirtualNodeStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeStatus) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualNodeStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeStatus
### impl Send for VirtualNodeStatus
### impl Sync for VirtualNodeStatus
### impl Unpin for VirtualNodeStatus
### impl UnwindSafe for VirtualNodeStatus
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualNodeTcpConnectionPool
===
```
pub struct VirtualNodeTcpConnectionPool {
pub max_connections: i64,
}
```
An object that represents a type of connection pool.
Fields
---
`max_connections: i64`Maximum number of outbound TCP connections Envoy can establish concurrently with all hosts in upstream cluster.
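As a usage sketch: the pool is a single required field, so construction is direct. The limit below is illustrative.

```
use rusoto_appmesh::VirtualNodeTcpConnectionPool;

fn main() {
    // Cap Envoy at 100 concurrent outbound TCP connections across all
    // hosts in the upstream cluster (illustrative value).
    let pool = VirtualNodeTcpConnectionPool { max_connections: 100 };
    println!("{:?}", pool);
}
```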
Trait Implementations
---
source### impl Clone for VirtualNodeTcpConnectionPool
source#### fn clone(&self) -> VirtualNodeTcpConnectionPool
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualNodeTcpConnectionPool
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualNodeTcpConnectionPool
source#### fn default() -> VirtualNodeTcpConnectionPool
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualNodeTcpConnectionPool
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualNodeTcpConnectionPool> for VirtualNodeTcpConnectionPool
source#### fn eq(&self, other: &VirtualNodeTcpConnectionPool) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualNodeTcpConnectionPool) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualNodeTcpConnectionPool
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualNodeTcpConnectionPool
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualNodeTcpConnectionPool
### impl Send for VirtualNodeTcpConnectionPool
### impl Sync for VirtualNodeTcpConnectionPool
### impl Unpin for VirtualNodeTcpConnectionPool
### impl UnwindSafe for VirtualNodeTcpConnectionPool
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualRouterData
===
```
pub struct VirtualRouterData {
pub mesh_name: String,
pub metadata: ResourceMetadata,
pub spec: VirtualRouterSpec,
pub status: VirtualRouterStatus,
pub virtual_router_name: String,
}
```
An object that represents a virtual router returned by a describe operation.
Fields
---
`mesh_name: String`The name of the service mesh that the virtual router resides in.
`metadata: ResourceMetadata`The associated metadata for the virtual router.
`spec: VirtualRouterSpec`The specifications of the virtual router.
`status: VirtualRouterStatus`The current status of the virtual router.
`virtual_router_name: String`The name of the virtual router.
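As a usage sketch: a describe result is plain data, so nested fields can be read directly. Obtaining the value through the service client is not shown; the helper name is illustrative.

```
use rusoto_appmesh::VirtualRouterData;

// Illustrative helper: count the listeners configured on a described router.
fn listener_count(data: &VirtualRouterData) -> usize {
    data.spec.listeners.as_ref().map_or(0, |l| l.len())
}

fn main() {
    // rusoto shapes derive Default, which stands in here for a real
    // describe response.
    let data = VirtualRouterData::default();
    assert_eq!(listener_count(&data), 0);
}
```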
Trait Implementations
---
source### impl Clone for VirtualRouterData
source#### fn clone(&self) -> VirtualRouterData
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualRouterData
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualRouterData
source#### fn default() -> VirtualRouterData
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualRouterData
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualRouterData> for VirtualRouterData
source#### fn eq(&self, other: &VirtualRouterData) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualRouterData) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualRouterData
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualRouterData
### impl Send for VirtualRouterData
### impl Sync for VirtualRouterData
### impl Unpin for VirtualRouterData
### impl UnwindSafe for VirtualRouterData
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualRouterListener
===
```
pub struct VirtualRouterListener {
pub port_mapping: PortMapping,
}
```
An object that represents a virtual router listener.
Fields
---
`port_mapping: PortMapping`
Trait Implementations
---
source### impl Clone for VirtualRouterListener
source#### fn clone(&self) -> VirtualRouterListener
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualRouterListener
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualRouterListener
source#### fn default() -> VirtualRouterListener
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualRouterListener
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualRouterListener> for VirtualRouterListener
source#### fn eq(&self, other: &VirtualRouterListener) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualRouterListener) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualRouterListener
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualRouterListener
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualRouterListener
### impl Send for VirtualRouterListener
### impl Sync for VirtualRouterListener
### impl Unpin for VirtualRouterListener
### impl UnwindSafe for VirtualRouterListener
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualRouterRef
===
```
pub struct VirtualRouterRef {
pub arn: String,
pub created_at: f64,
pub last_updated_at: f64,
pub mesh_name: String,
pub mesh_owner: String,
pub resource_owner: String,
pub version: i64,
pub virtual_router_name: String,
}
```
An object that represents a virtual router returned by a list operation.
Fields
---
`arn: String`The full Amazon Resource Name (ARN) for the virtual router.
`created_at: f64`The Unix epoch timestamp in seconds for when the resource was created.
`last_updated_at: f64`The Unix epoch timestamp in seconds for when the resource was last updated.
`mesh_name: String`The name of the service mesh that the virtual router resides in.
`mesh_owner: String`The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`resource_owner: String`The AWS IAM account ID of the resource owner. If the account ID is not your own, then it's the ID of the mesh owner or of another account that the mesh is shared with. For more information about mesh sharing, see Working with shared meshes.
`version: i64`The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
`virtual_router_name: String`The name of the virtual router.
Trait Implementations
---
source### impl Clone for VirtualRouterRef
source#### fn clone(&self) -> VirtualRouterRef
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualRouterRef
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualRouterRef
source#### fn default() -> VirtualRouterRef
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualRouterRef
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualRouterRef> for VirtualRouterRef
source#### fn eq(&self, other: &VirtualRouterRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualRouterRef) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualRouterRef
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualRouterRef
### impl Send for VirtualRouterRef
### impl Sync for VirtualRouterRef
### impl Unpin for VirtualRouterRef
### impl UnwindSafe for VirtualRouterRef
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualRouterServiceProvider
===
```
pub struct VirtualRouterServiceProvider {
pub virtual_router_name: String,
}
```
An object that represents a virtual router service provider.
Fields
---
`virtual_router_name: String`The name of the virtual router that is acting as a service provider.
Trait Implementations
---
source### impl Clone for VirtualRouterServiceProvider
source#### fn clone(&self) -> VirtualRouterServiceProvider
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualRouterServiceProvider
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualRouterServiceProvider
source#### fn default() -> VirtualRouterServiceProvider
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualRouterServiceProvider
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualRouterServiceProvider> for VirtualRouterServiceProvider
source#### fn eq(&self, other: &VirtualRouterServiceProvider) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualRouterServiceProvider) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualRouterServiceProvider
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualRouterServiceProvider
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualRouterServiceProvider
### impl Send for VirtualRouterServiceProvider
### impl Sync for VirtualRouterServiceProvider
### impl Unpin for VirtualRouterServiceProvider
### impl UnwindSafe for VirtualRouterServiceProvider
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualRouterSpec
===
```
pub struct VirtualRouterSpec {
pub listeners: Option<Vec<VirtualRouterListener>>,
}
```
An object that represents the specification of a virtual router.
Fields
---
`listeners: Option<Vec<VirtualRouterListener>>`The listeners that the virtual router is expected to receive inbound traffic from. You can specify one listener.
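As a usage sketch: a router spec with its single allowed listener, using the `VirtualRouterListener` and `PortMapping` shapes documented above. Port and protocol are placeholders.

```
use rusoto_appmesh::{PortMapping, VirtualRouterListener, VirtualRouterSpec};

fn main() {
    // One listener routing HTTP traffic on port 8080 (placeholder values).
    let spec = VirtualRouterSpec {
        listeners: Some(vec![VirtualRouterListener {
            port_mapping: PortMapping {
                port: 8080,
                protocol: "http".to_string(),
            },
        }]),
    };
    println!("{:?}", spec);
}
```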
Trait Implementations
---
source### impl Clone for VirtualRouterSpec
source#### fn clone(&self) -> VirtualRouterSpec
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualRouterSpec
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualRouterSpec
source#### fn default() -> VirtualRouterSpec
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualRouterSpec
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualRouterSpec> for VirtualRouterSpec
source#### fn eq(&self, other: &VirtualRouterSpec) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualRouterSpec) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualRouterSpec
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualRouterSpec
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualRouterSpec
### impl Send for VirtualRouterSpec
### impl Sync for VirtualRouterSpec
### impl Unpin for VirtualRouterSpec
### impl UnwindSafe for VirtualRouterSpec
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualRouterStatus
===
```
pub struct VirtualRouterStatus {
pub status: String,
}
```
An object that represents the status of a virtual router.
Fields
---
`status: String`The current status of the virtual router.
Trait Implementations
---
source### impl Clone for VirtualRouterStatus
source#### fn clone(&self) -> VirtualRouterStatus
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualRouterStatus
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualRouterStatus
source#### fn default() -> VirtualRouterStatus
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualRouterStatus
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualRouterStatus> for VirtualRouterStatus
source#### fn eq(&self, other: &VirtualRouterStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualRouterStatus) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualRouterStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualRouterStatus
### impl Send for VirtualRouterStatus
### impl Sync for VirtualRouterStatus
### impl Unpin for VirtualRouterStatus
### impl UnwindSafe for VirtualRouterStatus
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualServiceBackend
===
```
pub struct VirtualServiceBackend {
pub client_policy: Option<ClientPolicy>,
pub virtual_service_name: String,
}
```
An object that represents a virtual service backend for a virtual node.
Fields
---
`client_policy: Option<ClientPolicy>`A reference to an object that represents the client policy for a backend.
`virtual_service_name: String`The name of the virtual service that is acting as a virtual node backend.
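As a usage sketch: declaring a virtual service backend with the optional client policy left unset. It is wrapped here in this crate's `Backend` struct (assumed to carry a `virtual_service` field), as it would appear in a VirtualNodeSpec's `backends` list; the service name is a placeholder.

```
use rusoto_appmesh::{Backend, VirtualServiceBackend};

fn main() {
    // A backend entry as it would sit in VirtualNodeSpec.backends; the
    // `Backend` wrapper shape is an assumption, the name a placeholder.
    let backend = Backend {
        virtual_service: Some(VirtualServiceBackend {
            virtual_service_name: "orders.default.svc.cluster.local".to_string(),
            client_policy: None,
        }),
    };
    println!("{:?}", backend);
}
```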
Trait Implementations
---
source### impl Clone for VirtualServiceBackend
source#### fn clone(&self) -> VirtualServiceBackend
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualServiceBackend
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualServiceBackend
source#### fn default() -> VirtualServiceBackend
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualServiceBackend
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualServiceBackend> for VirtualServiceBackend
source#### fn eq(&self, other: &VirtualServiceBackend) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualServiceBackend) -> bool
This method tests for `!=`.
source### impl Serialize for VirtualServiceBackend
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for VirtualServiceBackend
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualServiceBackend
### impl Send for VirtualServiceBackend
### impl Sync for VirtualServiceBackend
### impl Unpin for VirtualServiceBackend
### impl UnwindSafe for VirtualServiceBackend
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualServiceData
===
```
pub struct VirtualServiceData {
pub mesh_name: String,
pub metadata: ResourceMetadata,
pub spec: VirtualServiceSpec,
pub status: VirtualServiceStatus,
pub virtual_service_name: String,
}
```
An object that represents a virtual service returned by a describe operation.
Fields
---
`mesh_name: String`The name of the service mesh that the virtual service resides in.
`metadata: ResourceMetadata`The associated metadata for the virtual service.
`spec: VirtualServiceSpec`The specifications of the virtual service.
`status: VirtualServiceStatus`The current status of the virtual service.
`virtual_service_name: String`The name of the virtual service.
Trait Implementations
---
source### impl Clone for VirtualServiceData
source#### fn clone(&self) -> VirtualServiceData
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VirtualServiceData
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VirtualServiceData
source#### fn default() -> VirtualServiceData
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for VirtualServiceData
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<VirtualServiceData> for VirtualServiceData
source#### fn eq(&self, other: &VirtualServiceData) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VirtualServiceData) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VirtualServiceData
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualServiceData
### impl Send for VirtualServiceData
### impl Sync for VirtualServiceData
### impl Unpin for VirtualServiceData
### impl UnwindSafe for VirtualServiceData
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_appmesh::VirtualServiceProvider
===
```
pub struct VirtualServiceProvider {
pub virtual_node: Option<VirtualNodeServiceProvider>,
pub virtual_router: Option<VirtualRouterServiceProvider>,
}
```
An object that represents the provider for a virtual service.
Fields
---
`virtual_node: Option<VirtualNodeServiceProvider>`
The virtual node associated with a virtual service.
`virtual_router: Option<VirtualRouterServiceProvider>`
The virtual router associated with a virtual service.
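Only one of the two optional fields should be populated: the provider is either a virtual node or a virtual router, never both. A minimal sketch of the node-backed flavor (assuming `VirtualNodeServiceProvider` carries a `virtual_node_name: String`, which this page does not show):
```
use rusoto_appmesh::{VirtualNodeServiceProvider, VirtualServiceProvider};

// A provider backed by a virtual node; the router slot is left empty.
let node_provider = VirtualServiceProvider {
    virtual_node: Some(VirtualNodeServiceProvider {
        virtual_node_name: "my-node".to_string(),
    }),
    virtual_router: None,
};
```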
Trait Implementations
---
### impl Clone for VirtualServiceProvider
#### fn clone(&self) -> VirtualServiceProvider
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for VirtualServiceProvider
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for VirtualServiceProvider
#### fn default() -> VirtualServiceProvider
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for VirtualServiceProvider
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<VirtualServiceProvider> for VirtualServiceProvider
#### fn eq(&self, other: &VirtualServiceProvider) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &VirtualServiceProvider) -> bool
This method tests for `!=`.
### impl Serialize for VirtualServiceProvider
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for VirtualServiceProvider
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualServiceProvider
### impl Send for VirtualServiceProvider
### impl Sync for VirtualServiceProvider
### impl Unpin for VirtualServiceProvider
### impl UnwindSafe for VirtualServiceProvider
Struct rusoto_appmesh::VirtualServiceRef
===
```
pub struct VirtualServiceRef {
pub arn: String,
pub created_at: f64,
pub last_updated_at: f64,
pub mesh_name: String,
pub mesh_owner: String,
pub resource_owner: String,
pub version: i64,
pub virtual_service_name: String,
}
```
An object that represents a virtual service returned by a list operation.
Fields
---
`arn: String`
The full Amazon Resource Name (ARN) for the virtual service.
`created_at: f64`
The Unix epoch timestamp in seconds for when the resource was created.
`last_updated_at: f64`
The Unix epoch timestamp in seconds for when the resource was last updated.
`mesh_name: String`
The name of the service mesh that the virtual service resides in.
`mesh_owner: String`
The AWS IAM account ID of the service mesh owner. If the account ID is not your own, then it's the ID of the account that shared the mesh with your account. For more information about mesh sharing, see Working with shared meshes.
`resource_owner: String`
The AWS IAM account ID of the resource owner. If the account ID is not your own, then it's the ID of the mesh owner or of another account that the mesh is shared with. For more information about mesh sharing, see Working with shared meshes.
`version: i64`
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
`virtual_service_name: String`
The name of the virtual service.
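List operations return these references a page at a time, so callers typically follow `next_token` until it is absent. A minimal pagination sketch, assuming the usual rusoto shape for `ListVirtualServicesInput`/`ListVirtualServicesOutput` (with `virtual_services: Vec<VirtualServiceRef>` and an optional `next_token`), neither of which is shown on this page:
```
use rusoto_appmesh::{AppMesh, AppMeshClient, ListVirtualServicesInput, VirtualServiceRef};

// Collects every VirtualServiceRef in a mesh by following pagination tokens.
async fn all_refs(client: &AppMeshClient, mesh: &str) -> Vec<VirtualServiceRef> {
    let mut refs = Vec::new();
    let mut token: Option<String> = None;
    loop {
        let out = client
            .list_virtual_services(ListVirtualServicesInput {
                mesh_name: mesh.to_string(),
                next_token: token.take(),
                ..Default::default()
            })
            .await
            .expect("list_virtual_services failed");
        refs.extend(out.virtual_services);
        match out.next_token {
            Some(t) => token = Some(t), // more pages remain
            None => return refs,
        }
    }
}
```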
Trait Implementations
---
### impl Clone for VirtualServiceRef
#### fn clone(&self) -> VirtualServiceRef
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for VirtualServiceRef
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for VirtualServiceRef
#### fn default() -> VirtualServiceRef
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for VirtualServiceRef
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<VirtualServiceRef> for VirtualServiceRef
#### fn eq(&self, other: &VirtualServiceRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &VirtualServiceRef) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for VirtualServiceRef
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualServiceRef
### impl Send for VirtualServiceRef
### impl Sync for VirtualServiceRef
### impl Unpin for VirtualServiceRef
### impl UnwindSafe for VirtualServiceRef
Struct rusoto_appmesh::VirtualServiceSpec
===
```
pub struct VirtualServiceSpec {
pub provider: Option<VirtualServiceProvider>,
}
```
An object that represents the specification of a virtual service.
Fields
---
`provider: Option<VirtualServiceProvider>`
The App Mesh object that is acting as the provider for a virtual service. You can specify a single virtual node or virtual router.
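A minimal sketch of a router-backed spec (assuming `VirtualRouterServiceProvider` carries a `virtual_router_name: String`, which this page does not show); a node-backed spec would fill `virtual_node` instead:
```
use rusoto_appmesh::{VirtualRouterServiceProvider, VirtualServiceProvider, VirtualServiceSpec};

// Delegate all traffic for the virtual service to a virtual router.
let spec = VirtualServiceSpec {
    provider: Some(VirtualServiceProvider {
        virtual_node: None,
        virtual_router: Some(VirtualRouterServiceProvider {
            virtual_router_name: "my-router".to_string(),
        }),
    }),
};
```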
Trait Implementations
---
### impl Clone for VirtualServiceSpec
#### fn clone(&self) -> VirtualServiceSpec
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for VirtualServiceSpec
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for VirtualServiceSpec
#### fn default() -> VirtualServiceSpec
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for VirtualServiceSpec
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<VirtualServiceSpec> for VirtualServiceSpec
#### fn eq(&self, other: &VirtualServiceSpec) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &VirtualServiceSpec) -> bool
This method tests for `!=`.
### impl Serialize for VirtualServiceSpec
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for VirtualServiceSpec
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualServiceSpec
### impl Send for VirtualServiceSpec
### impl Sync for VirtualServiceSpec
### impl Unpin for VirtualServiceSpec
### impl UnwindSafe for VirtualServiceSpec
Struct rusoto_appmesh::VirtualServiceStatus
===
```
pub struct VirtualServiceStatus {
pub status: String,
}
```
An object that represents the status of a virtual service.
Fields
---
`status: String`
The current status of the virtual service.
Trait Implementations
---
### impl Clone for VirtualServiceStatus
#### fn clone(&self) -> VirtualServiceStatus
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for VirtualServiceStatus
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for VirtualServiceStatus
#### fn default() -> VirtualServiceStatus
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for VirtualServiceStatus
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<VirtualServiceStatus> for VirtualServiceStatus
#### fn eq(&self, other: &VirtualServiceStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &VirtualServiceStatus) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for VirtualServiceStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for VirtualServiceStatus
### impl Send for VirtualServiceStatus
### impl Sync for VirtualServiceStatus
### impl Unpin for VirtualServiceStatus
### impl UnwindSafe for VirtualServiceStatus
Struct rusoto_appmesh::WeightedTarget
===
```
pub struct WeightedTarget {
pub virtual_node: String,
pub weight: i64,
}
```
An object that represents a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10. The total weight for all targets combined must be less than or equal to 100.
Fields
---
`virtual_node: String`
The virtual node to associate with the weighted target.
`weight: i64`
The relative weight of the weighted target.
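A minimal sketch mirroring the 50-versus-10 example above; the two targets split traffic 50/60 and 10/60, and the combined weight stays within the documented cap of 100:
```
use rusoto_appmesh::WeightedTarget;

// "primary" receives five times the traffic of "canary" (50/60 vs. 10/60).
let targets = vec![
    WeightedTarget { virtual_node: "primary".to_string(), weight: 50 },
    WeightedTarget { virtual_node: "canary".to_string(), weight: 10 },
];
// The documented constraint: combined weights must not exceed 100.
assert!(targets.iter().map(|t| t.weight).sum::<i64>() <= 100);
```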
Trait Implementations
---
### impl Clone for WeightedTarget
#### fn clone(&self) -> WeightedTarget
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for WeightedTarget
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for WeightedTarget
#### fn default() -> WeightedTarget
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for WeightedTarget
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl PartialEq<WeightedTarget> for WeightedTarget
#### fn eq(&self, other: &WeightedTarget) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &WeightedTarget) -> bool
This method tests for `!=`.
### impl Serialize for WeightedTarget
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for WeightedTarget
Auto Trait Implementations
---
### impl RefUnwindSafe for WeightedTarget
### impl Send for WeightedTarget
### impl Sync for WeightedTarget
### impl Unpin for WeightedTarget
### impl UnwindSafe for WeightedTarget
Enum rusoto_appmesh::CreateGatewayRouteError
===
```
pub enum CreateGatewayRouteError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by CreateGatewayRoute
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
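Only some of these variants are worth retrying. A minimal sketch of the backoff advice from the `TooManyRequests` description, matching through `RusotoError::Service`; the same pattern applies to the other operation error enums below (the `retry_delay` helper is hypothetical):
```
use rusoto_appmesh::CreateGatewayRouteError;
use rusoto_core::RusotoError;
use std::time::Duration;

// Hypothetical helper: decide whether (and how long) to wait before retrying.
fn retry_delay(err: &RusotoError<CreateGatewayRouteError>, attempt: u32) -> Option<Duration> {
    match err {
        // Increasing sleep interval, as the TooManyRequests docs recommend.
        RusotoError::Service(CreateGatewayRouteError::TooManyRequests(_)) => {
            Some(Duration::from_millis(100 * 2u64.pow(attempt)))
        }
        // Temporary service failures are also reasonable retry candidates.
        RusotoError::Service(CreateGatewayRouteError::ServiceUnavailable(_)) => {
            Some(Duration::from_secs(1))
        }
        // BadRequest, Forbidden, NotFound, etc. will not succeed on retry.
        _ => None,
    }
}
```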
Implementations
---
### impl CreateGatewayRouteError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateGatewayRouteError>
Trait Implementations
---
### impl Debug for CreateGatewayRouteError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for CreateGatewayRouteError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for CreateGatewayRouteError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred. (Nightly-only experimental API: `backtrace`.)
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<CreateGatewayRouteError> for CreateGatewayRouteError
#### fn eq(&self, other: &CreateGatewayRouteError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateGatewayRouteError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateGatewayRouteError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateGatewayRouteError
### impl Send for CreateGatewayRouteError
### impl Sync for CreateGatewayRouteError
### impl Unpin for CreateGatewayRouteError
### impl UnwindSafe for CreateGatewayRouteError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Enum rusoto_appmesh::CreateMeshError
===
```
pub enum CreateMeshError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by CreateMesh
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl CreateMeshError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateMeshError>
Trait Implementations
---
### impl Debug for CreateMeshError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for CreateMeshError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for CreateMeshError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred. (Nightly-only experimental API: `backtrace`.)
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<CreateMeshError> for CreateMeshError
#### fn eq(&self, other: &CreateMeshError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateMeshError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateMeshError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateMeshError
### impl Send for CreateMeshError
### impl Sync for CreateMeshError
### impl Unpin for CreateMeshError
### impl UnwindSafe for CreateMeshError
Enum rusoto_appmesh::CreateRouteError
===
```
pub enum CreateRouteError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by CreateRoute
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl CreateRouteError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateRouteError>
Trait Implementations
---
### impl Debug for CreateRouteError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for CreateRouteError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for CreateRouteError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred. (Nightly-only experimental API: `backtrace`.)
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<CreateRouteError> for CreateRouteError
#### fn eq(&self, other: &CreateRouteError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateRouteError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateRouteError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateRouteError
### impl Send for CreateRouteError
### impl Sync for CreateRouteError
### impl Unpin for CreateRouteError
### impl UnwindSafe for CreateRouteError
Enum rusoto_appmesh::CreateVirtualGatewayError
===
```
pub enum CreateVirtualGatewayError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by CreateVirtualGateway
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl CreateVirtualGatewayError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateVirtualGatewayError>
Trait Implementations
---
### impl Debug for CreateVirtualGatewayError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for CreateVirtualGatewayError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for CreateVirtualGatewayError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred. (Nightly-only experimental API: `backtrace`.)
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<CreateVirtualGatewayError> for CreateVirtualGatewayError
#### fn eq(&self, other: &CreateVirtualGatewayError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateVirtualGatewayError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateVirtualGatewayError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualGatewayError
### impl Send for CreateVirtualGatewayError
### impl Sync for CreateVirtualGatewayError
### impl Unpin for CreateVirtualGatewayError
### impl UnwindSafe for CreateVirtualGatewayError
Enum rusoto_appmesh::CreateVirtualNodeError
===
```
pub enum CreateVirtualNodeError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by CreateVirtualNode
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl CreateVirtualNodeError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateVirtualNodeError>
Trait Implementations
---
### impl Debug for CreateVirtualNodeError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for CreateVirtualNodeError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for CreateVirtualNodeError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred. (Nightly-only experimental API: `backtrace`.)
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<CreateVirtualNodeError> for CreateVirtualNodeError
#### fn eq(&self, other: &CreateVirtualNodeError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateVirtualNodeError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateVirtualNodeError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualNodeError
### impl Send for CreateVirtualNodeError
### impl Sync for CreateVirtualNodeError
### impl Unpin for CreateVirtualNodeError
### impl UnwindSafe for CreateVirtualNodeError
Enum rusoto_appmesh::CreateVirtualRouterError
===
```
pub enum CreateVirtualRouterError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by CreateVirtualRouter
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl CreateVirtualRouterError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateVirtualRouterError>
Trait Implementations
---
### impl Debug for CreateVirtualRouterError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for CreateVirtualRouterError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for CreateVirtualRouterError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred. (Nightly-only experimental API: `backtrace`.)
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<CreateVirtualRouterError> for CreateVirtualRouterError
#### fn eq(&self, other: &CreateVirtualRouterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateVirtualRouterError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for CreateVirtualRouterError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualRouterError
### impl Send for CreateVirtualRouterError
### impl Sync for CreateVirtualRouterError
### impl Unpin for CreateVirtualRouterError
### impl UnwindSafe for CreateVirtualRouterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::CreateVirtualServiceError
===
```
pub enum CreateVirtualServiceError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by CreateVirtualService
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
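A hedged sketch of reacting to the `Conflict` variant by retrying once with a fresh idempotency token. The `client_token` field on `CreateVirtualServiceInput` is an assumption from the App Mesh API shape; verify it against the generated struct.

```
use rusoto_appmesh::{AppMesh, AppMeshClient, CreateVirtualServiceError, CreateVirtualServiceInput};
use rusoto_core::RusotoError;

async fn create_service_retrying_conflict(
    client: &AppMeshClient,
    mut input: CreateVirtualServiceInput,
    fresh_token: String, // caller-supplied new idempotency token
) -> Result<(), RusotoError<CreateVirtualServiceError>> {
    match client.create_virtual_service(input.clone()).await {
        // Conflict: the token was already used with different specifications,
        // so retry once with a new client token (assumed field name).
        Err(RusotoError::Service(CreateVirtualServiceError::Conflict(_))) => {
            input.client_token = Some(fresh_token);
            client.create_virtual_service(input).await.map(drop)
        }
        other => other.map(drop),
    }
}
```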
Implementations
---
source### impl CreateVirtualServiceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateVirtualServiceError>
Trait Implementations
---
source### impl Debug for CreateVirtualServiceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateVirtualServiceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateVirtualServiceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateVirtualServiceError> for CreateVirtualServiceError
source#### fn eq(&self, other: &CreateVirtualServiceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateVirtualServiceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateVirtualServiceError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateVirtualServiceError
### impl Send for CreateVirtualServiceError
### impl Sync for CreateVirtualServiceError
### impl Unpin for CreateVirtualServiceError
### impl UnwindSafe for CreateVirtualServiceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DeleteGatewayRouteError
===
```
pub enum DeleteGatewayRouteError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ResourceInUse(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DeleteGatewayRoute
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ResourceInUse(String)`
You can't delete the specified resource because it's in use or required by another resource.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
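The variants split naturally into retryable and terminal failures; a small illustrative classifier (not part of the crate):

```
use rusoto_appmesh::DeleteGatewayRouteError;
use rusoto_core::RusotoError;

fn delete_is_retryable(err: &RusotoError<DeleteGatewayRouteError>) -> bool {
    match err {
        // An in-use resource may free up once dependents are removed;
        // throttling and transient service failures also warrant a retry.
        RusotoError::Service(DeleteGatewayRouteError::ResourceInUse(_))
        | RusotoError::Service(DeleteGatewayRouteError::TooManyRequests(_))
        | RusotoError::Service(DeleteGatewayRouteError::ServiceUnavailable(_)) => true,
        // BadRequest, Forbidden, NotFound, InternalServerError: repeating the
        // same request is unlikely to help.
        _ => false,
    }
}
```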
Implementations
---
source### impl DeleteGatewayRouteError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteGatewayRouteError>
Trait Implementations
---
source### impl Debug for DeleteGatewayRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteGatewayRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteGatewayRouteError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteGatewayRouteError> for DeleteGatewayRouteError
source#### fn eq(&self, other: &DeleteGatewayRouteError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteGatewayRouteError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteGatewayRouteError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteGatewayRouteError
### impl Send for DeleteGatewayRouteError
### impl Sync for DeleteGatewayRouteError
### impl Unpin for DeleteGatewayRouteError
### impl UnwindSafe for DeleteGatewayRouteError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DeleteMeshError
===
```
pub enum DeleteMeshError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ResourceInUse(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DeleteMesh
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ResourceInUse(String)`
You can't delete the specified resource because it's in use or required by another resource.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
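Because `NotFound` is a distinct variant, an "ensure deleted" operation can be made idempotent by treating it as success. A sketch assuming `DeleteMeshInput` follows rusoto's usual `Default`-derived shape:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DeleteMeshError, DeleteMeshInput};
use rusoto_core::RusotoError;

async fn ensure_mesh_deleted(
    client: &AppMeshClient,
    mesh_name: String,
) -> Result<(), RusotoError<DeleteMeshError>> {
    let input = DeleteMeshInput { mesh_name, ..Default::default() };
    match client.delete_mesh(input).await {
        Ok(_) => Ok(()),
        // Already gone: nothing left to delete, so report success.
        Err(RusotoError::Service(DeleteMeshError::NotFound(_))) => Ok(()),
        Err(e) => Err(e),
    }
}
```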
Implementations
---
source### impl DeleteMeshError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteMeshError>
Trait Implementations
---
source### impl Debug for DeleteMeshError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteMeshError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteMeshError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteMeshError> for DeleteMeshError
source#### fn eq(&self, other: &DeleteMeshError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteMeshError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteMeshError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteMeshError
### impl Send for DeleteMeshError
### impl Sync for DeleteMeshError
### impl Unpin for DeleteMeshError
### impl UnwindSafe for DeleteMeshError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DeleteRouteError
===
```
pub enum DeleteRouteError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ResourceInUse(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DeleteRoute
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ResourceInUse(String)`
You can't delete the specified resource because it's in use or required by another resource.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl DeleteRouteError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteRouteError>
Trait Implementations
---
source### impl Debug for DeleteRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteRouteError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteRouteError> for DeleteRouteError
source#### fn eq(&self, other: &DeleteRouteError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteRouteError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteRouteError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteRouteError
### impl Send for DeleteRouteError
### impl Sync for DeleteRouteError
### impl Unpin for DeleteRouteError
### impl UnwindSafe for DeleteRouteError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DeleteVirtualGatewayError
===
```
pub enum DeleteVirtualGatewayError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ResourceInUse(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DeleteVirtualGateway
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ResourceInUse(String)`
You can't delete the specified resource because it's in use or required by another resource.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl DeleteVirtualGatewayError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteVirtualGatewayError>
Trait Implementations
---
source### impl Debug for DeleteVirtualGatewayError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteVirtualGatewayError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteVirtualGatewayError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteVirtualGatewayError> for DeleteVirtualGatewayError
source#### fn eq(&self, other: &DeleteVirtualGatewayError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteVirtualGatewayError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteVirtualGatewayError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualGatewayError
### impl Send for DeleteVirtualGatewayError
### impl Sync for DeleteVirtualGatewayError
### impl Unpin for DeleteVirtualGatewayError
### impl UnwindSafe for DeleteVirtualGatewayError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DeleteVirtualNodeError
===
```
pub enum DeleteVirtualNodeError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ResourceInUse(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DeleteVirtualNode
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ResourceInUse(String)`
You can't delete the specified resource because it's in use or required by another resource.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl DeleteVirtualNodeError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteVirtualNodeError>
Trait Implementations
---
source### impl Debug for DeleteVirtualNodeError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteVirtualNodeError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteVirtualNodeError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteVirtualNodeError> for DeleteVirtualNodeError
source#### fn eq(&self, other: &DeleteVirtualNodeError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteVirtualNodeError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteVirtualNodeError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualNodeError
### impl Send for DeleteVirtualNodeError
### impl Sync for DeleteVirtualNodeError
### impl Unpin for DeleteVirtualNodeError
### impl UnwindSafe for DeleteVirtualNodeError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DeleteVirtualRouterError
===
```
pub enum DeleteVirtualRouterError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ResourceInUse(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DeleteVirtualRouter
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ResourceInUse(String)`
You can't delete the specified resource because it's in use or required by another resource.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl DeleteVirtualRouterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteVirtualRouterError>
Trait Implementations
---
source### impl Debug for DeleteVirtualRouterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteVirtualRouterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteVirtualRouterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteVirtualRouterError> for DeleteVirtualRouterError
source#### fn eq(&self, other: &DeleteVirtualRouterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteVirtualRouterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteVirtualRouterError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualRouterError
### impl Send for DeleteVirtualRouterError
### impl Sync for DeleteVirtualRouterError
### impl Unpin for DeleteVirtualRouterError
### impl UnwindSafe for DeleteVirtualRouterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DeleteVirtualServiceError
===
```
pub enum DeleteVirtualServiceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ResourceInUse(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DeleteVirtualService
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ResourceInUse(String)`
You can't delete the specified resource because it's in use or required by another resource.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl DeleteVirtualServiceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteVirtualServiceError>
Trait Implementations
---
source### impl Debug for DeleteVirtualServiceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteVirtualServiceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteVirtualServiceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteVirtualServiceError> for DeleteVirtualServiceError
source#### fn eq(&self, other: &DeleteVirtualServiceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteVirtualServiceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteVirtualServiceError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteVirtualServiceError
### impl Send for DeleteVirtualServiceError
### impl Sync for DeleteVirtualServiceError
### impl Unpin for DeleteVirtualServiceError
### impl UnwindSafe for DeleteVirtualServiceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DescribeGatewayRouteError
===
```
pub enum DescribeGatewayRouteError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DescribeGatewayRoute
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl DescribeGatewayRouteError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeGatewayRouteError>
Trait Implementations
---
source### impl Debug for DescribeGatewayRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeGatewayRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeGatewayRouteError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeGatewayRouteError> for DescribeGatewayRouteError
source#### fn eq(&self, other: &DescribeGatewayRouteError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeGatewayRouteError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeGatewayRouteError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeGatewayRouteError
### impl Send for DescribeGatewayRouteError
### impl Sync for DescribeGatewayRouteError
### impl Unpin for DescribeGatewayRouteError
### impl UnwindSafe for DescribeGatewayRouteError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DescribeMeshError
===
```
pub enum DescribeMeshError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DescribeMesh
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
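A common pattern for describe calls: map `NotFound` to `None` so callers get an `Option` instead of an error. The `DescribeMeshInput` construction assumes rusoto's usual `Default`-derived request shape:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, DescribeMeshError, DescribeMeshInput, DescribeMeshOutput};
use rusoto_core::RusotoError;

async fn try_describe_mesh(
    client: &AppMeshClient,
    mesh_name: String,
) -> Result<Option<DescribeMeshOutput>, RusotoError<DescribeMeshError>> {
    let input = DescribeMeshInput { mesh_name, ..Default::default() };
    match client.describe_mesh(input).await {
        Ok(out) => Ok(Some(out)),
        // Absent meshes become Ok(None) rather than an error.
        Err(RusotoError::Service(DescribeMeshError::NotFound(_))) => Ok(None),
        Err(e) => Err(e),
    }
}
```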
Implementations
---
source### impl DescribeMeshError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeMeshError>
Trait Implementations
---
source### impl Debug for DescribeMeshError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeMeshError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeMeshError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeMeshError> for DescribeMeshError
source#### fn eq(&self, other: &DescribeMeshError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeMeshError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeMeshError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeMeshError
### impl Send for DescribeMeshError
### impl Sync for DescribeMeshError
### impl Unpin for DescribeMeshError
### impl UnwindSafe for DescribeMeshError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DescribeRouteError
===
```
pub enum DescribeRouteError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DescribeRoute
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl DescribeRouteError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeRouteError>
Trait Implementations
---
source### impl Debug for DescribeRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeRouteError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeRouteError> for DescribeRouteError
source#### fn eq(&self, other: &DescribeRouteError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeRouteError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeRouteError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeRouteError
### impl Send for DescribeRouteError
### impl Sync for DescribeRouteError
### impl Unpin for DescribeRouteError
### impl UnwindSafe for DescribeRouteError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_appmesh::DescribeVirtualGatewayError
===
```
pub enum DescribeVirtualGatewayError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DescribeVirtualGateway
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl DescribeVirtualGatewayError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeVirtualGatewayError>
Trait Implementations
---
### impl Debug for DescribeVirtualGatewayError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DescribeVirtualGatewayError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DescribeVirtualGatewayError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<DescribeVirtualGatewayError> for DescribeVirtualGatewayError
#### fn eq(&self, other: &DescribeVirtualGatewayError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualGatewayError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeVirtualGatewayError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualGatewayError
### impl Send for DescribeVirtualGatewayError
### impl Sync for DescribeVirtualGatewayError
### impl Unpin for DescribeVirtualGatewayError
### impl UnwindSafe for DescribeVirtualGatewayError
Enum rusoto_appmesh::DescribeVirtualNodeError
===
```
pub enum DescribeVirtualNodeError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DescribeVirtualNode
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
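Because the type implements `Display` and `std::error::Error` (see the trait implementations below), a wrapped `RusotoError<DescribeVirtualNodeError>` propagates cleanly with `?` into a `Box<dyn Error>` context. A hedged sketch; the output field names are assumptions based on the App Mesh data shapes:

```
// Sketch: rely on the Error impl so `?` converts the rusoto error into
// Box<dyn Error>. The virtual_node.metadata.arn path is an assumption
// from the App Mesh data shapes, not verified here.
use rusoto_appmesh::{AppMesh, AppMeshClient, DescribeVirtualNodeInput};
use std::error::Error;

async fn virtual_node_arn(
    client: &AppMeshClient,
    input: DescribeVirtualNodeInput,
) -> Result<String, Box<dyn Error>> {
    let out = client.describe_virtual_node(input).await?;
    Ok(out.virtual_node.metadata.arn)
}
```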
Implementations
---
### impl DescribeVirtualNodeError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeVirtualNodeError>
Trait Implementations
---
### impl Debug for DescribeVirtualNodeError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DescribeVirtualNodeError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DescribeVirtualNodeError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<DescribeVirtualNodeError> for DescribeVirtualNodeError
#### fn eq(&self, other: &DescribeVirtualNodeError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualNodeError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeVirtualNodeError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualNodeError
### impl Send for DescribeVirtualNodeError
### impl Sync for DescribeVirtualNodeError
### impl Unpin for DescribeVirtualNodeError
### impl UnwindSafe for DescribeVirtualNodeError
Enum rusoto_appmesh::DescribeVirtualRouterError
===
```
pub enum DescribeVirtualRouterError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DescribeVirtualRouter
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl DescribeVirtualRouterError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeVirtualRouterError>
Trait Implementations
---
### impl Debug for DescribeVirtualRouterError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DescribeVirtualRouterError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DescribeVirtualRouterError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<DescribeVirtualRouterError> for DescribeVirtualRouterError
#### fn eq(&self, other: &DescribeVirtualRouterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualRouterError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeVirtualRouterError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualRouterError
### impl Send for DescribeVirtualRouterError
### impl Sync for DescribeVirtualRouterError
### impl Unpin for DescribeVirtualRouterError
### impl UnwindSafe for DescribeVirtualRouterError
Enum rusoto_appmesh::DescribeVirtualServiceError
===
```
pub enum DescribeVirtualServiceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by DescribeVirtualService
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl DescribeVirtualServiceError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeVirtualServiceError>
Trait Implementations
---
### impl Debug for DescribeVirtualServiceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DescribeVirtualServiceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DescribeVirtualServiceError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<DescribeVirtualServiceError> for DescribeVirtualServiceError
#### fn eq(&self, other: &DescribeVirtualServiceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeVirtualServiceError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeVirtualServiceError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeVirtualServiceError
### impl Send for DescribeVirtualServiceError
### impl Sync for DescribeVirtualServiceError
### impl Unpin for DescribeVirtualServiceError
### impl UnwindSafe for DescribeVirtualServiceError
Enum rusoto_appmesh::ListGatewayRoutesError
===
```
pub enum ListGatewayRoutesError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by ListGatewayRoutes
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl ListGatewayRoutesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListGatewayRoutesError>
Trait Implementations
---
### impl Debug for ListGatewayRoutesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListGatewayRoutesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListGatewayRoutesError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<ListGatewayRoutesError> for ListGatewayRoutesError
#### fn eq(&self, other: &ListGatewayRoutesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListGatewayRoutesError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListGatewayRoutesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListGatewayRoutesError
### impl Send for ListGatewayRoutesError
### impl Sync for ListGatewayRoutesError
### impl Unpin for ListGatewayRoutesError
### impl UnwindSafe for ListGatewayRoutesError
Enum rusoto_appmesh::ListMeshesError
===
```
pub enum ListMeshesError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by ListMeshes
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
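The `TooManyRequests` guidance above (an increasing or variable sleep between requests) translates naturally into a retry loop. A minimal sketch; `tokio` as the sleep source is an assumption, not part of this crate:

```
// Sketch of the "increasing sleep interval" advice for TooManyRequests.
// Assumes tokio as the async runtime; doubles the delay on each throttle.
use rusoto_appmesh::{AppMesh, AppMeshClient, ListMeshesError, ListMeshesInput, ListMeshesOutput};
use rusoto_core::RusotoError;
use std::time::Duration;

async fn list_meshes_backoff(
    client: &AppMeshClient,
) -> Result<ListMeshesOutput, RusotoError<ListMeshesError>> {
    let mut delay = Duration::from_millis(200);
    loop {
        match client.list_meshes(ListMeshesInput::default()).await {
            Err(RusotoError::Service(ListMeshesError::TooManyRequests(_))) => {
                tokio::time::sleep(delay).await;
                delay *= 2; // increasing sleep interval, per the variant docs
            }
            other => return other, // success or any non-throttle error
        }
    }
}
```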
Implementations
---
### impl ListMeshesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListMeshesError>
Trait Implementations
---
### impl Debug for ListMeshesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListMeshesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListMeshesError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<ListMeshesError> for ListMeshesError
#### fn eq(&self, other: &ListMeshesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListMeshesError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListMeshesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListMeshesError
### impl Send for ListMeshesError
### impl Sync for ListMeshesError
### impl Unpin for ListMeshesError
### impl UnwindSafe for ListMeshesError
Enum rusoto_appmesh::ListRoutesError
===
```
pub enum ListRoutesError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by ListRoutes
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl ListRoutesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListRoutesError>
Trait Implementations
---
### impl Debug for ListRoutesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListRoutesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListRoutesError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<ListRoutesError> for ListRoutesError
#### fn eq(&self, other: &ListRoutesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListRoutesError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListRoutesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListRoutesError
### impl Send for ListRoutesError
### impl Sync for ListRoutesError
### impl Unpin for ListRoutesError
### impl UnwindSafe for ListRoutesError
Enum rusoto_appmesh::ListTagsForResourceError
===
```
pub enum ListTagsForResourceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by ListTagsForResource
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl ListTagsForResourceError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListTagsForResourceError>
Trait Implementations
---
### impl Debug for ListTagsForResourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListTagsForResourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListTagsForResourceError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<ListTagsForResourceError> for ListTagsForResourceError
#### fn eq(&self, other: &ListTagsForResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListTagsForResourceError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListTagsForResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceError
### impl Send for ListTagsForResourceError
### impl Sync for ListTagsForResourceError
### impl Unpin for ListTagsForResourceError
### impl UnwindSafe for ListTagsForResourceError
Enum rusoto_appmesh::ListVirtualGatewaysError
===
```
pub enum ListVirtualGatewaysError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by ListVirtualGateways
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl ListVirtualGatewaysError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListVirtualGatewaysError>
Trait Implementations
---
### impl Debug for ListVirtualGatewaysError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListVirtualGatewaysError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListVirtualGatewaysError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<ListVirtualGatewaysError> for ListVirtualGatewaysError
#### fn eq(&self, other: &ListVirtualGatewaysError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualGatewaysError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListVirtualGatewaysError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualGatewaysError
### impl Send for ListVirtualGatewaysError
### impl Sync for ListVirtualGatewaysError
### impl Unpin for ListVirtualGatewaysError
### impl UnwindSafe for ListVirtualGatewaysError
Enum rusoto_appmesh::ListVirtualNodesError
===
```
pub enum ListVirtualNodesError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by ListVirtualNodes
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl ListVirtualNodesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListVirtualNodesError>
Trait Implementations
---
### impl Debug for ListVirtualNodesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListVirtualNodesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListVirtualNodesError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<ListVirtualNodesError> for ListVirtualNodesError
#### fn eq(&self, other: &ListVirtualNodesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualNodesError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListVirtualNodesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualNodesError
### impl Send for ListVirtualNodesError
### impl Sync for ListVirtualNodesError
### impl Unpin for ListVirtualNodesError
### impl UnwindSafe for ListVirtualNodesError
Enum rusoto_appmesh::ListVirtualRoutersError
===
```
pub enum ListVirtualRoutersError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by ListVirtualRouters
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
### impl ListVirtualRoutersError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListVirtualRoutersError>
Trait Implementations
---
### impl Debug for ListVirtualRoutersError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListVirtualRoutersError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListVirtualRoutersError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.
### impl PartialEq<ListVirtualRoutersError> for ListVirtualRoutersError
#### fn eq(&self, other: &ListVirtualRoutersError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListVirtualRoutersError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListVirtualRoutersError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualRoutersError
### impl Send for ListVirtualRoutersError
### impl Sync for ListVirtualRoutersError
### impl Unpin for ListVirtualRoutersError
### impl UnwindSafe for ListVirtualRoutersError
Enum rusoto_appmesh::ListVirtualServicesError
===
```
pub enum ListVirtualServicesError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by ListVirtualServices
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
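The `TooManyRequests` variant above calls for an increasing sleep interval between retries. Below is a minimal sketch of that pattern with the `rusoto_appmesh` client; the region, mesh name, and backoff constants are illustrative assumptions, as is the `Default` derive on the generated input struct, and a Tokio runtime is assumed:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, ListVirtualServicesError, ListVirtualServicesInput};
use rusoto_core::{Region, RusotoError};
use std::time::Duration;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    // Start small and double the delay on every throttling response.
    let mut delay = Duration::from_millis(200);
    loop {
        let input = ListVirtualServicesInput {
            mesh_name: "my-mesh".to_owned(), // hypothetical mesh name
            ..Default::default()
        };
        match client.list_virtual_services(input).await {
            Ok(_output) => {
                println!("listed virtual services");
                break;
            }
            // Throttled: sleep with an increasing interval, as advised above.
            Err(RusotoError::Service(ListVirtualServicesError::TooManyRequests(msg))) => {
                eprintln!("throttled: {}; retrying in {:?}", msg, delay);
                tokio::time::sleep(delay).await;
                delay *= 2;
            }
            Err(err) => {
                eprintln!("giving up: {}", err);
                break;
            }
        }
    }
}
```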
Implementations
---
source### impl ListVirtualServicesError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListVirtualServicesError>
Trait Implementations
---
source### impl Debug for ListVirtualServicesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListVirtualServicesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListVirtualServicesError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListVirtualServicesError> for ListVirtualServicesError
source#### fn eq(&self, other: &ListVirtualServicesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListVirtualServicesError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListVirtualServicesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListVirtualServicesError
### impl Send for ListVirtualServicesError
### impl Sync for ListVirtualServicesError
### impl Unpin for ListVirtualServicesError
### impl UnwindSafe for ListVirtualServicesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::TagResourceError
===
```
pub enum TagResourceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
TooManyTags(String),
}
```
Errors returned by TagResource
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
### `TooManyTags(String)`
The request exceeds the maximum allowed number of tags allowed per resource. The current limit is 50 user tags per resource. You must reduce the number of tags in the request. None of the tags in this request were applied.
Implementations
---
source### impl TagResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<TagResourceError>
Trait Implementations
---
source### impl Debug for TagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for TagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for TagResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<TagResourceError> for TagResourceError
source#### fn eq(&self, other: &TagResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TagResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for TagResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for TagResourceError
### impl Send for TagResourceError
### impl Sync for TagResourceError
### impl Unpin for TagResourceError
### impl UnwindSafe for TagResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::UntagResourceError
===
```
pub enum UntagResourceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by UntagResource
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl UntagResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UntagResourceError>
Trait Implementations
---
source### impl Debug for UntagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UntagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UntagResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UntagResourceError> for UntagResourceError
source#### fn eq(&self, other: &UntagResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UntagResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UntagResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for UntagResourceError
### impl Send for UntagResourceError
### impl Sync for UntagResourceError
### impl Unpin for UntagResourceError
### impl UnwindSafe for UntagResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::UpdateGatewayRouteError
===
```
pub enum UpdateGatewayRouteError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by UpdateGatewayRoute
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl UpdateGatewayRouteError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateGatewayRouteError>
Trait Implementations
---
source### impl Debug for UpdateGatewayRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateGatewayRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateGatewayRouteError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateGatewayRouteError> for UpdateGatewayRouteError
source#### fn eq(&self, other: &UpdateGatewayRouteError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateGatewayRouteError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateGatewayRouteError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateGatewayRouteError
### impl Send for UpdateGatewayRouteError
### impl Sync for UpdateGatewayRouteError
### impl Unpin for UpdateGatewayRouteError
### impl UnwindSafe for UpdateGatewayRouteError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::UpdateMeshError
===
```
pub enum UpdateMeshError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by UpdateMesh
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
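The `Conflict` variant above is tied to the idempotency client token, and the documented remedy is to retry with a fresh token. Below is a hedged sketch of that retry: the mesh name is a placeholder, the `uuid` crate is an assumed extra dependency, and the input struct's fields and `Default`/`Clone` derives are assumptions based on rusoto's generated shapes:

```
use rusoto_appmesh::{AppMesh, AppMeshClient, UpdateMeshError, UpdateMeshInput};
use rusoto_core::{Region, RusotoError};
use uuid::Uuid;

#[tokio::main]
async fn main() {
    let client = AppMeshClient::new(Region::UsEast1);
    let mut input = UpdateMeshInput {
        mesh_name: "my-mesh".to_owned(), // hypothetical mesh name
        client_token: Some(Uuid::new_v4().to_string()),
        ..Default::default()
    };
    for _attempt in 0..2 {
        match client.update_mesh(input.clone()).await {
            Ok(_) => {
                println!("mesh updated");
                break;
            }
            // A stale idempotency token: retry once with a fresh token.
            Err(RusotoError::Service(UpdateMeshError::Conflict(msg))) => {
                eprintln!("conflict: {}; retrying with a new client token", msg);
                input.client_token = Some(Uuid::new_v4().to_string());
            }
            Err(err) => {
                eprintln!("update failed: {}", err);
                break;
            }
        }
    }
}
```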
Implementations
---
source### impl UpdateMeshError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateMeshError>
Trait Implementations
---
source### impl Debug for UpdateMeshError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateMeshError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateMeshError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateMeshError> for UpdateMeshError
source#### fn eq(&self, other: &UpdateMeshError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateMeshError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateMeshError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateMeshError
### impl Send for UpdateMeshError
### impl Sync for UpdateMeshError
### impl Unpin for UpdateMeshError
### impl UnwindSafe for UpdateMeshError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::UpdateRouteError
===
```
pub enum UpdateRouteError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by UpdateRoute
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl UpdateRouteError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateRouteError>
Trait Implementations
---
source### impl Debug for UpdateRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateRouteError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateRouteError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateRouteError> for UpdateRouteError
source#### fn eq(&self, other: &UpdateRouteError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateRouteError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateRouteError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateRouteError
### impl Send for UpdateRouteError
### impl Sync for UpdateRouteError
### impl Unpin for UpdateRouteError
### impl UnwindSafe for UpdateRouteError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::UpdateVirtualGatewayError
===
```
pub enum UpdateVirtualGatewayError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by UpdateVirtualGateway
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl UpdateVirtualGatewayError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateVirtualGatewayError>
Trait Implementations
---
source### impl Debug for UpdateVirtualGatewayError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateVirtualGatewayError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateVirtualGatewayError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateVirtualGatewayError> for UpdateVirtualGatewayError
source#### fn eq(&self, other: &UpdateVirtualGatewayError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateVirtualGatewayError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateVirtualGatewayError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualGatewayError
### impl Send for UpdateVirtualGatewayError
### impl Sync for UpdateVirtualGatewayError
### impl Unpin for UpdateVirtualGatewayError
### impl UnwindSafe for UpdateVirtualGatewayError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::UpdateVirtualNodeError
===
```
pub enum UpdateVirtualNodeError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by UpdateVirtualNode
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl UpdateVirtualNodeError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateVirtualNodeError>
Trait Implementations
---
source### impl Debug for UpdateVirtualNodeError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateVirtualNodeError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateVirtualNodeError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateVirtualNodeError> for UpdateVirtualNodeError
source#### fn eq(&self, other: &UpdateVirtualNodeError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateVirtualNodeError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateVirtualNodeError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualNodeError
### impl Send for UpdateVirtualNodeError
### impl Sync for UpdateVirtualNodeError
### impl Unpin for UpdateVirtualNodeError
### impl UnwindSafe for UpdateVirtualNodeError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::UpdateVirtualRouterError
===
```
pub enum UpdateVirtualRouterError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by UpdateVirtualRouter
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl UpdateVirtualRouterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateVirtualRouterError>
Trait Implementations
---
source### impl Debug for UpdateVirtualRouterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateVirtualRouterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateVirtualRouterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateVirtualRouterError> for UpdateVirtualRouterError
source#### fn eq(&self, other: &UpdateVirtualRouterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateVirtualRouterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateVirtualRouterError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualRouterError
### impl Send for UpdateVirtualRouterError
### impl Sync for UpdateVirtualRouterError
### impl Unpin for UpdateVirtualRouterError
### impl UnwindSafe for UpdateVirtualRouterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_appmesh::UpdateVirtualServiceError
===
```
pub enum UpdateVirtualServiceError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
LimitExceeded(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
}
```
Errors returned by UpdateVirtualService
Variants
---
### `BadRequest(String)`
The request syntax was malformed. Check your request syntax and try again.
### `Conflict(String)`
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
### `Forbidden(String)`
You don't have permissions to perform this action.
### `InternalServerError(String)`
The request processing has failed because of an unknown error, exception, or failure.
### `LimitExceeded(String)`
You have exceeded a service limit for your account. For more information, see Service Limits in the *AWS App Mesh User Guide*.
### `NotFound(String)`
The specified resource doesn't exist. Check your request syntax and try again.
### `ServiceUnavailable(String)`
The request has failed due to a temporary failure of the service.
### `TooManyRequests(String)`
The maximum request rate permitted by the App Mesh APIs has been exceeded for your account. For best results, use an increasing or variable sleep interval between requests.
Implementations
---
source### impl UpdateVirtualServiceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateVirtualServiceError>
Trait Implementations
---
source### impl Debug for UpdateVirtualServiceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateVirtualServiceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateVirtualServiceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateVirtualServiceError> for UpdateVirtualServiceError
source#### fn eq(&self, other: &UpdateVirtualServiceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateVirtualServiceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateVirtualServiceError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateVirtualServiceError
### impl Send for UpdateVirtualServiceError
### impl Sync for UpdateVirtualServiceError
### impl Unpin for UpdateVirtualServiceError
### impl UnwindSafe for UpdateVirtualServiceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
postgres-types | rust | Rust | Crate postgres_types
===
Conversions to and from Postgres types.
This crate is used by the `tokio-postgres` and `postgres` crates. You normally don’t need to depend directly on it unless you want to define your own `ToSql` or `FromSql` definitions.
Derive
---
If the `derive` cargo feature is enabled, you can derive `ToSql` and `FromSql` implementations for custom Postgres types. Explicitly, modify your `Cargo.toml` file to include the following:
```
[dependencies]
postgres-types = { version = "0.X.X", features = ["derive"] }
```
### Enums
Postgres enums correspond to C-like enums in Rust:
```
CREATE TYPE "Mood" AS ENUM (
'Sad',
'Ok',
'Happy'
);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
enum Mood {
    Sad,
    Ok,
    Happy,
}
```
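As a usage sketch, the derived pair can round-trip through a parameterized query. This assumes the `tokio-postgres` crate, an already connected `Client`, and that the `"Mood"` type created above exists in the target database (the enum is repeated here so the snippet is self-contained):

```
use postgres_types::{FromSql, ToSql};
use tokio_postgres::{Client, Error};

#[derive(Debug, PartialEq, ToSql, FromSql)]
enum Mood {
    Sad,
    Ok,
    Happy,
}

// Send the value as a typed parameter and read it back as the same type.
async fn round_trip(client: &Client) -> Result<(), Error> {
    let row = client
        .query_one("SELECT $1::\"Mood\"", &[&Mood::Happy])
        .await?;
    let mood: Mood = row.get(0);
    assert_eq!(mood, Mood::Happy);
    Ok(())
}
```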
### Domains
Postgres domains correspond to tuple structs with one member in Rust:
```
CREATE DOMAIN "SessionId" AS BYTEA CHECK(octet_length(VALUE) = 16);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
struct SessionId(Vec<u8>);
```
### Newtypes
The `#[postgres(transparent)]` attribute can be used on a single-field tuple struct to create a Rust-only wrapper type that will use the `ToSql` & `FromSql` implementation of the inner value:
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
#[postgres(transparent)]
struct UserId(i32);
```
### Composites
Postgres composite types correspond to structs in Rust:
```
CREATE TYPE "InventoryItem" AS (
name TEXT,
supplier_id INT,
price DOUBLE PRECISION
);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
struct InventoryItem {
    name: String,
    supplier_id: i32,
    price: Option<f64>,
}
```
### Naming
The derived implementations will enforce exact matches of type, field, and variant names between the Rust and Postgres types. The `#[postgres(name = "...")]` attribute can be used to adjust the name on a type, variant, or field:
```
CREATE TYPE mood AS ENUM (
    'sad',
    'ok',
    'happy'
);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
#[postgres(name = "mood")]
enum Mood {
    #[postgres(name = "sad")]
    Sad,
    #[postgres(name = "ok")]
    Ok,
    #[postgres(name = "happy")]
    Happy,
}
```
Alternatively, the `#[postgres(rename_all = "...")]` attribute can be used to rename all fields or variants with the chosen casing convention. This will not affect the struct or enum’s type name. Note that
`#[postgres(name = "...")]` takes precendence when used in conjunction with `#[postgres(rename_all = "...")]`:
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
#[postgres(name = "mood", rename_all = "snake_case")]
enum Mood {
    #[postgres(name = "ok")]
    Ok,        // ok
    VeryHappy, // very_happy
}
```
The following case conventions are supported:
* `"lowercase"`
* `"UPPERCASE"`
* `"PascalCase"`
* `"camelCase"`
* `"snake_case"`
* `"SCREAMING_SNAKE_CASE"`
* `"kebab-case"`
* `"SCREAMING-KEBAB-CASE"`
* `"Train-Case"`
### Allowing Enum Mismatches
By default the generated implementation of `ToSql` & `FromSql` for enums will require an exact match of the enum variants between the Rust and Postgres types.
To allow mismatches, the `#[postgres(allow_mismatch)]` attribute can be used on the enum definition:
```
CREATE TYPE mood AS ENUM (
    'Sad',
    'Ok',
    'Happy'
);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
#[postgres(allow_mismatch)]
enum Mood {
    Happy,
    Meh,
}
```
Macros
---
* `accepts`: Generates a simple implementation of `ToSql::accepts` which accepts the types passed to it.
* `to_sql_checked`: Generates an implementation of `ToSql::to_sql_checked`.
Structs
---
* `Field`: Information about a field of a composite type.
* `PgLsn`: Postgres `PG_LSN` type.
* `Type`: A Postgres type.
* `WasNull`: An error indicating that a `NULL` Postgres value was passed to a `FromSql` implementation that does not support `NULL` values.
* `WrongType`: An error indicating that a conversion was attempted between incompatible Rust and Postgres types.
Enums
---
* `Date`: A wrapper that can be used to represent infinity with `Type::Date` types.
* `Format`: Supported Postgres message format types.
* `IsNull`: An enum representing the nullability of a Postgres value.
* `Kind`: Represents the kind of a Postgres type.
* `Timestamp`: A wrapper that can be used to represent infinity with `Type::Timestamp` and `Type::Timestamptz` types.
Traits
---
* `BorrowToSql`: A trait used by clients to abstract over `&dyn ToSql` and `T: ToSql`.
* `FromSql`: A trait for types that can be created from a Postgres value.
* `FromSqlOwned`: A trait for types which can be created from a Postgres value without borrowing any data.
* `ToSql`: A trait for types that can be converted into Postgres values.
Type Definitions
---
* `Oid`: A Postgres OID.
Crate postgres_types
===
Conversions to and from Postgres types.
This crate is used by the `tokio-postgres` and `postgres` crates. You normally don’t need to depend directly on it unless you want to define your own `ToSql` or `FromSql` definitions.
Derive
---
If the `derive` cargo feature is enabled, you can derive `ToSql` and `FromSql` implementations for custom Postgres types. Explicitly, modify your `Cargo.toml` file to include the following:
```
[dependencies]
postgres-types = { version = "0.X.X", features = ["derive"] }
```
### Enums
Postgres enums correspond to C-like enums in Rust:
```
CREATE TYPE "Mood" AS ENUM (
'Sad',
'Ok',
'Happy'
);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
enum Mood {
Sad,
Ok,
Happy,
}
```
### Domains
Postgres domains correspond to tuple structs with one member in Rust:
```
CREATE DOMAIN "SessionId" AS BYTEA CHECK(octet_length(VALUE) = 16);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
struct SessionId(Vec<u8>);
```
### Newtypes
The `#[postgres(transparent)]` attribute can be used on a single-field tuple struct to create a Rust-only wrapper type that will use the `ToSql` & `FromSql` implementation of the inner value:
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
#[postgres(transparent)]
struct UserId(i32);
```
### Composites
Postgres composite types correspond to structs in Rust:
```
CREATE TYPE "InventoryItem" AS (
name TEXT,
supplier_id INT,
price DOUBLE PRECISION
);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
struct InventoryItem {
name: String,
supplier_id: i32,
price: Option<f64>,
}
```
### Naming
The derived implementations will enforce exact matches of type, field, and variant names between the Rust and Postgres types. The `#[postgres(name = "...")]` attribute can be used to adjust the name on a type, variant, or field:
```
CREATE TYPE mood AS ENUM (
'sad',
'ok',
'happy'
);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
#[postgres(name = "mood")]
enum Mood {
#[postgres(name = "sad")]
Sad,
#[postgres(name = "ok")]
Ok,
#[postgres(name = "happy")]
Happy,
}
```
Alternatively, the `#[postgres(rename_all = "...")]` attribute can be used to rename all fields or variants with the chosen casing convention. This will not affect the struct or enum's type name. Note that
`#[postgres(name = "...")]` takes precedence when used in conjunction with `#[postgres(rename_all = "...")]`:
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
#[postgres(name = "mood", rename_all = "snake_case")]
enum Mood {
#[postgres(name = "ok")]
Ok, // ok
VeryHappy, // very_happy
}
```
The following case conventions are supported:
* `"lowercase"`
* `"UPPERCASE"`
* `"PascalCase"`
* `"camelCase"`
* `"snake_case"`
* `"SCREAMING_SNAKE_CASE"`
* `"kebab-case"`
* `"SCREAMING-KEBAB-CASE"`
* `"Train-Case"`
### Allowing Enum Mismatches
By default the generated implementation of `ToSql` & `FromSql` for enums will require an exact match of the enum variants between the Rust and Postgres types.
To allow mismatches, the `#[postgres(allow_mismatch)]` attribute can be used on the enum definition:
```
CREATE TYPE mood AS ENUM (
'Sad',
'Ok',
'Happy'
);
```
```
use postgres_types::{ToSql, FromSql};
#[derive(Debug, ToSql, FromSql)]
#[postgres(allow_mismatch)]
enum Mood {
Happy,
Meh,
}
```
Macros
---
* `accepts`: Generates a simple implementation of `ToSql::accepts` which accepts the types passed to it.
* `to_sql_checked`: Generates an implementation of `ToSql::to_sql_checked`.
Structs
---
* `Field`: Information about a field of a composite type.
* `PgLsn`: Postgres `PG_LSN` type.
* `Type`: A Postgres type.
* `WasNull`: An error indicating that a `NULL` Postgres value was passed to a `FromSql` implementation that does not support `NULL` values.
* `WrongType`: An error indicating that a conversion was attempted between incompatible Rust and Postgres types.
Enums
---
* `Date`: A wrapper that can be used to represent infinity with `Type::Date` types.
* `Format`: Supported Postgres message format types.
* `IsNull`: An enum representing the nullability of a Postgres value.
* `Kind`: Represents the kind of a Postgres type.
* `Timestamp`: A wrapper that can be used to represent infinity with `Type::Timestamp` and `Type::Timestamptz` types.
Traits
---
* `BorrowToSql`: A trait used by clients to abstract over `&dyn ToSql` and `T: ToSql`.
* `FromSql`: A trait for types that can be created from a Postgres value.
* `FromSqlOwned`: A trait for types which can be created from a Postgres value without borrowing any data.
* `ToSql`: A trait for types that can be converted into Postgres values.
Type Definitions
---
* `Oid`: A Postgres OID.
Trait postgres_types::ToSql
===
```
pub trait ToSql: Debug {
// Required methods
fn to_sql(
&self,
ty: &Type,
out: &mut BytesMut
) -> Result<IsNull, Box<dyn Error + Sync + Send>>
where Self: Sized;
fn accepts(ty: &Type) -> bool
where Self: Sized;
fn to_sql_checked(
&self,
ty: &Type,
out: &mut BytesMut
) -> Result<IsNull, Box<dyn Error + Sync + Send>>;
// Provided method
fn encode_format(&self, _ty: &Type) -> Format { ... }
}
```
A trait for types that can be converted into Postgres values.
Types
---
The following implementations are provided by this crate, along with the corresponding Postgres types:
| Rust type | Postgres type(s) |
| --- | --- |
| `bool` | BOOL |
| `i8` | “char” |
| `i16` | SMALLINT, SMALLSERIAL |
| `i32` | INT, SERIAL |
| `u32` | OID |
| `i64` | BIGINT, BIGSERIAL |
| `f32` | REAL |
| `f64` | DOUBLE PRECISION |
| `&str`/`String` | VARCHAR, CHAR(n), TEXT, CITEXT, NAME, LTREE, LQUERY, LTXTQUERY |
| `&[u8]`/`Vec<u8>`/`[u8; N]` | BYTEA |
| `HashMap<String, Option<String>>` | HSTORE |
| `SystemTime` | TIMESTAMP, TIMESTAMP WITH TIME ZONE |
| `IpAddr` | INET |
In addition, some implementations are provided for types in third party crates. These are disabled by default; to opt into one of these implementations, activate the Cargo feature corresponding to the crate’s name prefixed by `with-`. For example, the `with-serde_json-1` feature enables the implementation for the `serde_json::Value` type.
| Rust type | Postgres type(s) |
| --- | --- |
| `chrono::NaiveDateTime` | TIMESTAMP |
| `chrono::DateTime<Utc>` | TIMESTAMP WITH TIME ZONE |
| `chrono::DateTime<Local>` | TIMESTAMP WITH TIME ZONE |
| `chrono::DateTime<FixedOffset>` | TIMESTAMP WITH TIME ZONE |
| `chrono::NaiveDate` | DATE |
| `chrono::NaiveTime` | TIME |
| `time::PrimitiveDateTime` | TIMESTAMP |
| `time::OffsetDateTime` | TIMESTAMP WITH TIME ZONE |
| `time::Date` | DATE |
| `time::Time` | TIME |
| `eui48::MacAddress` | MACADDR |
| `geo_types::Point<f64>` | POINT |
| `geo_types::Rect<f64>` | BOX |
| `geo_types::LineString<f64>` | PATH |
| `serde_json::Value` | JSON, JSONB |
| `uuid::Uuid` | UUID |
| `bit_vec::BitVec` | BIT, VARBIT |
| `eui48::MacAddress` | MACADDR |
Nullability
---
In addition to the types listed above, `ToSql` is implemented for
`Option<T>` where `T` implements `ToSql`. An `Option<T>` represents a nullable Postgres value.
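For illustration, a minimal sketch of the nullable encoding (the value and target type here are arbitrary; `BytesMut` comes from the `bytes` crate):
```
use bytes::BytesMut;
use postgres_types::{IsNull, ToSql, Type};

fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send>> {
    let mut buf = BytesMut::new();
    // `None` encodes as SQL NULL: `IsNull::Yes` comes back and nothing
    // is written to the buffer.
    let absent: Option<i32> = None;
    assert!(matches!(absent.to_sql(&Type::INT4, &mut buf)?, IsNull::Yes));
    assert!(buf.is_empty());
    Ok(())
}
```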
Arrays
---
`ToSql` is implemented for `[u8; N]`, `Vec<T>`, `&[T]`, `Box<[T]>` and `[T; N]`
where `T` implements `ToSql` and `N` is const usize, and corresponds to one-dimensional Postgres arrays with an index offset of 1.
**Note:** the impls for fixed-size arrays (`[T; N]`) only exist when the Cargo feature `array-impls` is enabled.
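A minimal sketch of the array mapping (values are illustrative; `Vec<T>` is not feature-gated):
```
use bytes::BytesMut;
use postgres_types::{ToSql, Type};

fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send>> {
    // A Vec<i32> targets the element type's array type, INT4[].
    assert!(<Vec<i32> as ToSql>::accepts(&Type::INT4_ARRAY));
    let mut buf = BytesMut::new();
    vec![1i32, 2, 3].to_sql(&Type::INT4_ARRAY, &mut buf)?;
    assert!(!buf.is_empty());
    Ok(())
}
```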
Required Methods
---
#### fn to_sql(
&self,
ty: &Type,
out: &mut BytesMut
) -> Result<IsNull, Box<dyn Error + Sync + Send>> where
Self: Sized,
Converts the value of `self` into the binary format of the specified Postgres `Type`, appending it to `out`.
The caller of this method is responsible for ensuring that this type is compatible with the Postgres `Type`.
The return value indicates if this value should be represented as
`NULL`. If this is the case, implementations **must not** write anything to `out`.
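As a sketch of that contract (the value and expected bytes are illustrative; INT4 uses a 4-byte big-endian binary encoding):
```
use bytes::BytesMut;
use postgres_types::{IsNull, ToSql, Type};

fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send>> {
    let mut buf = BytesMut::new();
    let flag = 7i32.to_sql(&Type::INT4, &mut buf)?;
    // A non-NULL value reports IsNull::No and appends its binary form.
    assert!(matches!(flag, IsNull::No));
    assert_eq!(&buf[..], &7i32.to_be_bytes()[..]);
    Ok(())
}
```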
#### fn accepts(ty: &Type) -> bool where
Self: Sized,
Determines if a value of this type can be converted to the specified Postgres `Type`.
#### fn to_sql_checked(
&self,
ty: &Type,
out: &mut BytesMut
) -> Result<IsNull, Box<dyn Error + Sync + Send>>
An adaptor method used internally by Rust-Postgres.
*All* implementations of this method should be generated by the
`to_sql_checked!()` macro.
Provided Methods
---
#### fn encode_format(&self, _ty: &Type) -> Format
Specify the encode format
Trait Implementations
---
### impl BorrowToSql for &(dyn ToSql + Sync)
In async contexts it is sometimes necessary to have the additional Sync requirement on parameters for queries since this enables the resulting Futures to be Send, hence usable in, e.g., tokio::spawn.
This instance is provided for those cases.
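A minimal sketch of building such a parameter list (the values are illustrative; async clients typically take `&[&(dyn ToSql + Sync)]`):
```
use postgres_types::ToSql;

fn main() {
    let id: i32 = 42;
    let name = "alice";
    // The extra `Sync` bound keeps futures that borrow these params `Send`.
    let params: Vec<&(dyn ToSql + Sync)> = vec![&id, &name];
    assert_eq!(params.len(), 2);
}
```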
#### fn borrow_to_sql(&self) -> &dyn ToSql
Returns a reference to `self` as a `ToSql` trait object.
### impl BorrowToSql for &dyn ToSql
### impl<'a> BorrowToSql for Box<dyn ToSql + Sync + 'a>
### impl<'a> BorrowToSql for Box<dyn ToSql + Sync + Send + 'a>
Each of these implementations likewise returns `self` as a `ToSql` trait object.
Implementations on Foreign Types
---
Each implementation below provides `to_sql`, `accepts`, and `to_sql_checked` with the signatures shown above; `&'a T` and `Option<T>` additionally forward `encode_format`.
### impl<'a, T> ToSql for &'a T where T: ToSql
### impl ToSql for bool
### impl ToSql for i8
### impl ToSql for i16
### impl ToSql for i32
### impl ToSql for u32
### impl ToSql for i64
### impl ToSql for f32
### impl ToSql for f64
### impl<'a> ToSql for &'a str
### impl ToSql for String
### impl ToSql for Box<str>
### impl<'a> ToSql for Cow<'a, str>
### impl<'a> ToSql for &'a [u8]
### impl ToSql for Vec<u8>
### impl<'a> ToSql for Cow<'a, [u8]>
### impl<'a, T: ToSql> ToSql for &'a [T]
### impl<T: ToSql> ToSql for Vec<T>
### impl<T: ToSql> ToSql for Box<[T]>
### impl<T: ToSql> ToSql for Option<T>
### impl<H> ToSql for HashMap<String, Option<String>, H> where H: BuildHasher
### impl ToSql for IpAddr
### impl ToSql for SystemTime
Implementors
---
### impl ToSql for PgLsn
### impl<T: ToSql> ToSql for Date<T>
### impl<T: ToSql> ToSql for Timestamp<T>
Trait postgres_types::FromSql
===
```
pub trait FromSql<'a>: Sized {
// Required methods
fn from_sql(
ty: &Type,
raw: &'a [u8]
) -> Result<Self, Box<dyn Error + Sync + Send>>;
fn accepts(ty: &Type) -> bool;
// Provided methods
fn from_sql_null(ty: &Type) -> Result<Self, Box<dyn Error + Sync + Send>> { ... }
fn from_sql_nullable(
ty: &Type,
raw: Option<&'a [u8]>
) -> Result<Self, Box<dyn Error + Sync + Send>> { ... }
}
```
A trait for types that can be created from a Postgres value.
Types
---
The following implementations are provided by this crate, along with the corresponding Postgres types:
| Rust type | Postgres type(s) |
| --- | --- |
| `bool` | BOOL |
| `i8` | “char” |
| `i16` | SMALLINT, SMALLSERIAL |
| `i32` | INT, SERIAL |
| `u32` | OID |
| `i64` | BIGINT, BIGSERIAL |
| `f32` | REAL |
| `f64` | DOUBLE PRECISION |
| `&str`/`String` | VARCHAR, CHAR(n), TEXT, CITEXT, NAME, UNKNOWN, LTREE, LQUERY, LTXTQUERY |
| `&[u8]`/`Vec<u8>` | BYTEA |
| `HashMap<String, Option<String>>` | HSTORE |
| `SystemTime` | TIMESTAMP, TIMESTAMP WITH TIME ZONE |
| `IpAddr` | INET |
In addition, some implementations are provided for types in third party crates. These are disabled by default; to opt into one of these implementations, activate the Cargo feature corresponding to the crate’s name prefixed by `with-`. For example, the `with-serde_json-1` feature enables the implementation for the `serde_json::Value` type.
| Rust type | Postgres type(s) |
| --- | --- |
| `chrono::NaiveDateTime` | TIMESTAMP |
| `chrono::DateTime<Utc>` | TIMESTAMP WITH TIME ZONE |
| `chrono::DateTime<Local>` | TIMESTAMP WITH TIME ZONE |
| `chrono::DateTime<FixedOffset>` | TIMESTAMP WITH TIME ZONE |
| `chrono::NaiveDate` | DATE |
| `chrono::NaiveTime` | TIME |
| `time::PrimitiveDateTime` | TIMESTAMP |
| `time::OffsetDateTime` | TIMESTAMP WITH TIME ZONE |
| `time::Date` | DATE |
| `time::Time` | TIME |
| `eui48::MacAddress` | MACADDR |
| `geo_types::Point<f64>` | POINT |
| `geo_types::Rect<f64>` | BOX |
| `geo_types::LineString<f64>` | PATH |
| `serde_json::Value` | JSON, JSONB |
| `uuid::Uuid` | UUID |
| `bit_vec::BitVec` | BIT, VARBIT |
| `eui48::MacAddress` | MACADDR |
| `cidr::InetCidr` | CIDR |
| `cidr::InetAddr` | INET |
| `smol_str::SmolStr` | VARCHAR, CHAR(n), TEXT, CITEXT, NAME, UNKNOWN, LTREE, LQUERY, LTXTQUERY |
Nullability
---
In addition to the types listed above, `FromSql` is implemented for
`Option<T>` where `T` implements `FromSql`. An `Option<T>` represents a nullable Postgres value.
Arrays
---
`FromSql` is implemented for `Vec<T>`, `Box<[T]>` and `[T; N]` where `T`
implements `FromSql`, and corresponds to one-dimensional Postgres arrays.
**Note:** the impls for fixed-size arrays (`[T; N]`) only exist when the Cargo feature `array-impls` is enabled.
Required Methods
---
#### fn from_sql(
ty: &Type,
raw: &'a [u8]
) -> Result<Self, Box<dyn Error + Sync + Send>>
Creates a new value of this type from a buffer of data of the specified Postgres `Type` in its binary format.
The caller of this method is responsible for ensuring that this type is compatible with the Postgres `Type`.
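A sketch of decoding from a hand-built buffer (the value is illustrative; INT4 arrives as 4 big-endian bytes):
```
use postgres_types::{FromSql, Type};

fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send>> {
    let raw = 7i32.to_be_bytes();
    let value = i32::from_sql(&Type::INT4, &raw)?;
    assert_eq!(value, 7);
    Ok(())
}
```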
#### fn accepts(ty: &Type) -> bool
Determines if a value of this type can be created from the specified Postgres `Type`.
Provided Methods
---
#### fn from_sql_null(ty: &Type) -> Result<Self, Box<dyn Error + Sync + Send>>
Creates a new value of this type from a `NULL` SQL value.
The caller of this method is responsible for ensuring that this type is compatible with the Postgres `Type`.
The default implementation returns `Err(Box::new(WasNull))`.
#### fn from_sql_nullable(
ty: &Type,
raw: Option<&'a [u8]>
) -> Result<Self, Box<dyn Error + Sync + Send>>
A convenience function that delegates to `from_sql` and `from_sql_null` depending on the value of `raw`.
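A sketch of the nullable path (values are illustrative):
```
use postgres_types::{FromSql, Type};

fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send>> {
    // A missing (NULL) value decodes to None rather than an error.
    let v: Option<i32> = FromSql::from_sql_nullable(&Type::INT4, None)?;
    assert_eq!(v, None);
    let raw = 7i32.to_be_bytes();
    let v: Option<i32> = FromSql::from_sql_nullable(&Type::INT4, Some(&raw[..]))?;
    assert_eq!(v, Some(7));
    Ok(())
}
```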
Implementations on Foreign Types
---
Each implementation below provides `from_sql` and `accepts` with the signatures shown above; `Option<T>` additionally overrides `from_sql_null`.
### impl<'a> FromSql<'a> for bool
### impl<'a> FromSql<'a> for i8
### impl<'a> FromSql<'a> for i16
### impl<'a> FromSql<'a> for i32
### impl<'a> FromSql<'a> for u32
### impl<'a> FromSql<'a> for i64
### impl<'a> FromSql<'a> for f32
### impl<'a> FromSql<'a> for f64
### impl<'a> FromSql<'a> for &'a str
### impl<'a> FromSql<'a> for String
### impl<'a> FromSql<'a> for Box<str>
### impl<'a> FromSql<'a> for &'a [u8]
### impl<'a> FromSql<'a> for Vec<u8>
### impl<'a, T: FromSql<'a>> FromSql<'a> for Vec<T>
### impl<'a, T: FromSql<'a>> FromSql<'a> for Box<[T]>
### impl<'a, T: FromSql<'a>> FromSql<'a> for Option<T>
### impl<'a, S> FromSql<'a> for HashMap<String, Option<String>, S> where S: Default + BuildHasher
### impl<'a> FromSql<'a> for IpAddr
### impl<'a> FromSql<'a> for SystemTime
Implementors
---
### impl<'a> FromSql<'a> for PgLsn
### impl<'a, T: FromSql<'a>> FromSql<'a> for Date<T>
### impl<'a, T: FromSql<'a>> FromSql<'a> for Timestamp<T>
Macro postgres_types::accepts
===
```
macro_rules! accepts {
($($expected:ident),+) => { ... };
}
```
Generates a simple implementation of `ToSql::accepts` which accepts the types passed to it.
Macro postgres_types::to_sql_checked
===
```
macro_rules! to_sql_checked {
() => { ... };
}
```
Generates an implementation of `ToSql::to_sql_checked`.
All `ToSql` implementations should use this macro.
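For illustration, a minimal hand-written impl that uses both `accepts!` and `to_sql_checked!` (the `Centimeters` wrapper is hypothetical and simply delegates to the inner `i32`):
```
use bytes::BytesMut;
use postgres_types::{accepts, to_sql_checked, IsNull, ToSql, Type};
use std::error::Error;

// Hypothetical newtype that encodes exactly like the i32 it contains.
#[derive(Debug)]
struct Centimeters(i32);

impl ToSql for Centimeters {
    fn to_sql(
        &self,
        ty: &Type,
        out: &mut BytesMut,
    ) -> Result<IsNull, Box<dyn Error + Sync + Send>> {
        // Delegate to the wrapped i32's encoding.
        self.0.to_sql(ty, out)
    }

    // `accepts!` generates an `accepts` matching the listed types.
    accepts!(INT4);

    // `to_sql_checked!` generates the `to_sql_checked` adaptor method.
    to_sql_checked!();
}

fn main() {
    assert!(<Centimeters as ToSql>::accepts(&Type::INT4));
}
```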
Struct postgres_types::Field
===
```
pub struct Field { /* private fields */ }
```
Information about a field of a composite type.
Implementations
---
### impl Field
#### pub fn new(name: String, type_: Type) -> Field
Creates a new `Field`.
#### pub fn name(&self) -> &str
Returns the name of the field.
#### pub fn type_(&self) -> &Type
Returns the type of the field.
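For example (the field name and type are illustrative):
```
use postgres_types::{Field, Type};

fn main() {
    let field = Field::new("price".to_string(), Type::FLOAT8);
    assert_eq!(field.name(), "price");
    assert_eq!(field.type_(), &Type::FLOAT8);
}
```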
Trait Implementations
---
### impl Clone for Field
### impl Debug for Field
### impl Hash for Field
### impl PartialEq<Field> for Field
### impl Eq for Field
### impl StructuralEq for Field
### impl StructuralPartialEq for Field
Auto Trait Implementations
---
### impl RefUnwindSafe for Field
### impl Send for Field
### impl Sync for Field
### impl Unpin for Field
### impl UnwindSafe for Field
Struct postgres_types::PgLsn
===
```
pub struct PgLsn(_);
```
Postgres `PG_LSN` type.
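A sketch of round-tripping an LSN (the `"16/B374D848"` value is an arbitrary example of the textual `XXX/XXX` hexadecimal form):
```
use postgres_types::PgLsn;

fn main() {
    let lsn: PgLsn = "16/B374D848".parse().expect("valid LSN");
    // PgLsn converts to and from the underlying u64 representation.
    let raw: u64 = lsn.into();
    assert_eq!(PgLsn::from(raw), lsn);
}
```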
Trait Implementations
---
### impl Clone for PgLsn
### impl Copy for PgLsn
### impl Debug for PgLsn
### impl Display for PgLsn
### impl From<PgLsn> for u64
### impl From<u64> for PgLsn
### impl<'a> FromSql<'a> for PgLsn
### impl FromStr for PgLsn
The associated `Err` type is `ParseLsnError`.
### impl Ord for PgLsn
### impl PartialEq<PgLsn> for PgLsn
### impl PartialOrd<PgLsn> for PgLsn
### impl ToSql for PgLsn
### impl Eq for PgLsn
### impl StructuralEq for PgLsn
### impl StructuralPartialEq for PgLsn
Auto Trait Implementations
---
### impl RefUnwindSafe for PgLsn
### impl Send for PgLsn
### impl Sync for PgLsn
### impl Unpin for PgLsn
### impl UnwindSafe for PgLsn
Blanket Implementations
---
Besides the standard library blanket impls, `PgLsn` receives `BorrowToSql` through its `ToSql` impl and `FromSqlOwned` through its `FromSql` impl.
Struct postgres_types::Type
===
```
pub struct Type(_);
```
A Postgres type.
Implementations
---
### impl Type
#### pub const BOOL: Type = _
BOOL - boolean, 'true'/'false'
#### pub const BYTEA: Type = _
BYTEA - variable-length string, binary values escaped
#### pub const CHAR: Type = _
CHAR - single character
#### pub const NAME: Type = _
NAME - 63-byte type for storing system identifiers
#### pub const INT8: Type = _
INT8 - ~18 digit integer, 8-byte storage
#### pub const INT2: Type = _
INT2 - -32 thousand to 32 thousand, 2-byte storage
#### pub const INT2_VECTOR: Type = _
INT2VECTOR - array of int2, used in system tables
#### pub const INT4: Type = _
INT4 - -2 billion to 2 billion integer, 4-byte storage
#### pub const REGPROC: Type = _
REGPROC - registered procedure
#### pub const TEXT: Type = _
TEXT - variable-length string, no limit specified
#### pub const OID: Type = _
OID - object identifier(oid), maximum 4 billion
#### pub const TID: Type = _
TID - (block, offset), physical location of tuple
#### pub const XID: Type = _
XID - transaction id
#### pub const CID: Type = _
CID - command identifier type, sequence in transaction id
#### pub const OID_VECTOR: Type = _
OIDVECTOR - array of oids, used in system tables
#### pub const PG_DDL_COMMAND: Type = _
PG_DDL_COMMAND - internal type for passing CollectedCommand
#### pub const JSON: Type = _
JSON - JSON stored as text
#### pub const XML: Type = _
XML - XML content
#### pub const XML_ARRAY: Type = _
XML[]
#### pub const PG_NODE_TREE: Type = _
PG_NODE_TREE - string representing an internal node tree
#### pub const JSON_ARRAY: Type = _
JSON[]
#### pub const TABLE_AM_HANDLER: Type = _
TABLE_AM_HANDLER
#### pub const XID8_ARRAY: Type = _
XID8[]
#### pub const INDEX_AM_HANDLER: Type = _
INDEX_AM_HANDLER - pseudo-type for the result of an index AM handler function
#### pub const POINT: Type = _
POINT - geometric point '(x, y)'
#### pub const LSEG: Type = _
LSEG - geometric line segment '(pt1,pt2)'
#### pub const PATH: Type = _
PATH - geometric path '(pt1,…)'
#### pub const BOX: Type = _
BOX - geometric box '(lower left,upper right)'
#### pub const POLYGON: Type = _
POLYGON - geometric polygon '(pt1,…)'
#### pub const LINE: Type = _
LINE - geometric line
#### pub const LINE_ARRAY: Type = _
LINE[]
#### pub const CIDR: Type = _
CIDR - network IP address/netmask, network address
#### pub const CIDR_ARRAY: Type = _
CIDR[]
#### pub const FLOAT4: Type = _
FLOAT4 - single-precision floating point number, 4-byte storage
#### pub const FLOAT8: Type = _
FLOAT8 - double-precision floating point number, 8-byte storage
#### pub const UNKNOWN: Type = _
UNKNOWN - pseudo-type representing an undetermined type
#### pub const CIRCLE: Type = _
CIRCLE - geometric circle '(center,radius)'
#### pub const CIRCLE_ARRAY: Type = _
CIRCLE[]
#### pub const MACADDR8: Type = _
MACADDR8 - XX:XX:XX:XX:XX:XX:XX:XX, MAC address
#### pub const MACADDR8_ARRAY: Type = _
MACADDR8[]
#### pub const MONEY: Type = _
MONEY - monetary amounts, $d,ddd.cc
#### pub const MONEY_ARRAY: Type = _
MONEY[]
#### pub const MACADDR: Type = _
MACADDR - XX:XX:XX:XX:XX:XX, MAC address
#### pub const INET: Type = _
INET - IP address/netmask, host address, netmask optional
#### pub const BOOL_ARRAY: Type = _
BOOL[]
#### pub const BYTEA_ARRAY: Type = _
BYTEA[]
#### pub const CHAR_ARRAY: Type = _
CHAR[]
#### pub const NAME_ARRAY: Type = _
NAME[]
#### pub const INT2_ARRAY: Type = _
INT2[]
#### pub const INT2_VECTOR_ARRAY: Type = _
INT2VECTOR[]
#### pub const INT4_ARRAY: Type = _
INT4[]
#### pub const REGPROC_ARRAY: Type = _
REGPROC[]
#### pub const TEXT_ARRAY: Type = _
TEXT[]
#### pub const TID_ARRAY: Type = _
TID[]
#### pub const XID_ARRAY: Type = _
XID[]
#### pub const CID_ARRAY: Type = _
CID[]
#### pub const OID_VECTOR_ARRAY: Type = _
OIDVECTOR[]
#### pub const BPCHAR_ARRAY: Type = _
BPCHAR[]
#### pub const VARCHAR_ARRAY: Type = _
VARCHAR[]
#### pub const INT8_ARRAY: Type = _
INT8[]
#### pub const POINT_ARRAY: Type = _
POINT[]
#### pub const LSEG_ARRAY: Type = _
LSEG[]
#### pub const PATH_ARRAY: Type = _
PATH[]
#### pub const BOX_ARRAY: Type = _
BOX[]
#### pub const FLOAT4_ARRAY: Type = _
FLOAT4[]
#### pub const FLOAT8_ARRAY: Type = _
FLOAT8[]
#### pub const POLYGON_ARRAY: Type = _
POLYGON[]
#### pub const OID_ARRAY: Type = _
OID[]
#### pub const ACLITEM: Type = _
ACLITEM - access control list
#### pub const ACLITEM_ARRAY: Type = _
ACLITEM[]
#### pub const MACADDR_ARRAY: Type = _
MACADDR[]
#### pub const INET_ARRAY: Type = _
INET[]
#### pub const BPCHAR: Type = _
BPCHAR - char(length), blank-padded string, fixed storage length
#### pub const VARCHAR: Type = _
VARCHAR - varchar(length), non-blank-padded string, variable storage length
#### pub const DATE: Type = _
DATE - date
#### pub const TIME: Type = _
TIME - time of day
#### pub const TIMESTAMP: Type = _
TIMESTAMP - date and time
#### pub const TIMESTAMP_ARRAY: Type = _
TIMESTAMP[]
#### pub const DATE_ARRAY: Type = _
DATE[]
#### pub const TIME_ARRAY: Type = _
TIME[]
#### pub const TIMESTAMPTZ: Type = _
TIMESTAMPTZ - date and time with time zone
#### pub const TIMESTAMPTZ_ARRAY: Type = _
TIMESTAMPTZ[]
#### pub const INTERVAL: Type = _
INTERVAL - @ <number> <units>, time interval
#### pub const INTERVAL_ARRAY: Type = _
INTERVAL[]
#### pub const NUMERIC_ARRAY: Type = _
NUMERIC[]
#### pub const CSTRING_ARRAY: Type = _
CSTRING[]
#### pub const TIMETZ: Type = _
TIMETZ - time of day with time zone
#### pub const TIMETZ_ARRAY: Type = _
TIMETZ[]
#### pub const BIT: Type = _
BIT - fixed-length bit string
#### pub const BIT_ARRAY: Type = _
BIT[]
#### pub const VARBIT: Type = _
VARBIT - variable-length bit string
#### pub const VARBIT_ARRAY: Type = _
VARBIT[]
#### pub const NUMERIC: Type = _
NUMERIC - numeric(precision, decimal), arbitrary precision number
#### pub const REFCURSOR: Type = _
REFCURSOR - reference to cursor (portal name)
#### pub const REFCURSOR_ARRAY: Type = _
REFCURSOR[]
#### pub const REGPROCEDURE: Type = _
REGPROCEDURE - registered procedure (with args)
#### pub const REGOPER: Type = _
REGOPER - registered operator
#### pub const REGOPERATOR: Type = _
REGOPERATOR - registered operator (with args)
#### pub const REGCLASS: Type = _
REGCLASS - registered class
#### pub const REGTYPE: Type = _
REGTYPE - registered type
#### pub const REGPROCEDURE_ARRAY: Type = _
REGPROCEDURE[]
#### pub const REGOPER_ARRAY: Type = _
REGOPER[]
#### pub const REGOPERATOR_ARRAY: Type = _
REGOPERATOR[]
#### pub const REGCLASS_ARRAY: Type = _
REGCLASS[]
#### pub const REGTYPE_ARRAY: Type = _
REGTYPE[]
#### pub const RECORD: Type = _
RECORD - pseudo-type representing any composite type
#### pub const CSTRING: Type = _
CSTRING - C-style string
#### pub const ANY: Type = _
ANY - pseudo-type representing any type
#### pub const ANYARRAY: Type = _
ANYARRAY - pseudo-type representing a polymorphic array type
#### pub const VOID: Type = _
VOID - pseudo-type for the result of a function with no real result
#### pub const TRIGGER: Type = _
TRIGGER - pseudo-type for the result of a trigger function
#### pub const LANGUAGE_HANDLER: Type = _
LANGUAGE_HANDLER - pseudo-type for the result of a language handler function
#### pub const INTERNAL: Type = _
INTERNAL - pseudo-type representing an internal data structure
#### pub const ANYELEMENT: Type = _
ANYELEMENT - pseudo-type representing a polymorphic base type
#### pub const RECORD_ARRAY: Type = _
RECORD[]
#### pub const ANYNONARRAY: Type = _
ANYNONARRAY - pseudo-type representing a polymorphic base type that is not an array
#### pub const TXID_SNAPSHOT_ARRAY: Type = _
TXID_SNAPSHOT[]
#### pub const UUID: Type = _
UUID - UUID datatype
#### pub const UUID_ARRAY: Type = _
UUID[]
#### pub const TXID_SNAPSHOT: Type = _
TXID_SNAPSHOT - txid snapshot
#### pub const FDW_HANDLER: Type = _
FDW_HANDLER - pseudo-type for the result of an FDW handler function
#### pub const PG_LSN: Type = _
PG_LSN - PostgreSQL LSN datatype
#### pub const PG_LSN_ARRAY: Type = _
PG_LSN[]
#### pub const TSM_HANDLER: Type = _
TSM_HANDLER - pseudo-type for the result of a tablesample method function
#### pub const PG_NDISTINCT: Type = _
PG_NDISTINCT - multivariate ndistinct coefficients
#### pub const PG_DEPENDENCIES: Type = _
PG_DEPENDENCIES - multivariate dependencies
#### pub const ANYENUM: Type = _
ANYENUM - pseudo-type representing a polymorphic base type that is an enum
#### pub const TS_VECTOR: Type = _
TSVECTOR - text representation for text search
#### pub const TSQUERY: Type = _
TSQUERY - query representation for text search
#### pub const GTS_VECTOR: Type = _
GTSVECTOR - GiST index internal text representation for text search
#### pub const TS_VECTOR_ARRAY: Type = _
TSVECTOR[]
#### pub const GTS_VECTOR_ARRAY: Type = _
GTSVECTOR[]
#### pub const TSQUERY_ARRAY: Type = _
TSQUERY[]
#### pub const REGCONFIG: Type = _
REGCONFIG - registered text search configuration
#### pub const REGCONFIG_ARRAY: Type = _
REGCONFIG[]
#### pub const REGDICTIONARY: Type = _
REGDICTIONARY - registered text search dictionary
#### pub const REGDICTIONARY_ARRAY: Type = _
REGDICTIONARY[]
#### pub const JSONB: Type = _
JSONB - Binary JSON
#### pub const JSONB_ARRAY: Type = _
JSONB[]
#### pub const ANY_RANGE: Type = _
ANYRANGE - pseudo-type representing a range over a polymorphic base type
#### pub const EVENT_TRIGGER: Type = _
EVENT_TRIGGER - pseudo-type for the result of an event trigger function
#### pub const INT4_RANGE: Type = _
INT4RANGE - range of integers
#### pub const INT4_RANGE_ARRAY: Type = _
INT4RANGE[]
#### pub const NUM_RANGE: Type = _
NUMRANGE - range of numerics
#### pub const NUM_RANGE_ARRAY: Type = _
NUMRANGE[]
#### pub const TS_RANGE: Type = _
TSRANGE - range of timestamps without time zone
#### pub const TS_RANGE_ARRAY: Type = _
TSRANGE[]
#### pub const TSTZ_RANGE: Type = _
TSTZRANGE - range of timestamps with time zone
#### pub const TSTZ_RANGE_ARRAY: Type = _
TSTZRANGE[]
#### pub const DATE_RANGE: Type = _
DATERANGE - range of dates
#### pub const DATE_RANGE_ARRAY: Type = _
DATERANGE[]
#### pub const INT8_RANGE: Type = _
INT8RANGE - range of bigints
#### pub const INT8_RANGE_ARRAY: Type = _
INT8RANGE[]
#### pub const JSONPATH: Type = _
JSONPATH - JSON path
#### pub const JSONPATH_ARRAY: Type = _
JSONPATH[]
#### pub const REGNAMESPACE: Type = _
REGNAMESPACE - registered namespace
#### pub const REGNAMESPACE_ARRAY: Type = _
REGNAMESPACE[]
#### pub const REGROLE: Type = _
REGROLE - registered role
#### pub const REGROLE_ARRAY: Type = _
REGROLE[]
#### pub const REGCOLLATION: Type = _
REGCOLLATION - registered collation
#### pub const REGCOLLATION_ARRAY: Type = _
REGCOLLATION[]
#### pub const INT4MULTI_RANGE: Type = _
INT4MULTIRANGE - multirange of integers
#### pub const NUMMULTI_RANGE: Type = _
NUMMULTIRANGE - multirange of numerics
#### pub const TSMULTI_RANGE: Type = _
TSMULTIRANGE - multirange of timestamps without time zone
#### pub const TSTZMULTI_RANGE: Type = _
TSTZMULTIRANGE - multirange of timestamps with time zone
#### pub const DATEMULTI_RANGE: Type = _
DATEMULTIRANGE - multirange of dates
#### pub const INT8MULTI_RANGE: Type = _
INT8MULTIRANGE - multirange of bigints
#### pub const ANYMULTI_RANGE: Type = _
ANYMULTIRANGE - pseudo-type representing a polymorphic base type that is a multirange
#### pub const ANYCOMPATIBLEMULTI_RANGE: Type = _
ANYCOMPATIBLEMULTIRANGE - pseudo-type representing a multirange over a polymorphic common type
#### pub const PG_BRIN_BLOOM_SUMMARY: Type = _
PG_BRIN_BLOOM_SUMMARY - BRIN bloom summary
#### pub const PG_BRIN_MINMAX_MULTI_SUMMARY: Type = _
PG_BRIN_MINMAX_MULTI_SUMMARY - BRIN minmax-multi summary
#### pub const PG_MCV_LIST: Type = _
PG_MCV_LIST - multivariate MCV list
#### pub const PG_SNAPSHOT: Type = _
PG_SNAPSHOT - snapshot
#### pub const PG_SNAPSHOT_ARRAY: Type = _
PG_SNAPSHOT[]
#### pub const XID8: Type = _
XID8 - full transaction id
#### pub const ANYCOMPATIBLE: Type = _
ANYCOMPATIBLE - pseudo-type representing a polymorphic common type
#### pub const ANYCOMPATIBLEARRAY: Type = _
ANYCOMPATIBLEARRAY - pseudo-type representing an array of polymorphic common type elements
#### pub const ANYCOMPATIBLENONARRAY: Type = _
ANYCOMPATIBLENONARRAY - pseudo-type representing a polymorphic common type that is not an array
#### pub const ANYCOMPATIBLE_RANGE: Type = _
ANYCOMPATIBLERANGE - pseudo-type representing a range over a polymorphic common type
#### pub const INT4MULTI_RANGE_ARRAY: Type = _
INT4MULTIRANGE[]
#### pub const NUMMULTI_RANGE_ARRAY: Type = _
NUMMULTIRANGE[]
#### pub const TSMULTI_RANGE_ARRAY: Type = _
TSMULTIRANGE[]
#### pub const TSTZMULTI_RANGE_ARRAY: Type = _
TSTZMULTIRANGE[]
#### pub const DATEMULTI_RANGE_ARRAY: Type = _
DATEMULTIRANGE[]
#### pub const INT8MULTI_RANGE_ARRAY: Type = _
INT8MULTIRANGE[]
### impl Type
#### pub fn new(name: String, oid: Oid, kind: Kind, schema: String) -> Type
Creates a new `Type`.
#### pub fn from_oid(oid: Oid) -> Option<Type>
Returns the `Type` corresponding to the provided `Oid` if it corresponds to a built-in type.
#### pub fn oid(&self) -> Oid
Returns the OID of the `Type`.
#### pub fn kind(&self) -> &Kind
Returns the kind of this type.
#### pub fn schema(&self) -> &str
Returns the schema of this type.
#### pub fn name(&self) -> &str
Returns the name of this type.
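For example, looking up a built-in type by OID (OID 23 is INT4):
```
use postgres_types::{Kind, Type};

fn main() {
    let ty = Type::from_oid(23).expect("built-in OID");
    assert_eq!(ty, Type::INT4);
    assert_eq!(ty.name(), "int4");
    assert_eq!(ty.schema(), "pg_catalog");
    // Plain scalar types report Kind::Simple.
    assert!(matches!(ty.kind(), Kind::Simple));
}
```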
Trait Implementations
---
### impl Clone for Type
### impl Debug for Type
### impl Display for Type
### impl Hash for Type
### impl PartialEq<Type> for Type
### impl Eq for Type
### impl StructuralEq for Type
### impl StructuralPartialEq for Type
Auto Trait Implementations
---
### impl RefUnwindSafe for Type
### impl Send for Type
### impl Sync for Type
### impl Unpin for Type
### impl UnwindSafe for Type
Struct postgres_types::WasNull
===
```
pub struct WasNull;
```
An error indicating that a `NULL` Postgres value was passed to a `FromSql`
implementation that does not support `NULL` values.
Trait Implementations
---
### impl Clone for WasNull
### impl Debug for WasNull
### impl Display for WasNull
### impl Error for WasNull
Auto Trait Implementations
---
### impl RefUnwindSafe for WasNull
### impl Send for WasNull
### impl Sync for WasNull
### impl Unpin for WasNull
### impl UnwindSafe for WasNull
Struct postgres_types::WrongType
===
```
pub struct WrongType { /* private fields */ }
```
An error indicating that a conversion was attempted between incompatible Rust and Postgres types.
Implementations
---
### impl WrongType
#### pub fn new<T>(ty: Type) -> WrongType
Creates a new `WrongType` error.
Trait Implementations
---
### impl Debug for WrongType
### impl Display for WrongType
### impl Error for WrongType
Auto Trait Implementations
---
### impl RefUnwindSafe for WrongType
### impl Send for WrongType
### impl Sync for WrongType
### impl Unpin for WrongType
### impl UnwindSafe for WrongType
Enum postgres_types::Date
===
```
pub enum Date<T> {
PosInfinity,
NegInfinity,
Value(T),
}
```
A wrapper that can be used to represent infinity with `Type::Date` types.
Variants
---
### PosInfinity
Represents `infinity`, a date that is later than all other dates.
### NegInfinity
Represents `-infinity`, a date that is earlier than all other dates.
### Value(T)
The wrapped date.
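A sketch of consuming the wrapper (the `i32` payload here stands in for whatever date representation the wrapped type uses):
```
use postgres_types::Date;

fn describe(date: Date<i32>) -> String {
    match date {
        Date::PosInfinity => "infinity".to_string(),
        Date::NegInfinity => "-infinity".to_string(),
        Date::Value(inner) => format!("finite date: {}", inner),
    }
}

fn main() {
    assert_eq!(describe(Date::PosInfinity), "infinity");
    assert_eq!(describe(Date::Value(0)), "finite date: 0");
}
```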
Trait Implementations
---
### impl<T: Clone> Clone for Date<T>
### impl<T: Copy> Copy for Date<T>
### impl<T: Debug> Debug for Date<T>
### impl<'a, T: FromSql<'a>> FromSql<'a> for Date<T>
### impl<T: PartialEq> PartialEq<Date<T>> for Date<T>
### impl<T: ToSql> ToSql for Date<T>
### impl<T: Eq> Eq for Date<T>
### impl<T> StructuralEq for Date<T>
### impl<T> StructuralPartialEq for Date<T>
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for Date<T> where T: RefUnwindSafe
### impl<T> Send for Date<T> where T: Send
### impl<T> Sync for Date<T> where T: Sync
### impl<T> Unpin for Date<T> where T: Unpin
### impl<T> UnwindSafe for Date<T> where T: UnwindSafe
Blanket Implementations
---
Besides the standard library blanket impls, `Date<T>` receives `BorrowToSql` through its `ToSql` impl and `FromSqlOwned` through its `FromSql` impl.
Enum postgres_types::Format
===
```
pub enum Format {
Text,
Binary,
}
```
Supported Postgres message format types
Using Text format in a message assumes a Postgres `SERVER_ENCODING` of `UTF8`
Variants
---
### Text
Text format (UTF-8)
### Binary
Compact, typed binary format
Trait Implementations
---
### impl Clone for Format
#### fn clone(&self) -> Format
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Format
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for Format
### impl Send for Format
### impl Sync for Format
### impl Unpin for Format
### impl UnwindSafe for Format
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum postgres_types::IsNull
===
```
pub enum IsNull {
Yes,
No,
}
```
An enum representing the nullability of a Postgres value.
Variants
---
### Yes
The value is NULL.
### No
The value is not NULL.
Auto Trait Implementations
---
### impl RefUnwindSafe for IsNull
### impl Send for IsNull
### impl Sync for IsNull
### impl Unpin for IsNull
### impl UnwindSafe for IsNull
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum postgres_types::Kind
===
```
#[non_exhaustive]pub enum Kind {
Simple,
Enum(Vec<String>),
Pseudo,
Array(Type),
Range(Type),
Multirange(Type),
Domain(Type),
Composite(Vec<Field>),
}
```
Represents the kind of a Postgres type.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### Simple
A simple type like `VARCHAR` or `INTEGER`.
### Enum(Vec<String>)
An enumerated type along with its variants.
### Pseudo
A pseudo-type.
### Array(Type)
An array type along with the type of its elements.
### Range(Type)
A range type along with the type of its elements.
### Multirange(Type)
A multirange type along with the type of its elements.
### Domain(Type)
A domain type along with its underlying type.
### Composite(Vec<Field>)
A composite type along with information about its fields.
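Because the enum is non-exhaustive, any `match` over it needs a trailing wildcard arm. A sketch (using the crate's `Type::kind()` accessor):

```
use postgres_types::{Kind, Type};

// Pull out the element/underlying type where one exists; the wildcard
// arm is mandatory because `Kind` is #[non_exhaustive].
fn inner_type(ty: &Type) -> Option<&Type> {
    match ty.kind() {
        Kind::Array(inner)
        | Kind::Range(inner)
        | Kind::Multirange(inner)
        | Kind::Domain(inner) => Some(inner),
        Kind::Simple | Kind::Enum(_) | Kind::Pseudo | Kind::Composite(_) => None,
        _ => None, // future variants added upstream land here
    }
}
```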
Trait Implementations
---
### impl Clone for Kind
#### fn clone(&self) -> Kind
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Kind
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Hash for Kind
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
### impl PartialEq<Kind> for Kind
#### fn eq(&self, other: &Kind) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Eq for Kind
### impl StructuralEq for Kind
### impl StructuralPartialEq for Kind
Auto Trait Implementations
---
### impl RefUnwindSafe for Kind
### impl Send for Kind
### impl Sync for Kind
### impl Unpin for Kind
### impl UnwindSafe for Kind
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum postgres_types::Timestamp
===
```
pub enum Timestamp<T> {
PosInfinity,
NegInfinity,
Value(T),
}
```
A wrapper that can be used to represent infinity with `Type::Timestamp` and `Type::Timestamptz`
types.
Variants
---
### PosInfinity
Represents `infinity`, a timestamp that is later than all other timestamps.
### NegInfinity
Represents `-infinity`, a timestamp that is earlier than all other timestamps.
### Value(T)
The wrapped timestamp.
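A small sketch of mapping application data into the wrapper; the `Option`-to-infinity convention here is an illustrative assumption, not part of the crate's API:

```
use postgres_types::Timestamp;

// Treat a missing lower bound as "earlier than everything".
fn since<T>(t: Option<T>) -> Timestamp<T> {
    match t {
        Some(v) => Timestamp::Value(v),
        None => Timestamp::NegInfinity,
    }
}
```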
Trait Implementations
---
### impl<T: Clone> Clone for Timestamp<T>
#### fn clone(&self) -> Timestamp<T>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl<T: Debug> Debug for Timestamp<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'a, T: FromSql<'a>> FromSql<'a> for Timestamp<T>
#### fn from_sql(ty: &Type, raw: &'a [u8]) -> Result<Self, Box<dyn Error + Sync + Send>>
Creates a new value of this type from a buffer of data of the specified Postgres `Type` in its binary format.
#### fn accepts(ty: &Type) -> bool
Determines if a value of this type can be created from the specified Postgres `Type`.
#### fn from_sql_null(ty: &Type) -> Result<Self, Box<dyn Error + Sync + Send>>
Creates a new value of this type from a `NULL` SQL value.
#### fn from_sql_nullable(ty: &Type, raw: Option<&'a [u8]>) -> Result<Self, Box<dyn Error + Sync + Send>>
A convenience function that delegates to `from_sql` and `from_sql_null` depending on the value of `raw`.
### impl<T: PartialEq> PartialEq<Timestamp<T>> for Timestamp<T>
#### fn eq(&self, other: &Timestamp<T>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<T: ToSql> ToSql for Timestamp<T>
#### fn to_sql(&self, ty: &Type, out: &mut BytesMut) -> Result<IsNull, Box<dyn Error + Sync + Send>>
Converts the value of `self` into the binary format of the specified Postgres `Type`, appending it to `out`.
#### fn accepts(ty: &Type) -> bool
Determines if a value of this type can be converted to the specified Postgres `Type`.
#### fn to_sql_checked(&self, ty: &Type, out: &mut BytesMut) -> Result<IsNull, Box<dyn Error + Sync + Send>>
An adaptor method used internally by Rust-Postgres.
#### fn encode_format(&self, ty: &Type) -> Format
Specify the encode format.
### impl<T: Copy> Copy for Timestamp<T>
### impl<T: Eq> Eq for Timestamp<T>
### impl<T> StructuralEq for Timestamp<T>
### impl<T> StructuralPartialEq for Timestamp<T>
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for Timestamp<T> where
T: RefUnwindSafe,
### impl<T> Send for Timestamp<T> where
T: Send,
### impl<T> Sync for Timestamp<T> where
T: Sync,
### impl<T> Unpin for Timestamp<T> where
T: Unpin,
### impl<T> UnwindSafe for Timestamp<T> where
T: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> BorrowToSql for T where
T: ToSql,
#### fn borrow_to_sql(&self) -> &dyn ToSql
Returns a reference to `self` as a `ToSql` trait object.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> FromSqlOwned for T where
T: for<'a> FromSql<'a>,
Trait postgres_types::BorrowToSql
===
```
pub trait BorrowToSql: Sealed {
// Required method
fn borrow_to_sql(&self) -> &dyn ToSql;
}
```
A trait used by clients to abstract over `&dyn ToSql` and `T: ToSql`.
This cannot be implemented outside of this crate.
Required Methods
---
#### fn borrow_to_sql(&self) -> &dyn ToSql
Returns a reference to `self` as a `ToSql` trait object.
Implementations on Foreign Types
---
### impl<'a> BorrowToSql for Box<dyn ToSql + Sync + 'a>
#### fn borrow_to_sql(&self) -> &dyn ToSql
### impl<'a> BorrowToSql for Box<dyn ToSql + Sync + Send + 'a>
#### fn borrow_to_sql(&self) -> &dyn ToSql
Implementors
---
### impl BorrowToSql for &(dyn ToSql + Sync)
In async contexts it is sometimes necessary to have the additional Sync requirement on parameters for queries since this enables the resulting Futures to be Send, hence usable in, e.g., tokio::spawn.
This instance is provided for those cases.
### impl BorrowToSql for &dyn ToSql
### impl<T> BorrowToSql for T where
T: ToSql,
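A sketch of the pattern this trait enables: collecting heterogeneous parameters behind trait objects and borrowing them back uniformly, as a query API taking `BorrowToSql` items would (the query call itself is omitted):

```
use postgres_types::{BorrowToSql, ToSql};

fn demo() {
    // Heterogeneous parameters, boxed so they can live in one Vec.
    let params: Vec<Box<dyn ToSql + Sync>> = vec![Box::new(42_i32), Box::new("hello")];
    // Borrow each back as a plain &dyn ToSql.
    let borrowed: Vec<&dyn ToSql> = params.iter().map(|p| p.borrow_to_sql()).collect();
    assert_eq!(borrowed.len(), 2);
}
```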
Trait postgres_types::FromSqlOwned
===
```
pub trait FromSqlOwned: for<'a> FromSql<'a> { }
```
A trait for types which can be created from a Postgres value without borrowing any data.
This is primarily useful for trait bounds on functions.
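A compile-time sketch of the bound in use; the helper function is hypothetical and exists only to show which types satisfy the trait:

```
use postgres_types::FromSqlOwned;

// Hypothetical: constrain a generic helper to types that own their data.
fn assert_owned<T: FromSqlOwned>() {}

fn main() {
    assert_owned::<i32>();    // fine: i32 owns its data
    assert_owned::<String>(); // fine: String copies out of the row buffer
    // assert_owned::<&str>(); // would not compile: &str borrows the buffer
}
```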
Implementors
---
### impl<T> FromSqlOwned for T where
T: for<'a> FromSql<'a>,
Type Definition postgres_types::Oid
===
```
pub type Oid = u32;
```
A Postgres OID. |
dgrid | npm | JavaScript | The dgrid project provides widgets for lists of data, including simple sets of scrolling rows,
grids of data, on-demand lazy-loaded data, and various mixins for additional functionality.
dgrid is available under the ["New" BSD License](https://github.com/SitePen/dgrid/blob/HEAD/LICENSE).
Installation
===
Install from npm
---
dgrid and its dependencies can be installed via [npm](https://www.npmjs.com/) using the following command:
```
npm install dgrid dojo-dstore
```
Note that by default, npm installs to a `node_modules` subdirectory.
If you are using Dojo widgets, you may want to include `dijit` and `dojox`:
```
npm install dgrid dojo-dstore dijit dojox
```
By default, npm will automatically find the highest tagged version of each component and install it along with its dependencies.
Manual Download
---
Alternatively, dgrid and its dependencies can be downloaded individually:
* [dstore](https://github.com/SitePen/dstore) >= 1.0.3 or 1.1.1, for store-backed grids
* [The Dojo Toolkit](http://dojotoolkit.org) SDK >= 1.8.2
+ Out of the DTK components, Dojo core is the only hard dependency for dgrid;
however, some of the test pages also use components from Dijit, and
Dojox (namely grid for a comparison test, and mobile for a mobile page).
It is recommended to arrange all dependencies as siblings, resulting in a directory structure like the following:
* `dgrid`
* `dijit` (optional, dependency of some dgrid tests/components)
* `dojo`
* `dojox` (optional, dependency of some dgrid tests)
* `dstore`
* `util` (optional, e.g. if pursuing a custom build)
CDN
---
[unpkg](https://unpkg.com/) offers CDN hosting of raw tagged git URLs.
It can serve any version of dgrid and dstore.
For example, here's a `packages` configuration for dgrid 1.1.0 and dstore 1.1.1:
```
packages: [
    { name: 'dgrid', location: '//unpkg.com/dgrid@1.1.0/' },
    { name: 'dstore', location: '//unpkg.com/dstore@1.1.1/' }
]
```
Browser and Dojo Version Support
===
dgrid works with Dojo 1.8.2 or higher, and supports the following browsers:
* IE 11 (IE8+ still unofficially supported, but no longer tested)
* Edge latest
* Firefox latest + ESR
* Chrome latest (desktop and mobile)
* Safari latest (desktop and mobile)
* Opera latest
dgrid *does not* support quirks mode. You are *heavily* encouraged to include the HTML5 DOCTYPE (`<!DOCTYPE html>`) at the beginning of your pages.
Documentation
===
Documentation for dgrid components is available in the
[doc folder](https://github.com/SitePen/dgrid/blob/HEAD/doc). In addition, the website hosts a number of
[tutorials](http://dgrid.io/#tutorials).
If upgrading from a previous dgrid release, please be sure to read the
[release notes on GitHub](https://github.com/SitePen/dgrid/releases).
Community
===
Reporting Issues
---
Bugs or enhancements can be filed by opening an issue in the
[issue tracker on GitHub](https://github.com/SitePen/dgrid/issues?state=open).
When reporting a bug, please provide the following information:
* Affected browsers and Dojo versions
* A clear list of steps to reproduce the problem
* If the problem cannot be easily reproduced in an existing dgrid test page,
include a [Gist](https://gist.github.com/) with code for a page containing a reduced test case
If you would like to suggest a fix for a particular issue, you are welcome to fork dgrid, create a branch, and submit a pull request. Please note that a
[Dojo CLA](http://www.dojofoundation.org/about/cla) is required for any non-trivial modifications.
Getting Support
---
Questions about dgrid usage can be asked in the following places:
* [Stack Overflow](http://stackoverflow.com/questions/tagged/dgrid)
* The #dojo IRC channel on irc.freenode.net
* The [dojo-interest mailing list](http://mail.dojotoolkit.org/mailman/listinfo/dojo-interest)
Web interfaces for IRC and the mailing list are available from the
[Dojo Toolkit Community page](https://dojotoolkit.org/community/).
SitePen also offers [commercial support](https://www.sitepen.com/support/)
for dgrid, as well as Dojo and a number of other JavaScript libraries.
Testing
===
See [test/README.md](https://github.com/SitePen/dgrid/blob/HEAD/test/README.md).
Readme
---
### Keywords
none |
halfcircle | cran | R | Package ‘halfcircle’
October 13, 2022
Type Package
Title Plot Halfcircle Diagram
Version 0.1.0
Date 2018-10-28
Author <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description There are growing concerns on flow data in diverse fields including trade, migration,
knowledge diffusion, disease spread, and transportation. The package is an effective visual support
to learn the pattern of flow, which is called a halfcircle diagram. The flow between two nodes
placed on the center line of a circle is represented using a half circle drawn from the origin to the
destination in a clockwise direction. Through changing the order of nodes, the halfcircle diagram
enables users to examine the complex relationship between bidirectional flow and each potential
determinant. Furthermore, the halfmeancenter function, which calculates the (un)weighted mean
center of half circles, makes the comparison easier.
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 6.1.0
Depends R (>= 2.10)
Imports scales, graphics
Suggests knitr, rmarkdown
NeedsCompilation no
Repository CRAN
Date/Publication 2018-11-02 18:30:11 UTC
R topics documented:
ex_flow
ex_node
halfcircle
halfmeancenter
ex_flow Traded volume of land between countries
Description
A dataset containing trade data between countries and traded volumes for each 4 category.
Usage
data(ex_flow)
Format
A data frame with 10866 rows and 6 variables:
O name of exporting country
D name of importing country
vegetable volume of land associated with trading vegetables, in ha
fruit volume of land associated with trading fruits, in ha
wheat volume of land associated with trading wheats, in ha
soybean volume of land associated with trading soybeans, in ha
Source
http://fao.org/faostat/
ex_node country attributes
Description
A dataset containing 154 countries who participate in the trade and related attributes.
Usage
data(ex_node)
Format
A data frame with 154 rows and 8 variables:
country name of exporting country
x longitude of the center of a country
y latitude of the center of a country
pop_total total number of population
gdpc Gross Domestic Product per capita, in dollar
area_cultivation total volume land for cultivation use, in ha
water_total total volume of usable water, in cubic meter
income_level 5 levels by income
Source
http://fao.org/faostat/
halfcircle Visualization method for flow data using halfcircle diagram
Description
The halfcircle function draws flows between nodes, creating a halfcircle diagram.
Usage
halfcircle(flow, node, dir = "horizontal", circle.col = "lightgray",
circle.trans = 0.5, flow.col = "black", flow.trans = 0.5,
flow.width = "proportional", node.color = "black", node.size = 0.1,
node.pch = 20, node.trans = 0.7, label = node[, c(1)],
label.size = 0.5, label.col = "black", label.gap = 0.1)
Arguments
flow a dataframe containing the flows to draw as half-circles. The data should be in
the form of an edge list containing the node of origin, node of destination, and
magnitude of the flow in the first three columns.
node a dataframe containing the names of the nodes in its first column. Nodes on the
center line of a circle are drawn in the order of the data. Every node present in
the flow data must be included.
dir if ’horizontal’ (the default), nodes are drawn along the X-axis. If ’vertical’,
nodes are drawn along the Y-axis.
circle.col color of background circle
circle.trans transparency of color of background circle
flow.col flow color. flow.col can be a list of color vectors, the vectors are then used per
flow.
flow.trans transparency of color of flows
flow.width width of flows. if ’proportional’ (the default), each width is calculated to be
proportional to the maximum volume of flows. Maximum width is set to be 10.
Otherwise, a list of width vectors can be used per flow.
node.color node color. It can be a list of color vectors, and the vectors are then used per
node.
node.size node size
node.pch node type. see ?points for more options.
node.trans transparency of color of flows
label by default, the first column of node (the names) is used. A list of vectors can
be used per node. If NULL, no label is drawn.
label.size label size
label.col label color
label.gap gap between the node and the respective label
Details
This function is a low-level graphical function, and you will create a halfcircle diagram. To create
the diagram, nodes are placed as a set of points on a straight line segment in the center of a circle.
The flow between two nodes is represented using a half cicle drawn from the origin to the destination
in a clockwise direction. It is virtually drawn on xy-coordinates where both x and y range from -1
to 1. Flows between the same nodes are not drawn.
Author(s)
<NAME> <<EMAIL>>, <NAME>
References
Xiao and Chun (2009) <doi:10.1559/152304009788188763>
Examples
# load flow data
data(ex_flow)
flow <- ex_flow[,c(1,2,3)] # select the vegetable column as volume
flow <- subset(flow,flow$vegetable>5000)
data(ex_node) # load node data
node <- ex_node[c(order(-ex_node$gdpc)),] # sort nodes in descending order of gdpc values
halfcircle(flow, node, dir="vertical", circle.col="gray", flow.col="black",label=NULL)
# legend
max <- max(flow[,c(3)]); median <- median(flow[,c(3)]); min <- min(flow[,c(3)])
max_w <- 10; median_w <- round(10*median/max); min_w <- round(10*min/max)
legend(x=-1.2, y=-0.8, legend=c(paste(round(max)), paste(round(median)), paste(round(min))),
lty=1, lwd=c(max_w, median_w, min_w), cex=0.7)
# customize colors
node$color <- c("#22abcb","#4eb6ad","#86c388","#adcd6c","#dad84f")[node$income_level]
flow2 <- data.frame(flow, node[match(flow[,"O"], node[,"country"]),])
halfcircle(flow2, node, dir="vertical", flow.col=flow2$color, node.color=node$color, label=NULL)
# highlight one node
flow3 <- flow
flow3$color <- "gray"
flow3$color[flow3$O=="China"|flow3$D=="China"] <- "blue"
flow3 <- flow3[c(order(flow3$color,decreasing=TRUE)),]
node$label <- ""
node$label[node$country=="China"] <- "China"
halfcircle(flow3, node, dir="vertical", flow.col=flow3$color, label=node$label, label.size=0.7)
halfmeancenter Calculate average values of flows and plot them
Description
Calculate average values of flows and plot them
Usage
halfmeancenter(flow, node, dir = "horizontal")
Arguments
flow a dataframe containing the flows. The data should consist of the node of origin,
node of destination, and magnitude of the flow in the first three columns.
node a dataframe containing the names of the nodes in its first column. Nodes on the
center line of a circle are drawn in the order of the data.
dir if 'horizontal', nodes are drawn along the X-axis. If 'vertical', nodes are drawn
along the Y-axis.
Details
This function is to get values of mean centers and average radius of flows. One of values of mean
centers is weighted by the magnitude of flow and the other one is unweighted. If flows are normally
distributed or all combinations of flows between nodes are made, the mean center should be located
in the center of a circle, that is (0,0) on the xy-coordinates, and average radius should be 0.5. If the
mean center fall in a certain quadrant, a user can evaluate the skewedness.
Value
A list containing calculated average values c(x-coordinate of weighted mean center, y-coordinate
of weighted mean center, weighted average radius,x-coordinate of unweighted mean center, y-
coordinate of unweighted mean center, unweighted average radius)
Author(s)
<NAME> <<EMAIL>>, <NAME>
Examples
data(ex_flow)
flow <- subset(ex_flow, ex_flow$vegetable>5000)
data(ex_node)
node <- ex_node[c(order(-ex_node$gdpc)),]
halfmeancenter(flow, node, dir="vertical") |
bndovb | cran | R | Package ‘bndovb’
October 12, 2022
Title Bounding Omitted Variable Bias Using Auxiliary Data
Version 1.1
Description Functions to implement a Hwang (2021) <doi:10.2139/ssrn.3866876> estimator, which
bounds an omitted variable bias using auxiliary data.
License GPL-3
Encoding UTF-8
LazyData true
Depends R (>= 2.10)
RoxygenNote 7.1.1
Imports np, pracma, stats, utils, MASS, dplyr, factormodel, nnet
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-8136-8987>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-07-30 17:40:02 UTC
R topics documented:
auxdat_mecont
auxdat_medisc
auxdat_nome
bndovb
bndovbme
maindat_mecont
maindat_medisc
maindat_nome
auxdat_mecont A simulated auxiliary data to show how to use ’bndovbme’ function
with continuous proxy variables
Description
A simulated auxiliary data to show how to use ’bndovbme’ function with continuous proxy variables
Usage
auxdat_mecont
Format
A data frame with 3000 rows and 5 variables:
w1 A common covariate in both main and auxiliary data
x A common covariate in both main and auxiliary data
z1 A continuous proxy variable
z2 A continuous proxy variable
z3 A continuous proxy variable
Source
This dataset was simulated by simulatePackageData.R in data-raw folder
auxdat_medisc A simulated auxiliary data to show how to use ’bndovbme’ function
with discrete proxy variables
Description
A simulated auxiliary data to show how to use ’bndovbme’ function with discrete proxy variables
Usage
auxdat_medisc
Format
A data frame with 3000 rows and 5 variables:
w1 A common covariate in both main and auxiliary data
x A common covariate in both main and auxiliary data
z1 A discrete proxy variable
z2 A discrete proxy variable
z3 A discrete proxy variable
Source
This dataset was simulated by simulatePackageData.R in data-raw folder
auxdat_nome A simulated auxiliary data to show how to use ’bndovb’ function
Description
A simulated auxiliary data to show how to use ’bndovb’ function
Usage
auxdat_nome
Format
A data frame with 50000 rows and 3 variables:
x1 An omitted variable in the main data
x2 A common covariate in both main and auxiliary data
x3 A common covariate in both main and auxiliary data
Source
This dataset was simulated by simulatePackageData.R in data-raw folder
bndovb bndovb
Description
This function runs a two-sample least squares estimation when the auxiliary data contain every
right-hand-side regressor and the main data contain a dependent variable and every right-hand-side
regressor but one omitted variable.
Usage
bndovb(
maindat,
auxdat,
depvar,
ovar,
comvar,
method = 1,
mainweights = NULL,
auxweights = NULL,
signres = NULL
)
Arguments
maindat Main data set. It must be a data frame.
auxdat Auxiliary data set. It must be a data frame.
depvar A name of a dependent variable in main dataset
ovar A name of an omitted variable in main dataset which exists in auxiliary data
comvar A vector of the names of common regressors existing in both main data and
auxiliary data
method CDF and quantile function estimation method. Users can choose either 1 or
2. If the method is 1, the CDF and quantile function are estimated assuming
a parametric normal distribution. If the method is 2, the CDF and quantile
function are estimated using the nonparametric estimators of Li and Racine (2008)
<doi:10.1198/073500107000000250> and Li, Lin, and Racine (2013)
<doi:10.1080/07350015.2012.738955>. Default is 1.
mainweights An optional weight vector for the main dataset. The length must be equal to the
number of rows of ’maindat’.
auxweights An optional weight vector for the auxiliary dataset. The length must be equal to
the number of rows of ’auxdat’.
signres An option to impose a sign restriction on the coefficient of the omitted variable.
Set either NULL, 'pos', or 'neg'. Default is NULL, meaning no sign restriction.
If 'pos', the estimator imposes the extra restriction that the coefficient of the
omitted variable must be positive. If 'neg', it imposes the extra restriction that
the coefficient must be negative.
Value
Returns a list of 4 components :
hat_beta_l lower bound estimates of regression coefficients
hat_beta_u upper bound estimates of regression coefficients
mu_l lower bound estimate of E[ovar*depvar]
mu_u upper bound estimate of E[ovar*depvar]
Author(s)
<NAME>, <<EMAIL>>
References
Hwang, Yujung (2021) Bounding Omitted Variable Bias Using Auxiliary Data. Available at SSRN.
<doi:10.2139/ssrn.3866876>
Examples
data(maindat_nome)
data(auxdat_nome)
bndovb(maindat=maindat_nome,auxdat=auxdat_nome,depvar="y",ovar="x1",comvar=c("x2","x3"),method=1)
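When the sign of the omitted variable's coefficient is known a priori, the signres option tightens the
bounds. A sketch reusing the simulated data above (the positive sign is assumed purely for illustration):
bndovb(maindat=maindat_nome, auxdat=auxdat_nome, depvar="y", ovar="x1",
comvar=c("x2","x3"), method=1, signres="pos")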
bndovbme bndovbme
Description
This function runs a two-sample least squares estimation when the main data contain a dependent
variable and every right-hand-side regressor but one omitted variable. The function requires
auxiliary data which include every right-hand-side regressor but the omitted variable, plus enough
proxy variables for the omitted variable. When the omitted variable is continuous, the auxiliary
data must contain at least two continuous proxy variables. When the omitted variable is discrete,
the auxiliary data must contain at least three discrete proxy variables.
Usage
bndovbme(
maindat,
auxdat,
depvar,
pvar,
ptype = 1,
comvar,
sbar = 2,
mainweights = NULL,
auxweights = NULL,
normalize = TRUE,
signres = NULL
)
Arguments
maindat Main data set. It must be a data frame.
auxdat Auxiliary data set. It must be a data frame.
depvar A name of a dependent variable in main dataset
pvar A vector of the names of the proxy variables for the omitted variable. When
proxy variables are continuous, the first proxy variable is used as an anchoring
variable. When proxy variables are discrete, the first proxy variable is used for
initialization (for details, see the documentation of the dproxyme function).
ptype Either 1 (continuous) or 2 (discrete). Whether proxy variables are continuous or
discrete. Default is 1 (continuous).
comvar A vector of the names of the common regressors existing in both main data and
auxiliary data
sbar The cardinality of the support of the discrete proxy variables. Default is 2. If
the proxy variables are continuous, this argument is ignored.
mainweights An optional weight vector for the main dataset. The length must be equal to the
number of rows of ’maindat’.
auxweights An optional weight vector for the auxiliary dataset. The length must be equal to
the number of rows of ’auxdat’.
normalize Whether to normalize the omitted variable to have mean 0 and standard devia-
tion 1. Set TRUE or FALSE. Default is TRUE. If FALSE, then the scale of the
omitted variable is anchored with the first proxy variable in pvar list.
signres An option to impose a sign restriction on the coefficient of the omitted variable.
Set either NULL, 'pos', or 'neg'. Default is NULL, meaning no sign restriction.
If 'pos', the estimator imposes the extra restriction that the coefficient of the
omitted variable must be positive. If 'neg', it imposes the extra restriction that
the coefficient must be negative.
Value
Returns a list of 4 components :
hat_beta_l lower bound estimates of regression coefficients
hat_beta_u upper bound estimates of regression coefficients
mu_l lower bound estimate of E[ovar*depvar]
mu_u upper bound estimate of E[ovar*depvar]
Author(s)
<NAME>, <<EMAIL>>
References
Hwang, Yujung (2021) Bounding Omitted Variable Bias Using Auxiliary Data. Available at SSRN.
<doi:10.2139/ssrn.3866876>
Examples
## load example data
data(maindat_mecont)
data(auxdat_mecont)
## set ptype=1 for continuous proxy variables
pvar<-c("z1","z2","z3")
cvar<-c("x","w1")
bndovbme(maindat=maindat_mecont,auxdat=auxdat_mecont,depvar="y",pvar=pvar,ptype=1,comvar=cvar)
## set ptype=2 for discrete proxy variables
data(maindat_medisc)
data(auxdat_medisc)
bndovbme(maindat=maindat_medisc,auxdat=auxdat_medisc,depvar="y",pvar=pvar,ptype=2,comvar=cvar)
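If either sample comes from a complex survey, the optional weight vectors can be supplied. A sketch
reusing the continuous-proxy data above, with uniform weights chosen purely to illustrate the interface:
w_main <- rep(1, nrow(maindat_mecont))
w_aux <- rep(1, nrow(auxdat_mecont))
bndovbme(maindat=maindat_mecont, auxdat=auxdat_mecont, depvar="y",
pvar=pvar, ptype=1, comvar=cvar, mainweights=w_main, auxweights=w_aux)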
maindat_mecont A simulated main data to show how to use ’bndovbme’ function with
continuous proxy variables
Description
A simulated main data to show how to use ’bndovbme’ function with continuous proxy variables
Usage
maindat_mecont
Format
A data frame with 3000 rows and 3 variables:
w1 A common covariate in both main and auxiliary data
x A common covariate in both main and auxiliary data
y A dependent variable
Source
This dataset was simulated by simulatePackageData.R in data-raw folder
maindat_medisc A simulated main data to show how to use ’bndovbme’ function with
discrete proxy variables
Description
A simulated main data to show how to use ’bndovbme’ function with discrete proxy variables
Usage
maindat_medisc
Format
A data frame with 3000 rows and 3 variables:
w1 A common covariate in both main and auxiliary data
x A common covariate in both main and auxiliary data
y A dependent variable
Source
This dataset was simulated by simulatePackageData.R in data-raw folder
maindat_nome A simulated main data to show how to use ’bndovb’ function
Description
A simulated main data to show how to use ’bndovb’ function
Usage
maindat_nome
Format
A data frame with 100000 rows and 3 variables:
x2 A common covariate in both main and auxiliary data
x3 A common covariate in both main and auxiliary data
y A dependent variable
Source
This dataset was simulated by simulatePackageData.R in data-raw folder |
grafzahl | cran | R | Package ‘grafzahl’
April 12, 2023
Title Supervised Machine Learning for Textual Data Using Transformers
and 'Quanteda'
Version 0.0.8
Description Duct tape the 'quanteda' ecosystem (Benoit et al., 2018) <doi:10.21105/joss.00774>
to modern Transformer-based text classification models (Wolf et al., 2020)
<doi:10.18653/v1/2020.emnlp-demos.6>, in order to facilitate supervised machine learning for
textual data. This package mimics the behaviors of 'quanteda.textmodels' and provides a function
to set up the 'Python' environment to use the pretrained models from 'Hugging Face'
<https://huggingface.co/>. More information: <doi:10.5117/CCR2023.1.003.CHAN>.
License GPL (>= 3)
Encoding UTF-8
RoxygenNote 7.2.3
URL https://github.com/chainsawriot/grafzahl
BugReports https://github.com/chainsawriot/grafzahl/issues
Suggests quanteda.textmodels, testthat (>= 3.0.0), withr
Config/testthat/edition 3
Imports jsonlite, lime, quanteda, reticulate, utils, stats
LazyData true
Depends R (>= 3.5)
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-6232-7530>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-04-12 09:00:07 UTC
R topics documented:
detect_conda
ecosent
get_amharic_data
grafzahl
hydrate
predict.grafzahl
setup_grafzahl
unciviltweets
detect_conda Detecting Miniconda And Cuda
Description
These functions detects miniconda and cuda.
Usage
detect_conda()
detect_cuda()
Details
detect_conda conducts a test to check whether 1) a miniconda installation and 2) the grafzahl
miniconda environment exist.
detect_cuda checks whether CUDA is available. If setup_grafzahl was executed with cuda being
FALSE, this function will return FALSE. Even if setup_grafzahl was executed with cuda being
TRUE, any factor that prevents CUDA from being enabled (e.g. no Nvidia GPU, or an incorrectly
created environment) will also make this function return FALSE.
Value
boolean, whether the system is available.
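A short sketch of the intended pre-flight check before training:
if (detect_conda()) {
message("grafzahl environment found; CUDA available: ", detect_cuda())
}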
ecosent A Corpus Of Dutch News Headlines
Description
This is a dataset from the paper "The Validity of Sentiment Analysis: Comparing Manual Annota-
tion, Crowd-Coding, Dictionary Approaches, and Machine Learning Algorithms." The data frame
contains four columns: id (identifier), headline (the actual text data), value (sentiment: 0 Neutral,
+1 Positive, -1 Negative), gold (whether or not this row is "gold standard", i.e. test set). The data is
available from <NAME>teveldt’s Github. https://github.com/vanatteveldt/ecosent
Usage
ecosent
Format
An object of class data.frame with 6322 rows and 4 columns.
References
<NAME>., <NAME>., & <NAME>. (2021). The validity of sentiment anal-
ysis: Comparing manual annotation, crowd-coding, dictionary approaches, and machine learning
algorithms. Communication Methods and Measures, 15(2), 121-140.
get_amharic_data Download The Amharic News Text Classification Dataset
Description
This function downloads the training and test sets of the Amharic News Text Classification Dataset
from Hugging Face.
Usage
get_amharic_data()
Value
A named list of two corpora: training and test
References
Azime, <NAME>, and <NAME> (2021). "An Amharic News Text classification Dataset."
arXiv preprint arXiv:2103.05639
grafzahl Fine tune a pretrained Transformer model for texts
Description
Fine tune (or train) a pretrained Transformer model for your given training labelled data x and y. The
prediction task can be classification (if regression is FALSE, default) or regression (if regression
is TRUE).
Usage
grafzahl(
x,
y = NULL,
model_name = "xlm-roberta-base",
regression = FALSE,
output_dir,
cuda = detect_cuda(),
num_train_epochs = 4,
train_size = 0.8,
args = NULL,
cleanup = TRUE,
model_type = NULL,
manual_seed = floor(runif(1, min = 1, max = 721831)),
verbose = TRUE
)
## Default S3 method:
grafzahl(
x,
y = NULL,
model_name = "xlm-roberta-base",
regression = FALSE,
output_dir,
cuda = detect_cuda(),
num_train_epochs = 4,
train_size = 0.8,
args = NULL,
cleanup = TRUE,
model_type = NULL,
manual_seed = floor(runif(1, min = 1, max = 721831)),
verbose = TRUE
)
## S3 method for class 'corpus'
grafzahl(
x,
y = NULL,
model_name = "xlm-roberta-base",
regression = FALSE,
output_dir,
cuda = detect_cuda(),
num_train_epochs = 4,
train_size = 0.8,
args = NULL,
cleanup = TRUE,
model_type = NULL,
manual_seed = floor(runif(1, min = 1, max = 721831)),
verbose = TRUE
)
textmodel_transformer(...)
## S3 method for class 'character'
grafzahl(
x,
y = NULL,
model_name = "xlmroberta",
regression = FALSE,
output_dir,
cuda = detect_cuda(),
num_train_epochs = 4,
train_size = 0.8,
args = NULL,
cleanup = TRUE,
model_type = NULL,
manual_seed = floor(runif(1, min = 1, max = 721831)),
verbose = TRUE
)
Arguments
x the corpus or character vector of texts on which the model will be trained. De-
pending on train_size, some texts will be used for cross-validation.
y training labels. It can either be a single string indicating which docvars of the
corpus is the training labels; a vector of training labels in either character or
factor; or NULL if the corpus contains exactly one column in docvars and that
column is the training labels. If x is a character vector, y must be a vector of the
same length.
model_name string indicates either 1) the model name on Hugging Face website; 2) the local
path of the model
regression logical, if TRUE, the task is regression, classification otherwise.
output_dir string, location of the output model. If missing, the model will be stored in a
temporary directory. Important: Please note that if this directory exists, it will
be overwritten.
cuda logical, whether to use CUDA, default to detect_cuda().
num_train_epochs
numeric, if train_size is not exactly 1.0, the maximum number of epochs to
try in the "early stop" regime will be this number times 5 (i.e. 4 * 5 = 20 by
default). If train_size is exactly 1.0, the number of epochs is exactly that.
train_size numeric, proportion of data in x and y to be used actually for training. The rest
will be used for cross validation.
args list, additional parameters to be used in the underlying simpletransformers model
cleanup logical, if TRUE, the runs directory generated will be removed when the training
is done
model_type a string indicating the model_type of the input model. If NULL, it will be inferred
from model_name. It can only be one of the following: "albert", "bert", "bertweet",
"bigbird", "camembert", "deberta", "distilbert", "electra", "flaubert", "herbert",
"layoutlm", "layoutlmv2", "longformer", "mpnet", "mobilebert", "rembert", "roberta",
"squeezebert", "xlm", "xlmroberta", "xlnet". This will be lowercased
and hyphens will be removed, e.g. "XLM-RoBERTa" will be normalized
to "xlmroberta".
manual_seed numeric, random seed
verbose logical, if TRUE, debug messages will be displayed
... parameters passed to grafzahl()
Value
a grafzahl S3 object with the following items
call original function call
input_data input_data for the underlying python function
output_dir location of the output model
model_type model type
model_name model name
regression whether or not it is a regression model
levels factor levels of y
manual_seed random seed
meta metadata about the current session
See Also
predict.grafzahl()
Examples
if (detect_conda() && interactive()) {
library(quanteda)
set.seed(20190721)
## Using the default cross validation method
model1 <- grafzahl(unciviltweets, model_type = "bertweet", model_name = "vinai/bertweet-base")
predict(model1)
## Using LIME
input <- corpus(ecosent, text_field = "headline")
training_corpus <- corpus_subset(input, !gold)
model2 <- grafzahl(x = training_corpus,
y = "value",
model_name = "GroNLP/bert-base-dutch-cased")
test_corpus <- corpus_subset(input, gold)
predicted_sentiment <- predict(model2, test_corpus)
require(lime)
sentences <- c("Dijsselbloem pessimistisch over snelle stappen Grieken",
"Aandelenbeurzen zetten koersopmars voort")
explainer <- lime(training_corpus, model2)
explanations <- explain(sentences, explainer, n_labels = 1,
n_features = 2)
plot_text_explanations(explanations)
}
hydrate Create a grafzahl S3 object from the output_dir
Description
Create a grafzahl S3 object from the output_dir
Usage
hydrate(output_dir, model_type = NULL, regression = FALSE)
Arguments
output_dir string, location of the output model. If missing, the model will be stored in a
temporary directory. Important: Please note that if this directory exists, it will
be overwritten.
model_type a string indicating the model_type of the input model. If NULL, it will be inferred
from model_name. It can only be one of the following: "albert", "bert", "bertweet",
"bigbird", "camembert", "deberta", "distilbert", "electra", "flaubert", "herbert",
"layoutlm", "layoutlmv2", "longformer", "mpnet", "mobilebert", "rembert", "roberta",
"squeezebert", "xlm", "xlmroberta", "xlnet". This will be lowercased
and hyphens will be removed, e.g. "XLM-RoBERTa" will be normalized
to "xlmroberta".
regression logical, if TRUE, the task is regression, classification otherwise.
Value
a grafzahl S3 object with the following items
call original function call
input_data input_data for the underlying python function
output_dir location of the output model
model_type model type
model_name model name
regression whether or not it is a regression model
levels factor levels of y
manual_seed random seed
meta metadata about the current session
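A sketch of rebuilding a model object from disk; the path "mymodel" is hypothetical and assumes a
model was previously trained with output_dir = "mymodel":
model <- hydrate("mymodel", model_type = "bertweet")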
predict.grafzahl Prediction from a fine-tuned grafzahl object
Description
Make prediction from a fine-tuned grafzahl object.
Usage
## S3 method for class 'grafzahl'
predict(object, newdata, cuda = detect_cuda(), return_raw = FALSE, ...)
Arguments
object an S3 object trained with grafzahl()
newdata a corpus or a character vector of texts on which prediction should be made.
cuda logical, whether to use CUDA, default to detect_cuda().
return_raw logical, if TRUE, return a matrix of logits; a vector of class prediction otherwise
... not used
Value
a vector of class prediction or a matrix of logits
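A sketch of predicting on fresh texts, assuming model1 fitted as in the grafzahl() examples:
if (detect_conda() && interactive()) {
predict(model1, newdata = c("some new tweet text")) # class labels
predict(model1, newdata = c("some new tweet text"), return_raw = TRUE) # logit matrix
}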
setup_grafzahl Setup grafzahl
Description
Install a self-contained miniconda environment with all the Python components (PyTorch, Transformers,
Simpletransformers, etc.) that grafzahl requires. The default location is
"~/.local/share/r-miniconda/envs/grafzahl_condaenv" (the suffix "_cuda" is added if cuda is TRUE).
On Linux or Mac, if miniconda is not found, this function will also install miniconda. The path
can be changed via the environment variable GRAFZAHL_MINICONDA_PATH.
Usage
setup_grafzahl(cuda = FALSE, force = FALSE, cuda_version = "11.3")
Arguments
cuda logical, if TRUE, indicate whether a CUDA-enabled environment is wanted.
force logical, if TRUE, delete previous environment (if exists) and create a new envi-
ronment
cuda_version character, indicate CUDA version, ignore if cuda is FALSE
Value
TRUE (invisibly) if installation is successful.
Examples
# setup an environment with cuda enabled.
if (detect_conda() && interactive()) {
setup_grafzahl(cuda = TRUE)
}
unciviltweets A Corpus Of Tweets With Incivility Labels
Description
This is a dataset from the paper "The Dynamics of Political Incivility on Twitter". The tweets were
posted by Members of Congress elected to the 115th Congress (2017–2018). It is important to note that
not all of the incivility labels were coded by humans; the majority of the labels were coded by the
Google Perspective API. All mentions were removed. The dataset is available from <NAME>'s Github.
https://github.com/pablobarbera/incivility-sage-open
Usage
unciviltweets
Format
An object of class corpus (inherits from character) of length 19982.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2020). The dynamics of political incivility
on Twitter. Sage Open, 10(2), 2158244020919447. |
php.pdf | free_programming_book | Unknown Comitê de Incentivo a Produção do Software Gratuito e Alternativo (CIPSGA)
PHP LANGUAGE COURSE
Author: Maurício Vivas de Souza Barreto (mauricio@cipsga.org.br, vivas@usa.net)
April
Supervised Final-Year Project: this PHP handout is the result of the supervised final-year project of Maurício Vivas de Souza Barreto, which was submitted to an examining board composed of Professor <NAME>, Professor <NAME>, and Professor <NAME> of the Universidade Federal de Sergipe, Centro de Ciências Exatas e Tecnologia, Departamento de Estatística e Informática.
Copyright (c) Maurício Vivas de Souza Barreto
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, or any later version published by the Free Software Foundation; with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST. A copy of the license is included in the section entitled "GNU Free Documentation License".
Contents
1. Introduction: What is PHP?; What can be done with PHP?; How did the PHP language come about?
2. Basic syntax: Delimiting PHP code; Statement separator; Variable names; Comments (single-line and multi-line)
3. Creating the first scripts: First example; Using HTML forms; Interacting with the browser; Accessing databases (connecting to the server, selecting the database, running SQL queries); Handling the results of a SELECT query
4. Types: Supported types (integers, strings, arrays, lists, objects, booleans); Type conversion (coercions, explicit casts, the settype function)
5. Constants: Predefined constants; Defining constants
6. Operators: Arithmetic; String; Assignment; Bitwise; Logical; Comparison; Conditional expression; Increment and decrement; Operator precedence
7. Control structures: Blocks; Selection statements (if, switch); Loop statements (while, do...while, for); Flow control (break, continue)
8. Functions: Defining functions; Return value; Arguments (passing parameters by reference, default argument values); Context; Scope
9. Variables: The static modifier; Variable variables; Variables sent by the browser (URLencode); Environment variables; Checking a variable's type; Destroying a variable; Checking whether a variable holds a value (isset, empty)
10. Classes and objects: Class; Object; The $this variable; Subclasses; Constructors
12. Conclusions
13. Bibliography and references
Appendix 01 - Functions for handling strings: HTML-related (htmlspecialchars, htmlentities, nl2br, get_meta_tags, strip_tags, urlencode, urldecode); array-related (implode and join, split, explode); string comparison (similar_text, strcasecmp, strcmp, strstr, stristr, strpos, strrpos); string editing (chop, ltrim, trim, strrev, strtolower, strtoupper, ucfirst, ucwords, str_replace); miscellaneous (chr, ord, echo, print, strlen)
Appendix 02 - Functions for handling arrays: generic (array, range, shuffle, sizeof); navigation (reset, end, next, prev, pos, key, each); sorting (sort, rsort, asort, arsort, ksort, usort, uasort, uksort)
About the author
GNU Free Documentation License
1. Introduction
What is PHP?
PHP is a language for creating dynamic web sites, allowing interaction with the user through forms, URL parameters, and links. What sets PHP apart from similar languages such as JavaScript is that PHP code is executed on the server, with only plain HTML being sent to the client. In this way it is possible to interact with databases and applications on the server, with the advantage of not exposing the source code to the client. This can be useful when the program deals with passwords or any other kind of confidential information.
What differentiates PHP from a CGI script written in C or Perl is that PHP code is embedded in the HTML itself, whereas in the other case the CGI script must generate all of the HTML output or read it from another file.
What can be done with PHP?
Basically, anything that can be done by a CGI program can also be done with PHP, such as collecting data from a form, generating pages dynamically, or sending and receiving cookies.
PHP also has, as one of its most important features, support for a large number of databases, such as dBase, Interbase, mSQL, mySQL, Oracle, Sybase, PostgreSQL, and several others. Building a database-backed page becomes an extremely simple task with PHP.
In addition, PHP supports other services through protocols such as IMAP, SNMP, NNTP, POP3 and, of course, HTTP. It is also possible to open sockets and interact with other protocols.
How did the PHP language come about?
The PHP language was conceived during the fall of 1994 by <NAME>. The first versions were not released; they were used on his home page only so that he could gather information about its visitors. The first version used by other people was made available in 1995 and became known as Personal Home Page Tools. It consisted of a very simple system that interpreted a few macros, plus some utilities that ran behind home pages: a guest book, a counter, and a few other things.
In mid-1995 the interpreter was rewritten and named PHP/FI; the FI came from another package written by Rasmus that interpreted HTML form data (Form Interpreter). He combined the Personal Home Page Tools scripts with the FI and added mSQL support, and thus PHP/FI was born. It grew considerably, and people began contributing to the project.
It is estimated that in 1996 PHP/FI was being used by about 15,000 sites around the world, and by mid-1997 that number had risen to more than 50,000. Around that time the development of PHP changed: it
stopped being Rasmus's project with contributions from other people and gained a more organized development team. The interpreter was rewritten by <NAME> and <NAME>, and this new interpreter was the basis for version 3.
Currently the use of PHP3 is growing at an incredible speed, and version 4 of PHP is already under development.
2. Basic Syntax
Delimiting PHP code
PHP code is embedded in the HTML itself. The interpreter identifies a piece of PHP code by the following tags:
<?php
commands
?>
<script language=php>
commands
</script>
<?
commands
?>
<%
commands
%>
The most commonly used style is the third, which is an abbreviation of the first. To use it, the short-tags option must be enabled in the PHP configuration. The last style exists to ease the transition for programmers used to ASP syntax; to use it, it must likewise be enabled in PHP, through the php.ini configuration file.
Statement separator
Each PHP statement must be terminated with a semicolon, as in C, Perl, and other well-known languages. On the last statement of a script block the semicolon is not required, but for consistency its use is always recommended.
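A minimal illustration:
<?
echo "a";
echo "b"; // each statement ends with a semicolon
echo "c" // allowed on the last statement of the block, though not recommended
?>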
Variable names
Every variable name in PHP is composed of the character $ followed by a string that must begin with a letter or the character _. PHP is case-sensitive: the variables $vivas and $VIVAS are different, so care is needed when naming variables. It is best to avoid all-uppercase names because, as shown later, PHP already has some predefined variables whose names are formed by uppercase letters.
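For example:
<?
$vivas = 1;
$VIVAS = 2; // a different variable: names are case-sensitive
echo $vivas + $VIVAS; // prints 3
?>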
Comments
There are two kinds of comments in PHP code:
Single-line comments:
Mark as a comment everything up to the end of the line or the end of the PHP code block, whichever comes first.
They can be delimited by the character # or by two slashes ( // ).
Example:
<? echo "teste"; # this is a test ?>
<? echo "teste"; // this test is similar to the previous one ?>
Multi-line comments:
Delimited by the characters /* at the start of the block and */ at the end of the comment.
If the end-of-PHP-code delimiter ( ?> ) appears inside a comment, it is not recognized by the interpreter.
Examples:
<?
echo "teste"; /* This is a comment spanning
more than one line, but it does not work correctly ?>
*/
<?
echo "teste"; /* This is a comment spanning
more than one line that works correctly
*/
?>
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 9
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
3. Criando os primeiros scripts Primeiro Exemplo Neste exemplo, criaremos um script com uma sada simples, que servir para testar se a instalao foi feita corretamente:
<html>
<head><title>Aprendendo PHP</title></head>
<body>
<?php echo "Primeiro Script";
?>
</body>
</html>
Salve o arquivo como primeiro.php3 no diretorio de documentos do Apache (ou o Web Server escolhido). Abra uma janela do navegador e digite o endereo http://localhost/primeiro.php3.
Verificando o cdigo fonte da pgina exibida, temos o seguinte:
<html>
<head><title>Aprendendo PHP</title></head>
<body>
Primeiro Script
</body>
</html>
Isso mostra como o PHP funciona. O script executado no servidor, ficando disponvel para o usurio apenas o resultado. Agora vamos escrever um script que produza exatamente o mesmo resultado utilizando uma varivel:
<html>
<head><title>Aprendendo PHP</title></head>
<body>
<?php
$texto = "Primeiro Script";
echo $texto;
?>
</body>
</html>
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 10
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Utilizando formulrios HTML Ao clicar num boto Submit em um formulrio HTML as informaes dos campos sero enviadas ao servidor especificado para que possa ser produzida uma resposta. O PHP trata esses valores como variveis, cujo nome o nome do campo definido no formulrio. O exemplo a seguir mostra isso, e mostra tambm como o cdigo PHP pode ser inserido em qualquer parte do cdigo HTML:
<html>
<head><title>Aprendendo PHP</title></head>
<body>
<?php if ($texto != "")
echo "Voc digitou \"$texto\"<br><br>";
?>
<form method=post action="<? echo $PATH_INFO; ?>">
<input type="text" name="texto" value="" size=10>
<br>
<input type="submit" name="sub" value="Enviar!">
</form>
</body>
</html>
Ao salvar o arquivo acima e carreg-lo no browser, o usurio ver apenas um formulrio que contm um espao para digitar o texto, como visto na figura 01. Ao digitar um texto qualquer e submeter o formulrio, a resposta, que o mesmo arquivo PHP (indicado pela constante
$PATH_INFO, que retorna o nome do arquivo) ser como na figura 02:
[Imagem16]
[Imagem17]
figura 01 figura 02 Isso ocorre porque o cdigo PHP testa o contedo da varivel $texto. Inicialmente ele uma string vazia,
e por isso nada impresso na primeira parte. Quando algum texto digitado no formulrio e submetido, o PHP passa a trat-lo como uma varivel. Como no formulrio o campo possui o nome texto, a varivel com seu contedo ser $texto.
Assim, no prximo teste o valor da varivel ser diferente de uma string vazia, e o PHP imprime um texto antes do formulrio.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 11
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Interagindo com o browser PHP tambm permite interagir com informaes do browser automaticamente. Por exemplo, o script a seguir mostra informaes sobre o browser do usurio. As figuras 03 e 04 mostram o resultado visto no Netscape Communicator e o Microsoft Internet Explorer, respectivamente.
<html>
<head><title>Aprendendo PHP</title></head>
<body>
<? echo $HTTP_USER_AGENT; ?>
</body>
</html>
[Imagem18]
[Imagem19]
figura 03 figura 04 Observe que o resultado mostra caractersticas de cada browser, como a verso, e no caso do Communicator at o idioma (en). Com isso, se voc criar uma pgina com recursos disponveis somente no Internet Explorer, por exemplo, pode esconder o cdigo dos outros browsers, com um cdigo semelhante ao seguinte:
<html>
<head><title>Aprendendo PHP</title></head>
<body>
<?
if (strpos($HTTP_USER_AGENT,"MSIE 5") != 0) {
echo "Voc usa Internet Explorer";
} else {
echo "Voc no usa Internet Explorer";
}
?>
</body>
</html>
Neste exemplo, ser apenas exibido um texto informando se est sendo utilizado o Microsoft Internet Explorer ou no, mas para outras funes poderia ser utilizado algo semelhante.
bom notar o surgimento de mais uma funo no cdigo anterior: strpos(string1,string2).
Essa funo retorna a posio da primeira apario de string2 em string1, contando a partir de zero, e no retorna valor algum se no ocorrer. Assim, para testar se a string $HTTP_USER_AGENT contm a string MSIE, basta testar se strpos devolve algum valor.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 12
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Acessando Bancos de Dados Neste documento todos os exemplos referentes a acesso de bancos de dados utilizaro o gerenciador de banco de dados MySQL, que pode ser copiado gratuitamente no site http://www.mysql.org.
Para interagir com uma base de dados SQL existem trs comandos bsicos que devem ser utilizados: um que faz a conexo com o servidor de banco de dados, um que seleciona a base de dados a ser utilizada e um terceiro que executa uma query SQL.
Conexo com o servidor A conexo com o servidor de banco de dados mySQL em PHP feita atravs do comando mysql_connect, que tem a seguinte sintaxe:
int mysql_connect(string /*host [:porta]*/ , string /*login*/ , string
/*senha*/ );
Os parmetros so bastante simples: o endereo do servidor(host), o nome do usurio (login) e a senha para a conexo. A funo retorna um valor inteiro, que o identificador da conexo estabelecida e dever ser armazenado numa varivel para ser utilizado depois. No nosso exemplo, temos como servidor de banco de dados a mesma mquina que roda o servidor http, como login o usurio root e senha phppwd:
$conexao = mysql_connect(localhost, root, phppwd);
Assim, se a conexo for bem sucedida (existir um servidor no endereo especificado que possua o usurio com a senha fornecida), o identificador da conexo fica armazenado na varivel $conexo.
Seleo do banco de dados Uma vez conectado, preciso selecionar o banco de dados existente no servidor com o qual desejamos trabalhar. Isso feito atravs da funo int mysql_select_db, que possui a seguinte sintaxe:
int mysql_select_db(string /*nome_base*/, int /*conexao*/ );
O valor de retorno 0 se o comando falhar, e 1 em caso de sucesso. O nome da base de dados a selecionar o primeiro parmetro fornecido, seguido pelo identificador da conexo. Se este for omitido, o interpretador PHP tentar utilizar a ltima conexo estabelecida. Recomenda-se sempre explicitar esse valor, para facilitar a legibilidade do cdigo. No nosso exemplo, a base de dados a ser selecionada possui o nome ged:
mysql_select_db(ged, $conexao);
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 13
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Aps a execuo desse comando qualquer consulta executada para aquela conexo utilizar a base de dados selecionada.
Execuo de queries SQL Aps estabelecida a conexo e selecionada a base de dados a ser utilizada, quase toda a interao com o servidor mySQL pode ser feita atravs de consultas escritas em SQL (Structured Query Language), com o comando mysql_query, que utiliza a seguinte sintaxe:
int mysql_query(string consulta, int [conexao] );
O valor de retorno 0 se falhar ou 1 em caso de sucesso. Sucesso aqui significa que a consulta est sintaticamente correta e foi executada no servidor. Nenhuma informao sobre o resultado retornada deste comando, ou at mesmo se o resultado o esperado. No caso da consulta ser um comando SELECT, o valor de retorno um valor interno que identifica o resultado, que poder ser tratado com a funo mysql_result() e outras. A string query no deve conter ponto-e-vrgula no final do comando, e o identificador da conexo opcional. Vamos criar uma tabela como exemplo:
$cria = CREATE TABLE exemplo (codigo INT AUTO_INCREMENT PRIMARY KEY, nome CHAR(40), email CHAR(50));
mysql_query($cria, $conexao);
Agora vejamos como ficou o cdigo completo para executar uma query SQL numa base de dados mySQL, com um exemplo que cria uma tabela chamada exemplo e adiciona alguns dados:
$conexao = mysql_connect(localhost, root, phppwd);
mysql_select_db(ged, $conexao);
$cria = CREATE TABLE exemplo (codigo INT AUTO_INCREMENT PRIMARY KEY, nome CHAR(40), email CHAR(50));
$insere1 = INSERT Vivas,<EMAIL>);
INTO exemplo
(nome,email)
VALUES
(Mauricio
$insere2 = INSERT Silva,<EMAIL>);
INTO exemplo
(nome,email)
VALUES
(Jose
$insere3 = INSERT INTO exemplo <NAME>,<EMAIL>);
(nome,email)
VALUES
(Fernando
$insere4 = INSERT INTO exemplo Clinton,<EMAIL>);
(nome,email)
VALUES
(Bill mysql_query($cria, $conexao);
mysql_query($insere1, $conexao);
mysql_query($insere2, $conexao);
mysql_query($insere3, $conexao);
mysql_query($insere4, $conexao);
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU da
3iJLQD 14
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Tratamento de resultados de query SELECT Ao executar uma query SQL SELECT atravs do comando mysql_query, o identificador do resultado deve ser armazenado numa varivel que pode ser tratada de diversas formas. Duas maneiras interessantes de faz-lo usam o comando mysql_result e o comando mysql_fetch_row, respectivamente.
O comando mysql_result tem a seguinte sintaxe:
int mysql_result(int resultado, int linha, mixed [campo]);
Onde resultado o identificador do resultado, obtido com o retorno da funo mysql_query,
linha especifica a tupla a ser exibida, j que uma query SELECT pode retornar diversas tuplas, e campo o identificador do campo a ser exibido, sendo o tipo descrito como mixed pela possibilidade de ser de diversos tipos (neste caso, inteiro ou string). Vejamos um exemplo utilizando a tabela criada anteriormente:
$consulta vivas;
=
SELECT nome,
email FROM
exemplo WHERE
email LIKE
$resultado = mysql_query($consulta, $conexao);
printf("Nome: ", mysql_result($resultado,0,"nome"), <br>\n);
printf("e-mail: ", mysql_result($resultado,0,"email"),<br>);
Com o exemplo acima, o resultado ser:
Nome: <NAME><br>
e-mail: <EMAIL><br>
importante notar que a utilizao desta funo um pouco trabalhosa, j que no caso de um resultado com vrias linhas preciso controlar o nmero de linhas para trat-las (pode-se utilizar a funo mysql_num_rows(int resultado), que retorna o nmero de linhas de um resultado), e no caso de uma alterao no nome do campo preciso alterar tambm a maneira de trat-lo. Por isso mais aconselhvel que se use uma outra funo, como por exemplo mysql_fetch_row, que possui a seguinte sintaxe:
array mysql_fetch_row(int result);
A varivel resultado o identificador da memria de resultados, obtido como retorno da funo mysql_query. O resultado produzido por esta funo de retirar a primeira linha da memria de resultados, se houver, e coloc-la num array. Assim torna-se mais fcil tratar um resultado com vrias linhas, e sem utilizar os nomes dos campos na rotina de tratamento do resultado:
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 15
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
$consulta = SELECT nome, email FROM exemplo;
$resultado = mysql_query($consulta, $conexao);
echo "<table border=1>\n";
echo "<tr><td>Nome</td><td>e-mail</tr>\n";
while ($linha = mysql_fetch_row($resultado)) {
printf("<tr><td>$linha[0]</td>);
printf("<td>$linha[1]</td></tr>);
}
echo "</table>\n";
O cdigo acima ir imprimir todos os registros da tabela exemplo numa tabela html. Se o programador desejar pular alguma(s) linha(s) do resultado, poder utilizar a funo mysql_data_seek, que tem por objetivo definir qual ser a prxima linha da memria de resultados a ser impressa. Sua sintaxe :
int mysql_data_seek(int resultado, int linha);
Sendo resultado o identificador do resultado e linha o numero da linha. Retorna 0 em caso de falha, e um valor diferente de zero em caso de sucesso.
Existem diversas outras funes para o tratamento de resultados, que armazenam as linhas em arrays e objetos, assim como outras funes para administrar o banco de dados, mas como este documento trata-se de uma introduo, inicialmente no tratar tpicos mais avanados.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 16
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
4. Tipos Tipos Suportados PHP suporta os seguintes tipos de dados:
Inteiro
Ponto flutuante
String
Array
Objeto PHP utiliza checagem de tipos dinmica, ou seja, uma varivel pode conter valores de diferentes tipos em diferentes momentos da execuo do script. Por este motivo no necessrio declarar o tipo de uma varivel para us-la. O interpretador PHP decidir qual o tipo daquela varivel,
verificando o contedo em tempo de execuo.
Ainda assim, permitido converter os valores de um tipo para outro desejado,
utilizando o typecasting ou a funo settype (ver adiante).
Inteiros (integer ou long)
Uma varivel pode conter um valor inteiro com atribuies que sigam as seguintes sintaxes:
$vivas = 1234; # inteiro positivo na base decimal
$vivas = -234; # inteiro negativo na base decimal
$vivas = 0234; # inteiro na base octal-simbolizado pelo 0
# equivale a 156 decimal
$vivas = 0x34; # inteiro na base hexadecimal(simbolizado
# pelo 0x) equivale a 52 decimal.
A diferena entre inteiros simples e long est no nmero de bytes utilizados para armazenar a varivel.
Como a escolha feita pelo interpretador PHP de maneira transparente para o usurio, podemos afirmar que os tipos so iguais.
Nmeros em Ponto Flutuante (double ou float)
Uma varivel pode ter um valor em ponto flutuante com atribuies que sigam as seguintes sintaxes:
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 17
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
$vivas = 1.234;
$vivas = 23e4; # equivale a 230.000 Strings
Strings podem ser atribudas de duas maneiras:
a)
utilizando aspas simples ( ' ) Desta maneira, o valor da varivel ser exatamente o texto contido entre as aspas (com exceo de \\ e \' ver tabela abaixo)
b) utilizando aspas duplas ( " ) Desta maneira, qualquer varivel ou caracter de escape ser expandido antes de ser atribudo.
Exemplo:
<?
$teste = "Mauricio";
$vivas = '---$teste--\n';
echo "$vivas";
?>
A sada desse script ser "---$teste--\n".
<?
$teste = "Mauricio";
$vivas = "---$teste---\n";
echo "$vivas";
?>
A sada desse script ser "---Mauricio--" (com uma quebra de linha no final).
A tabela seguinte lista os caracteres de escape:
Sintaxe Significado
\n Nova linha
\r Retorno de carro (semelhante a \n)
\t Tabulao horizontal
\\
A prpria barra ( \ )
\$
O smbolo $
\
Aspa simples
\
Aspa dupla No apndice 01 est disponvel uma lista das funes utilizadas no tratamento de strings.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 18
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Arrays Arrays em PHP podem ser observados como mapeamentos ou como vetores indexados. Mais precisamente, um valor do tipo array um dicionrio onde os ndices so as chaves de acesso. Vale ressaltar que os ndices podem ser valores de qualquer tipo e no somente inteiros. Inclusive, se os ndices forem todos inteiros, estes no precisam formar um intervalo contnuo Como a checagem de tipos em PHP dinmica, valores de tipos diferentes podem ser usados como ndices de array, assim como os valores mapeados tambm podem ser de diversos tipos.
Exemplo:
<?
$cor[1] = vermelho;
$cor[2] = verde;
$cor[3] = azul;
$cor[teste] = 1;
?>
Equivalentemente, pode-se escrever:
<?
$cor = array(1 => vermelho, 2 => verde, 3 => azul, teste => 1);
?>
Listas As listas so utilizadas em PHP para realizar atribuies mltiplas. Atravs de listas possvel atribuir valores que esto num array para variveis. Vejamos o exemplo:
Exemplo:
list($a, $b, $c) = array(a, b, c);
O comando acima atribui valores s trs variveis simultaneamente. bom notar que s so atribudos s variveis da lista os elementos do array que possuem ndices inteiros e no negativos. No exemplo acima as trs atribuies foram bem sucedidas porque ao inicializar um array sem especificar os ndices eles passam a ser inteiros, a partir do zero.
Um fator importante que cada varivel da lista possui um ndice inteiro e ordinal, iniciando com zero, que serve para determinar qual valor ser atribudo. No exemplo anterior temos $a com ndice 0, $b com ndice 1 e $c com ndice 2.
Vejamos um outro exemplo:
$arr = array(1=>um,3=>tres,a=>letraA,2=>dois);
list($a,$b,$c,$d) = $arr;
Aps a execuo do cdigo acima temos os seguintes valores:
$a == null
$b == um
$c == dois
$d == tres
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 19
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Devemos observar que varivel $a no foi atribudo valor, pois no array no existe elemento com ndice 0 (zero). Outro detalhe importante que o valor tres foi atribudo varivel $d, e no a $b, pois seu ndice 3, o mesmo que $d na lista. Por fim, vemos que o valor letraA no foi atribudo a elemento algum da lista pois seu ndice no
inteiro.
Os ndices da lista servem apenas como referncia ao interpretador PHP para realizar as atribuies, no podendo ser acessados de maneira alguma pelo programador. De maneira diferente do array, uma lista no pode ser atribuda a uma varivel, servindo apenas para fazer mltiplas atribuies atravs de um array.
No apndice 02 est disponvel uma lista das funes mais comuns para o tratamento de arrays.
Objetos Um objeto pode ser inicializado utilizando o comando new para instanciar uma classe para uma varivel.
Exemplo:
class teste {
function nada() {
echo nada;
}
}
$vivas = new teste;
$vivas -> nada();
A utilizao de objetos ser mais detalhada mais frente.
Booleanos PHP no possui um tipo booleano, mas capaz de avaliar expresses e retornar true ou false, atravs do tipo integer: usado o valor 0 (zero) para representar o estado false, e qualquer valor diferente de zero (geralmente 1)
para representar o estado true.
Transformao de tipos A transformao de tipos em PHP pode ser feita das seguintes maneiras:
Coeres Quando ocorrem determinadas operaes (+, por exemplo) entre dois valores de tipos diferentes, o PHP converte o valor de um deles automaticamente (coero). interessante notar que se o operando for uma varivel, seu valor no ser alterado.
O tipo para o qual os valores dos operandos sero convertidos determinado da seguinte forma: Se um dos operandos for float, o outro ser convertido para float, seno, se um deles for integer, o outro ser convertido para integer.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 20
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Exemplo:
$vivas = 1;
// $vivas a string 1
$vivas = $vivas + 1; // $vivas o integer 2
$vivas = $vivas + 3.7;// $vivas o double 5.7
$vivas = 1 + 1.5
// $vivas o double 2.5 Como podemos notar, o PHP converte string para integer ou double mantendo o valor. O sistema utilizado pelo PHP para converter de strings para nmeros o seguinte:
analisado o incio da string. Se contiver um nmero, ele ser avaliado. Seno, o valor ser 0
(zero);
O nmero pode conter um sinal no incio (+ ou -);
Se a string contiver um ponto em sua parte numrica a ser analisada, ele ser considerado, e o valor obtido ser double;
Se a string contiver um e ou E em sua parte numrica a ser analisada, o valor seguinte ser considerado como expoente da base 10, e o valor obtido ser double;
Exemplos:
$vivas = 1 + 10.5;
// $vivas == 11.5
$vivas = 1 + -1.3e3;
// $vivas == -1299
$vivas = 1 + teste10.5; // $vivas == 1
$vivas = 1 + 10testes; // $vivas == 11
$vivas = 1 + " 10testes";
// $vivas == 11
$vivas = 1 + "+ 10testes";
// $vivas == 1 Transformao explcita de tipos A sintaxe do typecast de PHP semelhante ao C: basta escrever o tipo entre parenteses antes do valor Exemplo:
$vivas = 15;
$vivas = (double) $vivas
$vivas = 3.9
$vivas = (int) $vivas
// $vivas integer (15)
// $vivas double (15.0)
// $vivas double (3.9)
// $vivas integer (3)
// o valor decimal truncado Os tipos de cast permitidos so:
(int), (integer)
muda para integer;
(real), (double), (float)
muda para float;
(string)
muda para string;
(array)
muda para array;
(object)
muda para objeto.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 21
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Com a funo settype A funo settype converte uma varivel para o tipo especificado, que pode ser integer, double,
string, array ou object.
Exemplo:
$vivas = 15;
settype($vivas,double)
&XUVR GH /LQJXDJHP 3+3
// $vivas integer
// $vivas double ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 22
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
5. Constantes Constantes pr-definidas O PHP possui algumas constantes pr-definidas, indicando a verso do PHP, o Sistema Operacional do servidor, o arquivo em execuo, e diversas outras informaes. Para ter acesso a todas as constantes pr-definidas, podese utilizar a funo phpinfo(), que exibe uma tabela contendo todas as constantes pr-definidas, assim como configuraes da mquina, sistema operacional, servidor http e verso do PHP instalada.
Definindo constantes Para definir constantes utiliza-se a funo define. Uma vez definido, o valor de uma constante no poder mais ser alterado. Uma constante s pode conter valores escalares, ou seja, no pode conter nem um array nem um objeto. A assinatura da funo define a seguinte:
int define(string nome_da_constante, mixed valor);
A funo retorna true se for bem-sucedida. Veja um exemplo de sua utilizao a seguir:
define ("pi", 3.1415926536);
$circunf
&XUVR GH /LQJXDJHP 3+3
=
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU 2*pi*$raio;
3iJLQD 23
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
6. Operadores Aritmticos
S podem ser utilizados quando os operandos so nmeros (integer ou float). Se forem de outro tipo,
tero seus valores convertidos antes da realizao da operao.
+
adio
-
subtrao
*
multiplicao
/
diviso
%
mdulo de strings S h um operador exclusivo para strings:
.
concatenao de atribuio Existe um operador bsico de atribuio e diversos derivados. Sempre retornam o valor atribudo. No caso dos operadores derivados de atribuio, a operao feita entre os dois operandos, sendo atribudo o resultado para o primeiro. A atribuio sempre por valor, e no por referncia.
=
atribuio simples
+=
atribuio com adio
-=
atribuio com subtrao
*=
atribuio com multiplicao
/=
atribuio com diviso
%=
atribuio com mdulo
.=
atribuio com concatenao
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 24
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Exemplo:
$a = 7;
$a += 2; // $a passa a conter o valor 9 bit a bit Comparam dois nmeros bit a bit.
&
e lgico
|
ou lgico
^
ou exclusivo
~
no (inverso)
<<
shift left
>>
shift right Lgicos
Utilizados para inteiros representando valores booleanos and
e lgico or
ou lgico xor
ou exclusivo
!
no (inverso)
&&
e lgico
||
ou lgico Existem dois operadores para e e para ou porque eles tm diferentes posies na ordem de precedncia.
Comparao As comparaes so feitas entre os valores contidos nas variveis, e no as referncias. Sempre retornam um valor booleano.
&XUVR GH /LQJXDJHP 3+3
==
igual a
!=
diferente de
<
menor que ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 25
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
>
maior que
<=
menor ou igual a
>=
maior ou igual a Expresso condicional Existe um operador de seleo que ternrio. Funciona assim:
(expressao1)?(expressao2):( expressao3)
o interpretador PHP avalia a primeira expresso. Se ela for verdadeira, a expresso retorna o valor de expresso2. Seno, retorna o valor de expresso3.
de incremento e decremento
++
incremento
--
decremento Podem ser utilizados de duas formas: antes ou depois da varivel. Quando utilizado antes, retorna o valor da varivel antes de increment-la ou decrement-la. Quando utilizado depois, retorna o valor da varivel j incrementado ou decrementado.
Exemplos:
$a = $b = 10; // $a e $b recebem o valor 10
$c = $a++; // $c recebe 10 e $a passa a ter 11
$d = ++$b; // $d recebe 11, valor de $b j incrementado
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 26
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Ordem de precedncia dos operadores A tabela a seguir mostra a ordem de precedncia dos operadores no momento de avaliar as expresses;
Precedncia Associatividade
1.
esquerda
,
2.
esquerda or
3.
esquerda xor
4.
esquerda and
5.
direita print
6.
esquerda
= += -= *= /= .= %= &= != ~= <<= >>=
7.
esquerda
?:
8.
esquerda
||
9.
esquerda
&&
10.
esquerda
|
11.
esquerda
^
12.
esquerda
&
13.
no associa
== !=
14.
no associa
< <= > >=
15.
esquerda
<< >>
16.
esquerda
+-.
17.
esquerda
*/%
18.
direita
! ~ ++ -- (int) (double) (string) (array) (object) @
19.
direita
[
20.
no associa new
&XUVR GH /LQJXDJHP 3+3
Operadores ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 27
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
7. Estruturas de Controle As estruturas que veremos a seguir so comuns para as linguagens de programao imperativas, bastando,
portanto, descrever a sintaxe de cada uma delas, resumindo o funcionamento.
Blocos Um bloco consiste de vrios comandos agrupados com o objetivo de relacion-los com determinado comando ou funo. Em comandos como if, for, while, switch e em declaraes de funes blocos podem ser utilizados para permitir que um comando faa parte do contexto desejado. Blocos em PHP so delimitados pelos caracteres
{ e }. A utilizao dos delimitadores de bloco em uma parte qualquer do cdigo no relacionada com os comandos citados ou funes no produzir efeito algum, e ser tratada normalmente pelo interpretador.
Exemplo:
if ($x == $y)
comando1;
comando2;
Para que comando2 esteja relacionado ao if preciso utilizar um bloco:
if ($x == $y){
comando1;
comando2;
}
Comandos de seleo Tambm chamados de condicionais, os comandos de seleo permitem executar comandos ou blocos de comandos com base em testes feitos durante a execuo.
if O mais trivial dos comandos condicionais o if. Ele testa a condio e executa o comando indicado se o resultado for true (valor diferente de zero). Ele possui duas sintaxes:
if (expresso)
comando;
if (expresso):
comando;
. . .
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 28
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
comando;
endif;
Para incluir mais de um comando no if da primeira sintaxe, preciso utilizar um bloco, demarcado por chaves.
O else um complemento opcional para o if. Se utilizado, o comando ser executado se a expresso retornar o valor false (zero). Suas duas sintaxes so:
if (expresso)
comando;
else comando;
if (expresso):
comando;
. . .
comando;
else comando;
. . .
comando;
endif;
A seguir, temos um exemplo do comando if utilizado com else:
if ($a > $b)
$maior = $a;
else
$maior = $b;
O exemplo acima coloca em $maior o maior valor entre $a e $b Em determinadas situaes necessrio fazer mais de um teste, e executar condicionalmente diversos comandos ou blocos de comandos. Para facilitar o entendimento de uma estrutura do tipo:
if (expressao1)
comando1;
else if (expressao2)
comando2;
else if (expressao3)
comando3;
else comando4;
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 29
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
foi criado o comando, tambm opcional elseif. Ele tem a mesma funo de um else e um if usados sequencialmente, como no exemplo acima. Num mesmo if podem ser utilizados diversos elseifs, ficando essa utilizao a critrio do programador, que deve zelar pela legibilidade de seu script.
O comando elseif tambm pode ser utilizado com dois tipos de sintaxe. Em resumo, a sintaxe geral do comando if fica das seguintes maneiras:
if (expressao1)
comando;
[ elseif (expressao2)
comando; ]
[ else comando; ]
if (expressao1) :
comando;
. . .
comando;
[ elseif (expressao2)
comando;
. . .
comando; ]
[ else comando;
. . .
comando; ]
endif;
switch O comando switch atua de maneira semelhante a uma srie de comandos if na mesma expresso.
Frequentemente o programador pode querer comparar uma varivel com diversos valores, e executar um cdigo diferente a depender de qual valor igual ao da varivel. Quando isso for necessrio, deve-se usar o comando switch. O exemplo seguinte mostra dois trechos de cdigo que fazem a mesma coisa, sendo que o primeiro utiliza uma srie de ifs e o segundo utiliza switch:
if ($i == 0)
print i igual a zero;
elseif ($i == 1)
print i igual a um;
elseif ($i == 2)
print i igual a dois;
switch ($i) {
case 0:
print i igual a zero;
break;
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 30
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
case 1:
print i igual a um;
break;
case 2:
print i igual a dois;
break;
}
importante compreender o funcionamento do switch para no cometer enganos. O comando switch testa linha a linha os cases encontrados, e a partir do momento que encontra um valor igual ao da varivel testada, passa a executar todos os comandos seguintes, mesmo os que fazem parte de outro teste, at o fim do bloco. por isso usa-se o comando break, quebrando o fluxo e fazendo com que o cdigo seja executado da maneira desejada.
Veremos mais sobre o break mais adiante. Veja o exemplo:
switch ($i) {
case 0:
print i igual a zero;
case 1:
print i igual a um;
case 2:
print i igual a dois;
}
No exemplo acima, se $i for igual a zero, os trs comandos print sero executados. Se $i for igual a 1,
os dois ltimos print sero executados. O comando s funcionar da maneira desejada se $i for igual a 2.
Em outras linguagens que implementam o comando switch, ou similar, os valores a serem testados s podem ser do tipo inteiro. Em PHP permitido usar valores do tipo string como elementos de teste do comando switch. O exemplo abaixo funciona perfeitamente:
switch ($s) {
case casa:
print A casa amarela;
case arvore:
print a rvore bonita;
case lampada:
print joao apagou a lampada;
}
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 31
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
comandos de repetio while
O while o comando de repetio (lao) mais simples. Ele testa uma condio e executa um comando,
ou um bloco de comandos, at que a condio testada seja falsa. Assim como o if, o while tambm possui duas sintaxes alternativas:
while (<expressao>)
<comando>;
while (<expressao>):
<comando>;
. . .
<comando>;
endwhile;
A expresso s testada a cada vez que o bloco de instrues termina, alm do teste inicial. Se o valor da expresso passar a ser false no meio do bloco de instrues, a execuo segue at o final do bloco. Se no teste inicial a condio for avaliada como false, o bloco de comandos no ser executado.
O exemplo a seguir mostra o uso do while para imprimir os nmeros de 1 a 10:
$i = 1;
while ($i <=10)
print $i++;
do... while O lao do..while funciona de maneira bastante semelhante ao while, com a simples diferena que a expresso testada ao final do bloco de comandos. O lao do..while possui apenas uma sintaxe, que a seguinte:
do {
<comando>
. . .
<comando>
} while (<expressao>);
O exemplo utilizado para ilustrar o uso do while pode ser feito da seguinte maneira utilizando o do..
while:
$i = 0;
do {
print ++$i;
} while ($i < 10);
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 32
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
for O tipo de lao mais complexo o for. Para os que programam em C, C++ ou Java, a assimilao do funcionamento do for natural. Mas para aqueles que esto acostumados a linguagens como Pascal, h uma grande mudana para o uso do for. As duas sintaxes permitidas so:
for (<inicializacao>;<condicao>;<incremento>)
<comando>;
for (<inicializacao>;<condicao>;<incremento>) :
<comando>;
. . .
<comando>;
endfor;
As trs expresses que ficam entre parnteses tm as seguintes finalidades:
Inicializao: comando ou sequencia de comandos a serem realizados antes do inicio do lao. Serve para inicializar variveis.
Condio: Expresso booleana que define se os comandos que esto dentro do lao sero executados ou no. Enquanto a expresso for verdadeira (valor diferente de zero) os comandos sero executados.
Incremento: Comando executado ao final de cada execuo do lao.
Um comando for funciona de maneira semelhante a um while escrito da seguinte forma:
<inicializacao>
while (<condicao>) {
comandos
...
<incremento>
}
Quebra de fluxo Break
O comando break pode ser utilizado em laos de do, for e while, alm do uso j visto no comando switch. Ao encontrar um break dentro de um desses laos, o interpretador PHP para imediatamente a execuo do lao, seguindo normalmente o fluxo do script.
while ($x > 0) {
...
if ($x == 20) {
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 33
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
echo erro! x = 20;
break;
...
}
No trecho de cdigo acima, o lao while tem uma condio para seu trmino normal ($x <= 0), mas foi utilizado o break para o caso de um trmino no previsto no incio do lao. Assim o interpretador seguir para o comando seguinte ao lao.
Continue O comando continue tambm deve ser utilizado no interior de laos, e funciona de maneira semelhante ao break, com a diferena que o fluxo ao invs de sair do lao volta para o incio dele. Vejamos o exemplo:
for ($i = 0; $i < 100; $i++) {
if ($i % 2) continue;
echo $i ;
}
O exemplo acima uma maneira ineficiente de imprimir os nmeros pares entre 0 e 99. O que o lao faz
testar se o resto da diviso entre o nmero e 2 0. Se for diferente de zero (valor lgico true) o interpretador encontrar um continue, que faz com que os comandos seguintes do interior do lao sejam ignorados, seguindo para a prxima iterao.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 34
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
8. Funes Definindo funes A sintaxe bsica para definir uma funo :
function nome_da_funo([arg1, arg2, arg3]) {
Comandos;
... ;
[return <valor de retorno>];
}
Qualquer cdigo PHP vlido pode estar contido no interior de uma funo. Como a checagem de tipos em PHP dinmica, o tipo de retorno no deve ser declarado, sendo necessrio que o programador esteja atento para que a funo retorne o tipo desejado. recomendvel que esteja tudo bem documentado para facilitar a leitura e compreenso do cdigo. Para efeito de documentao, utiliza-se o seguinte formato de declarao de funo:
tipo function nome_da_funcao(tipo arg1, tipo arg2, ...);
Este formato s deve ser utilizado na documentao do script, pois o PHP no aceita a declarao de tipos. Isso significa que em muitos casos o programador deve estar atento ao tipos dos valores passados como parmetros,
pois se no for passado o tipo esperado no emitido nenhum alerta pelo interpretador PHP, j que este no testa os tipos.
Valor de retorno Toda funo pode opcionalmente retornar um valor, ou simplesmente executar os comandos e no retornar valor algum.
No possvel que uma funo retorne mais de um valor, mas permitido fazer com que uma funo retorne um valor composto, como listas ou arrays.
Argumentos
possvel passar argumentos para uma funo. Eles devem ser declarados logo aps o nome da funo,
entre parnteses, e tornam-se variveis pertencentes ao escopo local da funo. A declarao do tipo de cada argumento tambm utilizada apenas para efeito de documentao.
Exemplo:
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 35
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
function imprime($texto){
echo $texto;
}
imprime(teste de funes);
Passagem de parmetros por referncia Normalmente, a passagem de parmetros em PHP feita por valor, ou seja, se o contedo da varivel for alterado, essa alterao no afeta a varivel original.
Exemplo:
function mais5($numero) {
$numero += 5;
}
$a = 3;
mais5($a); //$a continua valendo 3 No exemplo acima, como a passagem de parmetros por valor, a funo mais5 intil, j que aps a execuo sair da funo o valor anterior da varivel recuperado. Se a passagem de valor fosse feita por referncia, a varivel $a teria 8 como valor. O que ocorre normalmente que ao ser chamada uma funo, o interpretador salva todo o escopo atual, ou seja, os contedos das variveis. Se uma dessas variveis for passada como parmetro, seu contedo fica preservado, pois a funo ir trabalhar na verdade com uma cpia da varivel. Porm, se a passagem de parmetros for feita por referncia, toda alterao que a funo realizar no valor passado como parmetro afetar a varivel que o contm.
H duas maneiras de fazer com que uma funo tenha parmetros passados por referncia: indicando isso na declarao da funo, o que faz com que a pasagem de parmetros sempre seja assim; e tambm na prpria chamada da funo. Nos dois casos utiliza-se o modificador &. Vejamos um exemplo que ilustra os dois casos:
function mais5(&$num1, $num2) {
$num1 += 5;
$num2 += 5;
}
$a = $b = 1;
mais5($a, $b); /* Neste caso, s $num1 ter seu valor alterado, pois a passagem por referncia est definida na declarao da funo. */
mais5($a,
alterados. */
&$b);
&XUVR GH /LQJXDJHP 3+3
/*
Aqui as
duas variveis
tero ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU seus
valores
3iJLQD 36
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Argumentos com valores pr-definidos (default)
Em PHP possvel ter valores default para argumentos de funes, ou seja, valores que sero assumidos em caso de nada ser passado no lugar do argumento. Quando algum parmetro declarado desta maneira, a passagem do mesmo na chamada da funo torna-se opcional.
function teste($vivas = testando) {
echo $vivas;
}
teste(); // imprime testando teste(outro teste); // imprime outro teste
bom lembrar que quando a funo tem mais de um parmetro, o que tem valor default deve ser declarado por ltimo:
function teste($figura = circulo, $cor) {
echo a figura um , $figura, de cor $cor;
}
teste(azul);
/* A funo no vai funcionar da maneira esperada, ocorrendo um erro no interpretador. A declarao correta : */
function teste2($cor, $figura = circulo) {
echo a figura um , $figura, de cor $cor;
}
teste2(azul);
/* Aqui a funcao funciona da maneira esperada, ou seja, imprime o texto: a figura um crculo de cor azul */
Contexto O contexto o conjunto de variveis e seus respectivos valores num determinado ponto do programa. Na chamada de uma funo, ao iniciar a execuo do bloco que contm a implementao da mesma criado um novo contexto, contendo as variveis declaradas dentro do bloco, ou seja, todas as variveis utilizadas dentro daquele bloco sero eliminadas ao trmino da execuo da funo.
Escopo O escopo de uma varivel em PHP define a poro do programa onde ela pode ser utilizada. Na maioria dos casos todas as variveis tm escopo global. Entretanto, em funes definidas pelo usurio um escopo local criado.
Uma varivel de escopo global no pode ser utilizada no interior de uma funo sem que haja uma declarao.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 37
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Exemplo:
$vivas = Testando;
function Teste() {
echo $vivas;
}
Teste();
O trecho acima no produzir sada alguma, pois a varivel $vivas de escopo global, e no pode ser referida num escopo local, mesmo que no haja outra com nome igual que cubra a sua visibilidade. Para que o script funcione da forma desejada, a varivel global a ser utilizada deve ser declarada.
Exemplo:
$vivas = Testando;
function Teste() {
global $vivas;
echo $vivas;
}
Teste();
Uma declarao global pode conter vrias variveis, separadas por vrgulas. Uma outra maneira de acessar variveis de escopo global dentro de uma funo utilizando um array pr-definido pelo PHP cujo nome
$GLOBALS. O ndice para a varivel referida o proprio nome da varivel, sem o caracter $. O exemplo acima e o abaixo produzem o mesmo resultado:
Exemplo:
$vivas = "Testando";
function Teste() {
echo $GLOBALS["vivas"]; // imprime $vivas echo $vivas; // no imprime nada
}
Teste();
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 38
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
9. Variveis O modificador static Uma varivel esttica visvel num escopo local, mas ela inicializada apenas uma vez e seu valor no
perdido quando a execuo do script deixa esse escopo. Veja o seguinte exemplo:
function Teste() {
$a = 0;
echo $a;
$a++;
}
O ltimo comando da funo intil, pois assim que for encerrada a execuo da funo a varivel $a perde seu valor. J no exemplo seguinte, a cada chamada da funo a varivel $a ter seu valor impresso e ser incrementada:
function Teste() {
static $a = 0;
echo $a;
$a++;
}
O modificador static muito utilizado em funes recursivas, j que o valor de algumas variveis precisa ser mantido. Ele funciona da seguinte forma: O valor das variveis declaradas como estticas mantido ao terminar a execuo da funo. Na prxima execuo da funo, ao encontrar novamente a declarao com static, o valor da varivel
recuperado.
Em outras palavras, uma varivel declarada como static tem o mesmo tempo de vida que uma varivel global, porm sua visibilidade restrita ao escopo local em que foi declarada e s recuperada aps a declarao.
Exemplo:
function Teste() {
echo "$a";
static $a = 0;
$a++;
}
O exemplo acima no produzir sada alguma. Na primeira execuo da funo, a impresso ocorre antes da atribuio de um valor funo, e portanto o contedo de $a nulo (string vazia). Nas execues seguintes da funo Teste() a impresso ocorre antes da recuperao do valor de $a, e portanto nesse momento seu valor ainda nulo. Para que a funo retorne algum valor o modificador static deve ser utilizado.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 39
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Variveis Variveis O PHP tem um recurso conhecido como variveis variveis, que consiste em variveis cujos nomes tambm so variveis. Sua utilizao feita atravs do duplo cifro ($$).
$a = teste;
$$a = Mauricio Vivas;
O exemplo acima e equivalente ao seguinte:
$a = teste;
$teste = Mauricio Vivas;
Variveis enviadas pelo navegador Para interagir com a navegao feita pelo usurio, necessrio que o PHP possa enviar e receber informaes para o software de navegao. A maneira de enviar informaes, como j foi visto anteriormente, geralmente
atravs de um comando de impresso, como o echo. Para receber informaes vindas do navegador atravs de um link ou um formulrio html o PHP utiliza as informaes enviadas atravs da URL. Por exemplo: se seu script php est localizado em
http://localhost/teste.php3 e
voc o
chama com
a url
http://localhost/teste.php3?vivas=teste, automaticamente o PHP criar uma varivel com o nome $vivas contendo a string teste. Note que o contedo da varivel est no formato urlencode. Os formulrios html j enviam informaes automaticamente nesse formato, e o PHP decodifica sem necessitar de tratamento pelo programador.
URLencode O formato urlencode obtido substituindo os espaos pelo caracter + e todos os outros caracteres no alfa-numricos (com exceo de _) pelo caracter % seguido do cdigo ASCII em hexadecimal.
Por exemplo: o texto Testando 1 2 3 !! em urlencode fica Testando+1+2+3+%21%21 O PHP possui duas funes para tratar com texto em urlencode. Seguem suas sintaxes:
string urlencode(string texto);
string urldecode(string texto);
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 40
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Essas funes servem respectivamente para codificar ou decodificar um texto passado como argumento. Para entender melhor o que um argumento e como funciona uma funo, leia o tpico funes.
Variveis de ambiente O PHP possui diversas variveis de ambiente, como a $PHP_SELF, por exemplo, que contm o nome e o path do prprio arquivo. Algumas outras contm informaes sobre o navegador do usurio, o servidor http, a verso do PHP e diversas informaes. Para ter uma listagem de todas as variveis e constantes de ambiente e seus respectivos contedos, deve-se utilizar a funo phpinfo().
Verificando o tipo de uma varivel Por causa da tipagem dinmica utilizada pelo PHP, nem sempre possvel saber qual o tipo de uma varivel em determinado instantese no contar com a ajuda de algumas funes que ajudam a verificar isso. A verificao pode ser feita de duas maneiras:
Funo que retorna o tipo da varivel Esta funo a gettype. Sua assinatura a seguinte:
string gettype(mixed var);
A palavra mixed indica que a varivel var pode ser de diversos tipos.
A funo gettype pode retornar as seguintes strings: integer, double, string,
array, object e unknown type.
Funes que testam o tipo da varivel So
as funes is_int,
is_integer,
is_real,
is_long,
is_float,
is_string, is_array e is_object. Todas tm o mesmo formato, seguindo modelo da assinatura a seguir:
int is_integer(mixed var);
Todas essas funes retornam true se a varivel for daquele tipo, e false em caso contrrio.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 41
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Destruindo uma varivel
possvel desalocar uma varivel se ela no for usada posteriormente atravs da funo unset, que tem a seguinte assinatura:
int unset(mixed var);
A funo destri a varivel, ou seja, libera a memria ocupada por ela, fazendo com que ela deixe de existir. Se mais na frente for feita uma chamada varivel, ser criada uma nova varivel de mesmo nome e de contedo vazio, a no ser que a chamada seja pela funo isset. Se a operao for bem sucedida, retorna true.
Verificando se uma varivel possui um valor Existem dois tipos de teste que podem ser feitos para verificar se uma varivel est setada: com a funo isset e com a funo empty.
A funo isset Possui o seguinte prottipo:
int isset(mixed var);
E retorna true se a varivel estiver setada (ainda que com uma string vazia ou o valor zero), e false em caso contrrio.
A funo empty Possui a seguinte assinatura:
int empty(mixed var);
E retorna true se a varivel no contiver um valor (no estiver setada) ou possuir valor 0 (zero) ou uma string vazia. Caso contrrio, retorna false.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 42
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
10. Classes e Objetos Classe
Uma classe um conjunto de variveis e funes relacionadas a essas variveis. Uma vantagem da utilizao poder usufruir do recurso de encapsulamento de informao. Com o encapsulamento o usurio de uma classe no precisa saber como ela implementada, bastando para a utilizao conhecer a interface, ou seja, as funes disponveis. Uma classe um tipo, e portanto no pode ser atribuda a uma varivel. Para definir uma classe, deve-se utilizar a seguinte sintaxe:
class Nome_da_classe {
var $variavel1;
var $variavel2;
function funcao1 ($parametro) {
/* === corpo da funo === */
}
}
Objeto Como foi dito anteriormente, classes so tipos, e no podem ser atribudas a variveis. Variveis do tipo de uma classe so chamadas de objetos, e devem ser criadas utilizando o operador new, seguindo o exemplo abaixo:
$variavel = new $nome_da_classe;
Para utilizar as funes definidas na classe, deve ser utilizado o operador ->, como no exemplo:
$variavel->funcao1(
A varivel $this Na definio de uma classe, pode-se utilizar a varivel $this, que o prprio objeto. Assim, quando uma classe instanciada em um objeto, e uma funo desse objeto na definio da classe utiliza a varivel $this, essa varivel significa o objeto que estamos utilizando.
Como exemplo da utilizao de classes e objetos, podemos utilizar a classe conta, que define uma conta bancria bastante simples, com funes para ver saldo e fazer um crdito.
class conta {
var $saldo;
function saldo() {
return $this->saldo;
}
function credito($valor) {
$this->saldo += $valor;
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 43
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
}
}
$minhaconta = new conta;
$minhaconta->saldo();
// a variavel interna no foi
// inicializada, e no contm
// valor algum
$minhaconta->credito(50);
$minhaconta->saldo(); // retorna 50 SubClasses
Uma classe pode ser uma extenso de outra. Isso significa que ela herdar todas as variveis e funes da outra classe, e ainda ter as que forem adicionadas pelo programador. Em PHP no permitido utilizar herana mltipla,
ou seja, uma classe pode ser extenso de apenas uma outra.Para criar uma classe extendida, ou derivada de outra, deve ser utilizada a palavra reservada extends, como pode ser visto no exemplo seguinte:
class novaconta extends conta {
var $numero;
function numero() {
return $this->numero;
}
}
A classe acima derivada da classe conta, tendo as mesmas funes e variveis, com a adio da varivel
$numero e a funo numero().
Construtores Um construtor uma funo definida na classe que automaticamente chamada no momento em que a classe instanciada (atravs do operador new). O construtor deve ter o mesmo nome que a classe a que pertence. Veja o exemplo:
class conta {
var $saldo;
function conta () {
$this.saldo = 0;
}
function saldo() {
return $this->saldo;
}
function credito($valor) {
$this->saldo += $valor;
}
}
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 44
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Podemos perceber que a classe conta agora possui um construtor, que inicializa a varivel $saldo com o valor 0.
Um construtor pode conter argumentos, que so opcionais, o que torna esta ferramenta mais poderosa. No exemplo acima, o construtor da classe conta pode receber como argumento um valor, que seria o valor inicial da conta.
Vale observar que para classes derivadas, o construtor da classe pai no automaticamente herdado quando o construtor da classe derivada chamado.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 45
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
12. Concluses A realizao deste Projeto Supervisionado possibilitou o estudo da linguagem PHP, que se mostrou uma ferramenta poderosa e simples de utilizar na construo de sites para a World Wide Web dinmicos, possibilitando uma maior interao com o usurio e a armazenagem das informaes em Bancos de Dados.
Aps a concluso da aplicao, tornou-se claro que a combinao de scripts server-side, como o PHP,
com scripts client-side, como JavaScript, por exemplo, possibilita um maior aproveitamento dos recursos disponveis para criar pginas dinmicas, e no processo de criao deve-se ponderar bastante para concluir qual dos dois tipos de scripts deve ser utilizado para determinado fim.
Entre as linguagens de script server-side, PHP surgiu como uma tima opo, por diversos motivos: o custo de aquisio, que no existe; a portabilidade, permitindo que uma aplicao seja desenvolvida em uma plataforma para ser executada em outra; a simplicidade, j que os scripts ficam no prprio cdigo html, e possuem uma sintaxe bastante simples; a possibilidade de trabalhar com diversos bancos de dados e servidores http, alm do grande nmero de funes pr-definidas, entre outras coisas.
Por esses e outros motivos, possvel afirmar que o estudo sobre PHP foi bastante enriquecedor, por ter produzido uma documentao em portugus para a linguagem e ter motivado o aluno a continuar se dedicando ao tema.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 46
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
13. Bibliografia e Referncias A pesquisa foi baseada no manual de PHP, disponvel em www.php.net, e em diversos tutoriais disponveis no site www.phpbuilder.com. Esses dois endereos contm uma vasta documentao sobre a linguagem, alm de endereos para listas de discusso, onde pode-se solicitar ajuda de programadores mais experientes.
Uma boa referncia em portugus a lista PHP para quem fala Portugus, que pode ser assinada no endereo www.egroups.com/group/php-pt/.
Em ingls, alm dos endereos citados acima, uma boa fonte o site PHPWizard, que pode ser encontrado em www.phpwizard.net.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 47
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
APNDICE 01 - Funes para tratamento de strings Funes relacionadas a HTML htmlspecialchars
string htmlspecialchars(string str);
Retorna a string fornecida, substituindo os seguintes caracteres:
& para '&'
" para '"'
< para '<'
> para >'
htmlentities string htmlentities(string str);
Funciona de maneira semelhante ao comando anterior, mas de maneira mais completa, pois converte todos os caracteres da string que possuem uma representao especial em html, como por exemplo:
para 'º'
para 'ª'
para 'á'
para ç'
nl2br string nl2br(string str);
Retorna a string fornecida substituindo todas as quebras de linha (\n) por quebras de linhas em html
(<br>).
Exemplo:
echo nl2br(Mauricio\nVivas\n);
Imprime:
Maurcio<br>Vivas<br>
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 48
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
get_meta_tags array get_meta_tags(string arquivo);
Abre um arquivo html e percorre o cabealho em busca de meta tags, retornando num array todos os valores encontrados.
Exemplo:
No arquivo teste.html temos:
...
<head>
<meta name="author" content="jose">
<meta name="tags" content="php3 documentation">
...
</head><!-- busca encerra aqui -->
...
a execuo da funo:
get_meta_tags(teste.html);
retorna o array:
array(author=>jose,tags=>"php3 documentation");
strip_tags string strip_tags(string str);
Retorna a string fornecida, retirando todas as tags html e/ou PHP encontradas.
Exemplo:
strip_tags('<a href="teste1.php3">testando</a><br>');
Retorna a string testando urlencode
string urlencode(string str);
Retorna a string fornecida, convertida para o formato urlencode. Esta funo til para passar variveis para uma prxima pgina.
urldecode string urldecode(string str);
Funciona de maneira inversa a urlencode, desta vez decodificando a string fornecida do formato urlencode para texto normal.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 49
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Funes relacionadas a arrays Implode e join string implode(string separador, array partes);
string join(string separador, array partes);
As duas funes so idnticas. Retornam uma string contendo todos os elementos do array fornecido separados pela string tambm fornecida.
Exemplo:
$partes = array("a", "casa nmero", 13, " azul");
$inteiro = join(" ",$partes);
$inteiro passa a conter a string:
a casa nmero 13 azul split
array split(string padrao, string str, int [limite]);
Retorna um array contendo partes da string fornecida separadas pelo padro fornecido, podendo limitar o nmero de elementos do array.
Exemplo:
$data = 11/14/1975;
$data_array = split(/,$data);
O cdigo acima faz com que a varivel $data_array receba o valor:
array(11,14,1975);
explode array explode(string padrao, string str);
Funciona de maneira bastante semelhante funo split, com a diferena que no possvel estabelecer um limite para o nmero de elementos do array.
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 50
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
Comparaes entre strings similar_text
int similar_text(string str1, string str2, double [porcentagem]);
Compara as duas strings fornecidas e retorna o nmero de caracteres coincidentes. Opcionalmente pode ser fornecida uma varivel, passada por referncia (ver tpico sobre funes), que receber o valor percentual de igualdade entre as strings. Esta funo case sensitive, ou seja, maisculas e minsculas so tratadas como diferentes.
Exemplo:
$num = similar_text("teste", "testando",&$porc);
As variveis passam a ter os seguintes valores:
$num == 4; $porc == 61.538461538462 strcasecmp
int strcasecmp(string str1, string str2);
Compara as duas strings e retorna 0 (zero) se forem iguais, um valor maior que zero se str1 > str2,
e um valor menor que zero se str1 < str2. Esta funo case insensitive, ou seja, maisculas e minsculas so tratadas como iguais.
strcmp int strcasecmp(string str1, string str2);
Funciona de maneira semelhante funo strcasecmp, com a diferena que esta case sensitive, ou seja, maisculas e minsculas so tratadas como diferentes.
strstr string strstr(string str1, string str2);
string strchr(string str1, string str2);
As duas funes so idnticas. Procura a primeira ocorrncia de str2 em str1. Se no encontrar,
retorna uma string vazia, e se encontrar retorna todos os caracteres de str1 a partir desse ponto.
Exemplo:
strstr("<NAME>", "Viv"); // retorna Vivas
&XUVR GH /LQJXDJHP 3+3
ZZZFLSVJDRUJEU FXUVRV#FLSVJDRUJEU
3iJLQD 51
&RPLWr GH ,QFHQWLYR D 3URGXomR GR 6RIWZDUH *UDWXLWR H $OWHUQDWLYR &,36*$
stristr
string stristr(string str1, string str2);
Works much like strstr, except that it is case insensitive, i.e., upper-case and lower-case letters are treated as equal.
strpos
int strpos(string str1, string str2, int [offset]);
Returns the position of the first occurrence of str2 in str1, or zero if there is none. The optional offset parameter determines from which character of str1 the search starts. Even when offset is used, the return value is relative to the beginning of str1.
strrpos
int strrpos(string haystack, char needle);
Returns the position of the last occurrence of needle in haystack, or zero if there is none.
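Illustrative calls (positions counted from 0; values assumed):
strpos("Hello World", "o");    // returns 4
strpos("Hello World", "o", 5); // returns 7
strrpos("Hello World", "o");   // returns 7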
String editing functions
chop
string chop(string str);
Removes whitespace and blank lines from the end of the given string.
Example:
chop("Teste \n \n "); // returns "Teste"
ltrim
string ltrim(string str);
Removes whitespace and blank lines from the beginning of the given string.
Example:
ltrim(" \n \n Teste"); // returns "Teste"
trim
string trim(string str);
Removes whitespace and blank lines from the beginning and the end of the given string.
Example:
trim(" \n Teste \n "); // returns "Teste"
strrev
string strrev(string str);
Returns the given string reversed.
Example:
strrev("Teste"); // returns "etseT"
strtolower
string strtolower(string str);
Returns the given string with all letters converted to lower case.
Example:
strtolower("Teste"); // returns "teste"
strtoupper
string strtoupper(string str);
Returns the given string with all letters converted to upper case.
Example:
strtoupper("Teste"); // returns "TESTE"
ucfirst
string ucfirst(string str);
Returns the given string with the first character converted to upper case.
Example:
ucfirst("teste de funcao"); // returns "Teste de funcao"
ucwords
string ucwords(string str);
Returns the given string with every word starting with an upper-case letter.
Example:
ucwords("teste de funcao"); // returns "Teste De Funcao"
str_replace
string str_replace(string str1, string str2, string str3);
Replaces all occurrences of str1 in str3 with the string str2.
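An illustrative call (values assumed, not from the original text):
str_replace("azul", "verde", "a casa é azul"); // returns "a casa é verde"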
Miscellaneous functions
chr
string chr(int ascii);
Returns the character corresponding to the given ASCII code.
ord
int ord(string string);
Returns the ASCII code corresponding to the given character.
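Illustrative calls:
chr(65);  // returns "A"
ord("A"); // returns 65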
echo
echo(string arg1, string [argn]...);
Prints the given arguments.
print
print(string arg);
Prints the given argument.
strlen
int strlen(string str);
Returns the length of the given string.
APPENDIX 02 - Functions for handling arrays
Generic functions
array
array array(...);
This is the function that creates an array from the given parameters. It is possible to supply the index of each element. This index can be a value of any type, not just an integer. If the index is not given, PHP assigns a sequential integer value, starting from 0 or from the last explicitly given integer index. Let us look at some examples:
Example 1
$teste = array("um", "dois","tr"=>"tres",5=>"quatro","cinco");
This produces the following mapping:
0 => um (0 is the first index, when none is given explicitly)
1 => dois (the next integer)
tr => tres
5 => quatro (explicitly given value)
6 => cinco (the integer following the last one assigned, and not the next free value, which would be 2)
Example 2
$teste = array("um", 6=>"dois","tr"=>"tres",5=>"quatro","cinco");
This produces the following mapping:
0 => um
6 => dois
tr => tres
5 => quatro (it would be 7, if it had not been given explicitly)
7 => cinco (it would be 6, if that index were not already taken)
In general, it is not advisable to use arrays with several types of indices, since this can confuse the programmer. If there really is a need to use this feature, great care must be taken when handling the array's indices.
range
array range(int minimo, int maximo);
The range function creates an array whose elements are the integers belonging to the given interval, inclusive. If the value of the first parameter is greater than that of the second, the function returns false (an empty value).
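Illustrative call:
range(1, 5); // returns array(1, 2, 3, 4, 5)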
shuffle
void shuffle(array &arr);
This function shuffles the array, i.e., it swaps the positions of its elements at random, and returns no value.
sizeof
int sizeof(array arr);
Returns an integer containing the number of elements of an array. If used on a variable whose value is not of type array, it returns 1. If the variable is not set or is an empty array, it returns 0.
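Illustrative calls (values assumed):
$a = array(1, 2, 3);
sizeof($a);  // returns 3
shuffle($a); // $a now holds 1, 2 and 3 in random order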
Navigation functions
Every variable of type array has an internal pointer indicating the next element to be accessed when no index is specified. The following functions manipulate this pointer, making it possible to walk through an array to inspect its contents (keys and elements).
reset
mixed reset(array arr);
Sets the internal pointer to the first element of the array, and returns the contents of that element.
end
mixed end(array arr);
Sets the internal pointer to the last element of the array, and returns the contents of that element.
next
mixed next(array arr);
Moves the internal pointer to the next element of the array, and returns the contents of that element.
Note: this is not a good function for determining whether an element is the last one of the array, since it can return false both at the end of the array and when there is an empty element.
prev
mixed prev(array arr);
Moves the internal pointer to the previous element of the array, and returns the contents of that element. It works the other way around from next.
pos
mixed pos(array arr);
Returns the contents of the current element of the array, as indicated by the internal pointer.
key
mixed key(array arr);
Works much like pos, but instead of returning the current element indicated by the array's internal pointer, it returns its index.
each
array each(array arr);
Returns an array containing the index and the element currently indicated by the array's internal pointer. The return value is a four-element array, whose indices are 0, 1, key and value. The elements with indices 0 and key hold the index of the current element, and the elements with indices 1 and value contain its value.
This function can be used to walk through all the elements of an array and determine whether the last element has already been reached, since it will not return false when an empty element is found. The each function only returns false after the last element of the array has been reached.
Example:
/* function that walks through all the elements of an array and prints its keys and values */
function imprime_array($arr) {
  reset($arr);
  while (list($chave, $valor) = each($arr))
    echo "Chave: $chave. Valor: $valor";
}
Sorting functions
These functions arrange the elements of an array according to certain criteria. The criteria are: whether or not the association between indices and elements is kept; whether the sorting is by elements or by indices; and the comparison function between two elements.
sort
void sort(array &arr);
The simplest array-sorting function. It sorts the elements of an array in ascending order, without keeping the relationships with the indices.
rsort
void rsort(array &arr);
Works the other way around from sort. It sorts the elements of an array in descending order, without keeping the relationships with the indices.
asort
void asort(array &arr);
Works much like sort. It sorts the elements of an array in ascending order, but keeps the relationships with the indices.
arsort
void arsort(array &arr);
Works the other way around from asort. It sorts the elements of an array in descending order and keeps the relationships between the elements and the indices.
ksort
void ksort(array &arr);
A sorting function based on the indices. It sorts the elements of an array according to their indices, in ascending order, keeping the relationships.
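An illustrative contrast between sort and asort (values assumed, not from the original text):
$a = array("x" => 3, "y" => 1, "z" => 2);
sort($a);  // $a == array(0 => 1, 1 => 2, 2 => 3), indices lost
$b = array("x" => 3, "y" => 1, "z" => 2);
asort($b); // $b == array("y" => 1, "z" => 2, "x" => 3), indices kept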
usort
void usort(array &arr, function compara);
This function takes another function as a parameter. It sorts the elements of an array without keeping the relationships with the indices, using for comparison a user-defined function, which must compare two elements of the array and return 0, 1 or -1, according to any criterion established by the user.
uasort
void uasort(array &arr, function compara);
This function also takes another function as a parameter. It sorts the elements of an array while keeping the relationships with the indices, using for comparison a user-defined function, which must compare two elements of the array and return 0, 1 or -1, according to any criterion established by the user.
uksort
void uksort(array &arr, function compara);
This function sorts the array by its indices, keeping the relationships with the elements, and uses for comparison a user-defined function, which must compare two indices of the array and return 0, 1 or -1, according to any criterion established by the user.
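A usort sketch with a hypothetical comparison function compara (not from the original text):
function compara($a, $b) {
    if ($a == $b) return 0;
    return ($a < $b) ? -1 : 1;
}
$valores = array(3, 1, 2);
usort($valores, "compara"); // $valores == array(1, 2, 3)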
About the author
This handbook was written by Maurício Vivas de Souza Barreto <EMAIL>
GNU Free Documentation License
Version 1.1, March 2000
Copyright (C) 2000 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
[The remainder of this section reproduces the standard text of the GNU Free Documentation License, Version 1.1: Preamble; Applicability and Definitions; Verbatim Copying; Copying in Quantity; Modifications; Combining Documents; Collections of Documents; Aggregation with Independent Works; Translation; Termination; Future Revisions of this License; and "How to use this License for your documents".]
Comitê de Incentivo a Produção do Software Gratuito e Alternativo (CIPSGA)
Founded on January 29, 1999.
First Board of Directors: <NAME>, Executive Director, <EMAIL>; <NAME>, Administrative Director; Paulo Ro<NAME> Guimarães, Institutional Director.
CIPSGA
Rua Professora Ester de Melo, number 202,
Parte, Benfica, Rio de Janeiro, RJ, CEP 20930-010;
Telephone (Fax/Data): 021-5564201;
e-mail: <EMAIL> CNPJ: 03179614-0001/70
|
multDM | cran | R | Package ‘multDM’
October 13, 2022
Type Package
Title Multivariate Version of the Diebold-Mariano Test
Version 1.1.4
Imports MTS
Date 2022-06-09
Author <NAME> [aut, cre] (Faculty of Economic Sciences, University
of Warsaw, Poland)
Maintainer <NAME> <<EMAIL>>
Description Allows to perform the multivariate version of the Diebold-Mariano test for equal predictive
ability of multiple forecast comparison. Main reference: Mariano, R.S., <NAME>. (2012)
<doi:10.1016/j.jeconom.2012.01.014>.
License GPL-3
LazyData TRUE
URL https://CRAN.R-project.org/package=multDM
Note Research funded by the Polish National Science Centre grant under
the contract number DEC-2015/19/N/HS4/00205.
NeedsCompilation no
Repository CRAN
Date/Publication 2022-06-09 09:50:08 UTC
R topics documented:
DM.tes... 2
d_... 3
los... 4
MDM.selectio... 5
MDM.tes... 6
MDMforecast... 7
oilforecast... 8
print.MD... 9
TB_AR_tes... 10
TB_M... 11
DM.test Computes Diebold-Mariano Test for the Equal Predictive Accuracy.
Description
This function computes Diebold-Mariano test for the equal predictive accuracy. The null hypothesis
of this test is that two forecasts have the same accuracy. The alternative hypothesis can be specified
as ”Both forecasts have different accuracy”, ”The first forecast is less accurate than the second
forecast”, or ”The first forecast is more accurate than the second forecast”.
Usage
DM.test(f1,f2,y,loss.type="SE",h,c=FALSE,H1="same")
Arguments
f1 vector of the first forecast
f2 vector of the second forecast
y vector of the real values of the modelled time-series
loss.type method to compute the loss function, loss.type="SE" will use squared errors,
loss.type="AE" will use absolute errors, loss.type="SPE" will use squred
proportional error (useful if errors are heteroskedastic), loss.type="ASE" will
use absolute scaled error, if loss.type will be specified as some numeric, then
the function of type exp(loss.type*errors)-1-loss.type*errors will be
used (useful when it is more costly to underpredict y than to overpredict), if not
specified loss.type="SE" is used
h numeric denoting that forecasts h-steps ahead are evaluated, if not specified
h=1 is used
c logical indicating if Harvey-Leybourne-Newbold correction for small samples
should be used, if not specified c=FALSE is used
H1 alternative hypothesis, H1="same" for ”both forecasts have different accuracy”,
H1="more" for ”the first forecast is more accurate than the second forecast”,
H1="less" for ”the first forecast is less accurate than the second forecast”, if
not specified H1="same" is taken
Value
class htest object, list of
statistic test statistic
parameter h, forecast horizon used
alternative alternative hypothesis of the test
p.value p-value
method name of the test
data.name names of the tested time-series
References
<NAME>., <NAME>. 1995. Comparing predictive accuracy. Journal of Business and Eco-
nomic Statistics 13, 253–265.
<NAME>., <NAME>., <NAME>., 1997. Testing the equality of prediction mean squared
errors. International Journal of Forecasting 13, 281–291.
<NAME>., <NAME>. 2006. Another look at measures of forecast accuracy. International
Journal of Forecasting 22, 679–688.
<NAME>., 2005. Asset Price Dynamics, Volatility, and Prediction, Princeton University Press.
Triacca, U., 2018. Comparing Predictive Accuracy of Two Forecasts, https://www.lem.sssup.it/phd/documents/Lesson19.pdf.
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
DM.test(f1=forecasts[,1],f2=forecasts[,2],y=ts,loss="SE",h=1,c=FALSE,H1="same")
d_t Computes Loss Differential.
Description
This function computes the loss differential, i.e., the differences between the losses from the (k+1)-th and the k-th
models.
Usage
d_t(e)
Arguments
e matrix of loss functions, columns correspond to time index, and rows to differ-
ent models
Value
matrix of loss differentials
References
<NAME>., <NAME>., 2012. Statistical tests for multiple forecast comparison. Journal of Econo-
metrics 169, 123–130.
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
l <- loss(realized=ts,evaluated=forecasts,loss.type="SE")
d <- d_t(l)
loss Computes Loss Function.
Description
This function computes various loss functions for given realized values of time-series and a collec-
tion of forecasts.
Usage
loss(realized,evaluated,loss.type)
Arguments
realized vector of the real values of the modelled time-series
evaluated matrix of the forecasts, columns correspond to time index, rows correspond to
different models
loss.type method to compute the loss function, loss.type="SE" will use squared errors,
loss.type="AE" will use absolute errors, loss.type="SPE" will use squred
proportional error (useful if errors are heteroskedastic), loss.type="ASE" will
use absolute scaled error, if loss.type will be specified as some numeric, then
the function of type exp(loss.type*errors)-1-loss.type*errors will be
used (useful when it is more costly to underpredict realized than to overpre-
dict)
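With a numeric loss.type the loss is thus of the linex type, which penalizes errors of one sign more heavily; a quick sketch (values assumed, not from the package):
a <- 1
e <- c(-1, 1)            # one negative and one positive error of equal size
exp(a * e) - 1 - a * e   # 0.3679 vs. 0.7183: the positive error costs more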
Value
matrix with columns corresponding to time index and rows to different models
References
<NAME>., <NAME>. 2006. Another look at measures of forecast accuracy. International
Journal of Forecasting 22, 679–688.
<NAME>., 2005. Asset Price Dynamics, Volatility, and Prediction, Princeton University Press.
<NAME>., 2018. Comparing Predictive Accuracy of Two Forecasts, https://www.lem.sssup.
it/phd/documents/Lesson19.pdf.
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
l <- loss(realized=ts,evaluated=forecasts,loss.type="SE")
MDM.selection Selects Models with Outstanding Predictive Ability basing on Multi-
variate Diebold-Mariano Test.
Description
This function selects models with outstanding predictive ability basing on multivariate Diebold-
Mariano test MDM.test.
Usage
MDM.selection(realized,evaluated,q,alpha,statistic="Sc",loss.type="SE")
Arguments
realized vector of the real values of the modelled time-series
evaluated matrix of the forecasts, columns correspond to time index, rows correspond to
different models
q numeric indicating a lag length beyond which we are willing to assume that the
autocorrelation of loss differentials is essentially zero
alpha numeric indicating a significance level for multivariate Diebold-Mariano tests
statistic statistic="S" for the basic version of the test, and statistic="Sc" for the
finite-sample correction, if not specified statistic="Sc" is used
loss.type method to compute the loss function, loss.type="SE" will use squared errors,
loss.type="AE" will use absolute errors, loss.type="SPE" will use squred
proportional error (useful if errors are heteroskedastic), loss.type="ASE" will
use absolute scaled error, if loss.type will be specified as some numeric, then
the function of type exp(loss.type*errors)-1-loss.type*errors will be
used (useful when it is more costly to underpredict realized than to overpre-
dict), if not specified loss.type="SE" is used
Value
class MDM object, list of
outcomes matrix with mean losses for the selected models, statistics corresponding to
losses differentials and ranking of these statistics
p.value numeric of p-value from the procedure, i.e., p-value of multivariate Diebold-
Mariano test from the last step
alpha alpha, i.e., the chosen significance level
eliminated numeric indicating the number of eliminated models
References
<NAME>., <NAME>., 2012. Statistical tests for multiple forecast comparison. Journal of Econo-
metrics 169, 123–130.
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
MDM.selection(realized=ts,evaluated=forecasts,q=10,alpha=0.1,statistic="S",loss.type="AE")
MDM.test Computes Multivariate Diebold-Mariano Test for the Equal Predictive
Accuracy of Two or More Non-nested Forecasting Models.
Description
This function computes multivariate Diebold-Mariano test for the equal predictive accuracy of two
or more non-nested forecasting models. The null hypothesis of this test is that the evaluated fore-
casts have the same accuracy. The alternative hypothesis is that Equal predictive accuracy (EPA)
does not hold.
Usage
MDM.test(realized,evaluated,q,statistic="Sc",loss.type="SE")
Arguments
realized vector of the real values of the modelled time-series
evaluated matrix of the forecasts, columns correspond to time index, rows correspond to
different models
q numeric indicating a lag length beyond which we are willing to assume that the
autocorrelation of loss differentials is essentially zero
statistic statistic="S" for the basic version of the test, and statistic="Sc" for the
finite-sample correction, if not specified statistic="Sc" is used
loss.type method to compute the loss function, loss.type="SE" will use squared errors,
loss.type="AE" will use absolute errors, loss.type="SPE" will use squred
proportional error (useful if errors are heteroskedastic), loss.type="ASE" will
use absolute scaled error, if loss.type will be specified as some numeric, then
the function of type exp(loss.type*errors)-1-loss.type*errors will be
used (useful when it is more costly to underpredict realized than to overpre-
dict), if not specified loss.type="SE" is used
Value
class htest object, list of
statistic test statistic
parameter q, a lag length
alternative alternative hypothesis of the test
p.value p-value
method name of the test
data.name names of the tested objects
References
<NAME>., <NAME>., 2012. Statistical tests for multiple forecast comparison. Journal of Econo-
metrics 169, 123–130.
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
MDM.test(realized=ts,evaluated=forecasts,q=10,statistic="S",loss.type="AE")
MDMforecasts Sample Data.
Description
Sample artificial data.
Usage
data(MDMforecasts)
Format
MDMforecasts is list object such that
• MDMforecasts$ts is vector of time-series which is of interest to model
• MDMforecasts$forecasts is matrix of 20 different forecasts of ts from 20 different forecasting models, each row represents different forecast and time is indexed by columns
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
MDM.test(realized=ts,evaluated=forecasts,q=10,statistic="S",loss.type="AE")
oilforecasts Sample Data from Crude Oil Price Forecasting.
Description
Forecasts obtained from various methods applied to crude oil price.
Usage
data(oilforecasts)
Format
oilforecasts is matrix object such that its rows correspond to forecasts from various methods, i.e.,
• REALIZED is the forecasted time-series,
• DMA.DOW is the forecast from Dynamic Model Averaging with the dynamic Occam’s window,
• BMA.DOW is the forecast from Bayesian Model Averaging with the dynamic Occam’s window,
• DMA.1V is the forecast from Dynamic Model Averaging applied only to one-variable models,
• BMA.1V is the forecast from Bayesian Model Averaging applied only to one-variable models,
• DMS.1V is the forecast from Dynamic Model Selection applied only to one-variable models,
• BMS.1V is the forecast from Bayesian Model Selection applied only to one-variable models,
• TVP is the forecast from Time-Varying Parameters regression,
• LASSO is the forecast from LASSO regression,
• RIDGE is the forecast from RIDGE regression,
• DYN.EL.NET is the forecast from the elastic net regression, with the elastic net mixing param-
eter changing with time index,
• LARS is the forecast from the least-angle regression,
• B.LASSO is the forecast from the Bayesian LASSO regression,
• B.RIDGE is the forecast from the Bayesian RIDGE regression,
• ARIMA is the forecast from the best ARIMA model according to AIC,
• NAIVE is the naive forecast, i.e., the last observation is taken as a one-step ahead prediction,
• MA is the moving average.
Details
The data were taken from Juvenal and Petrella (2015). They cover the period between 1971 and
2009, and are in quarterly frequency. Time-series with missing observations were excluded from the
original data set, resulting finally in 127 explanatory variables, instead of 150 in the original data
set. In particular, the excluded time-series are the ones whose start date is after 1971. The dependent
time-series is the average world price of oil taken in logarithmic differences. The independent time-
series represent various stationarity transformations of macroeconomic and financial variables of the
G7 countries and from the oil market, global economic activity and various commodity prices. The
details of the original data set are given in the paper by Juvenal and Petrella (2015). The forecasting
with various models, based on this data set, was done by the author of this package, just to provide
some more concrete example set of forecasts. The independent variables were taken in the first
lags. The forgetting parameters in DMA/DMS models were set to 0.99, resulting in the effective
rolling window size of 100. Therefore, such a window was taken for the moving average. LASSO
and RIDGE (also in the Bayesian versions), the elastic net, the least-angle regression and ARIMA
models were estimated in rolling windows of the size of 100 observations. First 100 observations
were excluded, and oilforecasts consists of the remaining last observations. The estimations
were done with the following packages fDMA, forecast, glmnet, lars and monomvn.
References
<NAME>. 2017. fDMA: Dynamic Model Averaging and Dynamic Model Selection for continuous
outcomes. https://CRAN.R-project.org/package=fDMA
<NAME>., <NAME>., <NAME>. 2010. Regularization paths for generalized linear models via
coordinate descent. Journal of Statistical Software 33, 1–22.
<NAME>. 2017. monomvn: Estimation for Multivariate Normal and Student-t Data with
Monotone Missingness. https://CRAN.R-project.org/package=monomvn
<NAME>., <NAME>. 2013. lars: Least Angle Regression, Lasso and Forward Stagewise. https:
//CRAN.R-project.org/package=lars
<NAME>., <NAME>. 2008. Automatic time series forecasting: the forecast package for
R. Journal of Statistical Software 26, 1–22.
<NAME>., <NAME>. 2015. Speculation in the oil market. Journal of Applied Econometrics 30,
612–649.
Examples
data(oilforecasts)
ts <- oilforecasts[1,]
forecasts <- oilforecasts[-1,]
l <- loss(realized=ts,evaluated=forecasts,loss.type="SE")
d <- d_t(l)
q <- TB_MA(d=d,q.max=4)
MDM.selection(realized=ts,evaluated=forecasts,q=q,alpha=0.1,statistic="Sc",loss.type="SE")
print.MDM Prints MDM Object.
Description
The function prints selected outcomes obtained from MDM.selection.
Usage
## S3 method for class 'MDM'
print(x, ...)
Arguments
x an object of MDM class
... not used
Details
The function prints the models with outstanding predictive ability, their mean loss functions, the statistics
corresponding to their loss differentials (there are as many statistics as the number of models less one), and
the ordering of these statistics. It also prints the p-value of the test and the number of eliminated models. If no
models with outstanding predictive ability were found, the function prints a message to that effect.
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
m <- MDM.selection(realized=ts,evaluated=forecasts,q=10,alpha=0.1,statistic="S",loss.type="AE")
print(m)
TB_AR_test Computes Tiao-Box Test for Autocorrelation.
Description
This function computes the Tiao-Box test for autocorrelation, i.e., for the coefficient of the p-th lag in a VAR(p)
model. Its null hypothesis is that the p-th lag is not essential. The alternative hypothesis is that it is
essential.
Usage
TB_AR_test(d,p)
Arguments
d matrix of time-series, assumed to be the stationary VARMA type, columns
correspond to time index, and rows to different time-series
p numeric indicating a lag length beyond which we are willing to assume that the
autocorrelation is essentially zero
Value
class htest object, list of
statistic test statistic
parameter q, a lag length
alternative alternative hypothesis of the test
p.value p-value
method name of the test
data.name name of the tested time-series
References
<NAME>., Box, G.E.P. 1981. Modeling multiple times series with applications. Journal of the
American Statistical Association 76, 802–816.
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
l <- loss(realized=ts,evaluated=forecasts,loss.type="SE")
d <- d_t(l)
TB_AR_test(d=d,p=10)
TB_MA Checks for a Lag in VMA Process with Tiao-Box Procedure.
Description
This function helps to find a lag in stationary VMA process with Tiao-Box procedure, i.e., the lag
length beyond which we are willing to assume that the autocorrelation is essentially zero.
Usage
TB_MA(d,q.max)
Arguments
d matrix of time-series, assumed to be the stationary VARMA type, columns
correspond to time index, and rows to different time-series
q.max numeric indicating the maximum number of lag to be considered
Details
The function searches for correlations smaller than -2n^(-0.5) or larger than 2n^(-0.5), where n is the
length of the time-series.
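As a rough stand-alone illustration of this cutoff (an assumed sketch, not the package's internal code):
n <- 100                      # length of the time-series
threshold <- 2 * n^(-0.5)     # i.e., 2/sqrt(n)
r <- acf(rnorm(n), lag.max = 4, plot = FALSE)$acf[-1]
which(abs(r) > threshold)     # lags whose correlation falls outside the band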
Value
numeric indicating the found lag length
References
<NAME>., <NAME>. 1981. Modeling multiple times series with applications. Journal of the
American Statistical Association 76, 802–816.
Examples
data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
l <- loss(realized=ts,evaluated=forecasts,loss.type="SE")
d <- d_t(l)
TB_MA(d=d,q.max=10) |
opentelemetry-prometheus | rust | Rust | Crate opentelemetry_prometheus
===
An OpenTelemetry exporter for Prometheus metrics.
```
use opentelemetry_api::{metrics::MeterProvider as _, KeyValue};
use opentelemetry_sdk::metrics::MeterProvider;
use prometheus::{Encoder, TextEncoder};
// create a new prometheus registry
let registry = prometheus::Registry::new();
// configure OpenTelemetry to use this registry
let exporter = opentelemetry_prometheus::exporter()
    .with_registry(registry.clone())
    .build()?;
// set up a meter to create instruments
let provider = MeterProvider::builder().with_reader(exporter).build();
let meter = provider.meter("my-app");
// Use two instruments
let counter = meter
.u64_counter("a.counter")
.with_description("Counts things")
.init();
let histogram = meter
.i64_histogram("a.histogram")
.with_description("Records values")
.init();
counter.add(100, &[KeyValue::new("key", "value")]);
histogram.record(100, &[KeyValue::new("key", "value")]);
// Encode data as text or protobuf
let encoder = TextEncoder::new();
let metric_families = registry.gather();
let mut result = Vec::new();
encoder.encode(&metric_families, &mut result)?;
// result now contains encoded metrics:
//
// # HELP a_counter_total Counts things
// # TYPE a_counter_total counter
// a_counter_total{key="value",otel_scope_name="my-app"} 100
// # HELP a_histogram Records values
// # TYPE a_histogram histogram
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="0"} 0
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="5"} 0
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="10"} 0
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="25"} 0
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="50"} 0
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="75"} 0
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="100"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="250"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="500"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="750"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="1000"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="2500"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="5000"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="7500"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="10000"} 1
// a_histogram_bucket{key="value",otel_scope_name="my-app",le="+Inf"} 1
// a_histogram_sum{key="value",otel_scope_name="my-app"} 100
// a_histogram_count{key="value",otel_scope_name="my-app"} 1
// # HELP otel_scope_info Instrumentation Scope metadata
// # TYPE otel_scope_info gauge
// otel_scope_info{otel_scope_name="my-app"} 1
// # HELP target_info Target metadata
// # TYPE target_info gauge
// target_info{service_name="unknown_service"} 1
```
Structs
---
* ExporterBuilder: PrometheusExporter configuration options
* PrometheusExporter: Prometheus metrics exporter
Functions
---
* exporter: Creates a builder to configure a PrometheusExporter
Struct opentelemetry_prometheus::ExporterBuilder
===
```
pub struct ExporterBuilder { /* private fields */ }
```
PrometheusExporter configuration options
Implementations
---
### impl ExporterBuilder
#### pub fn without_units(self) -> Self
Disables exporter’s addition of unit suffixes to metric names.
By default, metric names include a unit suffix to follow Prometheus naming conventions. For example, the counter metric `request.duration`, with unit
`ms` would become `request_duration_milliseconds_total`.
With this option set, the name would instead be `request_duration_total`.
#### pub fn without_counter_suffixes(self) -> Self
Disables exporter’s addition `_total` suffixes on counters.
By default, metric names include a `_total` suffix to follow Prometheus naming conventions. For example, the counter metric `happy.people` would become `happy_people_total`. With this option set, the name would instead be
`happy_people`.
#### pub fn without_target_info(self) -> Self
Configures the exporter to not export the resource `target_info` metric.
If not specified, the exporter will create a `target_info` metric containing the metrics’ Resource attributes.
#### pub fn without_scope_info(self) -> Self
Configures the exporter to not export the `otel_scope_info` metric.
If not specified, the exporter will create a `otel_scope_info` metric containing the metrics’ Instrumentation Scope, and also add labels about Instrumentation Scope to all metric points.
#### pub fn with_namespace(self, namespace: impl Into<String>) -> Self
Configures the exporter to prefix metrics with the given namespace.
Metrics such as `target_info` and `otel_scope_info` are not prefixed since these have special behavior based on their name.
#### pub fn with_registry(self, registry: Registry) -> Self
Configures which prometheus::Registry the exporter will use.
If no registry is specified, the prometheus default is used.
#### pub fn with_aggregation_selector(
self,
agg: impl AggregationSelector + 'static
) -> Self
Configure the AggregationSelector the exporter will use.
If no selector is provided, the DefaultAggregationSelector is used.
#### pub fn build(self) -> Result<PrometheusExporter>
Creates a new PrometheusExporter from this configuration.
Trait Implementations
---
### impl Debug for ExporterBuilder
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ExporterBuilder
#### fn default() -> ExporterBuilder
Returns the “default value” for a type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for ExporterBuilder
### impl Send for ExporterBuilder
### impl Sync for ExporterBuilder
### impl Unpin for ExporterBuilder
### impl !UnwindSafe for ExporterBuilder
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> FutureExt for T
#### fn with_context(self, otel_cx: Context) -> WithContext<SelfAttaches the provided `Context` to this type, returning a `WithContext`
wrapper.
wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct opentelemetry_prometheus::PrometheusExporter
===
```
pub struct PrometheusExporter { /* private fields */ }
```
Prometheus metrics exporter
Trait Implementations
---
### impl AggregationSelector for PrometheusExporter
#### fn aggregation(&self, kind: InstrumentKind) -> Aggregation
Selects the aggregation and the parameters to use for that aggregation based on the InstrumentKind.
### impl Debug for PrometheusExporter
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl MetricReader for PrometheusExporter
#### fn register_pipeline(&self, pipeline: Weak<Pipeline>)
Registers a MetricReader with a [Pipeline].
Registers an external Producer with this MetricReader.
#### fn temporality(&self, kind: InstrumentKind) -> Temporality
Note: Prometheus only supports cumulative temporality so this will always be Temporality::Cumulative.
Auto Trait Implementations
---
### impl !RefUnwindSafe for PrometheusExporter
### impl Send for PrometheusExporter
### impl Sync for PrometheusExporter
### impl Unpin for PrometheusExporter
### impl !UnwindSafe for PrometheusExporter
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> FutureExt for T
#### fn with_context(self, otel_cx: Context) -> WithContext<SelfAttaches the provided `Context` to this type, returning a `WithContext`
wrapper.
wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Function opentelemetry_prometheus::exporter
===
```
pub fn exporter() -> ExporterBuilder
```
Creates a builder to configure a PrometheusExporter |
tinymce-rails-bach | ruby | Ruby | Rails Integration for TinyMCE
===
The `tinymce-rails` gem integrates the [TinyMCE](https://www.tiny.cloud/) editor with the Rails asset pipeline.
This gem is compatible with Rails 4.2 and higher.
This is the branch for TinyMCE 5. For TinyMCE 4, please see the [tinymce-4 branch](https://github.com/spohlenz/tinymce-rails/tree/tinymce-4), and for TinyMCE 3.5.x, please see the [tinymce-3 branch](https://github.com/spohlenz/tinymce-rails/tree/tinymce-3).
[![Build Status](https://travis-ci.org/spohlenz/tinymce-rails.png?branch=master)](https://travis-ci.org/spohlenz/tinymce-rails)
**New in 3.5.11, 4.1.10 and 4.2.1:** Alternative asset installation methods (copy vs compile/symlink). See the [Asset Compilation](#asset-compilation) section below for details.
Instructions
---
**1. Add `tinymce-rails` to your Gemfile**
```
gem 'tinymce-rails'
```
Be sure to add to the global group, not the `assets` group. Then run `bundle install`.
**2. Create a `config/tinymce.yml` file with your global configuration options:**
```
toolbar:
- styleselect | bold italic | undo redo
- image | link plugins:
- image
- link
```
The Rails server no longer needs to be restarted when this file is updated in development mode.
To define multiple configuration sets, follow this syntax (a default configuration must be specified):
```
default: &default
plugins:
- image
- link
alternate:
<<: *default
toolbar: styleselect | bold italic | undo redo | table
plugins:
- table
```
See the [TinyMCE 5 Documentation](https://www.tiny.cloud/docs/configure/) for a full list of configuration options.
**3. Include the TinyMCE assets**
Use *one* of the following options to include TinyMCE assets.
(1) Add to your application.js:
```
//= require tinymce
```
or (2) with jQuery integration:
```
//= require tinymce-jquery
```
(3) The TinyMCE assets can be included on a per-page basis using the `tinymce_assets` helper:
```
<%= tinymce_assets %>
#=> <script type="text/javascript" src="/assets/tinymce.js">
```
**4. Initialize TinyMCE**
For each textarea that you want to use with TinyMCE, add the "tinymce" class and ensure it has a unique ID:
```
<%= text_area_tag :content, "", :class => "tinymce", :rows => 40, :cols => 120 %>
```
or if you are using Rails' form builders:
```
<%= f.text_area :content, :class => "tinymce", :rows => 40, :cols => 120 %>
```
Then invoke the `tinymce` helper to initialize TinyMCE:
```
<%= tinymce %>
```
Custom options can be passed to `tinymce` to override the global options specified in `config/tinymce.yml`:
```
<%= tinymce :theme => "simple", :language => "de", :plugins => ["wordcount", "paste"] %>
```
Alternate configurations defined in 'config/tinymce.yml' can be used with:
```
<%= tinymce :alternate %>
```
Language Packs
---
See the [tinymce-rails-langs](https://github.com/spohlenz/tinymce-rails-langs) gem for additional language packs for TinyMCE.
Manual Initialization
---
Using the `tinymce` helper and global configuration file is entirely optional. The `tinyMCE.init` function can be invoked manually if desired.
```
<%= text_area_tag :editor, "", :rows => 40, :cols => 120 %<script type="text/javascript">
tinyMCE.init({
selector: 'textarea.editor'
});
</script>
```
Asset Compilation
---
Since TinyMCE loads most of its files dynamically, some workarounds are required to ensure that the TinyMCE asset files are accessible using non-digested filenames.
As of tinymce-rails 3.5.11, 4.1.10 and 4.2.1, two alternative asset installation methods are available, which can be changed by setting `config.tinymce.install` within your `config/application.rb` file. These methods are called when you run `rake asset:precompile` (via `Rake::Task#enhance`) after the regular application assets are compiled.
The default method (as of 4.5.2), `compile`, adds the TinyMCE paths to the Sprockets precompilation paths and then creates symlinks from the non-digested filenames to their digested versions.
```
config.tinymce.install = :compile
```
If you experience issues with the `compile` method, you may wish to use the `copy` method instead, which copies the TinyMCE assets directly into `public/assets` and appends the file information into the asset manifest. The `copy_no_preserve` method is also available of you do not wish to or cannot preserve file modes on your filesystem.
```
config.tinymce.install = :copy
```
If you are including TinyMCE via `application.js` or using the `tinymce_assets` helper, you do not need to manually alter the precompile paths. However if you wish to include `tinymce-jquery.js` independently (i.e. using `javascript_include_tag`), you will need to add it to the precompile list in `config/environments/production.rb`:
```
config.assets.precompile << "tinymce-jquery.js"
```
Custom Plugins & Skins
---
To use custom plugins or skins, simply add the files to your asset load path so that they are locatable at a path beneath `tinymce/plugins/` or `tinymce/skins/`.
For example, a plugin called `mycustomplugin` could have its main JS file at `app/assets/javascripts/tinymce/plugins/mycustomplugin/plugin.js`.
You should also ensure that your custom paths are added to the asset precompile paths.
Using tinymce-rails as an Engine Dependency
---
Ensure that you explicitly require `tinymce-rails` within your engine file. Including tinymce-rails as a dependency in your gemspec is not enough.
Updating
---
When new versions of TinyMCE are released, simply update the `tinymce-rails` gem to the latest version. There is no need to run any extra rake tasks (apart from `rake assets:precompile`). |
github.com/hashicorp/vault-csi-provider | go | Go | README
[¶](#section-readme)
---
### HashiCorp Vault Provider for Secrets Store CSI Driver
HashiCorp [Vault](https://vaultproject.io) provider for the [Secrets Store CSI driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver) allows you to get secrets stored in Vault and use the Secrets Store CSI driver interface to mount them into Kubernetes pods.
#### Installation
##### Prerequisites
* Supported Kubernetes version, see the [documentation](https://developer.hashicorp.com/vault/docs/platform/k8s/csi#supported-kubernetes-versions) (runs on Linux nodes only)
* [Secrets store CSI driver](https://secrets-store-csi-driver.sigs.k8s.io/getting-started/installation.html) installed
##### Using helm
The recommended installation method is via helm 3:
```
helm repo add hashicorp https://helm.releases.hashicorp.com
# Just installs Vault CSI provider. Adjust `server.enabled` and `injector.enabled`
# if you also want helm to install Vault and the Vault Agent injector.
helm install vault hashicorp/vault \
--set "server.enabled=false" \
--set "injector.enabled=false" \
--set "csi.enabled=true"
```
##### Using yaml
You can also install using the deployment config in the `deployment` folder:
```
kubectl apply -f deployment/vault-csi-provider.yaml
```
#### Usage
See the [learn tutorial](https://learn.hashicorp.com/tutorials/vault/kubernetes-secret-store-driver)
and [documentation pages](https://www.vaultproject.io/docs/platform/k8s/csi) for full details of deploying, configuring and using Vault CSI provider. The integration tests in [test/bats/provider.bats](https://github.com/hashicorp/vault-csi-provider/blob/v1.4.0/test/bats/provider.bats) also provide a good set of fully worked and tested examples to build on.
#### Troubleshooting
To troubleshoot issues with Vault CSI provider, look at logs from the Vault CSI provider pod running on the same node as your application pod:
```
kubectl get pods -o wide
# find the Vault CSI provider pod running on the same node as your application pod
kubectl logs vault-csi-provider-7x44t
```
Pass `-debug=true` to the provider to get more detailed logs. When installing via helm, you can use `--set "csi.debug=true"`.
#### Developing
The Makefile has targets to automate building and testing:
```
make build test
```
The project also uses some linting and formatting tools. To install the tools:
```
make bootstrap
```
You can then run the additional checks:
```
make fmt lint mod
```
To run a full set of integration tests on a local kind cluster, ensure you have the following additional dependencies installed:
* `docker`
* [`kind`](https://github.com/kubernetes-sigs/kind)
* [`kubectl`](https://kubernetes.io/docs/tasks/tools/)
* [`helm`](https://helm.sh/docs/intro/install/)
* [`bats`](https://bats-core.readthedocs.io/en/stable/installation.html)
You can then run:
```
make setup-kind e2e-image e2e-setup e2e-test
```
Finally tidy up the resources created in the kind cluster with:
```
make e2e-teardown
```
Documentation
[¶](#section-documentation)
---
![The Go Gopher](/static/shared/gopher/airplane-1200x945.svg)
There is no documentation for this package. |
github.com/Golangltd/LollipopGo | go | Go | README
[¶](#section-readme)
---
### LollipopGo
Golang语言社区 全球服游戏服务器框架,目前协议支持websocket、http及RPC,采用状态同步,愿景:打造竞技实时【比赛】对战游戏平台框架! 功能持续更新中... ...
> 微信订阅号:Golang语言社区
> 微信服务号:Golang技术社区
> 商业定制版:联系彬哥(微信:cserli)
#### 论坛
WwW.Golang.Ltd
#### QQ群
221273219
#### 简书
[简书专栏](https://www.jianshu.com/u/9f8cf18345b5)
#### 腾讯云+社区专栏
[腾讯专栏](https://cloud.tencent.com/developer/column/2170)
#### 架构视频教程
[流程讲解](http://www.byteedu.com/forum.php?mod=viewthread&tid=306)
#### Golang语言社区
1. 希望更多喜欢Go语言的同学及想从事Go语言开发游戏服务器的同学一个方向的指引 2. 课程多维度教学,lollipopGo游戏框架实战课程等等 3. LollipopGo架构 最新版本: v1.0.20190117 4. LollipopGo架构 手机对战游戏视频:[点击访问](https://www.bilibili.com/video/av52239498)
5. LollipopGo架构 PC端游对战游戏视频:[点击访问](https://www.bilibili.com/video/av54726431)
6. LollipopGo对应的最新cocos creator客户端版本地址:[点击访问(不同厂商浏览器试玩游戏对战,例如:谷歌、360浏览器)](http://game1.golang.ltd/20190118/)
7. 同时我们的免费课程也在持续更新中; 点击访问:[腾讯课堂](http://gopher.ke.qq.com)
8. 同时我们的免费课程也在持续更新中; 点击访问:[网易云课堂](https://study.163.com/provider/400000000538037/index.htm?share=2&shareId=400000000538037)
9. 同时我们的免费课程也在持续更新中; 点击访问:[B站(bilibili.com)](http://space.bilibili.com/389368547?)
10. 同时我们的免费课程也在持续更新中; 点击访问:[ByteEdu教育平台(ByteEdu.com)](http://www.byteedu.com/forum.php?mod=forumdisplay&fid=36)
#### 架构整体流程图
![](https://github.com/Golangltd/LollipopGo/raw/master/vender/src/LollipopGo/LollipopGo/xmind/LollipopGo%E6%9E%B6%E6%9E%84%E6%8B%93%E6%89%91%E5%9B%BE%20v1.0.20181221.png)
None |
SAMBA | cran | R | Package ‘SAMBA’
October 12, 2022
Title Selection and Misclassification Bias Adjustment for Logistic
Regression Models
Version 0.9.0
Description Health research using data from electronic health records (EHR) has gained
popularity, but misclassification of EHR-derived disease status and lack of
representativeness of the study sample can result in substantial bias in
effect estimates and can impact power and type I error for association
tests. Here, the assumed target of inference is the relationship between
binary disease status and predictors modeled using a logistic regression
model. 'SAMBA' implements several methods for obtaining bias-corrected
point estimates along with valid standard errors as proposed in Beesley and
Mukherjee (2020) <doi:10.1101/2019.12.26.19015859>, currently under review.
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 6.1.1
Imports stats, optimx, survey
Suggests knitr, rmarkdown, ggplot2, scales, MASS
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [cre],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-02-20 07:50:07 UTC
R topics documented:
approxdis... 2
nonlogisti... 3
obslogli... 5
obsloglikE... 7
samba.d... 9
sensitivit... 10
approxdist Estimate parameters in the disease model approximating the observed
data distribution
Description
approxdist estimates parameters in the disease model given a previously-estimated marginal sen-
sitivity. This estimation is based on approximating the distribution of D* given Z.
Usage
approxdist(Dstar, Z, c_marg, weights = NULL)
Arguments
Dstar Numeric vector containing observed disease status. Should be coded as 0/1
Z Numeric matrix of covariates in disease model
c_marg marginal sensitivity, P(D* = 1 | D = 1, S = 1)
weights Optional numeric vector of patient-specific weights used for selection bias ad-
justment. Default is NULL
Details
We are interested in modeling the relationship between binary disease status and covariates Z using
a logistic regression model. However, D may be misclassified, and our observed data may not well-
represent the population of interest. In this setting, we estimate parameters from the disease model
using the following modeling framework.
Notation:
D Binary disease status of interest.
D* Observed binary disease status. Potentially a misclassified version of D. We assume D = 0
implies D* = 0.
S Indicator for whether patient from population of interest is included in the analytical dataset.
Z Covariates in disease model of interest.
W Covariates in model for patient inclusion in analytical dataset (selection model).
X Covariates in model for probability of observing disease given patient has disease (sensitivity
model).
Model Structure:
Disease Model
logit(P (D = 1|X)) = theta0 + thetaZ Z
Selection Model
Sensitivity Model
logit(P (D∗ = 1|D = 1, S = 1, X)) = beta0 + betaX X
Value
a list with two elements: (1) ’param’, a vector with parameter estimates for disease model (logOR
of Z), and (2) ’variance’, a vector of variance estimates for disease model parameters. Results do
not include intercept.
References
Statistical inference for association studies using electronic health records: handling both selection
bias and outcome misclassification <NAME> and <NAME> medRxiv 2019.12.26.19015859
Examples
library(SAMBA)
# These examples are generated from the vignette. See it for more details.
# Generate IPW weights from the true model
expit <- function(x) exp(x) / (1 + exp(x))
prob.WD <- expit(-0.6 + 1 * samba.df$D + 0.5 * samba.df$W)
weights <- nrow(samba.df) * (1 / prob.WD) / (sum(1 / prob.WD))
# Estimate sensitivity by using inverse probability of selection weights
# and P(D=1)
sens <- sensitivity(samba.df$Dstar, samba.df$X, prev = mean(samba.df$D),
weights = weights)
approx1 <- approxdist(samba.df$Dstar, samba.df$Z, sens$c_marg,
weights = weights)
nonlogistic Estimate parameters in the disease model given sensitivity as a func-
tion of covariates.
Description
non-logistic link function for D* given Z and sensitivity. This function assumes that sensitivity as a
function of X is known or has been estimated
Usage
nonlogistic(Dstar, Z, c_X, weights = NULL)
Arguments
Dstar Numeric vector containing observed disease status. Should be coded as 0/1
Z numeric matrix of covariates in disease model
c_X sensitivity as a function of X, P(D* = 1| D = 1, S = 1, X)
weights Optional numeric vector of patient-specific weights used for selection bias ad-
justment. Default is NULL
Details
We are interested in modeling the relationship between binary disease status and covariates Z using
a logistic regression model. However, D may be misclassified, and our observed data may not well-
represent the population of interest. In this setting, we estimate parameters from the disease model
using the following modeling framework.
Notation:
D Binary disease status of interest.
D* Observed binary disease status. Potentially a misclassified version of D. We assume D = 0
implies D* = 0.
S Indicator for whether patient from population of interest is included in the analytical dataset.
Z Covariates in disease model of interest.
W Covariates in model for patient inclusion in analytical dataset (selection model).
X Covariates in model for probability of observing disease given patient has disease (sensitivity
model).
Model Structure:
Disease Model
logit(P (D = 1|X)) = theta0 + thetaZ Z
Selection Model
Sensitivity Model
logit(P (D∗ = 1|D = 1, S = 1, X)) = beta0 + betaX X
Value
a list with two elements: (1) ’param’, a vector with parameter estimates for disease model (logOR
of Z), and (2) ’variance’, a vector of variance estimates for disease model parameters. Results do
not include intercept.
References
Statistical inference for association studies using electronic health records: handling both selection
bias and outcome misclassification <NAME> and <NAME> medRxiv 2019.12.26.19015859
Examples
library(SAMBA)
# These examples are generated from the vignette. See it for more details.
# Generate IPW weights from the true model
expit <- function(x) exp(x) / (1 + exp(x))
prob.WD <- expit(-0.6 + 1 * samba.df$D + 0.5 * samba.df$W)
weights <- nrow(samba.df) * (1 / prob.WD) / (sum(1 / prob.WD))
# Estimate sensitivity by using inverse probability of selection weights
# and P(D=1)
sens <- sensitivity(samba.df$Dstar, samba.df$X, prev = mean(samba.df$D),
weights = weights)
nonlog1 <- nonlogistic(samba.df$Dstar, samba.df$Z, c_X = sens$c_X,
weights = weights)
obsloglik Estimate parameters in the disease model using observed data log-
likelihood using direct maximization.
Description
obsloglik jointly estimates the disease model and sensitivity model parameters using profile like-
lihood methods. Estimation involves direct maximization of the observed data log-likelihood.
Usage
obsloglik(Dstar, Z, X, start, beta0_fixed = NULL, weights = NULL,
expected = TRUE, itnmax = 5000)
Arguments
Dstar Numeric vector containing observed disease status. Should be coded as 0/1
Z Numeric matrix of covariates in disease model. ’Z’ should not contain an inter-
cept
X Numeric matrix of covariates in sensitivity model. Set to NULL to fit model
with no covariates in sensitivity model. ’X’ should not contain an intercept
start Numeric vector of starting values for theta and beta (theta, beta). Theta is the pa-
rameter of the disease model, and beta is the parameter of the sensitivity model
beta0_fixed Optional numeric vector of values of sensitivity model intercept to profile over.
If a single value, corresponds to fixing intercept at specified value. Default is
NULL
weights Optional vector of patient-specific weights used for selection bias adjustment.
Default is NULL
expected Whether or not to calculate the covariance matrix via the expected fisher infor-
mation matrix. Default is TRUE
itnmax Maximum number of iterations to run optimx
Details
We are interested in modeling the relationship between binary disease status and covariates Z using
a logistic regression model. However, D may be misclassified, and our observed data may not well-
represent the population of interest. In this setting, we estimate parameters from the disease model
using the following modeling framework. Notation:
D Binary disease status of interest.
D* Observed binary disease status. Potentially a misclassified version of D. We assume D = 0
implies D* = 0.
S Indicator for whether patient from population of interest is included in the analytical dataset.
Z Covariates in disease model of interest.
W Covariates in model for patient inclusion in analytical dataset (selection model).
X Covariates in model for probability of observing disease given patient has disease (sensitivity
model).
Model Structure:
Disease Model
logit(P (D = 1|X)) = theta0 + thetaZ Z
Selection Model
Sensitivity Model
logit(P (D∗ = 1|D = 1, S = 1, X)) = beta0 + betaX X
Value
A "SAMBA.fit" object with nine elements: ’param’, the maximum likelihood estimate of the coefi-
cients, ’variance’, the covariance matrix of the final estimate, param.seq’, the sequence of estimates
at each value of beta0, and ’loglik.seq’, the log likelihood at each value. The rest of the elements
are Dstar’, ’X’, ’Z’, and ’weights’.
References
Statistical inference for association studies using electronic health records: handling both selection
bias and outcome misclassification <NAME> and <NAME> medRxiv 2019.12.26.19015859
Examples
library(SAMBA)
# These examples are generated from the vignette. See it for more details.
# Generate IPW weights from the true model
expit <- function(x) exp(x) / (1 + exp(x))
prob.WD <- expit(-0.6 + 1 * samba.df$D + 0.5 * samba.df$W)
weights <- nrow(samba.df) * (1 / prob.WD) / (sum(1 / prob.WD))
# Get initial parameter estimates
logit <- function(x) log(x / (1 - x))
fitBeta <- glm(Dstar ~ X, binomial(), data = samba.df)
fitTheta <- glm(Dstar ~ Z, binomial(), data = samba.df)
sens <- sensitivity(samba.df$Dstar, samba.df$X, mean(samba.df$D), r = 2)
start <- c(coef(fitTheta), logit(sens$c_marg), coef(fitBeta)[2])
# Direct observed data likelihood maximization without fixed intercept
fit1 <- obsloglik(samba.df$Dstar, samba.df$Z, samba.df$X, start = start,
weights = weights)
obsloglik1 <- list(param = fit1$param, variance = diag(fit1$variance))
# Direct observed data likelihood maximization with fixed intercept
fit2 <- obsloglik(samba.df$Dstar, samba.df$Z, samba.df$X, start = start,
beta0_fixed = logit(sens$c_marg), weights = weights)
# since beta0 is fixed, its variance is NA
obsloglik1 <- list(param = fit2$param, variance = diag(fit2$variance))
obsloglikEM Estimate parameters in the disease model using observed data log-
likelihood using the expectation-maximization algorithm
Description
obsloglikEM jointly estimates the disease model and sensitivity model parameters using profile
likelihood methods. Estimation involves an expectation-maximization algorithm.
Usage
obsloglikEM(Dstar, Z, X, start, beta0_fixed = NULL, weights = NULL,
expected = TRUE, tol = 1e-06, maxit = 50)
Arguments
Dstar Numeric vector containing observed disease status. Should be coded as 0/1
Z Numeric matrix of covariates in disease model. ’Z’ should not contain an inter-
cept
X Numeric matrix of covariates in sensitivity model. Set to NULL to fit model
with no covariates in sensitivity model. ’X’ should not contain an intercept
start Numeric vector of starting values for theta and beta (theta, beta). Theta is the pa-
rameter of the disease model, and beta is the parameter of the sensitivity model
beta0_fixed Optional numeric vector of values of sensitivity model intercept to profile over.
If a single value, corresponds to fixing intercept at specified value. Default is
NULL
weights Optional vector of patient-specific weights used for selection bias adjustment.
Default is NULL
expected Whether or not to calculate the covariance matrix via the expected fisher infor-
mation matrix. Default is TRUE
tol stop estimation when subsequent log-likelihood estimates are within this value
maxit Maximum number of iterations of the estimation algorithm
Details
We are interested in modeling the relationship between binary disease status and covariates Z using
a logistic regression model. However, D may be misclassified, and our observed data may not well-
represent the population of interest. In this setting, we estimate parameters from the disease model
using the following modeling framework. Notation:
D Binary disease status of interest.
D* Observed binary disease status. Potentially a misclassified version of D. We assume D = 0
implies D* = 0.
S Indicator for whether patient from population of interest is included in the analytical dataset.
Z Covariates in disease model of interest.
W Covariates in model for patient inclusion in analytical dataset (selection model).
X Covariates in model for probability of observing disease given patient has disease (sensitivity
model).
Model Structure:
Disease Model
logit(P (D = 1|X)) = theta0 + thetaZ Z
Selection Model
Sensitivity Model
logit(P (D∗ = 1|D = 1, S = 1, X)) = beta0 + betaX X
Value
A "SAMBA.fit" object with nine elements: ’param’, the final estimate of the coeficients organized
as (theta, beta), ’variance’, the covariance matrix of the final estimate, param.seq’, the sequence of
estimates at each step of the EM algorithm, and ’loglik.seq’, the log likelihood at each step. The
rest of the elements are Dstar’, ’X’, ’Z’, and ’weights’.
References
Statistical inference for association studies using electronic health records: handling both selection
bias and outcome misclassification <NAME> and <NAME> medRxiv 2019.12.26.19015859
Examples
library(SAMBA)
# These examples are generated from the vignette. See it for more details.
# Generate IPW weights from the true model
expit <- function(x) exp(x) / (1 + exp(x))
prob.WD <- expit(-0.6 + 1 * samba.df$D + 0.5 * samba.df$W)
weights <- nrow(samba.df) * (1 / prob.WD) / (sum(1 / prob.WD))
# Get initial parameter estimates
logit <- function(x) log(x / (1 - x))
fitBeta <- glm(Dstar ~ X, binomial(), data = samba.df)
fitTheta <- glm(Dstar ~ Z, binomial(), data = samba.df)
sens <- sensitivity(samba.df$Dstar, samba.df$X, mean(samba.df$D), r = 2)
start <- c(coef(fitTheta), logit(sens$c_marg), coef(fitBeta)[2])
# Direct observed data likelihood maximization without fixed intercept
fit1 <- obsloglikEM(samba.df$Dstar, samba.df$Z, samba.df$X, start = start,
weights = weights)
obsloglik1 <- list(param = fit1$param, variance = diag(fit1$variance))
# Direct observed data likelihood maximization with fixed intercept
fit2 <- obsloglikEM(samba.df$Dstar, samba.df$Z, samba.df$X, start = start,
beta0_fixed = logit(sens$c_marg), weights = weights)
# since beta0 is fixed, its variance is NA
list(param = fit2$param, variance = diag(fit2$variance))
samba.df Synthetic example data for SAMBA adapted from the vignette
Description
’samba.df’ is the sampled data from the entire population
Usage
samba.df
Format
A synthetic data.frame with 4999 observations on 5 variables:
X Covariate for sensitivity model.
Z Covariate for disease model.
W Selection Covariate
D True disease status.
Dstar Observed disease status.
sensitivity Estimate sensitivity
Description
sensitivity estimates (1) marginal sensitivity and (2) sensitivity as a function of covariates X for
a misclassified binary outcome.
Usage
sensitivity(Dstar, X, prev, r = NULL, weights = NULL)
Arguments
Dstar Numeric vector containing observed disease status. Should be coded as 0/1
X Numeric matrix with covariates in sensitivity model. Set to NULL to fit model
with no covariates in sensitivity model. ’X’ should not contain an intercept
prev marginal disease prevalence P (D = 1) or patient-specific P (D = 1|X) in
population
r (optional) marginal sampling ratio, P (S = 1|D = 1)/P (S = 1|D = 0). Only
one of ’r’ and ’weights’ can be specified. Default is ‘NULL‘
weights Optional vector of patient-specific weights used for selection bias adjustment.
Only one of r and weights can be specified. Default is ‘NULL‘
Details
We are interested in modeling the relationship between binary disease status and covariates Z using
a logistic regression model. However, D may be misclassified, and our observed data may not well-
represent the population of interest. In this setting, we estimate parameters from the disease model
using the following modeling framework.
Notation:
D Binary disease status of interest.
D* Observed binary disease status. Potentially a misclassified version of D. We assume D = 0
implies D* = 0.
S Indicator for whether patient from population of interest is included in the analytical dataset.
Z Covariates in disease model of interest.
W Covariates in model for patient inclusion in analytical dataset (selection model).
X Covariates in model for probability of observing disease given patient has disease (sensitivity
model).
Model Structure:
Disease Model
logit(P (D = 1|X)) = theta0 + thetaZ Z
Selection Model
Sensitivity Model
logit(P (D∗ = 1|D = 1, S = 1, X)) = beta0 + betaX X
Value
a list with two elements: (1) ‘c_marg‘, marginal sensitivity estimate P (D∗ = 1|D = 1, S = 1),
and (2) ‘c_X‘, sensitivity as a function of X P (D∗ = 1|D = 1, S = 1, X)
References
Statistical inference for association studies using electronic health records: handling both selection
bias and outcome misclassification <NAME> and <NAME> medRxiv 2019.12.26.19015859
Examples
library(SAMBA)
# These examples are generated from the vignette. See it for more details.
# Generate IPW weights from the true model
expit <- function(x) exp(x) / (1 + exp(x))
prob.WD <- expit(-0.6 + 1 * samba.df$D + 0.5 * samba.df$W)
weights <- nrow(samba.df) * (1 / prob.WD) / (sum(1 / prob.WD))
# Using marginal sampling ratio r ~ 2 and P(D=1)
sens <- sensitivity(samba.df$Dstar, samba.df$X, mean(samba.df$D),
r = 2)
# Using inverse probability of selection weights and P(D=1)
sens <- sensitivity(samba.df$Dstar, samba.df$X, prev = mean(samba.df$D),
weights = weights) |
LakeMetabolizer | cran | R | Package ‘LakeMetabolizer’
November 16, 2022
Title Tools for the Analysis of Ecosystem Metabolism
Maintainer <NAME> <<EMAIL>>
Version 1.5.5
Description A collection of tools for the calculation of freewater metabolism
from in situ time series of dissolved oxygen, water temperature, and,
optionally, additional environmental variables. LakeMetabolizer implements
5 different metabolism models with diverse statistical underpinnings:
bookkeeping, ordinary least squares, maximum likelihood, Kalman filter,
and Bayesian. Each of these 5 metabolism models can be combined with
1 of 7 models for computing the coefficient of gas exchange across the
air–water interface (k). LakeMetabolizer also features a variety of
supporting functions that compute conversions and implement calculations
commonly applied to raw data prior to estimating metabolism (e.g., oxygen
saturation and optical conversion models).
License GPL (>= 2)
Imports plyr, methods
Suggests R2jags, testthat
Depends R (>= 2.15.0), rLakeAnalyzer (>= 1.4)
Repository CRAN
BugReports https://github.com/GLEON/LakeMetabolizer/issues
URL https://www.tandfonline.com/doi/abs/10.1080/IW-6.4.883
RoxygenNote 7.2.1
Encoding UTF-8
NeedsCompilation yes
Author <NAME> [aut],
<NAME> [cre, aut] (<https://orcid.org/0000-0002-3870-405X>),
<NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut]
Date/Publication 2022-11-15 23:30:16 UTC
R topics documented:
calc.lw.ne... 2
calc.zen... 4
get.T... 5
get.var... 6
getSchmid... 7
has.var... 8
is.da... 8
is.nigh... 9
k.rea... 10
k.read.bas... 12
k600.2.kGA... 15
load.all.dat... 16
load.met... 17
meta... 17
metab.bayesia... 20
metab.bookkee... 21
metab.kalma... 23
metab.ml... 25
metab.ol... 28
o2.at.sa... 29
par.to.s... 31
rmv.var... 32
sun.rise.se... 33
sw.to.pa... 34
temp.kalma... 35
var.ind... 36
watts.i... 36
wind.scal... 37
calc.lw.net Estimate net long wave heat radiation
Description
Returns the net long wave radiation based on Crawford and Duchon, 1999.
Usage
calc.lw.net(ts.data, lat, atm.press)
calc.lw.net.base(dateTime, sw, Ts, lat, atm.press, airT, RH)
Arguments
ts.data Object of class data.frame including the required variables(see details for list
of variables and their units)
lat latitude in degrees north
atm.press atmospheric pressure in mb
dateTime vector of datetime in POSIXct format
sw numeric value of short wave radiation, W/m2
Ts numeric value of surface water temperature, degC
airT numeric value of air temperature, degC
RH numeric value of relative humidity, %
Value
## for calc.lw.net.base
A numeric value of net long wave heat flux in W/m^2
## for calc.lw.net
A data.frame with columns datetime and lwnet in W/m^2
Author(s)
<NAME>. Read <NAME>
References
Crawford, T.M., and Duchon, C.E. 1999. An improved parameterization for estimating effective
atmospheric emissivity for use in calculating daytime downwelling longwave radiation. Journal of
Applied Meteorology 38: 474-480.
See Also
k.read and k.macIntyre
Examples
## Base example
dateTime <- as.POSIXct("2013-12-30 23:00")
Uz <- 3
airT <- 20
RH <- 90
sw <- 800
wndZ <- 2
Kd <- 2
lat <- 54
lake.area <- 5000
atm.press <- 1013
Ts <- 22
calc.lw.net.base(dateTime,sw,Ts,lat,atm.press,airT,RH)
## Example using timeseries in a data frame
data.path = system.file('extdata', package="LakeMetabolizer")
sp.data = load.all.data('sparkling', data.path)
# Prep the input data
ts.data = sp.data$data #pull out just the timeseries data
atm.press = 1018
lat = sp.data$metadata$latitude
lwnet = calc.lw.net(ts.data, lat, atm.press)
plot(lwnet$datetime, lwnet$lwnet)
calc.zeng Estimate sensible and latent heat fluxes
Description
Returns the sensible and latent heat fluxed based on Zeng et al, 1998’
Usage
calc.zeng(dateTime,Ts,airT,Uz,RH,atm.press,wnd.z,airT.z,RH.z)
Arguments
dateTime vector of datetime in POSIXct format
Ts numeric value of surface water temperature, degC
airT numeric value of air temperature, degC
Uz numeric value of wind speed, m/s
RH numeric value of relative humidity, %
atm.press atmospheric pressure in mb
wnd.z height of wind measurement, m
airT.z height of air temperature measurement, m (optional)
RH.z height of relative humidity measurement, m (optional)
Value
A data.frame including sensible and latent heat flux estimates, and other variables used in calculat-
ing these fluxes.
Author(s)
<NAME>. Woolway
References
<NAME>., <NAME>., and <NAME>. 1998. Intercomparison of bulk aerodynamic algorithms
for the computation of sea surface fluxes using TOGA COARE and TAO data. Journal of Climate
11: 2628-2644.
See Also
k.read
Examples
dateTime <- as.POSIXct("2013-12-30 23:00")
Ts <- 22.51
airT <- 20
Uz <- 3
RH <- 90
atm.press <- 1013
wnd.z <- 2
calc.zeng(dateTime,Ts,airT,Uz,RH,atm.press,wnd.z)
get.Ts gets surface water temperatures
Description
grabs best available data for surface water temperature
Usage
get.Ts(data, s.range = c(0, 1))
Arguments
data Object of class data.frame
s.range a numeric vector of length=2 with the range for depth measurements to still be
considered ’surface’
Value
An object of class data.frame
Author(s)
<NAME>
See Also
has.vars get.vars rmv.vars
get.vars subsets data.frame according to header names
Description
subsets data according to header names
Usage
get.vars(data, var.names)
Arguments
data Object of class data.frame
var.names A character vector of names to get from data
Value
An object of class data.frame
Author(s)
<NAME>
See Also
has.vars rmv.vars
getSchmidt Returns Schmidt number for a specific gas at a given temperature
Description
Schmidt number is temperature dependant, and is the ratio of the kinematic viscosity of water to a
diffusion coefficient. Coefficients are included for He, O2, CO2, CH4, SF6, N2O, Ar, and N2.
Usage
getSchmidt(temperature, gas)
Arguments
temperature Numeric vector of water temperatures in deg. Celsius
gas String for gas code. Valid inputs include: He, O2, CO2, CH4, SF6, N2O, Ar,
and N2
Value
Schmidt number (unitless)
Note
Temperature range is only valid from 4-35 deg Celsius
Author(s)
<NAME>
References
Raymond, <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Scaling the gas
transfer velocity and hydraulic geometry in streams and small rivers. Limnology & Oceanography:
Fluids & Environments 2 (2012): 41-53.
Examples
getSchmidt(temperature=12, gas="O2")
has.vars tests data.frame for column names
Description
tests data for data column names
Usage
has.vars(data, var.names)
Arguments
data Object of class data.frame
var.names A character vector of names to test against data
Value
a boolean vector of same length as var.names
Author(s)
<NAME>
See Also
get.vars rmv.vars
is.day determines if measurement was taken during the daytime
Description
determines if measurement was taken during the daytime
Usage
is.day(datetimes, lat)
Arguments
datetimes Vector of dates as POSIXct or POSIXlt (see DateTimeClasses) format
lat Single latitude value of site. South should be negative, north positive
Value
a boolean vector of same length as datetimes
Author(s)
<NAME>
See Also
is.night sun.rise.set
is.night determines if measurement was taken during the night
Description
determines if measurement was taken during the nighttime
Usage
is.night(datetimes, lat)
Arguments
datetimes Vector of dates as POSIXct or POSIXlt (see DateTimeClasses) format
lat Single latitude value of site. South should be negative, north positive
Value
a boolean vector of same length as datetimes
Author(s)
<NAME>
See Also
is.day sun.rise.set
k.read Returns a timeseries of gas exchange velocity
Description
Returns the gas exchange velocity based on the chosen model in units of m/day
Usage
k.cole(ts.data)
k.crusius(ts.data, method='power')
k.read(ts.data, wnd.z, Kd, atm.press, lat, lake.area)
k.read.soloviev(ts.data, wnd.z, Kd, atm.press, lat, lake.area)
k.macIntyre(ts.data, wnd.z, Kd, atm.press,params=c(1.2,0.4872,1.4784))
k.vachon(ts.data, lake.area, params=c(2.51,1.48,0.39))
k.heiskanen(ts.data, wnd.z, Kd, atm.press)
Arguments
ts.data vector of datetime in POSIXct format
wnd.z height of wind measurement, m
Kd Light attenuation coefficient (Units:m^-1)
atm.press atmospheric pressure in mb
lat Latitude, degrees north
lake.area Lake area, m^2
method Only for k.crusius. String of valid method . Either "linear", "bilinear", or
"power"
params Only for k.vachon.base and k.macIntyre. See details.
Details
Can change default parameters of MacIntyre and Vachon models. Default for Vachon is c(2.51,1.48,0.39).
Default for MacIntyre is c(1.2,0.4872,1.4784). Heiskanen 2014 uses MacIntyre model with c(0.5,0.77,0.3)
and z.aml constant at 0.15.
Value
Returns a data.frame with a datetime column and a k600 column. k600 is in units of meters per day
(m/d).
Author(s)
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>. Read
References
<NAME>., <NAME>, and <NAME>. Atmospheric exchange of carbon dioxide in a low-wind oligotrophic
lake measured by the addition of SF~ 6. Limnology and Oceanography 43 (1998): 647-656.
MacIntyre, Sally, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>.
Buoyancy flux, turbulence, and the gas transfer coefficient in a stratified lake. Geophysical Research
Letters 37, no. 24 (2010).
Read, <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, <NAME>.
Lenters, <NAME>. Smyth et al. Lake-size dependency of wind shear and convection as controls on
gas exchange. Geophysical Research Letters 39, no. 9 (2012).
Crusius, John, and <NAME>. Gas transfer velocities measured at low wind speed over a
lake. Limnology and Oceanography 48, no. 3 (2003): 1010-1017.
<NAME> and <NAME>. The ecosystem size and shape dependence of gas transfer
velocity versus wind speed relationships in lakes. Can. J. Fish. Aquat. Sci. 70 (2013): 1757-1764.
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>-
<NAME>. Effects of cooling and internal wave motions on gas transfer coefficients in a
boreal lake. Tellus B 66, no.22827 (2014)
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>. An approach to
estimation of near-surface turbulence and CO2 transfer velocity from remote sensing data. Journal
of Marine Systems 66, (2007): 182-194.
See Also
k.cole k.crusius k.macIntyre k.vachon k.heiskanen
Examples
data.path = system.file('extdata', package="LakeMetabolizer")
tb.data = load.all.data('sparkling', data.path)
ts.data = tb.data$data #pull out just the timeseries data
#calculate U10 and add it back onto the original
u10 = wind.scale(ts.data)
ts.data = rmv.vars(ts.data, 'wnd', ignore.offset=TRUE) #drop old wind speed column
ts.data = merge(ts.data, u10) #merge new u10 into big dataset
k600_cole = k.cole(ts.data)
k600_crusius = k.crusius(ts.data)
kd = tb.data$metadata$averagekd
wnd.z = 10 #because we converted to u10
atm.press = 1018
lat = tb.data$metadata$latitude
lake.area = tb.data$metadata$lakearea
#for k.read and k.macIntyre, we need LW_net.
#Calculate from the observations we have available.
lwnet = calc.lw.net(ts.data, lat, atm.press)
ts.data = merge(ts.data, lwnet)
k600_read = k.read(ts.data, wnd.z=wnd.z, Kd=kd, atm.press=atm.press,
lat=lat, lake.area=lake.area)
k600_soloviev = k.read.soloviev(ts.data, wnd.z=wnd.z, Kd=kd,
atm.press=atm.press, lat=lat, lake.area=lake.area)
k600_macIntyre = k.macIntyre(ts.data, wnd.z=wnd.z, Kd=kd, atm.press=atm.press)
k.read.base Returns a timeseries of gas exchange velocity
Description
Returns the gas exchange velocity based on the chosen model in units of m/day
Usage
k.cole.base(wnd)
k.crusius.base(wnd, method='power')
k.read.base(wnd.z, Kd, lat, lake.area, atm.press, dateTime, Ts, z.aml,
airT, wnd, RH, sw, lwnet)
k.read.soloviev.base(wnd.z, Kd, lat, lake.area, atm.press, dateTime, Ts, z.aml,
airT, wnd, RH, sw, lwnet)
k.macIntyre.base(wnd.z, Kd, atm.press, dateTime, Ts, z.aml, airT, wnd, RH, sw,
lwnet, params=c(1.2,0.4872,1.4784))
k.vachon.base(wnd, lake.area, params=c(2.51,1.48,0.39))
k.heiskanen.base(wnd.z, Kd, atm.press, dateTime, Ts, z.aml, airT, wnd, RH, sw, lwnet)
Arguments
wnd.z Height of wind measurement, (Units: m)
Kd Light attenuation coefficient (Units: m^-1)
lat Latitude, degrees north
lake.area Lake area, m^2
atm.press Atmospheric pressure, (Units: millibar)
dateTime datetime (Y-%m-%d %H:%M), (Format: POSIXct)
Ts Numeric vector of surface water temperature, (Units(deg C)
z.aml Numeric vector of actively mixed layer depths. Must be the same length as the
Ts parameter
airT Numeric value of air temperature, Units(deg C)
wnd Numeric value of wind speed, (Units:m/s)
RH Numeric value of relative humidity, %
sw Numeric value of short wave radiation, W m^-2
lwnet Numeric value net long wave radiation, W m^-2
method Only for k.crusius.base. String of valid method . Either "constant", "bilinear",
or "power"
params Optional parameter input, only for k.vachon.base and k.macIntyre.base. See
details.
Details
Can change default parameters of MacIntyre and Vachon models. Default for Vachon is c(2.51,1.48,0.39).
Default for MacIntyre is c(1.2,0.4872,1.4784). Heiskanen et al. (2014) uses MacIntyre model with
c(0.5,0.77,0.3) and z.aml constant at 0.15.
Value
Numeric value of gas exchange velocity (k600) in units of m/day. Before use, should be converted
to appropriate gas using k600.2.kGAS.
Author(s)
<NAME>, <NAME>, <NAME>, <NAME>, GLEON fellows
References
<NAME>., <NAME>, and <NAME>. Atmospheric exchange of carbon dioxide in a low-wind oligotrophic
lake measured by the addition of SF~ 6. Limnology and Oceanography 43 (1998): 647-656.
MacIntyre, Sally, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>.
Buoyancy flux, turbulence, and the gas transfer coefficient in a stratified lake. Geophysical Research
Letters 37, no. 24 (2010).
Read, <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, <NAME>.
Lenters, <NAME> et al. Lake-size dependency of wind shear and convection as controls on
gas exchange. Geophysical Research Letters 39, no. 9 (2012).
Crusius, John, and <NAME>. Gas transfer velocities measured at low wind speed over a
lake. Limnology and Oceanography 48, no. 3 (2003): 1010-1017.
<NAME> and <NAME>. The ecosystem size and shape dependence of gas transfer
velocity versus wind speed relationships in lakes. Can. J. Fish. Aquat. Sci. 70 (2013): 1757-1764.
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>-
<NAME>. Effects of cooling and internal wave motions on gas transfer coefficients in a
boreal lake. Tellus B 66, no.22827 (2014)
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>. An approach to
estimation of near-surface turbulence and CO2 transfer velocity from remote sensing data. Journal
of Marine Systems 66, (2007): 182-194.
See Also
k.cole k.read k.crusius k.macIntyre k.vachon k.heiskanen
Examples
wnd.z <- 2
Kd <- 2
lat <- 54
lake.area <- 5000
atm.press <- 1013
dateTime <- as.POSIXct("2013-12-30 14:00")
Ts <- 16.5
z.aml <- 2.32
airT <- 20
wnd <- 6
RH <- 90
sw <- 800
lwnet <- -55
timeStep <- 30
U10 <- wind.scale.base(wnd, wnd.z)
k600_cole <- k.cole.base(U10)
k600_crusius <- k.crusius.base(U10)
k600_read <- k.read.base(wnd.z, Kd, lat, lake.area, atm.press,
dateTime, Ts, z.aml, airT, wnd, RH, sw, lwnet)
k600_soloviev <- k.read.soloviev.base(wnd.z, Kd, lat, lake.area,
atm.press, dateTime, Ts, z.aml, airT, wnd, RH, sw, lwnet)
k600_macInytre <- k.macIntyre.base(wnd.z, Kd, atm.press,
dateTime, Ts, z.aml, airT, wnd, RH, sw, lwnet)
k600.2.kGAS Returns the gas exchange velocity for gas of interest w/ no unit con-
versions
Description
Returns the gas exchange velocity for gas of interest w/ no unit conversions
Usage
k600.2.kGAS.base(k600,temperature,gas="O2")
k600.2.kGAS(ts.data, gas="O2")
Arguments
k600 k600 as vector array of numbers or single number
temperature Water temperature (deg C) as vector array of numbers or single number
gas gas for conversion, as string (e.g., ’CO2’ or ’O2’)
ts.data Object of class data.frame with named columns datetime and k600 and wtr (wa-
ter temp in deg C). Other columns are ignored
Value
Numeric value of gas exchange velocity for gas
Author(s)
<NAME>
See Also
k.read and k.read.base for functions that calculate k600 estimates
Examples
## single example
kO2 <- k600.2.kGAS.base(k600=2.4,temperature=20.4,gas='O2')
## Timeseries example
#load data
data.path = system.file('extdata', package="LakeMetabolizer")
sp.data = load.all.data('sparkling', data.path)
ts.data = sp.data$data #pull out just the timeseries data
#calculate U10 and add it back onto the original
u10 = wind.scale(ts.data)
ts.data = rmv.vars(ts.data, 'wnd', ignore.offset=TRUE) #drop old wind speed column
ts.data = merge(ts.data, u10) #merge new u10 into big dataset
k600 = k.cole(ts.data)
ts.data = merge(k600, ts.data)
k.gas = k600.2.kGAS(ts.data, 'O2')
load.all.data Attemps to load and merge all timeseries data for a given site name
Description
Loads and returns all the data available in the specified directory for a given site. All timeseries
data are merged by “datetime” into a single data.frame. Data are identified by the column header
information.
Usage
load.all.data(lake.name, data.path, checkMerge=TRUE)
Arguments
lake.name The file prefix to be matched. For example, “sparkling” matches “sparkling.wtr”
but not “troutbog.wtr”
data.path The directory to look for files
checkMerge Should check merge size before attempting to prevent potential merge problems.
Value
A list with two items
data
metadata
Author(s)
<NAME>
See Also
load.ts load.meta
load.meta Loads a metadata file from the specified path
Description
Parses a formatted metadata file. Useful for site-specific metadata that is not contained in the
timeseries files.
Usage
load.meta(fPath)
Arguments
fPath The file path as a string
Value
A list with the metadata parsed from the file.
Author(s)
<NAME>
See Also
load.ts load.all.data
metab Calculate metabolism
Description
Returns daily time series of gross primary production (GPP), respiration (R), and net ecosystem
production (NEP). Depending on the method used, other information may be returned as well.
Calculations are made using one of 5 statistical methods.
Usage
metab(data, method, wtr.name="wtr", irr.name="irr", do.obs.name="do.obs", ...)
Arguments
data a data.frame whose columns are
"datetime" = class POSIXct vector
"do.obs" = numeric vector of oxygen concentration in mg/L
"do.sat" = numeric vector of saturated oxygen concentration in mg/L
"k.gas" = numeric vector of gas exchange coefficient values in m/day, should
be 0 when depth of do.obs is deeper than z.mix
"z.mix" = numeric vector of mixing depth values in meters
"irr" = numeric vector of PAR values, arbitrary units
"wtr" = numeric vector of water temperature values, arbitrary units
Columns that are not used by a particular statistical method do not need to be
supplied.
method a character string specifying one of the 5 statistical methods (bayesian, book-
keep, kalman, ols, mle)
wtr.name the name of the column containing temperature at the depth of do.obs (predictor
variable for R)
irr.name the name of the column containing irradiance (predictor variable for GPP)
do.obs.name the name of the column in data containing the DO observations (in mg/L) to be
used as the response variable
... arguments to be passed on to the metabolism model specified by method
Value
A data.frame containing columns for year, doy (day of year, julian day plus fraction of day), GPP,
R, and NEP
year integer year
doy numeric, day of year + fraction of day, where the day is the julian day, and a
fraction of 0.5 corresponds to noon
GPP numeric, gross primary production, in units of mg O2 per liter per day. By
convention, this value is positive.
R numeric, respiration, in units of mg O2 per liter per day. By convention, this
value is negative
NEP numeric, net ecosystem production, in units of mg O2 per liter per day. For most
methods this equal GPP+R, but this is not necessarily the case for "method"="bookkeep"
Note that different models will have different attributes attached to them. See examples.
Author(s)
<NAME>
See Also
Metabolism models: metab.bookkeep, metab.ols, metab.mle, metab.kalman, metab.bayesian
For smoothing noisy temperature: temp.kalman
To calculate do.sat: o2.at.sat
To calculate k.gas: k600.2.kGAS
To calculate k600 values for k.gas: k.cole, k.crusius, k.macIntyre, k.read
Examples
# fake data
datetime <- seq(as.POSIXct("2014-06-16 00:00:00", tz="GMT"),
as.POSIXct("2014-06-17 23:55:00", tz="GMT"), length.out=288*2)
do.obs <- 2*sin(2*pi*(1/288)*(1:(288*2))+1.1*pi) + 8 + rnorm(288*2, 0, 0.5)
wtr <- 3*sin(2*pi*(1/288)*(1:(288*2))+pi) + 17 + rnorm(288*2, 0, 0.15)
do.sat <- LakeMetabolizer::o2.at.sat.base(wtr, 960)
irr <- (1500*sin(2*pi*(1/288)*(1:(288*2))+1.5*pi) +650 + rnorm(288*2, 0, 0.25)) *
ifelse(is.day(datetime, 42.3), 1, 0)
k.gas <- 0.4
z.mix <- 1
# plot time series
plot(wtr, type="l", xaxt="n", yaxt="n", xlab="", ylab="")
par(new=TRUE); plot(do.obs, type="l", col="blue", xaxt="n", yaxt="n", xlab="", ylab="")
par(new=TRUE); plot(irr, type="l", col="orange", xaxt="n", yaxt="n", xlab="", ylab="")
abline(v=144, lty="dotted")
abline(v=288)
legend("topleft", legend=c("wtr", "do.obs", "irr"), lty=1,
col=c("black", "blue", "orange"), inset=c(0.08, 0.01))
# put data in a data.frame
data <- data.frame(datetime=datetime, do.obs=do.obs, do.sat=do.sat, k.gas=k.gas,
z.mix=z.mix, irr=irr, wtr=wtr)
# run each metabolism model
m.bk <- metab(data, "bookkeep", lake.lat=42.6)
m.bk <- metab(data, lake.lat=42.6) # no method defaults to "bookeep"
m.ols <- metab(data, "ols", lake.lat=42.6)
m.mle <- metab(data, "mle", lake.lat=42.6)
m.kal <- metab(data, "kalman", lake.lat=42.6)
## Not run: m.bay <- metab(data, "bayesian", lake.lat=42.6)
# example attributes
names(attributes(m.ols))
attr(m.ols, "mod")
# To get full JAGS model
# including posterior draws:
## Not run: names(attributes(m.bay))
## Not run: attr(m.bay, "model")
metab.bayesian Metabolism model based on a bayesian parameter estimation frame-
work
Description
This function runs the bayesian metabolism model on the supplied gas concentration and other
supporting data. This allows for both estimates of metabolism along with uncertainty around the
parameters.
Usage
metab.bayesian(do.obs, do.sat, k.gas, z.mix, irr, wtr, priors, ...)
Arguments
do.obs Vector of dissovled oxygen concentration observations, mg L^-1
do.sat Vector of dissolved oxygen saturation values based on water temperature. Cal-
culate using o2.at.sat
k.gas Vector of kGAS values calculated from any of the gas flux models (e.g., k.cole)
and converted to kGAS using k600.2.kGAS
z.mix Vector of mixed-layer depths in meters. To calculate, see ts.meta.depths
irr Vector of photosynthetically active radiation in µmol m−2 s−1
wtr Vector of water temperatures in ◦ C. Used in scaling respiration with temperature
priors Parameter priors supplied as a named numeric vector (example: c("gppMu"=0,
"gppSig2"=1E5, "rMu"=0, "rSig2"=1E5, "kSig2"=NA))
... additional arguments; currently "datetime" is the only recognized argument passed
through ...
Value
A list of length 4 with components:
model the jags model, including posterior draws (see jags)
params parameter estimates of interest from model (medians)
metab.sd standard deviation of metabolism estimates
metab daily metabolism estimates as a data.frame with columns corresponding to
GPP numeric estimate of Gross Primary Production, mgO2 L−1 d−1
R numeric estimate of Respiration, mgO2 L−1 d−1
NEP numeric estimate of Net Ecosystem production, mgO2 L−1 d−1
Author(s)
<NAME>, <NAME>
References
Holtgrieve, <NAME>., <NAME>, <NAME>, and <NAME>. 2010. Simulta-
neous Quantification of Aquatic Ecosystem Metabolism and Reaeration Using a Bayesian Statistical
Model of Oxygen Dynamics. Limnology and Oceanography 55 (3): 1047-1062. doi:10.4319/lo.2010.55.3.1047.
http://www.aslo.org/lo/toc/vol_55/issue_3/1047.html.
See Also
metab.mle, metab.bookkeep, metab.kalman
Examples
## Not run:
library(rLakeAnalyzer)
doobs = load.ts(system.file('extdata',
'sparkling.doobs', package="LakeMetabolizer"))
wtr = load.ts(system.file('extdata',
'sparkling.wtr', package="LakeMetabolizer"))
wnd = load.ts(system.file('extdata',
'sparkling.wnd', package="LakeMetabolizer"))
irr = load.ts(system.file('extdata',
'sparkling.par', package="LakeMetabolizer"))
#Subset a day
mod.date = as.POSIXct('2009-07-08', 'GMT')
doobs = doobs[trunc(doobs$datetime, 'day') == mod.date, ]
wtr = wtr[trunc(wtr$datetime, 'day') == mod.date, ]
wnd = wnd[trunc(wnd$datetime, 'day') == mod.date, ]
irr = irr[trunc(irr$datetime, 'day') == mod.date, ]
k600 = k.cole.base(wnd[,2])
k.gas = k600.2.kGAS.base(k600, wtr[,3], 'O2')
do.sat = o2.at.sat(wtr[,1:2], altitude=300)
metab.bayesian(irr=irr[,2], z.mix=rep(1, length(k.gas)),
do.sat=do.sat[,2], wtr=wtr[,2],
k.gas=k.gas, do.obs=doobs[,2])
## End(Not run)
metab.bookkeep Metabolism model based on simple day/night summation NEP-
interpreted changes in DO.
Description
This model is a simple model based on the assumption that movements in DO during the day are due
to NEP and gas exchange. Respiration is estimated from night-time decreases. GPP is calculated
from the algebraic manipulation of NEP and R. Based on Cole et al 2000.
Usage
metab.bookkeep(do.obs, do.sat, k.gas, z.mix, irr, ...)
Arguments
do.obs Vector of dissovled oxygen concentration observations, mg L^-1
do.sat Vector of dissolved oxygen saturation values based on water temperature. Cal-
culate using o2.at.sat
k.gas Vector of kGAS values calculated from any of the gas flux models (e.g., k.cole)
and converted to kGAS using k600.2.kGAS
z.mix Vector of mixed-layer depths in meters. To calculate, see ts.meta.depths
irr Integer vector of 1’s (daytime) and 0’s (nighttime), or numeric vector of irra-
diance that will be converted to boolean 1’s and 0’s if "datetime" is passed via
...
... additional arguments to be passed, particularly POSIXct class "datetime"
Value
A data.frame with columns corresponding to components of metabolism
GPP numeric estimate of Gross Primary Production, mgO2 L−1 d−1
R numeric estimate of Respiration, mgO2 L−1 d−1
NEP numeric estimate of Net Ecosystem production, mgO2 L−1 d−1
Author(s)
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, GLEON fellows
References
<NAME>., <NAME>, <NAME>, and <NAME>. 2000. Persistence
of Net Heterotrophy in Lakes during Nutrient Addition and Food Web Manipulations. Limnology
and Oceanography 45 (8): 1718-1730. doi:10.4319/lo.2000.45.8.1718.
See Also
metab.bayesian, metab.mle, metab.kalman
Examples
library(rLakeAnalyzer)
Sys.setenv(TZ='GMT')
doobs = load.ts(system.file('extdata',
'sparkling.doobs', package="LakeMetabolizer"))
wtr = load.ts(system.file('extdata',
'sparkling.wtr', package="LakeMetabolizer"))
wnd = load.ts(system.file('extdata',
'sparkling.wnd', package="LakeMetabolizer"))
#Subset a day
mod.date = as.POSIXct('2009-07-08', 'GMT')
doobs = doobs[trunc(doobs$datetime, 'day') == mod.date, ]
wtr = wtr[trunc(wtr$datetime, 'day') == mod.date, ]
wnd = wnd[trunc(wnd$datetime, 'day') == mod.date, ]
k.gas = k600.2.kGAS.base(k.cole.base(wnd[,2]), wtr[,3], 'O2')
do.sat = o2.at.sat.base(wtr[,3], altitude=300)
# Must supply 1 for daytime timesteps and 0 for nighttime timesteps
irr = as.integer(is.day(doobs[,1], 45))
metab.bookkeep(doobs[,2], do.sat, k.gas, z.mix=1, irr, datetime=doobs$datetime)
metab.kalman Metabolism calculated from parameters estimated using a Kalman fil-
ter
Description
A state space model accounting for process and observation error, with the maximum likelihood
of parameteres estimated using a Kalman filter. Also provides a smoothed time series of oxygen
concentration.
Usage
metab.kalman(do.obs, do.sat, k.gas, z.mix, irr, wtr, ...)
Arguments
do.obs Vector of dissovled oxygen concentration observations, mgO[2]L−1
do.sat Vector of dissolved oxygen saturation values based on water temperature. Cal-
culate using o2.at.sat
k.gas Vector of kGAS values calculated from any of the gas flux models (e.g., k.cole)
and converted to kGAS using k600.2.kGAS
z.mix Vector of mixed-layer depths in meters. To calculate, see ts.meta.depths
irr Vector of photosynthetically active radiation in µmol m−2 s−1
wtr Vector of water temperatures in ◦ C. Used in scaling respiration with temperature
... additional arguments; currently "datetime" is the only recognized argument passed
through ...
Details
The model has four parameters, $c_1$, $c_2$, $Q$, and $H$, and consists of equations involving the prediction of
the upcoming state conditional on information of the previous state ($a_{t|t-1}$, $P_{t|t-1}$), as well as updates
of those predictions that are conditional upon information of the current state ($a_{t|t}$, $P_{t|t}$). Here $a$ is the
estimate of the state (the true DO concentration) and $P$ the variance of that estimate. The model equations are:
$$v = k.gas / z.mix$$
$$a_t = c_1 \cdot irr_{t-1} + c_2 \cdot \log_e(wtr_{t-1}) + v_{t-1} \cdot do.sat_{t-1}$$
$$\beta = e^{-v}$$
$$do.obs_t = a_t / v_{t-1} - e^{-v_{t-1}} \cdot a_t / v_{t-1} + \beta_{t-1} \cdot do.obs_{t-1} + \epsilon_t$$
The above model is used during model fitting, but if gas flux is not integrated between time steps,
those equations simplify to the following:
$$F_{t-1} = k.gas_{t-1} \cdot (do.sat_{t-1} - do.obs_{t-1}) / z.mix_{t-1}$$
$$do.obs_t = do.obs_{t-1} + c_1 \cdot irr_{t-1} + c_2 \cdot \log_e(wtr_{t-1}) + F_{t-1} + \epsilon_t$$
The parameters are fit using maximum likelihood, and the optimization (minimization of the nega-
tive log likelihood function) is performed by optim using default settings.
GPP is then calculated as mean(c1*irr, na.rm=TRUE)*freq, where freq is the number of observations
per day, as estimated from the typical spacing between time steps. Thus, generally freq==length(do.obs).
Similarly, R is calculated as mean(c2*log(wtr), na.rm=TRUE)*freq.
NEP is the sum of GPP and R.
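As a quick sketch of this aggregation step (the parameter values below are hypothetical, not output from a fitted model):
c1 <- 5e-4; c2 <- -0.2 # hypothetical fitted parameters
irr <- pmax(0, 1500 * sin(seq(0, 2 * pi, length.out = 144))) # synthetic PAR, 10-minute steps
wtr <- rep(22, 144) # constant water temperature, deg C
freq <- 144 # observations per day for 10-minute data
GPP <- mean(c1 * irr, na.rm = TRUE) * freq
R <- mean(c2 * log(wtr), na.rm = TRUE) * freq
NEP <- GPP + R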
Value
A data.frame with columns corresponding to components of metabolism
GPP numeric estimate of Gross Primary Production, mg O2 L^-1 d^-1
R numeric estimate of Respiration, mg O2 L^-1 d^-1
NEP numeric estimate of Net Ecosystem Production, mg O2 L^-1 d^-1
Use attributes to access more model output:
smoothDO smoothed time series of oxygen concentration (mg O2 L^-1), from the Kalman smoother
params parameters estimated by the Kalman filter (c1, c2, Q, H)
Note
If observation error is substantial, consider applying a Kalman filter to the water temperature time
series by supplying wtr as the output from temp.kalman
Author(s)
<NAME>, <NAME>
References
Batt, <NAME>. and <NAME>. 2012. Free-water lake metabolism: addressing noisy time
series with a Kalman filter. Limnology and Oceanography: Methods 10: 20-30. doi: 10.4319/lom.2012.10.20
See Also
temp.kalman, watts.in, metab, metab.bookkeep, metab.ols, metab.mle, metab.bayesian
Examples
library(rLakeAnalyzer)
doobs <- load.ts(system.file('extdata',
'sparkling.doobs', package="LakeMetabolizer"))
wtr <- load.ts(system.file('extdata',
'sparkling.wtr', package="LakeMetabolizer"))
wnd <- load.ts(system.file('extdata',
'sparkling.wnd', package="LakeMetabolizer"))
irr <- load.ts(system.file('extdata',
'sparkling.par', package="LakeMetabolizer"))
#Subset a day
Sys.setenv(TZ='GMT')
mod.date <- as.POSIXct('2009-07-08', 'GMT')
doobs <- doobs[trunc(doobs$datetime, 'day') == mod.date, ]
wtr <- wtr[trunc(wtr$datetime, 'day') == mod.date, ]
wnd <- wnd[trunc(wnd$datetime, 'day') == mod.date, ]
irr <- irr[trunc(irr$datetime, 'day') == mod.date, ]
k600 <- k.cole.base(wnd[,2])
k.gas <- k600.2.kGAS.base(k600, wtr[,3], 'O2')
do.sat <- o2.at.sat.base(wtr[,3], altitude=300)
metab.kalman(irr=irr[,2], z.mix=rep(1, length(k.gas)),
do.sat=do.sat, wtr=wtr[,2],
k.gas=k.gas, do.obs=doobs[,2])
metab.mle Metabolism calculated from the maximum likelihood estimates of the
parameters in a standard linear regression model
Description
Process-error-only model with parameters fitted via maximum likelihood estimation (MLE). This
function runs the maximum likelihood metabolism model on the supplied gas concentration and
other supporting data.
Usage
metab.mle(do.obs, do.sat, k.gas, z.mix, irr, wtr, error.type = "OE", ...)
Arguments
do.obs Vector of dissolved oxygen concentration observations, mg O2 L^-1
do.sat Vector of dissolved oxygen saturation values based on water temperature. Cal-
culate using o2.at.sat
k.gas Vector of kGAS values calculated from any of the gas flux models (e.g., k.cole)
and converted to kGAS using k600.2.kGAS
z.mix Vector of mixed-layer depths in meters. To calculate, see ts.meta.depths
irr Vector of photosynthetically active radiation in µmol m^-2 s^-1
wtr Vector of water temperatures in °C. Used in scaling respiration with temperature
error.type Option specifying if model should assume pure Process Error ’PE’ or Observa-
tion Error ’OE’. Defaults to observation error ’OE’.
... additional arguments; currently "datetime" is the only recognized argument passed
through ...
Details
The model has three parameters, $c_1$, $c_2$, $\epsilon$, and has the form:
$$v = k.gas / z.mix$$
$$a_t = c_1 \cdot irr_{t-1} + c_2 \cdot \log_e(wtr_{t-1}) + v_{t-1} \cdot do.sat_{t-1}$$
$$\beta = e^{-v}$$
$$do.obs_t = a_t / v_{t-1} - e^{-v_{t-1}} \cdot a_t / v_{t-1} + \beta_{t-1} \cdot do.obs_{t-1} + \epsilon_t$$
The above model is used during model fitting, but if gas flux is not integrated between time steps,
those equations simplify to the following:
$$F_{t-1} = k.gas_{t-1} \cdot (do.sat_{t-1} - do.obs_{t-1}) / z.mix_{t-1}$$
$$do.obs_t = do.obs_{t-1} + c_1 \cdot irr_{t-1} + c_2 \cdot \log_e(wtr_{t-1}) + F_{t-1} + \epsilon_t$$
The parameters are fit using maximum likelihood, and the optimization (minimization of the nega-
tive log likelihood function) is performed by optim using default settings.
GPP is then calculated as mean(c1*irr, na.rm=TRUE)*freq, where freq is the number of observations
per day, as estimated from the typical spacing between time steps. Thus, generally freq==length(do.obs).
Similarly, R is calculated as mean(c2*log(wtr), na.rm=TRUE)*freq.
NEP is the sum of GPP and R.
Value
A data.frame with columns corresponding to components of metabolism
GPP numeric estimate of Gross Primary Production, mg O2 L^-1 d^-1
R numeric estimate of Respiration, mg O2 L^-1 d^-1
NEP numeric estimate of Net Ecosystem Production, mg O2 L^-1 d^-1
The maximum likelihood estimates of model parameters can be accessed via attributes(metab.mle(...))[["params"]]
Note
Currently, missing values in any arguments will result in an error, so freq must always equal nobs.
Author(s)
<NAME>, <NAME>, G<NAME>
References
Hanson, PC, SR Carpenter, <NAME>, <NAME>, SP Cornelius, TK Kratz. 2008 Evaluation of
metabolism models for free-water dissolved oxygen in lakes. Limnology and Oceanography: Meth-
ods 6: 454:465.
Solomon CT, DA Bruesewitz, DC Richardson, KC Rose, MC Van de Bogert, PC Hanson, TK
Kratz, <NAME>, <NAME>, <NAME>, CY Chiu, DP Hamilton, EE Gaiser, <NAME>,
<NAME>, <NAME>, <NAME>, ML Pace, E Ryder, PA Staehr, <NAME>, MJ Vanni,
KC Weathers, <NAME>. 2013. Ecosystem Respiration: Drivers of Daily Variability and Back-
ground Respiration in Lakes around the Globe. Limnology and Oceanography 58 (3): 849:866.
doi:10.4319/lo.2013.58.3.0849.
See Also
metab, metab.bookkeep, metab.ols, metab.kalman, metab.bayesian
Examples
library(rLakeAnalyzer)
doobs = load.ts(system.file('extdata',
'sparkling.doobs', package="LakeMetabolizer"))
wtr = load.ts(system.file('extdata',
'sparkling.wtr', package="LakeMetabolizer"))
wnd = load.ts(system.file('extdata',
'sparkling.wnd', package="LakeMetabolizer"))
irr = load.ts(system.file('extdata',
'sparkling.par', package="LakeMetabolizer"))
#Subset a day
mod.date = as.POSIXct('2009-07-08', 'GMT')
doobs = doobs[trunc(doobs$datetime, 'day') == mod.date, ]
wtr = wtr[trunc(wtr$datetime, 'day') == mod.date, ]
wnd = wnd[trunc(wnd$datetime, 'day') == mod.date, ]
irr = irr[trunc(irr$datetime, 'day') == mod.date, ]
z.mix = ts.thermo.depth(wtr)
k600 = k.cole.base(wnd[,2])
k.gas = k600.2.kGAS.base(k600, wtr[,3], 'O2')
do.sat = o2.at.sat.base(wtr[,3], altitude=300)
metab.mle(doobs[,2], do.sat, k.gas, z.mix[,2], irr[,2], wtr[,3])
metab.ols Metabolism model based on an ordinary least squares parameter estimation framework.
Description
This function runs the ordinary least squares metabolism model on the supplied gas concentration
and other supporting data. This is a common approach that allows for the concurrent estimation of
metabolism parameters from a time series.
Usage
metab.ols(do.obs, do.sat, k.gas, z.mix, irr, wtr, ...)
Arguments
do.obs Vector of dissolved oxygen concentration observations, mg L^-1
do.sat Vector of dissolved oxygen saturation values based on water temperature. Cal-
culate using o2.at.sat
k.gas Vector of kGAS values calculated from any of the gas flux models (e.g., k.cole)
and converted to kGAS using k600.2.kGAS
z.mix Vector of mixed-layer depths in meters. To calculate, see ts.meta.depths
irr Vector of photosynthetically active radiation in µmol m^-2 s^-1
wtr Vector of water temperatures in °C. Used in scaling respiration with temperature
... additional arguments; currently "datetime" is the only recognized argument passed
through ...
Value
A data.frame with columns corresponding to components of metabolism
GPP numeric estimate of Gross Primary Production, mg O2 L^-1 d^-1
R numeric estimate of Respiration, mg O2 L^-1 d^-1
NEP numeric estimate of Net Ecosystem Production, mg O2 L^-1 d^-1
Author(s)
<NAME>, <NAME>, GLEON Fellows
See Also
metab, metab.bookkeep, metab.mle, metab.kalman, metab.bayesian
Examples
library(rLakeAnalyzer)
doobs = load.ts(system.file('extdata',
'sparkling.doobs', package="LakeMetabolizer"))
wtr = load.ts(system.file('extdata',
'sparkling.wtr', package="LakeMetabolizer"))
wnd = load.ts(system.file('extdata',
'sparkling.wnd', package="LakeMetabolizer"))
irr = load.ts(system.file('extdata',
'sparkling.par', package="LakeMetabolizer"))
#Subset a day
mod.date = as.POSIXct('2009-07-08')
doobs = doobs[trunc(doobs$datetime, 'day') == mod.date, ]
wtr = wtr[trunc(wtr$datetime, 'day') == mod.date, ]
wnd = wnd[trunc(wnd$datetime, 'day') == mod.date, ]
irr = irr[trunc(irr$datetime, 'day') == mod.date, ]
z.mix = ts.thermo.depth(wtr)
k600 = k.cole.base(wnd[,2])
k.gas = k600.2.kGAS.base(k600, wtr[,3], 'O2')
do.sat = o2.at.sat.base(wtr[,3], altitude=300)
metab.ols(doobs[,2], do.sat, k.gas, z.mix[,2], irr[,2], wtr[,3])
o2.at.sat Calculates the equilibrium saturation concentration of oxygen in wa-
ter at the supplied conditions
Description
Used to calculate the equilibrium concentration of oxygen in water. The equilibrium concentration
of oxygen in water varies with temperature, salinity, and the partial pressure of oxygen in
contact with the water (calculated from the supplied elevation or barometric pressure).
Usage
o2.at.sat(ts.data, baro, altitude = 0, salinity = 0, model = "garcia-benson")
o2.at.sat.base(
temp,
baro,
altitude = 0,
salinity = rep(0, length(temp)),
model = "garcia-benson"
)
Arguments
ts.data Object of class data.frame with two named columns “datetime” and “wtr” (water
temp in deg C).
baro barometric pressure in millibars.
altitude a numeric value indicating the elevation above mean sea level in meters. De-
faults to mean sea level. An alternative to supplying barometric pressure.
salinity a numeric vector of salinity in PSU. Defaults to zero. Length must be one or
equal to length of temperature.
model the empirical model to be used. "garcia-benson", "garcia", "weiss" and
"benson" are the available options. "garcia-benson" is our current recom-
mendation. The models correspond to the like-named references described be-
low, where both "garcia" and "garcia-benson" are from Garcia & Gordon
(1992).
temp a numeric vector of water temperature in degrees Celsius.
Details
DO solubility is converted from mL/L to mg/L by multiplying by 1.42905, per USGS memo
2011.03. Corrections for vapor pressure are made according to barometric pressure as in Equations
2&3 of USGS memos 81.11 and 81.15. When barometric pressure is not supplied, it is estimated
from altitude by the barometric formula as in Colt (2012).
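For instance, the unit conversion alone amounts to the following (the solubility value is hypothetical):
do.mll <- 6.5 # hypothetical DO solubility in mL/L
do.mgl <- do.mll * 1.42905 # about 9.29 mg/L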
Value
The equilibrium oxygen concentration at the supplied conditions, in mg/L.
Author(s)
<NAME>
References
<NAME>. 1 - Solubility of Atmospheric Gases in Freshwater. In Computation of Dissolved Gas
Concentration in Water as Functions of Temperature, Salinity and Pressure (Second Edition), edited
by <NAME>, 1-71. London: Elsevier, 2012. http://www.sciencedirect.com/science/article/pii/B9780124159167000012.
<NAME>., and <NAME> (1992), Oxygen solubility in seawater: Better fitting equations, Limnol.
Oceanogr., 37(6).
<NAME>. & <NAME>. (1984). The concentration and isotopic fractionation of oxygen dis-
solved in freshwater and seawater in equilibrium with the atmosphere. Limnology and Oceanogra-
phy, 29(3), 620-632. doi:10.4319/lo.1984.29.3.0620
Staehr, <NAME>., <NAME>, <NAME>, <NAME>, <NAME>,
<NAME>, <NAME>, and <NAME>. Lake Metabolism and the Diel Oxygen Technique:
State of the Science. Limnology and Oceanography: Methods 8, no. 11 (November 1, 2010):
628-44. doi:10.4319/lom.2010.8.0628
USGS. New Tables of Dissolved Oxygen Saturation Values. Quality of Water Branch, 1981. http://water.usgs.gov/admin/mem
USGS. New Tables of Dissolved Oxygen Saturation Values; Amendment of Quality of Water Techni-
cal Memorandum No. 81.11. Quality of Water Branch, 1981. http://water.usgs.gov/admin/memo/QW/qw81.15.html.
USGS. Change to Solubility Equations for Oxygen in Water. Technical Memorandum 2011.03.
USGS Office of Water Quality, 2011.
Weiss, R. (1970). The solubility of nitrogen, oxygen and argon in water and seawater. Deep Sea
Research and Oceanographic Abstracts, 17(4), 721-735. doi:10.1016/0011-7471(70)90037-9
See Also
water.density, o2.at.sat.base
Examples
temp.range = 1:25
sal.range = 1:25
par(mfrow=c(1,2))
plot(temp.range, o2.at.sat.base(temp.range), xlab='Temperature (C)',
ylab='Oxygen Saturation (mg/L)')
plot(o2.at.sat.base(rep(20,25), salinity=sal.range), xlab='Salinity (PSU)', ylab='')
par.to.sw Convert PAR to shortwave
Description
Returns incoming shortwave radiation by converting a PAR measurement.
Usage
par.to.sw.base(par, coeff=0.473)
par.to.sw(data, par.col='par', coeff=0.473)
Arguments
data Object of class data.frame with column name ’par’ (units umol/m^2/sec)
par.col String of alternative name for PAR column
coeff Numerical coefficient to convert PAR (umol/m^2/sec) to SW (W/m^2). Defaults
to value from Britton and Dodd (1976).
par Numeric vector of PAR values (umol/m^2/sec)
Value
#For par.to.sw
Object of class data.frame with column name ’sw’ and other values from ts.data
#For par.to.sw.base
Numeric vector of shortwave values with units W/m^2
Author(s)
LakeMetabolizer
References
<NAME>., and <NAME>. Relationships of photosynthetically active radiation and shortwave
irradiance. Agricultural Meteorology 17, no. 1 (1976): 1-7.
See Also
sw.to.par
Examples
par <- 800
par.to.sw.base(par)
rmv.vars subsets data.frame according to header names
Description
Subsets data according to header names, excluding all matches to var.name.
Usage
rmv.vars(data, var.name, ignore.missing=TRUE, ignore.offset=FALSE)
Arguments
data Object of class data.frame
var.name A character vector of names to remove from data
ignore.missing Boolean; should an error be thrown if no matching data are found
ignore.offset Should the numerical offset be ignored in the match (e.g., should all wtr columns
be removed, or wtr_0 specifically)
Value
An object of class data.frame
Author(s)
<NAME>
See Also
has.vars get.vars
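Examples
A small sketch (the column names are hypothetical, and the matching behaviour in the comments is an assumption based on the argument descriptions above):
ts <- data.frame(datetime = as.POSIXct('2009-07-08') + 0:2 * 3600,
                 wtr_0.5 = c(22.1, 22.3, 22.2),
                 wtr_1.0 = c(21.8, 21.9, 21.7),
                 doobs = c(8.1, 8.2, 8.0))
rmv.vars(ts, 'wtr', ignore.offset = TRUE) # should drop all wtr_* columns
rmv.vars(ts, 'doobs') # should drop only the doobs column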
sun.rise.set Calculates the time of sunrise and sunset
Description
Calculates the time of sunrise and sunset based on latitude and date.
Usage
sun.rise.set(datetimes, lat)
Arguments
datetimes Vector of dates as POSIXct or POSIXlt (see DateTimeClasses) format
lat Single latitude value of site. South should be negative, north positive
Value
A 2-column data frame, first column sunrise, second column sunset, as POSIXct format in standard
time. Value is NA when there is no defined sunrise or sunset for that day (winter/summer at high
and low latitudes).
Author(s)
<NAME>
References
Iqbal, Muhammad. 1983. An Introduction to Solar Radiation. Elsevier.
See Also
is.night is.day
Examples
sun.rise.set(lat=40.75,datetimes=as.POSIXlt('2013-03-31'))
sw.to.par Convert shortwave radiation to PAR
Description
Returns PAR by converting an incoming shortwave radiation measurement.
Usage
sw.to.par(data, sw.col='sw', coeff=2.114)
sw.to.par.base(sw, coeff=2.114)
Arguments
data Object of class data.frame with column name sw (or specified alternate)
sw.col Name of column containing shortwave data (units must be W/m^2)
coeff Numerical coefficient to convert SW (W/m^2) to PAR (umol/m^2/sec). Defaults
to value from Britton and Dodd (1976).
sw Numeric shortwave value in W/m^2
Value
#For sw.to.par
Object of class data.frame with column name ’par’ and other values from ts.data
#for sw.to.par.base
Numeric vector of PAR values in units umol/m^2/sec
Author(s)
<NAME> and others
References
Britton, <NAME>., and <NAME>. Relationships of photosynthetically active radiation and shortwave
irradiance. Agricultural Meteorology 17, no. 1 (1976): 1-7.
See Also
par.to.sw
Examples
#For base function
sw <- 800
sw.to.par.base(sw)
temp.kalman Smooth temperature time series using a Kalman filter/ smoother
Description
Smooths a temperature time series using a Kalman filter/smoother.
Usage
temp.kalman(wtr, watts, ampH=1, ...)
Arguments
wtr Vector (regular time series) of water temperature in degrees C
watts estimate of watts entering the layer at each time step, from watts.in
ampH factor by which to artificially amplify the observation error variance, H
... parameters to be passed to optim
Details
The basic model process is x[t] = beta*x[t-1] + c1*watts[t-1].
Value
a smoothed temperature time series
Author(s)
<NAME>
References
Batt, <NAME>. and <NAME>. 2012. Free-water lake metabolism: addressing noisy time
series with a Kalman filter. Limnology and Oceanography: Methods 10: 20-30. doi: 10.4319/lom.2012.10.20
See Also
watts.in metab.kalman
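Examples
A hedged sketch with a synthetic temperature series (real use would build watts from watts.in applied to sensor data; the layer depths and z1perc below are assumptions):
wtr <- 20 + sin(seq(0, 2 * pi, length.out = 48)) + rnorm(48, sd = 0.2) # synthetic noisy series
irr <- pmax(0, 1200 * sin(seq(0, 2 * pi, length.out = 48))) # synthetic PAR
watts <- watts.in(0, 1, irr, 4.5) # energy gained by the 0-1 m layer
wtr.smooth <- temp.kalman(wtr, watts)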
var.indx finds matching column names in data.frame
Description
Returns the indices of the columns in data whose header names match var.name.
Usage
var.indx(data, var.name)
Arguments
data Object of class data.frame
var.name A character vector of names to find matches with data
Value
a boolean vector with the same length as var.name
Author(s)
<NAME>
See Also
has.vars get.vars rmv.vars
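Examples
A small sketch (hypothetical multi-depth column names):
ts <- data.frame(datetime = Sys.time(), wtr_0.5 = 22.1, wtr_1.0 = 21.8, doobs = 8.1)
var.indx(ts, 'wtr') # flags the wtr_* columns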
watts.in Simple estimate of energy gained by a layer of water
Description
Estimate the amount of energy gained by a layer of water as the difference between energy entering
from the top of the layer and energy leaving at the bottom. Energy gained/lost is calculated from
photosynthetically active radiation (PAR, which is then converted to watts) and an estimate of kd
(light attenuation coefficient) which is derived from the depth of 1 percent surface light.
Usage
watts.in(top, bot, irr, z1perc)
Arguments
top Depth of the top of the layer, in meters
bot Depth of the bottom of the layer, in meters
irr PAR in uE/s (umol / m^2 / s)
z1perc Depth of 1 percent of surface light, in meters
Details
This rough estimate is used in the Kalman filter/ smoother for water temperature. It does not account
for a variety of potentially important factors, and is made specifically for use with temp.kalman(),
which uses maximum likelihood to fit a linear coefficient that converts this heat gain estimate into
temperature change.
Value
numeric vector of estimates of energy gain
Author(s)
<NAME>, <NAME>
References
Batt, <NAME>. and <NAME>. 2012. Free-water lake metabolism: addressing noisy time
series with a Kalman filter. Limnology and Oceanography: Methods 10: 20-30. doi: 10.4319/lom.2012.10.20
See Also
temp.kalman metab.kalman
Examples
watts.in(3.2, 4, 1200, 4.5)
wind.scale Wind Scaling U10 - exponential conversion to 10m wind speed
Description
Scale wind speed to standard U10 (10 meters) based on height of observations
Usage
## Used for timeseries data in a data.frame
wind.scale(ts.data, wnd.z)
## Used for raw numeric data
wind.scale.base(wnd, wnd.z)
Arguments
ts.data Object of class data.frame containing a wnd column.
wnd.z height of anemometer (Units: meters)
wnd measured wind speed (Units: typically m s-1, but it is unit agnostic)
Details
This function transforms wind speed to the standard U10, speed at 10 meters, based on the common
exponential wind profile assumption. wind.scale defaults to using the supplied wnd.z value. If
wnd.z is not supplied, it attempts to determine the anemometer height from the suffix of the header
(e.g., a header of wnd_3 would mean an anemometer height of 3 meters).
Value
## wind.scale Returns a data frame with columns datetime and wnd_10 and the same number of
rows as ts.data
## wind.scale.base Returns a vector with the same length as wnd
Author(s)
<NAME>, <NAME>
References
<NAME>. 2003. Principles of Meteorological Analysis. Dover Publications. New York. p433
See Also
Models of gas flux k.cole, k.crusius, k.macIntyre, & k.read.
Examples
wndSpeed <- c(5.1,6.3,6.3,5.2,7,7.2)
wndHeight <- 2
wind.scale.base(wndSpeed, wndHeight) |
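For comparison, a hand-rolled power-law scaling using the variables from the example above (the exponent 0.15 is a typical neutral-stability value and is an assumption here, not necessarily the package's exact coefficient):
u10.approx <- wndSpeed * (10 / wndHeight)^0.15 # approximate scaling to 10 m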
github.com/assembla/cony | go | Go | README
[¶](#section-readme)
---
### Cony
High-level AMQP 0.9.1 client library. It's a wrapper around the low-level [streadway/amqp](https://github.com/streadway/amqp/) library.
### Goals
Provide a way to work with AMQP declaratively
### Requirements
The library uses [atomic.Value](http://golang.org/pkg/sync/atomic/#Value), so Go 1.4+ is needed.
### Documentation
[![GoDoc](https://godoc.org/github.com/assembla/cony?status.svg)](https://godoc.org/github.com/assembla/cony)
[![Build Status](https://travis-ci.org/assembla/cony.svg)](https://travis-ci.org/assembla/cony)
### Thread-safety
Cony is thread-safe as long as [streadway/amqp](https://github.com/streadway/amqp) is thread-safe. It's recommended to open one AMQP channel per thread, so in the case of `cony` it should be one `Consumer`/`Producer` per goroutine.
### License
BSD 2 clause - see LICENSE for more details.
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package cony is a high-level wrapper around the <http://github.com/streadway/amqp> library,
for working declaratively with AMQP. Cony will manage connection/reconnection to the AMQP broker, along with recovery of consumers.
Example [¶](#example-package)
```
package main
import (
"log"
"os"
"github.com/assembla/cony"
"github.com/streadway/amqp"
)
func main() {
client := cony.NewClient(cony.URL(os.Getenv("AMQP_URL")), cony.Backoff(cony.DefaultBackoff))
q := &cony.Queue{
Name: "", // autogenerated queue name
AutoDelete: true,
}
exchange := cony.Exchange{
Name: "amq.topic",
Durable: true,
}
b := cony.Binding{
Queue: q,
Exchange: exchange,
Key: "something.#",
}
// wrap all declarations and save into slice
declarations := []cony.Declaration{
cony.DeclareQueue(q),
cony.DeclareExchange(exchange),
cony.DeclareBinding(b),
}
// declare consumer
consumer := cony.NewConsumer(q,
cony.Qos(10),
cony.AutoTag(),
cony.AutoAck(),
)
// declare publisher
publisher := cony.NewPublisher(exchange.Name,
"ololo.key",
cony.PublishingTemplate(amqp.Publishing{
ContentType: "application/json",
AppId: "app1",
}), // template amqp.Publising
)
// let client know about declarations
client.Declare(declarations)
// let client know about consumers/publishers
client.Consume(consumer)
client.Publish(publisher)
clientErrs := client.Errors()
deliveries := consumer.Deliveries()
consumerErrs := consumer.Errors()
// connect, reconnect, or exit loop
// run network operations such as:
// queue, exchange, binding, consumer declarations
for client.Loop() {
select {
case msg := <-deliveries:
log.Println(msg)
msg.Ack(false)
publisher.Write([]byte("ololo reply"))
case err := <-consumerErrs:
log.Println("CONSUMER ERROR: ", err)
case err := <-clientErrs:
log.Println("CLIENT ERROR: ", err)
client.Close()
}
}
}
```
```
Output:
```
Share Format
Run
### Index [¶](#pkg-index)
* [Variables](#pkg-variables)
* [type BackoffPolicy](#BackoffPolicy)
* + [func (b BackoffPolicy) Backoff(n int) time.Duration](#BackoffPolicy.Backoff)
* [type Backoffer](#Backoffer)
* [type Binding](#Binding)
* [type Client](#Client)
* + [func NewClient(opts ...ClientOpt) *Client](#NewClient)
* + [func (c *Client) Blocking() <-chan amqp.Blocking](#Client.Blocking)
+ [func (c *Client) Close()](#Client.Close)
+ [func (c *Client) Consume(cons *Consumer)](#Client.Consume)
+ [func (c *Client) Declare(d []Declaration)](#Client.Declare)
+ [func (c *Client) Errors() <-chan error](#Client.Errors)
+ [func (c *Client) Loop() bool](#Client.Loop)
+ [func (c *Client) Publish(pub *Publisher)](#Client.Publish)
* [type ClientOpt](#ClientOpt)
* + [func Backoff(bo Backoffer) ClientOpt](#Backoff)
+ [func BlockingChan(blockingChan chan amqp.Blocking) ClientOpt](#BlockingChan)
+ [func Config(config amqp.Config) ClientOpt](#Config)
+ [func ErrorsChan(errChan chan error) ClientOpt](#ErrorsChan)
+ [func URL(addr string) ClientOpt](#URL)
* [type Consumer](#Consumer)
* + [func NewConsumer(q *Queue, opts ...ConsumerOpt) *Consumer](#NewConsumer)
* + [func (c *Consumer) Cancel()](#Consumer.Cancel)
+ [func (c *Consumer) Deliveries() <-chan amqp.Delivery](#Consumer.Deliveries)
+ [func (c *Consumer) Errors() <-chan error](#Consumer.Errors)
* [type ConsumerOpt](#ConsumerOpt)
* + [func AutoAck() ConsumerOpt](#AutoAck)
+ [func AutoTag() ConsumerOpt](#AutoTag)
+ [func Exclusive() ConsumerOpt](#Exclusive)
+ [func NoLocal() ConsumerOpt](#NoLocal)
+ [func Qos(count int) ConsumerOpt](#Qos)
+ [func Tag(tag string) ConsumerOpt](#Tag)
* [type Declaration](#Declaration)
* + [func DeclareBinding(b Binding) Declaration](#DeclareBinding)
+ [func DeclareExchange(e Exchange) Declaration](#DeclareExchange)
+ [func DeclareQueue(q *Queue) Declaration](#DeclareQueue)
* [type Declarer](#Declarer)
* [type Exchange](#Exchange)
* [type Publisher](#Publisher)
* + [func NewPublisher(exchange string, key string, opts ...PublisherOpt) *Publisher](#NewPublisher)
* + [func (p *Publisher) Cancel()](#Publisher.Cancel)
+ [func (p *Publisher) Publish(pub amqp.Publishing) error](#Publisher.Publish)
+ [func (p *Publisher) PublishWithRoutingKey(pub amqp.Publishing, key string) error](#Publisher.PublishWithRoutingKey)
+ [func (p *Publisher) Write(b []byte) (int, error)](#Publisher.Write)
* [type PublisherOpt](#PublisherOpt)
* + [func PublishingTemplate(t amqp.Publishing) PublisherOpt](#PublishingTemplate)
* [type Queue](#Queue)
#### Examples [¶](#pkg-examples)
* [Package](#example-package)
* [BlockingChan](#example-BlockingChan)
* [Client.Loop](#example-Client.Loop)
* [ErrorsChan](#example-ErrorsChan)
* [URL](#example-URL)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
```
var (
// ErrNoConnection is an indicator that currently there is no connection
// available
ErrNoConnection = [errors](/errors).[New](/errors#New)("No connection available")
)
```
```
var ErrPublisherDead = [errors](/errors).[New](/errors#New)("Publisher is dead")
```
ErrPublisherDead indicates that publisher was canceled, could be returned from Write() and Publish() methods
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [BackoffPolicy](https://github.com/assembla/cony/blob/v0.3.2/backoff.go#L19) [¶](#BackoffPolicy)
```
type BackoffPolicy struct {
// contains filtered or unexported fields
}
```
BackoffPolicy is a default Backoffer implementation
####
func (BackoffPolicy) [Backoff](https://github.com/assembla/cony/blob/v0.3.2/backoff.go#L24) [¶](#BackoffPolicy.Backoff)
```
func (b [BackoffPolicy](#BackoffPolicy)) Backoff(n [int](/builtin#int)) [time](/time).[Duration](/time#Duration)
```
Backoff implements Backoffer
####
type [Backoffer](https://github.com/assembla/cony/blob/v0.3.2/backoff.go#L14) [¶](#Backoffer)
```
type Backoffer interface {
Backoff([int](/builtin#int)) [time](/time).[Duration](/time#Duration)
}
```
Backoffer is interface to hold Backoff strategy
```
var DefaultBackoff [Backoffer](#Backoffer) = [BackoffPolicy](#BackoffPolicy){
[][int](/builtin#int){0, 10, 100, 200, 500, 1000, 2000, 3000, 5000},
}
```
DefaultBackoff See: <http://blog.gopheracademy.com/advent-2014/backoff/>
####
type [Binding](https://github.com/assembla/cony/blob/v0.3.2/cony.go#L33) [¶](#Binding)
```
type Binding struct {
Queue *[Queue](#Queue)
Exchange [Exchange](#Exchange)
Key [string](/builtin#string)
Args [amqp](/github.com/streadway/amqp).[Table](/github.com/streadway/amqp#Table)
}
```
Binding used to declare binding between AMQP Queue and AMQP Exchange
####
type [Client](https://github.com/assembla/cony/blob/v0.3.2/client.go#L27) [¶](#Client)
```
type Client struct {
// contains filtered or unexported fields
}
```
Client is a Main AMQP client wrapper
####
func [NewClient](https://github.com/assembla/cony/blob/v0.3.2/client.go#L232) [¶](#NewClient)
```
func NewClient(opts ...[ClientOpt](#ClientOpt)) *[Client](#Client)
```
NewClient initializes new Client
####
func (*Client) [Blocking](https://github.com/assembla/cony/blob/v0.3.2/client.go#L95) [¶](#Client.Blocking)
```
func (c *[Client](#Client)) Blocking() <-chan [amqp](/github.com/streadway/amqp).[Blocking](/github.com/streadway/amqp#Blocking)
```
Blocking notifies the server's TCP flow control of the Connection. Default buffer size is 10. Messages will be dropped if the receiver can't keep up.
####
func (*Client) [Close](https://github.com/assembla/cony/blob/v0.3.2/client.go#L100) [¶](#Client.Close)
```
func (c *[Client](#Client)) Close()
```
Close shutdown the client
####
func (*Client) [Consume](https://github.com/assembla/cony/blob/v0.3.2/client.go#L56) [¶](#Client.Consume)
```
func (c *[Client](#Client)) Consume(cons *[Consumer](#Consumer))
```
Consume used to declare consumers
####
func (*Client) [Declare](https://github.com/assembla/cony/blob/v0.3.2/client.go#L44) [¶](#Client.Declare)
```
func (c *[Client](#Client)) Declare(d [][Declaration](#Declaration))
```
Declare used to declare queues/exchanges/bindings.
Declaration is saved and will be re-run every time Client gets connection
####
func (*Client) [Errors](https://github.com/assembla/cony/blob/v0.3.2/client.go#L89) [¶](#Client.Errors)
```
func (c *[Client](#Client)) Errors() <-chan [error](/builtin#error)
```
Errors returns AMQP connection level errors. Default buffer size is 100.
Messages will be dropped if the receiver can't keep up.
####
func (*Client) [Loop](https://github.com/assembla/cony/blob/v0.3.2/client.go#L113) [¶](#Client.Loop)
```
func (c *[Client](#Client)) Loop() [bool](/builtin#bool)
```
Loop should be run as the condition of a `for` loop, combined with receiving from (*Client).Errors().
It will manage the AMQP connection and run the queue, exchange and consumer declarations.
It will start to return false once (*Client).Close() is called.
Example [¶](#example-Client.Loop)
```
package main
import (
"log"
"time"
"github.com/assembla/cony"
)
func main() {
client := cony.NewClient(cony.URL("amqp://guest:guest@localhost/"))
for client.Loop() {
select {
case err := <-client.Errors():
log.Println("CLIENT ERROR: ", err)
client.Close()
}
time.Sleep(1 * time.Second) // naive backoff
}
}
```
```
Output:
```
Share Format
Run
####
func (*Client) [Publish](https://github.com/assembla/cony/blob/v0.3.2/client.go#L72) [¶](#Client.Publish)
```
func (c *[Client](#Client)) Publish(pub *[Publisher](#Publisher))
```
Publish used to declare publishers
####
type [ClientOpt](https://github.com/assembla/cony/blob/v0.3.2/client.go#L24) [¶](#ClientOpt)
```
type ClientOpt func(*[Client](#Client))
```
ClientOpt is a Client's functional option type
####
func [Backoff](https://github.com/assembla/cony/blob/v0.3.2/client.go#L261) [¶](#Backoff)
```
func Backoff(bo [Backoffer](#Backoffer)) [ClientOpt](#ClientOpt)
```
Backoff is a functional option, used to define backoff policy, used in
`NewClient` constructor
####
func [BlockingChan](https://github.com/assembla/cony/blob/v0.3.2/client.go#L280) [¶](#BlockingChan)
```
func BlockingChan(blockingChan chan [amqp](/github.com/streadway/amqp).[Blocking](/github.com/streadway/amqp#Blocking)) [ClientOpt](#ClientOpt)
```
BlockingChan is a functional option, used to initialize the blocking reporting channel in client code, maintaining control over buffering. Used in the
`NewClient` constructor
Example [¶](#example-BlockingChan)
```
package main
import (
"github.com/assembla/cony"
"github.com/streadway/amqp"
)
func main() {
blockings := make(chan amqp.Blocking, 100) // define custom buffer size
cony.NewClient(cony.BlockingChan(blockings))
}
```
```
Output:
```
Share Format
Run
####
func [Config](https://github.com/assembla/cony/blob/v0.3.2/client.go#L287) [¶](#Config)
added in v0.3.0
```
func Config(config [amqp](/github.com/streadway/amqp).[Config](/github.com/streadway/amqp#Config)) [ClientOpt](#ClientOpt)
```
Config is a functional option, used to setup extended amqp configuration
####
func [ErrorsChan](https://github.com/assembla/cony/blob/v0.3.2/client.go#L271) [¶](#ErrorsChan)
```
func ErrorsChan(errChan chan [error](/builtin#error)) [ClientOpt](#ClientOpt)
```
ErrorsChan is a functional option, used to initialize the error reporting channel in client code, maintaining control over the buffer size. Default buffer size is 100; messages will be dropped if the receiver can't keep up. Used in the
`NewClient` constructor
Example [¶](#example-ErrorsChan)
```
package main
import (
"github.com/assembla/cony"
)
func main() {
errors := make(chan error, 100) // define custom buffer size
cony.NewClient(cony.ErrorsChan(errors))
}
```
```
Output:
```
Share Format
Run
####
func [URL](https://github.com/assembla/cony/blob/v0.3.2/client.go#L250) [¶](#URL)
```
func URL(addr [string](/builtin#string)) [ClientOpt](#ClientOpt)
```
URL is a functional option, used in the `NewClient` constructor. The default URL is amqp://guest:guest@localhost/
Example [¶](#example-URL)
```
package main
import (
"github.com/assembla/cony"
)
func main() {
cony.NewClient(cony.URL("amqp://guest:guest@localhost/"))
}
```
```
Output:
```
Share Format
Run
####
type [Consumer](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L15) [¶](#Consumer)
```
type Consumer struct {
// contains filtered or unexported fields
}
```
Consumer holds definition for AMQP consumer
####
func [NewConsumer](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L98) [¶](#NewConsumer)
```
func NewConsumer(q *[Queue](#Queue), opts ...[ConsumerOpt](#ConsumerOpt)) *[Consumer](#Consumer)
```
NewConsumer Consumer's constructor
####
func (*Consumer) [Cancel](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L43) [¶](#Consumer.Cancel)
```
func (c *[Consumer](#Consumer)) Cancel()
```
Cancel this consumer.
This will CLOSE the Deliveries() channel
####
func (*Consumer) [Deliveries](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L31) [¶](#Consumer.Deliveries)
```
func (c *[Consumer](#Consumer)) Deliveries() <-chan [amqp](/github.com/streadway/amqp).[Delivery](/github.com/streadway/amqp#Delivery)
```
Deliveries returns deliveries shipped to this consumer. This channel is never closed, even on disconnects.
####
func (*Consumer) [Errors](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L36) [¶](#Consumer.Errors)
```
func (c *[Consumer](#Consumer)) Errors() <-chan [error](/builtin#error)
```
Errors returns channel with AMQP channel level errors
####
type [ConsumerOpt](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L12) [¶](#ConsumerOpt)
```
type ConsumerOpt func(*[Consumer](#Consumer))
```
ConsumerOpt is a consumer's functional option type
####
func [AutoAck](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L136) [¶](#AutoAck)
```
func AutoAck() [ConsumerOpt](#ConsumerOpt)
```
AutoAck set this consumer in AutoAck mode
####
func [AutoTag](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L127) [¶](#AutoTag)
```
func AutoTag() [ConsumerOpt](#ConsumerOpt)
```
AutoTag set automatically generated tag like this
```
fmt.Sprintf(QueueName+"-pid-%d@%s", os.Getpid(), os.Hostname())
```
####
func [Exclusive](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L143) [¶](#Exclusive)
```
func Exclusive() [ConsumerOpt](#ConsumerOpt)
```
Exclusive set this consumer in exclusive mode
####
func [NoLocal](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L150) [¶](#NoLocal)
```
func NoLocal() [ConsumerOpt](#ConsumerOpt)
```
NoLocal set this consumer in NoLocal mode.
####
func [Qos](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L112) [¶](#Qos)
```
func Qos(count [int](/builtin#int)) [ConsumerOpt](#ConsumerOpt)
```
Qos on channel
####
func [Tag](https://github.com/assembla/cony/blob/v0.3.2/consumer.go#L119) [¶](#Tag)
```
func Tag(tag [string](/builtin#string)) [ConsumerOpt](#ConsumerOpt)
```
Tag the consumer
####
type [Declaration](https://github.com/assembla/cony/blob/v0.3.2/declaration.go#L6) [¶](#Declaration)
```
type Declaration func([Declarer](#Declarer)) [error](/builtin#error)
```
Declaration is a callback type to declare AMQP queue/exchange/binding
####
func [DeclareBinding](https://github.com/assembla/cony/blob/v0.3.2/declaration.go#L49) [¶](#DeclareBinding)
```
func DeclareBinding(b [Binding](#Binding)) [Declaration](#Declaration)
```
DeclareBinding is a way to declare AMQP binding between AMQP queue and exchange
####
func [DeclareExchange](https://github.com/assembla/cony/blob/v0.3.2/declaration.go#L35) [¶](#DeclareExchange)
```
func DeclareExchange(e [Exchange](#Exchange)) [Declaration](#Declaration)
```
DeclareExchange is a way to declare AMQP exchange
####
func [DeclareQueue](https://github.com/assembla/cony/blob/v0.3.2/declaration.go#L16) [¶](#DeclareQueue)
```
func DeclareQueue(q *[Queue](#Queue)) [Declaration](#Declaration)
```
DeclareQueue is a way to declare AMQP queue
####
type [Declarer](https://github.com/assembla/cony/blob/v0.3.2/declaration.go#L9) [¶](#Declarer)
```
type Declarer interface {
QueueDeclare(name [string](/builtin#string), durable, autoDelete, exclusive, noWait [bool](/builtin#bool), args [amqp](/github.com/streadway/amqp).[Table](/github.com/streadway/amqp#Table)) ([amqp](/github.com/streadway/amqp).[Queue](/github.com/streadway/amqp#Queue), [error](/builtin#error))
ExchangeDeclare(name, kind [string](/builtin#string), durable, autoDelete, internal, noWait [bool](/builtin#bool), args [amqp](/github.com/streadway/amqp).[Table](/github.com/streadway/amqp#Table)) [error](/builtin#error)
QueueBind(name, key, exchange [string](/builtin#string), noWait [bool](/builtin#bool), args [amqp](/github.com/streadway/amqp).[Table](/github.com/streadway/amqp#Table)) [error](/builtin#error)
}
```
Declarer is implemented by *amqp.Channel
####
type [Exchange](https://github.com/assembla/cony/blob/v0.3.2/cony.go#L24) [¶](#Exchange)
```
type Exchange struct {
Name [string](/builtin#string)
Kind [string](/builtin#string)
Durable [bool](/builtin#bool)
AutoDelete [bool](/builtin#bool)
Args [amqp](/github.com/streadway/amqp).[Table](/github.com/streadway/amqp#Table)
}
```
Exchange hold definition of AMQP exchange
####
type [Publisher](https://github.com/assembla/cony/blob/v0.3.2/publisher.go#L24) [¶](#Publisher)
```
type Publisher struct {
// contains filtered or unexported fields
}
```
Publisher hold definition for AMQP publishing
####
func [NewPublisher](https://github.com/assembla/cony/blob/v0.3.2/publisher.go#L120) [¶](#NewPublisher)
```
func NewPublisher(exchange [string](/builtin#string), key [string](/builtin#string), opts ...[PublisherOpt](#PublisherOpt)) *[Publisher](#Publisher)
```
NewPublisher is a Publisher constructor
####
func (*Publisher) [Cancel](https://github.com/assembla/cony/blob/v0.3.2/publisher.go#L80) [¶](#Publisher.Cancel)
```
func (p *[Publisher](#Publisher)) Cancel()
```
Cancel this publisher
####
func (*Publisher) [Publish](https://github.com/assembla/cony/blob/v0.3.2/publisher.go#L75) [¶](#Publisher.Publish)
```
func (p *[Publisher](#Publisher)) Publish(pub [amqp](/github.com/streadway/amqp).[Publishing](/github.com/streadway/amqp#Publishing)) [error](/builtin#error)
```
Publish used to publish custom amqp.Publishing
WARNING: this is a blocking call; it will not return until a connection is available. The only way to stop it is to use the Cancel() method.
####
func (*Publisher) [PublishWithRoutingKey](https://github.com/assembla/cony/blob/v0.3.2/publisher.go#L51) [¶](#Publisher.PublishWithRoutingKey)
added in v0.3.2
```
func (p *[Publisher](#Publisher)) PublishWithRoutingKey(pub [amqp](/github.com/streadway/amqp).[Publishing](/github.com/streadway/amqp#Publishing), key [string](/builtin#string)) [error](/builtin#error)
```
PublishWithRoutingKey used to publish custom amqp.Publishing and routing key
WARNING: this is a blocking call; it will not return until a connection is available. The only way to stop it is to use the Cancel() method.
####
func (*Publisher) [Write](https://github.com/assembla/cony/blob/v0.3.2/publisher.go#L41) [¶](#Publisher.Write)
```
func (p *[Publisher](#Publisher)) Write(b [][byte](/builtin#byte)) ([int](/builtin#int), [error](/builtin#error))
```
* [Implements io.Writer](#hdr-Implements_io_Writer)
The template will be used, and the input buffer will be added as Publishing.Body.
The returned int will always be len(b).
#### Implements io.Writer [¶](#hdr-Implements_io_Writer)
WARNING: this is a blocking call; it will not return until a connection is available. The only way to stop it is to use the Cancel() method.
####
type [PublisherOpt](https://github.com/assembla/cony/blob/v0.3.2/publisher.go#L15) [¶](#PublisherOpt)
```
type PublisherOpt func(*[Publisher](#Publisher))
```
PublisherOpt is a functional option type for Publisher
####
func [PublishingTemplate](https://github.com/assembla/cony/blob/v0.3.2/publisher.go#L135) [¶](#PublishingTemplate)
```
func PublishingTemplate(t [amqp](/github.com/streadway/amqp).[Publishing](/github.com/streadway/amqp#Publishing)) [PublisherOpt](#PublisherOpt)
```
PublishingTemplate Publisher's functional option. Provide template amqp.Publishing and save typing.
####
type [Queue](https://github.com/assembla/cony/blob/v0.3.2/cony.go#L13) [¶](#Queue)
```
type Queue struct {
Name [string](/builtin#string)
Durable [bool](/builtin#bool)
AutoDelete [bool](/builtin#bool)
Exclusive [bool](/builtin#bool)
Args [amqp](/github.com/streadway/amqp).[Table](/github.com/streadway/amqp#Table)
// contains filtered or unexported fields
}
```
Queue hold definition of AMQP queue |
@times-components/ssr | npm | JavaScript | [SSR](#ssr)
===
The renderer used to render top level components server side and to create client bundles. Add any "pages" (top level components) here for rendering, by adding a route and the webpack config necessary to create a client bundle.
[Usage](#usage)
---
In order to create a bundle, we need all packages to have their own `rnw` bundle.
Use `npx lerna run bundle` at the root to simulate a published package.
```
yarn bundle:dev
```
Create a client-side dev bundle to hydrate the SSR page, useful for checking developer level warnings which you may need to fix
```
yarn bundle:prod
```
Create a client-side prod bundle to hydrate the SSR page, this will have the various optimisations applied with code splitting and silence any console warnings/errors. The server-side response is also compressed for testing client perf.
```
GRAPHQL_ENDPOINT=<API endpoint> SPOT_ID=<SpotIM ID> yarn start
```
Run a simple node server which serves up the various pages which currently include:
* `/article/:article-id`
* `/profile/:author-slug`
* `/topic/:topic-slug`
They will use the client side bundle you generated above.
* `GRAPHQL_ENDPOINT` is used for data fetching.
* `SPOT_ID` is used to render comments on article pages.
* You can optionally set `GRAPHQL_TOKEN` (instructions should be available from your API provider) to get unteased articles.
```
yarn bundle:profile
```
This will generate the webpack `stats.json` file in `dist`. You can then use a command such as `npx webpack-bundle-analyzer stats.json` in the `dist` folder to visualise the webpack bundle or upload it to other tools
[suggested by webpack](https://webpack.js.org/guides/code-splitting/#bundle-analysis)
```
yarn start:testserver
```
Simply starts the SSR server but sets the `SPOT_ID` to a fixed dummy value (so the SpotIM script will be written on the article page, but not found/run), and sets
`GRAPHQL_ENDPOINT` to port 4000 which is where the test TPA server should be running.
[Contributing](#contributing)
---
Please read [CONTRIBUTING.md](https://github.com/newsuk/times-components/blob/HEAD/CONTRIBUTING.md) before contributing to this package
[Running the code](#running-the-code)
---
Please see our main [README.md](https://github.com/newsuk/times-components/blob/README.md) to get the project running locally
[Development](#development)
---
The code can be formatted and linted in accordance with the agreed standards.
```
yarn fmt
yarn lint
```
[Testing](#testing)
---
As the future of the website, we want to improve the end-to-end testing DX which
[Cypress](https://www.cypress.io/) may help us with. There is currently a very simple implementation which could be developed to the point where editorial content is built with a TDD approach here, one that just happens to use components in the monorepo.
Currently there is one simple test that is run separately in CI with no coverage measured.
The tests can be developed as follows:
```
yarn start:testservers
npx cypress open
```
you can then use the Cypress GUI to develop your tests.
For CI or to check you haven't broken anything there is:
```
yarn test:integration
```
This will create a dev client side bundle with the mock `GRAPHQL_ENDPOINT`,
start up the mock server and SSR, run the Cypress tests inside Electron and then shut down the servers.
[Persisted Queries](#persisted-queries)
---
To enable persisted queries in the client, add the following line to your client-side javascript:
```
window.nuk.graphqlapi.usePersistedQueries = true;
```
[Future](#future)
---
* Publish : potentially we want to look at using this as our source of truth for server-side rendering, this would mean exporting and publishing the code to be used by render, so it's "all the same code"
* Bundle Size: we bundle packages on CI in master, we could then bundle here and lint for excessively sized bundles
* Testing: flesh out the mock server to auto-generate several scenarios for e2e testing and add more Cypress tests to move away from render specific and/or Java tests
Readme
---
### Keywords
* react
* ssr
* component |
dreamerr | cran | R | Package ‘dreamerr’
August 23, 2023
Type Package
Title Error Handling Made Easy
Version 1.3.0
Imports Formula, utils
Suggests knitr, rmarkdown, stats, graphics
Description Set of tools to facilitate package development and make R a more user-friendly
place. Mostly for developers (or anyone who writes/shares functions). Provides a simple,
powerful and flexible way to check the arguments passed to functions.
The developer can easily describe the type of argument needed. If the user provides a wrong
argument, then an informative error message is prompted with the requested type and the
problem clearly stated--saving the user a lot of time in debugging.
License GPL-3
Encoding UTF-8
VignetteBuilder knitr
BugReports https://github.com/lrberge/dreamerr/issues
RoxygenNote 7.2.0
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-23 21:30:02 UTC
R topics documented:
dreamerr-packag... 2
check_ar... 3
enumerate_item... 27
fit_scree... 29
fsigni... 30
ifsingl... 31
n_time... 33
package_stat... 34
plura... 34
setDreamerr_chec... 36
setDreamerr_dev.mod... 37
set_chec... 38
set_u... 39
sfil... 40
stop_u... 41
validate_dot... 43
dreamerr-package Error Handling Made Easy
Description
The main purpose of this package is twofold: i) to facilitate the developer’s life, and ii) to provide
to the users meaningful, useful error messages. These objectives are accomplished with a single
function: check_arg. That function checks the arguments given by the user: it offers a compact
syntax such that complex arguments can be simply stated by the developer. In turn, if the user
provides an argument of the wrong type then an informative error message will be built, stating the
expected type and where the error comes from–saving the user quite some time in debugging.
Details
Thus you can very easily make your package look professional with check_arg (checking argu-
ments properly is professional).
It also offers a set of small tools to provide informative messages to the users. See stop_up and
warn_up to throw errors and warnings in the appropriate location. There are many tools to form
messages: enumerate_items to form textual list of elements (with many options including conju-
gating verbs, etc...), plural to conjugate verbs depending on the argument, and n_letter, n_th,
n_times to write integers in words (which usually looks nicer).
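For instance, a quick sketch of the message-forming helpers (the exact output string is indicative and may differ slightly across versions):
library(dreamerr)
enumerate_items(c("apples", "bananas", "pears")) # something like "apples, bananas and pears"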
To sum up in a few words, this package was created to enhance the user experience and facilitate
package development.
Author(s)
Maintainer: <NAME> <<EMAIL>>
See Also
Useful links:
• Report bugs at https://github.com/lrberge/dreamerr/issues
check_arg Checks arguments and informs the user appropriately
Description
Full-fledged argument checking. Checks that the user provides arguments of the requested type
(even complex) in a very simple way for the developer. Provides detailed and informative error
messages for the user.
Usage
check_arg(
.x,
.type,
.x1,
.x2,
.x3,
.x4,
.x5,
.x6,
.x7,
.x8,
.x9,
...,
.message,
.choices = NULL,
.data = list(),
.value,
.env,
.up = 0
)
check_set_arg(
.x,
.type,
.x1,
.x2,
.x3,
.x4,
.x5,
.x6,
.x7,
.x8,
.x9,
...,
.message,
.choices = NULL,
.data = list(),
.value,
.env,
.up = 0
)
check_value(
.x,
.type,
.message,
.arg_name,
.prefix,
.choices = NULL,
.data = list(),
.value,
.env,
.up = 0
)
check_set_value(
.x,
.type,
.message,
.arg_name,
.prefix,
.choices = NULL,
.data = list(),
.value,
.env,
.up = 0
)
check_arg_plus
check_value_plus
Arguments
.x An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.type A character string representing the requested type(s) of the arguments. This is
a bit long so please look at the details section or the vignette for explanations.
Each type is composed of one main class and restrictions (optional). Types can
be separated with pipes (|). The main classes are: i) "scalar" for scalars,
i.e. vectors of length one, ii) "vector", iii) "matrix", iv) "data.frame", v)
"list", vi) formula, vii) function, viii) charin, i.e. a character string in a
set of choices, viii) "match", i.e. a character scalar that should partially match
a vector of choices, x) "class(my_class1, my_class2)", i.e. an object whose
class is any of the ones in parentheses, xi) "NA", something identical to NA. You
can then add optional restrictions: 1) len(a, b), i.e. the object should be of
length between a and b (you can leave a or b missing, len(a) means length
*equal* to a), len(data) and len(value) are also possible (see details), 2)
nrow(a,b) or ncol(a,b) to specify the expected number of rows or columns,
3) arg(a,b), only for functions, to restrict the number of arguments, 4) "na
ok" to allow the object to have NAs (for "scalar" types), or "no na" to restrict
the object to have no NA (for "data.frame", "vector", and "matrix" types), 5)
GE, GT, LE and LT: for numeric scalars/vectors/matrices, GE{expr} restricts the
object to have only values strictly greater than (greater or equal/strictly lower
than/lower or equal) the value in curly brackets, 6) e.g. scalar(type1, type2),
for scalars/vectors/matrices you can restrict the type of the object by adding the allowed types in parentheses.
.x1 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.x2 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.x3 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.x4 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.x5 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.x6 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.x7 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.x8 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
.x9 An argument to be checked. Must be an argument name. Can also be the type,
see details/examples.
... Only used to check '...' (dot-dot-dot) arguments.
.message A character string, optional. By default, if the user provides a wrong argument,
the error message stating what type of argument is required is automatically
formed. You can alternatively provide your own error message, maybe more
tailored to your function. The reason of why there is a problem is appended
in the end of the message. You can use the special character __ARG__ in the
message. If found, __ARG__ will be replaced by the appropriate argument name.
.choices Only if one of the types (in argument type) is "match". The values the argument
can take. Note that even if the type is "match", this argument is optional since
you have other ways to declare the choices.
.data Must be a data.frame, a list or a vector. Used in three situations. 1) if the global
keywords eval or evalset are present: the argument will also be evaluated in
the data (i.e. the argument can be a variable name of the data set). 2) if the
argument is expected to be a formula and var(data) is included in the type:
then the formula will be expected to contain variables from .data. 3) if the key-
words len(data), nrow(data) or ncol(data) are requested, then the required
length, number of rows/columns, will be based on the data provided in .data.
.value An integer scalar or a named list of integers scalars. Used when the keyword
value is present (like for instance in len(value)). If several values are to
be provided, then it must be a named list with names equal to the codes: for
instance if nrow(value) and ncol(value) are both present in the type, you
can use (numbers are an example) .value = list(nrow = 5, ncol = 6). See
Section IV) in the examples.
.env An environment; defaults to the frame where the user called the original function.
Only used in two situations. 1) if the global keywords eval or evalset
are present: the argument will also be evaluated in this environment. 2) if the
argument is expected to be a formula and var(env) is included in the type: then
the formula will be expected to contain variables existing in .env.
.up Integer, default is 0. If the user provides a wrong argument, the error message
will integrate the call of the function from which check_arg has been called.
If check_arg is called in a non-user level sub function of a main user-level
function, then use .up = 1 to make the error message look like it occurred in the
main function (and not in the sub function). Of course you can have values
higher than 1.
.arg_name A character scalar. If .message is not provided, an automatic error message
will be generated using .arg_name as the argument name. The structure of
the message will be "Argument '[.arg_name]' must be [requested type].
Problem: [detail of the problem]".
.prefix A character scalar. If .message is not provided, an automatic error message
will be generated. The structure of the message will be "[.prefix] must be
[requested type]. Problem: [detail of the problem]".
Format
An object of class function of length 1.
An object of class function of length 1.
Value
In case the type is "match", it returns the matched value. In any other case, NULL is returned.
Functions
• check_set_arg: Same as check_arg, but includes in addition: i) default setting, ii) type
conversion, iii) partial matching, and iv) checking list elements. (Small drawback: cannot be
turned off.)
• check_value: Checks if a (single) value is of the appropriate type
• check_set_value: Same as check_value, but includes in addition: i) default setting, ii) type
conversion, iii) partial matching, and iv) checking list elements. (Small drawback: cannot be
turned off.)
How to form a type
To write the expected type of an argument, you need to write the main class in combination with
the class’s options and restrictions (if any).
The syntax is: "main_class option(s) restriction(s)"
A type MUST have at least one main class. For example: in the type "logical vector len(,2)
no na", vector is the main class, no na is the option, and logical and len(,2) are restrictions
There are 13 main classes that can be checked. On the left the keyword, on the right what is expected
from the argument, and in square brackets the related section in the examples:
• scalar: an atomic vector of length 1 [Section I)]
• vector: an atomic vector [Section IV)]
• matrix: a matrix [Section IV)]
• vmatrix: a matrix or vector [Section IV)]
• data.frame: a data.frame [Section VI)]
• vdata.frame: a data.frame or vector [Section VI)]
• list: a list [Section V)]
• formula: a formula [Section VIII)]
• function: a function [Section V)]
• charin: a character vector with values in a vector of choices [Section III)]
• match: a character vector with values in a vector of choices, partial matching enabled and
only available in check_set_arg [Section III)]
• class: a custom class [Section VI)]
• NA: a vector of length 1 equal to NA–does not support options nor restrictions, usually combined
with other main classes (see Section on combining multiple types) [Section VI)]
There are seven type options; they are not available for every type. Here is what they do and the
types to which they are associated:
• NA OK (or NAOK): Tolerates the presence of NA values. Available for scalar.
• NO NA (or NONA): Throws an error if NAs are present. Available for vector, matrix, vmatrix,
data.frame, and vdata.frame.
• square: Enforces the matrix to be square. Available for matrix, vmatrix.
• named: Enforces the object to have names. Available for vector, list.
• multi: Allows multiple matches. Available for charin, match.
• strict: Makes the matching case-sensitive. Available for match.
• os and ts: Available for formula. Option os (resp. ts) enforces that the formula is one-sided
(resp. two-sided).
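A short sketch of some of these options (the function and argument names are illustrative):
test_opts = function(xna, xsq, xnm){
  check_arg(xna, "numeric scalar NAOK")   # NA values are tolerated
  check_arg(xsq, "square numeric matrix") # the matrix must be square
  check_arg(xnm, "named list")            # the list must have names
  invisible(NULL)
}
test_opts(xna = NA_real_)             # OK: NA is tolerated
try(test_opts(xsq = matrix(1, 2, 3))) # error: not square
try(test_opts(xnm = list(1, 2)))      # error: no names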
You can further add restrictions. There are roughly six types of restrictions. Here is what they do and
the types to which they apply (see the sketch after this list):
• sub-type restriction: For atomic types (scalar, vector, matrix or vmatrix), you can restrict
the underlying data to be of a specific sub-type. The simple sub-types are: i) integer (nu-
meric without decimals and logicals), i’) strict integer (numeric that can be converted to
integer with as.integer, and not logicals), ii) numeric, iii) factor, iv) logical and iv’)
loose logical (0/1 are also OK). Simply add the sub-type in the type string (e.g. "integer
scalar"), or if you allow multiple types, put them in parentheses rigth after the main class:
e.g. "scalar(character, integer)". See Section XI) in the examples. See also the section
below for more information on the sub-types. Some types (character, integer, numeric,
logical and factor) also support the keyword "conv" in check_set_arg.
• GE/GT/LE/LT: For atomic types with numeric data, you can check the values in the object. The
GE/GT/LE/LT mean respectively greater or equal/greater than/lower or equal/lower than. The
syntax is GE{expr}, with expr any expression. See Section IV) in the examples.
• len(a, b): You can restrict the length of objects with len(a, b) (with a and b integers).
Available for vector and list. Then the length must be in between a and b. Either a or b can
be missing which means absence of restriction. If len(a), this means must be equal to a. You
can also use the keywords len(data) which ensures that the length is the same as the length of
the object given in the argument .data, or len(value) which ensures the length is equal to
the value given in .value. See Section IV) in the examples.
• nrow(a, b), ncol(a, b): To restrict the number of rows and columns. Available for matrix,
vmatrix, data.frame, vdata.frame. Tolerates the data and value keywords (see in len).
See Section IV) in the examples.
• var(data, env): Available only for formula. var(data) ensures that the variables in the for-
mula are present in the data set given by the extra argument .data. var(env) ensures they
are present in the environment, and var(data, env) in either the environment or the data set.
See Section VIII) in the examples.
• arg(a, b): Available only for function. Ensures that the function has a number of arguments
between a and b, both integers (possibly missing). Tolerates the value keyword (see in len).
See Section V) in the examples.
• left(a, b) and right(a, b): Only available for formula. Restricts the number of parts in
the left-hand-side or in the right-hand-side of the formula. Tolerates the value keyword (see
in len). See Section VIII) in the examples.
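A short sketch combining several of these restrictions (the function name is illustrative):
test_restr = function(p){
  # numeric vector of length 1 to 3 with values in [0, 1]
  check_arg(p, "numeric vector len(1,3) GE{0} LE{1}")
  invisible(NULL)
}
test_restr(c(0.1, 0.9))      # OK
try(test_restr(c(0.1, 1.5))) # error: value above 1
try(test_restr(rep(0.5, 4))) # error: length above 3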
Global keywords
There are eight global keywords that can be placed anywhere in the type. They are described in
Section II) in the examples.
• NULL: allows the argument to be equal to NULL.
• safe NULL: allows the argument to be equal to NULL, but an error is thrown if the argument is
of the type base$variable or base[["variable"]]. This is to prevent oversights from the
user, especially useful when the main class is a vector.
• NULL{expr}: allows the argument to be equal to NULL, if the argument is NULL, then it assigns
the value of expr to the argument.
• MBT: (means "must be there") an error is thrown if the argument is not provided by the user.
• L0: allows 0-length vectors; this overrides the default, which requires that any argument
have a positive length.
• eval: used in combination with the extra argument .data. Evaluates the value of the argument
both in the data set and in the environment (this means the argument can be a variable name).
• evalset: like eval, but after evaluation, assigns the obtained value to the argument. Only
available in check_set_arg.
• dotnames: only when checking '...' argument (see the related section below). Enforces that
each object in '...' has a name.
The match and charin types
The main classes match and charin are similar to match.arg. These two types are detailed in the
examples Section III).
By default, the main class match expects a single character string whose value is in a set of choices.
By default, there is no case sensitivity (which can be turned on with the option strict) and there is
always partial matching. It can expect a vector (instead of a single element) if the option multi is
present.
You have three different ways to set the choices:
• by setting the argument default: e.g. fun = function(x = c("Tom", "John")) check_arg(x,
"match")
• by providing the argument .choices: e.g. fun = function(x) check_arg(x, "match",
.choices = c("Tom", "John"))
• by writing the choices in parentheses: e.g. fun = function(x) check_arg(x, "match(Tom,
John)")
When the user doesn’t provide the argument, the default is set to the first choice. Since the main
class match performs a re-assignment of the variable, it is only available in check_set_arg.
The main class charin is similar to match in that it expects a single character string in a set of
choices. The main differences are: i) there is no partial matching, ii) the choices cannot be set by
setting the argument default, and iii) its checking can be turned off with setDreamerr_check(FALSE)
[that’s the main difference between check_arg and check_set_arg].
Combining multiple types
You can combine multiple types with a pipe: ’|’. The syntax is as follows:
"main_type option(x) restriction(s) | main_type option(x) restriction(s) | main_type
option(x) restriction(s)"
You can combine as many types as you want. The behavior is as follows: if the argument matches
any of the types, then that’s fine.
For example, say you require an argument to be either a logical scalar, either a data.frame, then you
can write: check_arg(x, "logical scalar | data.frame"). See Section X) in the examples for
a more complex example.
Tips on the type
The type MUST be a character string of length 1. Two main classes must be separated by a pipe.
Otherwise the order of the keywords, the spaces, or the case don’t matter. Further the global key-
words can be placed anywhere and need not be separated by a pipe.
Note that a rare but problematic situation is when you set a default with the global NULL{default}
and that default contains a keyword. For example in the type "NULL{list()} numeric matrix"
list should not be considered as a main class, but only matrix. To be on the safe side, then just
separate them with a pipe: "NULL{list()} | numeric matrix" would work appropriately.
Checking multiple arguments
You can check multiple arguments at once provided they are of the same type. Say variables x1 to
x5 should be logical scalars. Just use: check_arg(x1, x2, x3, x4, x5, "logical scalar"). It is
always more efficient to check multiple arguments of the same type at once.
It is important to note that in case of multiple arguments, you can place the type anywhere you want
provided it is a character literal (and not in a variable!). This means that check_arg("logical
scalar", x1, x2, x3, x4, x5) would also work.
If your type is in a variable, then you must explicitly provide the argument .type (like in check_arg(x,
.type = my_type)).
Nesting argument checking (.up)
When you develop several functions that share common features, it is usually good practice to pool
the common computations into an internal function (to avoid code duplication).
When you do so, you can do all the argument checking in the internal function. Then use the
argument .up = 1 so that if the user provides a wrong argument, the error message will refer to the
user-level function and NOT to the internal function, making it much clearer for the user.
This is detailed in Section XII) in the examples.
Checking the ... (dot-dot-dot) argument
check_arg offers the possibility to check the ..., provided each expected object in ... should
be of the same type. To do that, just add ... as the first argument in check_arg, that’s it! For
example, you want all elements of ... to be numeric vectors, then use check_arg(..., "numeric
vector").
When checking ..., you have the special global argument dotnames which enforces that each
element in ... has a name. Further, the other global MBT (must be there) now means that at least
one element in ... must be provided.
This is detailed in Section XIV) in the examples.
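As a minimal sketch of dotnames (the function name sum_named is purely illustrative):
sum_named = function(...){
  check_arg(..., "numeric vector dotnames mbt")
  sum(...)
}
sum_named(a = 1:3, b = 4) # OK: every element of ... is named
try(sum_named(1:3))       # error: the element is not named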
What’s the difference between check_arg and check_set_arg?
The function check_set_arg extends check_arg in several ways. First it offers new keywords:
• evalset: evaluates the argument in a data set (i.e. the argument can be variables names of a
data set), then re-assigns back its value.
• NULL{default}: if the argument is NULL, then the value in curly brackets is assigned to the
argument.
• match: if the argument partially matches the choices, then the matches are assigned to the
argument.
• conv: in atomic main classes (scalar, vector and matrix), the data can be converted to
a given sub-type (currently integer, numeric, logical, character and factor), then as-
signed back to the argument.
As you can see, it’s all about assignment: these special keywords of check_set_arg will modify
the arguments in place. You have such examples in Section II), III) and XI) of the examples.
Second, it allows you to check arguments that are themselves lists of arguments (note that conv also
works in that case). For example, one argument of your function is plot.opts, a list of arguments
to be passed to plot. You can check the elements of plot.opts (e.g. plot.opts$main) with
check_set_arg. It also re-assigns the values of the list given the special keywords just described.
List element checking is described in Section XIII) of the examples.
Then why create two functions? If the user runs a function in which the arguments were checked
with check_arg and it works, then argument checking can be safely disabled, and it would also
work. On the other hand, since check_set_arg does value re-assignment, it cannot be safely
turned off, and therefore cannot be disabled with setDreamerr_check. Distinguishing between the
two allows the user to disable argument checking and gain (although very modest) performance in
large loops. Therefore, when you create functions, I suggest always using check_arg, unless you
need the extra features of check_set_arg.
check_value
The functions check_value and check_set_value are almost identical to the respective functions
check_arg and check_set_arg. The key differences are as follows:
• They can check values instead of arguments. Indeed, if you try to check a value with check_arg,
nothing will happen (provided the name of the value is not an argument). Why? Because it will
consider it as a missing argument. Therefore, you can check anything with check_value.
• You can check only one item at a time (whereas you can check up to 10 arguments in check_arg).
The main reason for using check_value is that sometimes you only know if an argument is valid
after having performed some modifications on it. For instance, the argument may be a formula, but
you also require that the variables in the formula are numeric. You cannot check all that at once with
check_arg, but you can first check the formula with it, then extract the values from the formula and
use check_value to ensure that the variables from the formula are numeric.
check_value is detailed in Section XVI) in the examples.
Disabling argument checking
Although the argument checking offered by check_arg is highly optimized and fast (it depends on
the type [and your computer], but it is roughly of the order of 80 micro seconds for non-missing
arguments, 20 micro seconds for missing arguments), you may want to disable it for small functions
in large loops (>100K iterations, although this practice is not really common in R). If so, just use the
function setDreamerr_check, by typing setDreamerr_check(FALSE). This will disable any call
to check_arg.
Note that the argument checking of check_set_arg cannot be disabled because the special types it
allows perform reassignment in the upper frame. That’s the main difference with check_arg.
The developer mode
If you’re new to check_arg, given the many types available, it’s very common to make mistakes
when creating check_arg calls. But no worry, the developer mode is here to help!
The developer mode ensures that any problematic call is spotted and the problem is clearly stated.
It also refers to the related section in the examples if appropriate. To turn the developer mode on,
use setDreamerr_dev.mode(TRUE).
Note that since this mode ensures a detailed checking of the call, it is a strain on performance
and should always be turned off when not needed. See Section XV) in the examples.
Author(s)
<NAME>
Examples
# check_arg is only used within functions
#
# I) Example for the main class "scalar"
#
test_scalar = function(xlog, xnum, xint, xnumlt, xdate){
# when forming the type: you can see that case, order and spaces don't matter
check_arg(xlog, "scalarLogical")
check_arg(xnum, "numeric scalar")
check_arg(xint, " scalar Integer GE{0} ")
check_arg(xnumlt, "numeric scalar lt{0.15}")
# Below it is critical that there's no space between scalar and the parenthesis
check_arg(xdate, "scalar(Date)")
invisible(NULL)
}
# Following is OK
test_scalar()
test_scalar(xlog = FALSE, xnum = 55, xint = 5, xnumlt = 0.11, xdate = Sys.Date())
#
# Now errors, all the following are wrong arguments, leading to errors
# Please note the details in the error messages.
# logical
try(test_scalar(xlog = NA))
try(test_scalar(xlog = 2))
try(test_scalar(xlog = sum))
try(test_scalar(xlog = faefeaf5))
try(test_scalar(xlog = c(TRUE, FALSE)))
try(test_scalar(xlog = c()))
# numeric
try(test_scalar(xnum = NA))
try(test_scalar(xnum = 1:5))
try(test_scalar(xnum = Sys.Date()))
# integer
try(test_scalar(xint = 5.5))
try(test_scalar(xint = -1))
# num < 0.15
try(test_scalar(xnumlt = 0.15))
try(test_scalar(xnumlt = 0.16))
try(test_scalar(xnumlt = Sys.Date()))
# Date
try(test_scalar(xdate = 0.15))
#
# II) Examples for the globals: NULL, L0, MBT, eval, evalset
#
test_globals = function(xnum, xlog = TRUE, xint){
# Default setting with NULL is only available in check_set_arg
# MBT (must be there) throws an error if the user doesn't provide the argument
check_set_arg(xnum, "numeric vector NULL{1} MBT")
# NULL allows NULL values
check_arg(xlog, "logical scalar safe NULL")
# use L0 to accept length-0 objects
check_arg(xint, "integer vector L0")
list(xnum = xnum, xlog = xlog)
}
# xnum is required because of MBT option
try(test_globals())
# NULL{expr} sets the value of xnum to expr if xnum = NULL
# Here NULL{1} sets xnum to 1
test_globals(xnum = NULL)
# NULL (not NULL{expr}) does not reassign: xlog remains NULL
test_globals(xnum = NULL, xlog = NULL)
# safe NULL: doesn't accept NULL from data.frame (DF) subselection
# ex: the variable 'log' does not exist in the iris DF
try(test_globals(5, xlog = iris$log))
# but xnum accepts it
test_globals(iris$log)
# L0 means not NULL, 0-length vectors are OK
# 0-length is OK for xint:
test_globals(xnum = 2, xint = integer(0))
# L0 still checks the type:
try(test_globals(2, xint = numeric(0)))
#
# eval and evalset
#
test_eval = function(x1, x2, data = list(), i = c()){
check_arg(x1, "eval numeric vector", .data = data)
# evalset is in check_set_arg
check_set_arg(x2, "evalset numeric vector", .data = data)
# We show the variables
if(1 %in% i){
cat("x1:\n")
print(as.character(try(x1, silent = TRUE)))
}
if(2 %in% i){
cat("x2:\n")
print(as.character(try(x2, silent = TRUE)))
}
}
# eval: evaluates the argument both in the environment and the data
test_eval(x1 = Sepal.Length, data = iris) # OK
# if we use a variable not in the environment nor in the data => error
try(test_eval(x1 = Sopal.Length, data = iris))
# but eval doesn't reassign back the value of the argument:
test_eval(x1 = Sepal.Length, data = iris, i = 1)
# evalset does the same as eval, but also reassigns the value obtained:
test_eval(x2 = Sepal.Length, data = iris, i = 2)
#
# III) Match and charin
#
# match => does partial matching, only available in check_set_arg
# charin => no partial matching, exact values required, but in check_arg
#
# match
#
# Note the three different ways to provide the choices
#
# If the argument has no default, it is kept that way (see x2)
# If the argument is not provided by the user,
# it is left untouched (see x3)
test_match = function(x1 = c("bonjour", "Au revoir"), x2, x3 = "test"){
# 1) choices set thanks to the argument default (like in match.arg)
check_set_arg(x1, "strict match")
# 2) choices set with the argument .choices
check_set_arg(x2, "match", .choices = c("Sarah", "Santa", "Santa Fe", "SANTA"))
# 3) choices set with the parentheses
check_set_arg(x3, "multi match(Orange, Juice, Good)")
cat("x1:", x1, "\nx2:", tryCatch(x2, error = function(e) "[missing]"), "\nx3:", x3, "\n")
}
# Everything below is OK
test_match()
test_match(x1 = "Au", x2 = "sar", x3 = c("GOOD", "or"))
test_match(x2 = "Santa")
# Errors caught:
try(test_match(x1 = c("Au", "revoir")))
try(test_match(x1 = "au"))
try(test_match(x1 = sum))
try(test_match(x1 = list(a = 1:5)))
try(test_match(x2 = "san"))
try(test_match(x2 = "santa"))
# Same value as x3's default, but now provided by the user
try(test_match(x3 = "test"))
try(test_match(x3 = c("or", "ju", "bad")))
# You can check multiple arguments at once
# [see details for multiple arguments in Section X)]
# Note that now the choices must be set in the argument
# and they must have the same options (ie multi, strict)
test_match_multi = function(x1 = c("bonjour", "Au revoir"), x2 = c("Sarah", "Santa"),
x3 = c("Orange", "Juice", "Good")){
# multiple arguments at once
check_set_arg(x1, x2, x3, "match")
cat("x1:", x1, "\nx2:", x2, "\nx3:", x3, "\n")
}
test_match_multi()
#
# charin
#
# charin is similar to match but requires the user to provide the exact value
# only the multi option is available
test_charin = function(x1 = "bonjour", x2 = "Sarah"){
# 1) set the choices with .choices
check_arg(x1, "charin", .choices = c("bonjour", "au revoir"))
# 2) set the choices with the parentheses
check_arg(x2, "multi charin(Sarah, Santa, Santa Fe)")
cat("x1:", x1, "\nx2:", x2, "\n")
}
# Now we need the exact values
test_charin("au revoir", c("Santa", "Santa Fe"))
# Errors when partial matching tried
try(test_charin("au re"))
#
# IV) Vectors and matrices, equalities, dimensions and lengths
#
# You can restrict the length of objects with len(a, b)
# - if len(a, b) length must be in between a and b
# - if len(a, ) length must be at least a
# - if len(, b) length must be at most b
# - if len(a) length must be equal to a
# You can also use the special keywords len(data) or len(value),
# but then the argument .data or .value must also be provided.
# (the related example comes later)
#
# You can restrict the number of rows/columns with nrow(a, b) and ncol(a, b)
#
# You can restrict a matrix to be square with the 'square' keyword
#
# You can restrict the values an element can take with GE/GT/LE/LT,
# respectively greater or equal/greater than/lower or equal/lower than
# The syntax is GE{expr}, with expr any expression
# Of course, it only works for numeric values
#
# By default NAs are tolerated in vector, matrix and data.frame.
# You can refuse NAs using the keyword: 'no na' or 'nona'
#
test_vmat = function(xvec, xmat, xvmat, xstmat, xnamed){
# vector of integers with values between 5 and exp(3)
check_arg(xvec, "integer Vector GE{5} LT{exp(3)}")
# logical matrix with at least two rows and with 3 columns
check_arg(xmat, "logicalMatrix NROW(2,) NCOL(3)")
# vector or matrix (vmatrix) of integers or character strings
# with at most 3 observations
# NAs are not allowed
check_arg(xvmat, "vmatrix(character, integer) nrow(,3) no na")
# square matrix of strict integers (logicals throw errors)
check_arg(xstmat, "strict integer square Matrix")
# A vector with names of length 2
check_arg(xnamed, "named Vector len(2)")
invisible(NULL)
}
# OK
test_vmat(xvec = 5:20, xmat = matrix(TRUE, 3, 3), xvmat = c("abc", 4, 3),
xstmat = matrix(1:4, 2, 2), xnamed = c(bon=1, jour=2))
# Vector checks:
try(test_vmat(xvec = 2))
try(test_vmat(xvec = 21))
try(test_vmat(xvec = 5.5))
# Matrix checks:
try(test_vmat(xmat = matrix(TRUE, 3, 4)))
try(test_vmat(xmat = matrix(2, 3, 3)))
try(test_vmat(xmat = matrix(FALSE, 1, 3)))
try(test_vmat(xmat = iris))
try(test_vmat(xvmat = iris))
try(test_vmat(xvmat = c(NA, 5)))
try(test_vmat(xstmat = matrix(1, 1, 3)))
try(test_vmat(xstmat = matrix(c(TRUE, FALSE, NA), 3, 3)))
# Named vector checks:
try(test_vmat(xnamed = 1:3))
try(test_vmat(xnamed = c(bon=1, jour=2, les=3)))
#
# Illustration of the keywords 'data', 'value'
#
# 'value'
# Matrix multiplication X * Y * Z
test_dynamic_restriction = function(x, y, z){
check_arg(x, "mbt numeric matrix")
check_arg(y, "mbt numeric matrix nrow(value)", .value = ncol(x))
check_arg(z, "mbt numeric matrix nrow(value)", .value = ncol(y))
# An alternative to the previous two lines:
# check_arg(z, "mbt numeric matrix")
# check_arg(y, "mbt numeric matrix nrow(value) ncol(value)",
# .value = list(nrow = ncol(x), ncol = nrow(z)))
x %*% y %*% z
}
x = matrix(1, 2, 3)
y = matrix(2, 3, 5)
z = matrix(rnorm(10), 5, 2)
test_dynamic_restriction(x, y, z)
# Now error
try(test_dynamic_restriction(x, matrix(5, 1, 2), z))
# 'data'
# Computing maximum difference between two matrices
test_dynamic_bis = function(x, y){
check_arg(x, "mbt numeric matrix")
# we require y to be of the same dimension as x
check_arg(y, "mbt numeric matrix nrow(data) ncol(data)", .data = x)
max(abs(x - y))
}
test_dynamic_bis(x, x)
# Now error
try(test_dynamic_bis(x, y))
#
# V) Functions and lists
#
# You can restrict the number of arguments of a
# function with arg(a, b) [see Section IV) for details]
test_funlist = function(xfun, xlist){
check_arg(xfun, "function arg(1,2)")
check_arg(xlist, "list len(,3)")
invisible(NULL)
}
# OK
test_funlist(xfun = sum, xlist = iris[c(1,2)])
# function checks:
try(test_funlist(xfun = function(x, y, z) x + y + z))
# list checks:
try(test_funlist(xlist = iris[1:4]))
try(test_funlist(xlist = list()))
#
# VI) Data.frame and custom class
#
test_df = function(xdf, xvdf, xcustom){
# data.frame with at least 100 observations
check_arg(xdf, "data.frame nrow(100,)")
# data.frame or vector (vdata.frame)
check_arg(xvdf, "vdata.frame")
# Either: i) object of class glm or lm
# ii) NA
# iii) NULL
check_arg(xcustom, "class(lm, glm)|NA|null")
invisible(NULL)
}
# OK
m = lm(Sepal.Length~Species, iris)
test_df(xdf = iris, xcustom = m)
test_df(xvdf = iris$Sepal.Length)
test_df(xcustom = NULL)
# data.frame checks:
try(test_df(xdf = iris[1:50,]))
try(test_df(xdf = iris[integer(0)]))
try(test_df(xdf = iris$Sepal.Length))
# Note that the following works:
test_df(xvdf = iris$Sepal.Length)
# Custom class checks:
try(test_df(xcustom = iris))
#
# VIII) Formulas
#
# The keyword is 'formula'
# You can restrict the formula to be:
# - one sided with 'os'
# - two sided with 'ts'
#
# You can restrict that the variables of a forumula must be in
# a data set or in the environment with var(data, env)
# - var(data) => variables must be in the data set
# - var(env) => variables must be in the environment
# - var(data, env) => variables must be in the data set or in the environment
# Of course, if var(data), you must provide a data set
#
# Checking multipart formulas is included. You can use left(a, b)
# and right(a, b) to put restrictions in the number of parts allowed
# in the left and right-hand-sides
#
test_formulas = function(fml1, fml2, fml3, fml4, data = iris){
# Regular formula, variables must be in the data set
check_arg(fml1, "formula var(data)", .data = data)
# One sided formula, variables in the environment
check_arg(fml2, "os formula var(env)")
# Two sided formula, variables in the data set or in the env.
check_arg(fml3, "ts formula var(data, env)", .data = data)
# One or two sided, at most two parts in the RHS, at most 1 in the LHS
check_arg(fml4, "formula left(,1) right(,2)")
invisible(NULL)
}
# We set x1 in the environment
x1 = 5
# Works
test_formulas(~Sepal.Length, ~x1, Sepal.Length~x1, a ~ b, data = iris)
# Now let's see errors
try(test_formulas(Sepal.Length~x1, data = iris))
try(test_formulas(fml2 = ~Sepal.Length, data = iris))
try(test_formulas(fml2 = Sepal.Length~x1, data = iris))
try(test_formulas(fml3 = ~x1, data = iris))
try(test_formulas(fml3 = x1~x555, data = iris))
try(test_formulas(fml4 = a ~ b | c | d))
try(test_formulas(fml4 = a | b ~ c | d))
#
# IX) Multiple types
#
# You can check multiple types using a pipe: '|'
# Note that global keywords (like NULL, eval, l0, etc) need not be
# separated by pipes. They can be anywhere, the following are identical:
# - "character scalar | data.frame NULL"
# - "NULL character scalar | data.frame"
# - "character scalar NULL | data.frame"
# - "character scalar | data.frame | NULL"
#
test_mult = function(x){
# x must be either:
# i) a numeric vector of length at least 2
# ii) a square character matrix
# iii) an integer scalar (vector of length 1)
check_arg(x, "numeric vector len(2,) | square character matrix | integer scalar")
invisible(NULL)
}
# OK
test_mult(1)
test_mult(1:2)
test_mult(matrix("ok", 1, 1))
# Not OK, notice the very detailed error messages
try(test_mult(matrix("bonjour", 1, 2)))
try(test_mult(1.1))
#
# X) Multiple arguments
#
# You can check multiple arguments at once if they have the same type.
# You can add the type where you want but it must be a character literal.
# You can check up to 10 arguments with the same type.
test_multiarg = function(xlog1, xlog2, xnum1, xnum2, xnum3){
# checking the logicals
check_arg(xlog1, xlog2, "logical scalar")
# checking the numerics
# => Alternatively, you can add the type first
check_arg("numeric vector", xnum1, xnum2, xnum3)
invisible(NULL)
}
# Let's throw some errors
try(test_multiarg(xlog2 = 4))
try(test_multiarg(xnum3 = "test"))
#
# XI) Multiple sub-types
#
# For atomic arguments (like vector or matrices),
# you can check the type of underlying data: is it integer, numeric, etc?
# There are five simple sub-types:
# - integer
# - numeric
# - factor
# - logical
# - loose logical: either TRUE/FALSE, either 0/1
#
# If you require that the data is of one sub-type only:
# - a) if it's one of the simple sub-types: add the keyword directly in the type
# - b) otherwise: add the sub-type in parentheses
#
# Note that the parentheses MUST follow the main class directly.
#
# Example:
# - a) "integer scalar"
# - b) "scalar(Date)"
#
# If you want to check multiple sub-types: you must add them in parentheses.
# Again, the parentheses MUST follow the main class directly.
# Examples:
# "vector(character, factor)"
# "scalar(integer, logical)"
# "matrix(Date, integer, logical)"
#
# In check_set_arg, you can use the keyword "conv" to convert to the
# desired type
#
test_multi_subtypes = function(x, y){
check_arg(x, "scalar(integer, logical)")
check_arg(y, "vector(character, factor, Date)")
invisible(NULL)
}
# What follows doesn't work
try(test_multi_subtypes(x = 5.5))
# Note that it works if x = 5
# (for check_arg 5 is integer although is.integer(5) returns FALSE)
test_multi_subtypes(x = 5)
try(test_multi_subtypes(y = 5.5))
# Testing the "conv" keyword:
test_conv = function(x, type){
check_set_arg(x, .type = type)
x
}
class(test_conv(5L, "numeric scalar conv"))
class(test_conv(5, "integer scalar conv"))
class(test_conv(5, "integer scalar"))
# You can use the "conv" keyword in multi-types
# Remember that types are checked in ORDER! (see the behavior)
test_conv(5:1, "vector(logical, character conv)")
test_conv(c(TRUE, FALSE), "vector(logical, character conv)")
#
# XII) Nested checking: using .up
#
# Say you have two user level functions
# But you do all the computation in an internal function.
# The error message should be at the level of the user-level function
# You can use the argument .up to do that
#
sum_fun = function(x, y){
my_internal(x, y, sum = TRUE)
}
diff_fun = function(x, y){
my_internal(x, y, sum = FALSE)
}
my_internal = function(x, y, sum){
# The error messages will be at the level of the user-level functions
# which are 1 up the stack
check_arg(x, y, "numeric scalar mbt", .up = 1)
if(sum) return(x + y)
return(x - y)
}
# we check it works
sum_fun(5, 6)
diff_fun(5, 6)
# Let's throw some errors
try(sum_fun(5))
try(diff_fun(5, 1:5))
# The errors are at the level of sum_fun/diff_fun although
# the arguments have been checked in my_internal.
# => much easier for the user to understand the problem
#
# XIII) Using check_set_arg to check and set list defaults
#
# Sometimes it is useful to have arguments that are themselves
# lists of arguments.
# With check_set_arg you can check the arguments nested in lists
# and easily set default values at the same time.
#
# When you check a list element, you MUST use the syntax argument$element
#
# Function that performs a regression then plots it
plot_cor = function(x, y, lm.opts = list(), plot.opts = list(), line.opts = list()){
check_arg(x, y, "numeric vector")
# First we ensure the arguments are lists (even of 0-length)
check_arg(lm.opts, plot.opts, line.opts, "named list L0")
# The linear regression
lm.opts$formula = y ~ x
reg = do.call("lm", lm.opts)
# plotting the correlation, with defaults
check_set_arg(plot.opts$main, "character scalar NULL{'Correlation between x and y'}")
# you can use variables created in the function when setting the default
x_name = deparse(substitute(x))
check_set_arg(plot.opts$xlab, "character scalar NULL{x_name}")
check_set_arg(plot.opts$ylab, "character scalar NULL{'y'}")
# we restrict to only two plotting types: p or h
check_set_arg(plot.opts$type, "NULL{'p'} match(p, h)")
plot.opts$x = x
plot.opts$y = y
do.call("plot", plot.opts)
# with the fit
check_set_arg(line.opts$col, "NULL{'firebrick'}") # no checking but default setting
check_set_arg(line.opts$lwd, "integer scalar GE{0} NULL{2}") # check + default
line.opts$a = reg
do.call("abline", line.opts)
}
sepal_length = iris$Sepal.Length ; y = iris$Sepal.Width
plot_cor(sepal_length, y)
plot_cor(sepal_length, y, plot.opts = list(col = iris$Species, main = "Another title"))
# Now throwing errors
try(plot_cor(sepal_length, y, plot.opts = list(type = "l")))
try(plot_cor(sepal_length, y, line.opts = list(lwd = -50)))
#
# XIV) Checking '...' (dot-dot-dot)
#
# You can also check the '...' argument if you expect all objects
# to be of the same type.
#
# To do so, you MUST place the ... in the first argument of check_arg
#
sum_check = function(...){
# we want each element of ... to be numeric vectors without NAs
# we want at least one element to be there (mbt)
check_arg(..., "numeric vector mbt")
# once the check is done, we apply sum
sum(...)
}
sum_check(1:5, 5:20)
# Now let's compare the behavior of sum_check() with that of sum()
# in the presence of errors
x = 1:5 ; y = pt
try(sum_check(x, y))
try(sum(x, y))
# As you can see, in the first call, it's very easy to spot and debug the problem
# while in the second call it's almost impossible
#
# XV) Developer mode
#
# If you're new to check_arg, given the many types available,
# it's very common to make mistakes when creating check_arg calls.
# The developer mode ensures that any problematic call is spotted
# and the problem is clearly stated
#
# Note that since this mode ensures a detailed checking of the call,
# it is a strain on performance and should always be turned off
# when not needed.
#
# Setting the developer mode on:
setDreamerr_dev.mode(TRUE)
# Creating some 'wrong' calls => the problem is pinpointed
test_err1 = function(x) check_arg(x, "integer scalar", "numeric vector")
try(test_err1())
test_err2 = function(...) check_arg("numeric vector", ...)
try(test_err2())
test_err3 = function(x) check_arg(x$a, "numeric vector")
try(test_err3())
test_err4 = function(x) check_arg(x, "numeric vector integer")
try(test_err4())
# Setting the developer mode off:
setDreamerr_dev.mode(FALSE)
#
# XVI) Using check_value
#
# The main function for checking arguments is check_arg.
# But sometimes you only know if an argument is valid after
# having performed some modifications on it.
# => that's when check_value kicks in.
#
# It's better with an example.
#
# In this example we'll construct a plotting function
# using a formula, with a rock-solid argument checking.
#
# Plotting function, but using a formula
# You want to plot only numeric values
plot_fml = function(fml, data, ...){
# We first check the arguments
check_arg(data, "data.frame mbt")
check_arg(fml, "ts formula mbt var(data)", .data = data)
# We extract the values of the formula
y = fml[[2]]
x = fml[[3]]
# Now we check that x and y are valid => with check_value
# We also use the possibility to assign the value of y and x directly
# We add a custom message because y/x are NOT arguments
check_set_value(y, "evalset numeric vector", .data = data,
.message = "In the argument 'fml', the LHS must be numeric.")
check_set_value(x, "evalset numeric vector", .data = data,
.message = "In the argument 'fml', the RHS must be numeric.")
# The dots => only arguments to plot are valid
args_ok = c(formalArgs(plot.default), names(par()))
validate_dots(valid_args = args_ok, stop = TRUE)
# We also set the xlab/ylab
dots = list(...) # dots has a special meaning in check_value (no need to pass .message)
check_set_value(dots$ylab, "NULL{deparse(fml[[2]])} character vector conv len(,3)")
check_set_value(dots$xlab, "NULL{deparse(fml[[3]])} character vector conv len(,3)")
dots$y = y
dots$x = x
do.call("plot", dots)
}
# Let's check it works
plot_fml(Sepal.Length ~ Petal.Length + Sepal.Width, iris)
plot_fml(Sepal.Length ~ Petal.Length + Sepal.Width, iris, xlab = "Not the default xlab")
# Now let's throw some errors
try(plot_fml(Sepal.Length ~ Species, iris))
try(plot_fml(Sepal.Length ~ Petal.Length, iris, xlab = iris))
try(plot_fml(Sepal.Length ~ Petal.Length, iris, xlab = iris$Species))
enumerate_items Enumerates the elements of a vector
Description
Transforms a vector into a single character string enumerating the values of the vector. Many
options exist to customize the result. The main purpose of this function is to ease the creation of
user-level messages.
Usage
enumerate_items(
x,
type,
verb = FALSE,
s = FALSE,
past = FALSE,
or = FALSE,
start_verb = FALSE,
quote = FALSE,
enum = FALSE,
other = "",
nmax = 7
)
Arguments
x A vector.
type A single character string, optional. If this argument is used, it supersedes all
other arguments. It compactly provides the arguments of the function: it must
be like "arg1.arg2.arg3", i.e. a list of arguments separated by a point. The
arguments are: "s" (to add a starting s if length(x)>1), "or" (to have "or" instead
of "and"), "start" (to place the verb at the start instead of in the end), "quote" (to
quote the elements of the vector), "enum" (to make an enumeration), "past" (to
put the verb in past tense), a verb (i.e. anything different from the previous
codes is a verb). Use other(XX) to set the argument other to XX. See details
and examples.
verb Default is FALSE. If provided, a verb is added at the end of the string, at the
appropriate form. You add the verb at the start of the string using the argument
start_verb. Valid verbs are: "be", "is", "has", "have", and any other verb with
a regular form.
s Logical, default is FALSE. If TRUE a s is added at the beginning of the string if
the length of x is greater than one.
past Logical, default is FALSE. If TRUE the verb is put at the past tense.
or Logical, default is FALSE. If TRUE the two last items of the vector are separated
by "or" instead of "and".
start_verb Logical, default is FALSE. If TRUE the verb is placed at the beginning of the string
instead of the end.
quote Logical, default is FALSE. If TRUE all items are put in between single quotes.
enum Logical, default is FALSE. If provided, an enumeration of the items of x is cre-
ated. The possible values are "i", "I", "1", "a" and "A". Example: x = c(5, 3,
12), enum = "i" will lead to "i) 5, ii) 3, and iii) 12".
other Character scalar, defaults to the empty string: "". If there are more than nmax
elements, then the character string will end with "and XX others" with XX the
number of remaining items. Use this argument to change what is between the
and and the XX. E.g. if other = "any of", then you would get "... and any of
15 others" instead of "... and 15 others".
nmax Integer, default is 7. If x contains more than nmax items, then these items are
grouped into an "other" group.
Value
It returns a character string of length one.
The argument type
The argument type is a "super argument". When provided, it supersedes all other arguments. It
offers a compact way to give the arguments to the function.
Its syntax is as follows: "arg1.arg2.arg3", where argX is an argument code. The codes are "s",
"past", "or", "start", "quote", "enum" – they refer to the function arguments. If you want to add a
verb, since it can have a free-form, it is deduced as the argument not equal to the previous codes.
For example, if you have type = "s.contain", this is identical to calling the function with s = TRUE
and verb = "contain".
A note on enum. The argument enum can be equal to "i", "I", "a", "A" or "1". When you include
it in type, by default "i" is used. If you want another one, add it in the code. For example type =
"is.enum a.past" is identical to calling the function with verb = "is", past = TRUE and enum =
"a".
Author(s)
<NAME>
Examples
# Let's say you write an error/information message to the user
# I just use the "type" argument but you can obtain the
# same results by using regular arguments
x = c("x1", "height", "width")
message("The variable", enumerate_items(x, "s.is"), " not in the data set.")
# Now just the first item
message("The variable", enumerate_items(x[1], "s.is"), " not in the data set.")
# Past
message("The variable", enumerate_items(x, "s.is.past"), " not found.")
message("The variable", enumerate_items(x[1], "s.is.past"), " not found.")
# Verb first
message("The problematic variable", enumerate_items(x, "s.is.start.quote"), ".")
message("The problematic variable", enumerate_items(x[1], "s.is.start.quote"), ".")
# covid times
todo = c("wash your hands", "stay home", "code")
message("You should: ", enumerate_items(todo[c(1, 1, 2, 3)], "enum 1"), "!")
message("You should: ", enumerate_items(todo, "enum.or"), "?")
fit_screen Nicely fits a message in the current R console
Description
Utility to display long messages with nice formatting. This function cuts the message to fit the
current screen width of the R console. Words are never cut in the middle.
Usage
fit_screen(msg, width = 0.9, leading_ws = TRUE)
Arguments
msg Text message: character vector.
width The maximum width of the screen the message should take. Default is 0.9.
leading_ws Logical, default is TRUE. Whether to keep the leading white spaces when the line
is cut.
Details
This function does not handle tabulations.
Value
It returns a single character vector with line breaks at the appropriate width.
Examples
# A long message of two lines with a few leading spaces
msg = enumerate_items(state.name, nmax = Inf)
msg = paste0(" ", gsub("Michigan, ", "\n", msg))
# by default the message takes 90% of the screen
cat(fit_screen(msg))
# Now we reduce it to 50%
cat(fit_screen(msg, 0.5))
# we add leading_ws = FALSE to avoid the continuation of leading WS
cat(fit_screen(msg, 0.5, FALSE))
fsignif Formatting numbers with display of significant digits
Description
Formatting of numbers, when they are to appear in messages. Displays only significant digits in a
"nice way" and adds commas to separate thousands. It does much less than the format function,
but it also does a bit more.
Usage
fsignif(x, s = 2, r = 0, commas = TRUE)
signif_plus
Arguments
x A numeric vector.
s The number of significant digits to be displayed. Defaults to 2. All digits not in
the decimal are always shown.
r For large values, the number of digits after the decimals to be displayed (beyond
the number of significant digits). Defaults to 0. It is useful to suggest that a
number is not an integer.
commas Whether or not to add commas to separate thousands. Defaults to TRUE.
Format
An object of class function of length 1.
Value
It returns a character vector of the same length as the input.
Examples
x = rnorm(1e5)
x[sample(1e5, 1e4, TRUE)] = NA
# Dumb function telling the number of NA values
tell_na = function(x) message("x contains ", fsignif(sum(is.na(x))), " NA values.")
tell_na(x)
# Some differences with signif:
show_diff = function(x, d = 2) cat("signif(x, ", d, ") -> ", signif(x, d),
" vs fsignif(x, ", d, ") -> ",
fsignif(x, d), "\n", sep = "")
# Main difference is for large numbers
show_diff(95123.125)
show_diff(95123.125, 7)
# Identical for small numbers
show_diff(pi / 500)
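# The arguments 'r' and 'commas' are not illustrated above; a small sketch
# (the values are arbitrary):
fsignif(1234567.89)                 # thousands separated by commas
fsignif(1234567.89, r = 2)          # 2 decimals hint that it is not an integer
fsignif(1234567.89, commas = FALSE) # no thousands separator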
ifsingle Conditional element selection
Description
Tiny functions that are shorter, and hopefully more explicit, than ifelse.
Usage
ifsingle(x, yes, no)
ifunit(x, yes, no)
Arguments
x A vector (ifsingle) or a numeric of length 1 (ifunit).
yes Something of length 1. Result if the condition is fulfilled.
no Something of length 1. Result if the condition is not fulfilled.
Details
Yes, ifunit is identical to ifelse(x == 1, yes, no). And regarding ifsingle, it is identical
to ifelse(length(x) == 1, yes, no).
Why write these functions then? Actually, I’ve found that they make the code more explicit, and
this helps!
Value
Returns something of length 1.
Functions
• ifunit: Conditional element selection depending on whether x is equal to unity or not.
Author(s)
<NAME>
Examples
# Let's create an error message when NAs are present
my_crossprod = function(mat){
if(anyNA(mat)){
row_na = which(rowSums(is.na(mat)) > 0)
n_na = length(row_na)
stop("In argument 'mat': ", n_letter(n_na), " row", plural(n_na, "s.contain"),
" NA values (", ifelse(n_na<=3, "", "e.g. "), "row",
enumerate_items(head(row_na, 3), "s"),
"). Please remove ", ifunit(n_na, "it", "them"), " first.")
}
crossprod(mat)
}
mat = matrix(rnorm(30), 10, 3)
mat4 = mat1 = mat
mat4[c(1, 7, 13, 28)] = NA
mat1[7] = NA
# Error raised because of NA: informative (and nice) messages
try(my_crossprod(mat4))
try(my_crossprod(mat1))
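# The example above only uses ifunit; a small sketch of ifsingle,
# which branches on whether the vector is of length one:
vals = c(3, 7)
message("Value", ifsingle(vals, " provided is", "s provided are"), ": ",
        enumerate_items(vals), ".")
message("Value", ifsingle(vals[1], " provided is", "s provided are"), ": ",
        enumerate_items(vals[1]), ".")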
n_times Numbers in letters
Description
Set of (tiny) functions that convert integers into words.
Usage
n_times(n)
n_th(n)
n_letter(n)
Arguments
n An integer vector.
Value
It returns a character vector of length one.
Functions
• n_th: Transforms the integer n to nth appropriately.
• n_letter: Transforms small integers to words.
Author(s)
<NAME>
Examples
find = function(v, x){
if(x %in% v){
message("The number ", n_letter(x), " appears ", n_times(sum(v == x)),
", the first occurrence is the ", n_th(which(v==x)[1]), " element.")
} else message("The number ", n_letter(x), " was not found.")
}
v = sample(100, 500, TRUE)
find(v, 6)
package_stats Provides package statistics
Description
Summary statistics of a package: number of lines, number of functions, etc.
Usage
package_stats()
Details
This function looks for files in the R/ and src/ folders and gives some stats. If there is no R/ folder
directly accessible from the working directory, there will be no stats displayed.
Why this function? Well, it’s just some goodies for package developers trying to be user-friendly!
The number of documentation lines (and number of words) corresponds to the number of non-
empty roxygen documentation lines. So if you don’t document your code with roxygen, well, this
stat won’t be shown.
Code lines correspond to non-commented, non-empty lines (by non-empty: at least one letter must
appear).
Comment lines are non-empty comments.
Value
Doesn’t return anything, just a prompt in the console.
Examples
package_stats()
plural Adds an s and/or a singular/plural verb depending on the argument’s
length
Description
Utilities to write user-level messages. These functions add an ‘s’ or a verb in the appropriate form
depending on whether the argument is equal to unity (plural) or of length one (plural_len).
Usage
plural(x, type, s, verb = FALSE, past = FALSE)
plural_len(x, type, s, verb = FALSE, past = FALSE)
Arguments
x An integer of length one (plural) or a vector (plural_len).
type Character string, default is missing. If type = "s.is.past" it means that an "s"
will be added if x is greater than 1 (or of length greater than one for plural_len);
it will be followed by the verb "to be" in past tense in singular or plural form
depending on x. This argument must be made of keywords separated by points
without space; the keywords are "s", "past" and a verb (i.e. anything different
than "s" and "past"). Missing keywords mean their value is equal to FALSE.
s Logical, used only if the argument type is missing. Whether to add an "s" if the
form of x is plural. Default is missing: equals to TRUE if no other argument is
provided, FALSE otherwise.
verb Character string or FALSE, used only if the argument type is missing. The verb
to be inserted in singular or plural depending on the value of x. default is FALSE.
past Logical, used only if the argument type is missing. Whether the verb should be
in past tense. Default is FALSE.
Value
Returns a character string of length one.
Functions
• plural_len: Adds an s and conjugate a verb depending on the length of x
Author(s)
<NAME>
Examples
# Let's create an error message when NAs are present
my_crossprod = function(mat){
if(anyNA(mat)){
row_na = which(rowSums(is.na(mat)) > 0)
n_na = length(row_na)
stop("In argument 'mat': ", n_letter(n_na), " row", plural(n_na, "s.contain"),
" NA values (", ifelse(n_na<=3, "", "e.g. "), "row",
enumerate_items(head(row_na, 3), "s"),
"). Please remove ", ifunit(n_na, "it", "them"), " first.")
}
crossprod(mat)
}
mat = matrix(rnorm(30), 10, 3)
mat4 = mat1 = mat
mat4[c(1, 7, 13, 28)] = NA
mat1[7] = NA
# Error raised because of NA: informative (and nice) messages
try(my_crossprod(mat4))
try(my_crossprod(mat1))
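# The example above only uses plural; a small sketch of plural_len,
# which depends on the length of x instead of its value:
x = c("height", "width")
message("The variable", plural_len(x, "s.is"), " missing from the data set.")
message("The variable", plural_len(x[1], "s.is"), " missing from the data set.")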
setDreamerr_check Sets dreamerr argument checking functions on or off
Description
This function allows you to disable, or re-enable, all calls to check_arg within any function. It is useful
only when running (very) large loops (>100K iter.) over small functions that use dreamerr’s check_arg.
Usage
setDreamerr_check(check = TRUE)
Arguments
check Strict logical: either TRUE or FALSE. Default is TRUE.
Author(s)
<NAME>
Examples
# Let's create a small function that returns the argument
# if it is a single character string, and throws an error
# otherwise:
test = function(x){
check_arg(x, "scalar character")
x
}
# works:
test("hey")
# error:
try(test(55))
# Now we disable argument checking
setDreamerr_check(FALSE)
# works (although it shouldn't!):
test(55)
# re-setting argument checking on:
setDreamerr_check(TRUE)
setDreamerr_dev.mode Sets the developer mode to help form check_arg calls
Description
Turns on/off a full-fledged checking of calls to check_arg. If on, it enables the developer mode,
which extensively checks calls to check_arg, allowing any problem to be found. If a problem is found,
it is pinpointed and the associated help is referred to.
Usage
setDreamerr_dev.mode(dev.mode = FALSE)
Arguments
dev.mode A logical, default is FALSE.
Details
Since this mode ensures a detailed checking of all check_arg calls, it is a strain on performance
and should always be turned off when not needed.
Author(s)
<NAME>
See Also
check_arg
Examples
# If you're new to check_arg, given the many types available,
# it's very common to make mistakes when creating check_arg calls.
# The developer mode ensures that any problematic call is spotted
# and the problem is clearly stated
#
# Note that since this mode ensures a detailed checking of the call,
# it is a strain on performance and should always be turned off
# when not needed.
#
# Setting the developer mode on:
setDreamerr_dev.mode(TRUE)
# Creating some 'wrong' calls => the problem is pinpointed
test = function(x) check_arg(x, "integer scalar", "numeric vector")
try(test())
test = function(...) check_arg("numeric vector", ...)
try(test())
test = function(x) check_arg(x$a, "numeric vector")
try(test())
test = function(x) check_arg(x, "numeric vector integer")
try(test())
test = function(x) check_arg(x, "vector len(,)")
try(test())
# etc...
# Setting the developer mode off:
setDreamerr_dev.mode(FALSE)
set_check Sets argument checking on/off "semi-globally"
Description
You can allow your users to turn off argument checking within your function by using set_check.
Only the functions check_arg and check_value can be turned off that way.
Usage
set_check(x)
Arguments
x A logical scalar, no default.
Details
This function can be useful if you develop a function that may be used in very large loops (>100K iterations).
In such situations, it may be good to still check all arguments, but to let the user turn this
checking off with an extra argument (named arg.check for instance). Doing so you would achieve
the feat of i) having a user-friendly function thanks to argument checking and, ii) still achieve high
performance in large loops (although the computational footprint of argument checking is quite low
[around 30 micro seconds for missing arguments to 80 micro seconds for non-missing arguments
of simple type]).
Examples
# Let's give an example
test_check = function(x, y, arg.check = TRUE){
set_check(arg.check)
check_arg(x, y, "numeric scalar")
x + y
}
# Works: argument checking on
test_check(1, 2)
# If mistake, nice error msg
try(test_check(1, "a"))
# Now argument checking turned off
test_check(1, 2, FALSE)
# But if mistake: "not nice" error message
try(test_check(1, "a", FALSE))
set_up Sets "semi-globally" the ’up’ argument of dreamerr’s functions
Description
When check_arg (or stop_up) is used in non user-level functions, the argument .up is used to
provide an appropriate error message referencing the right function.
Usage
set_up(.up = 1)
Arguments
.up An integer greater or equal to 0.
Details
To avoid repeating the argument .up in each check_arg call, you can set it (kind of) "globally"
with set_up.
The function set_up does not set the argument up globally, but only for all calls to check_arg and
check_value within the same function.
Examples
# Example with computation being made within a non user-level function
sum_fun = function(x, y){
my_internal(x, y, sum = TRUE)
}
diff_fun = function(x, y){
my_internal(x, y, sum = FALSE)
}
my_internal = function(x, y, sum){
set_up(1) # => errors will be at the user-level function
check_arg(x, y, "numeric scalar mbt")
# Identical to calling
# check_arg(x, y, "numeric scalar mbt", .up = 1)
if(sum) return(x + y)
return(x - y)
}
# we check it works
sum_fun(5, 6)
diff_fun(5, 6)
# Let's throw some errors
try(sum_fun(5))
try(sum_fun(5, 1:5))
sfill Fills a string vector with a symbol
Description
Fills a string vector with a user-provided symbol, up to the required length.
Usage
sfill(x = "", n = NULL, symbol = " ", right = FALSE, anchor, na = "NA")
Arguments
x A character vector.
n A positive integer giving the total expected length of each character string. Can
be NULL (default). If NULL, then n is set to the maximum number of characters
in x (i.e. max(nchar(x))).
symbol Character scalar, default to " ". The symbol used to fill.
right Logical, default is FALSE. Whether the character vector should be filled on the
left (default) or on the right.
anchor Character scalar, can be missing. If provided, the filling is done up to this anchor.
See examples.
na Character that will replace any NA value in input. Default is "NA".
Value
Returns a character vector of the same length as x.
Examples
# Some self-explaining examples
x = c("hello", "I", "am", "No-one")
cat(sep = "\n", sfill(x))
cat(sep = "\n", sfill(x, symbol = "."))
cat(sep = "\n", sfill(x, symbol = ".", n = 15))
cat(sep = "\n", sfill(x, symbol = ".", right = TRUE))
cat(sep = "\n", paste(sfill(x, symbol = ".", right = TRUE), ":", 1:4))
# Argument 'anchor' can be useful when using numeric vectors
x = c(-15.5, 1253, 32.52, 665.542)
cat(sep = "\n", sfill(x))
cat(sep = "\n", sfill(x, anchor = "."))
stop_up Stops (or warns in) sub-function execution
Description
Useful if you employ non-user level sub-functions within user-level functions. When an error is
thrown in the sub function, the error message will integrate the call of the user-level function, which
is more informative and appropriate for the user. It offers similar functionality for warnings.
Usage
stop_up(..., up = 1, msg = NULL)
warn_up(..., up = 1, immediate. = FALSE)
Arguments
... Objects that will be coerced to character and will compose the error message.
up The number of frames up, default is 1. The call in the error message will be
based on the function up frames up the stack. See examples. If you have many
calls to stop_up/warn_up with a value of up different than one, you can use
set_up to change the default value of up within the function.
msg A character vector, default is NULL. If provided, this message will be displayed
right under the error message. This is mostly useful when the text contains
formatting because the function stop used to send the error message erases any
formatting.
immediate. Whether the warning message should be prompted directly. Defaults to FALSE.
Details
These functions are really made for package developers to facilitate the good practice of providing
informative user-level error/warning messages.
Functions
• warn_up: Warnings at the level of user-level functions
Author(s)
<NAME>
Examples
# We create a main user-level function
# The computation is done by an internal function
# Here we compare stop_up with a regular stop
main_function = function(x = 1, y = 2){
my_internal_function(x, y)
}
my_internal_function = function(x, y){
if(!is.numeric(x)){
stop_up("Argument 'x' must be numeric but currently isn't.")
}
# Now regular stop
if(!is.numeric(y)){
stop("Argument 'y' must be numeric but currently isn't.")
}
nx = length(x)
ny = length(y)
if(nx != ny){
warn_up("The lengths of x and y don't match (", nx, " vs ", ny, ").")
}
x + y
}
# Let's compare the two error messages
# stop_up:
try(main_function(x = "a"))
# => the user understands that the problem is with x
# Now compare with the regular stop:
try(main_function(y = "a"))
# Since the user has no clue of what my_internal_function is,
# s/he will be puzzled about what to do to sort this out
# Same with the warning => much clearer with warn_up
main_function(1, 1:2)
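# The argument 'msg' is not illustrated above; a minimal sketch
# (the function names are purely illustrative):
main_msg = function(x) internal_msg(x)
internal_msg = function(x){
  if(!is.numeric(x)){
    # the 'msg' line is displayed right under the error message
    stop_up("Argument 'x' must be numeric but currently isn't.",
            msg = "e.g.: main_msg(x = 1:5) is a valid call.")
  }
  x
}
try(main_msg("a"))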
validate_dots Checks the arguments in dots from methods
Description
This function informs the user of arguments passed to a method but which are not used by the
method.
Usage
validate_dots(
valid_args = c(),
suggest_args = c(),
message,
warn,
stop,
call. = FALSE,
immediate. = TRUE
)
Arguments
valid_args A character vector, default is missing. Arguments that are not in the definition
of the function but which are considered as valid. Typically internal arguments
that should not be directly accessed by the user.
suggest_args A character vector, default is missing. If the user provides invalid arguments, he
might not be aware of the main arguments of the function. Use this argument to
inform the user of these main arguments.
message Logical, default is FALSE. If TRUE, a standard message is prompted to the user
(instead of a warning).
warn Logical, default is TRUE. If TRUE, when the user provides invalid arguments, the
function will call warning (default). If FALSE (and so are the other arguments
stop and message), then no message is prompted to the user; instead, the message
is simply returned as the output of the function.
stop Logical, default is FALSE. If TRUE, when the user provides invalid arguments, the
function will call stop instead of prompting a warning (default).
call. Logical, default is FALSE. If TRUE, when the user provides invalid arguments,
then the message will also contain the call to the initial function (by default,
only the function name is shown).
immediate. Logical, default is FALSE. Can only be used with the argument warn = TRUE:
whether the warning is immediately displayed or not.
Value
This function returns the message to be displayed. If no message is to be displayed because all the
arguments are valid, then NULL is returned.
Examples
# The typical use of this function is within methods
# Let's create a 'my_class' object and a summary method
my_obj = list()
class(my_obj) = "my_class"
# In the summary method, we add validate_dots
# to inform the user of invalid arguments
summary.my_class = function(object, arg_one, arg_two, ...){
validate_dots()
# CODE of summary.my_class
invisible(NULL)
}
# Now let's test it, we add invalid arguments
summary(my_obj, wrong = 3)
summary(my_obj, wrong = 3, info = 5)
# Now let's :
# i) inform the user that argument arg_one is the main argument
# ii) consider 'info' as a valid argument (but not shown to the user)
# iii) show a message instead of a warning
summary.my_class = function(object, arg_one, arg_two, ...){
validate_dots(valid_args = "info", suggest_args = "arg_one", message = TRUE)
# CODE of summary.my_class
invisible(NULL)
}
# Let's retest it
summary(my_obj, wrong = 3) # not OK => suggestions
summary(my_obj, info = 5) # OK |